ABSTRACT: Palaeontological studies of disarticulated exoskeletal remains of chondrichthyans have focused on teeth, and less interest has been paid to scales owing to their limited taxonomic and systematic significance. However, classical works linking the morphology and the function of the squamation in extant sharks suggest that, despite their limited taxonomic value, the study of isolated scales can be a useful tool for palaeoenvironmental and palaeoecological inferences. Following this idea, we have analyzed the fossil record of shark scales from two Middle Triassic sections of the Iberian Chain (Spain), identifying different functional types by means of a morphometric discriminant analysis. Of a total of 1136 isolated chondrichthyan scales, 25% were identified as abrasion resistant scales, 62% as drag reduction scales and 13% as scales of generalized function. The elevated proportion of abrasion resistant scales suggests that this chondrichthyan palaeocommunity was highly dominated by benthic sharks that lived over a hard sea floor. However, one of the stratigraphical levels studied (He-20) presents statistically significant differences from the others, showing a lower percentage of abrasion resistant scales and a larger percentage of drag reduction scales. This level can be linked with storm episodes that could have introduced remains of bentho-pelagic or pelagic forms into the inner platform. Finally, partial correlation analysis between the relative abundances of functional scale types and tooth-based taxa from the same sections shows positive correlations between teeth of Hybodus and Pseudodalatias and drag reduction scales, and between teeth of Prolatodon and abrasion resistant scales.
http://sharkyear.com/2014/morphometric-discriminant-analysis-of-isolated-chondrichthyan-scales-for-palaeoecological-inferences.html
Mental health is not simply the absence of mental illness; rather it is a distinct entity representing wellness. Models of wellbeing have been proposed that emphasize components of subjective wellbeing, psychological wellbeing, or a combination of both. A new 26-item scale of wellbeing (COMPAS-W) was developed in a cohort of 1669 healthy adult twins (18-61 years). The scale was derived using factor analysis of multiple scales of complementary constructs and confirmed using tests of reliability and convergent validity. Bivariate genetic modeling confirmed its heritability. From an original 89 items we identified six independent subcomponents that contributed to wellbeing. The COMPAS-W scale and its subcomponents showed construct validity against psychological and physical health behaviors, high internal consistency (average r = 0.71, Wellbeing r = 0.84), and 12-month test-retest reliability (average r = 0.62, Wellbeing r = 0.82). There was a moderate contribution of genetics to total Wellbeing (heritability h² = 48%) and its subcomponents: Composure (h² = 24%), Own-worth (h² = 42%), Mastery (h² = 40%), Positivity (h² = 42%), Achievement (h² = 32%) and Satisfaction (h² = 43%). Multivariate genetic modeling indicated genetic variance was correlated across the scales, suggesting common genetic factors contributed to Wellbeing and its subcomponents. The COMPAS-W scale provides a validated indicator of wellbeing and offers a new tool to quantify mental health. Keywords: ERQ; Happiness; NEO-FFI; Resilience; SWLS; TWIN-E; WHOQOL. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
https://pubmed.ncbi.nlm.nih.gov/24863866/?dopt=Abstract
Abstract: In this research, students' scientific attitude, computer anxiety, educational use of the Internet, academic achievement, and problematic use of the Internet are analyzed based on different variables (gender, parents' educational level and daily access to the Internet). The research group involves 361 students from two middle schools located in the center of Konya. The "general survey method" is adopted in the research. In accordance with the purpose of the study, percentage, mean, standard deviation, independent samples t-test, and ANOVA (analysis of variance) are employed. A total of four scales, comprising 13 sub-dimensions in all, are implemented, and the scores from these scales and their subscales are studied in terms of the variables above. Some significant relations are found. Keywords: academic achievement, scientific attitude, educational use of the Internet, computer anxiety, problematic use of the Internet

Assessing Pre-Service Teachers' Computer Phobia Levels in terms of Gender and Experience, Turkish Sample. Authors: Ö. F. Ursavas, H. Karal. Abstract: This study aims to determine the level of pre-service teachers' computer phobia. Whether computer phobia varies statistically significantly according to gender and computer experience is tested in the study. The study was performed on 430 pre-service teachers at the Education Faculty in Rize, Turkey. Data were collected through the Computer Phobia Scale, consisting of the "Personal Knowledge Questionnaire", the "Computer Anxiety Rating Scale", and the "Computer Thought Survey". Data were analyzed with statistical procedures such as the t-test and correlation analysis. According to the results of the statistical analyses, pre-service teachers' computer phobia does not vary statistically by gender, although male pre-service teachers have higher computer anxiety scores and lower computer thought scores. A strong negative relation was observed between computer experience and computer anxiety; pre-service teachers who use computers regularly reported lower computer anxiety. The results are discussed in terms of the number of computer classes in the Education Faculty curriculum, the hours of computer class, and the computer availability of student teachers.
https://publications.waset.org/computer-anxiety-related-publications
The Evolution of Birds

Birds, with their feathers, toothless bills, bipedal locomotion and flight, form such a distinct class that it is hard to imagine they derived from any other group of animals. But Archaeopteryx provides an excellent example of an intermediate evolutionary form among the vertebrates. The fossil record of birds is slim, though, because birds are so lightweight that they tend to float and decompose, or be eaten by scavengers, before they become entrapped in sediments. About 150 million years ago a feathered creature the size of a crow died and was trapped in sediments. Archaeopteryx lithographica is considered a missing link between reptiles and birds because it has characteristics of both groups. Archaeopteryx had a shoulder girdle, pelvic girdle, and legs similar in shape to those of modern birds. Its feathers were exactly like those of modern birds except in arrangement; it had paired clavicles forming a furcula (wishbone), a distinctly avian feature; and a foot with three digits forward and one back. Distinctly reptilian features include: small teeth in jaw sockets; a long tail of 20 vertebrae; six fused vertebrae (vs. 11 in birds); free toe bones with claws on all digits; simple ribs with no lateral extensions; and abdominal ribs. The bones of the wing and leg were relatively short and stout. Although it had a sternum (breastbone), there was no keel where flight muscles attach, so it was a weak flier or a glider. It weighed about 2000 grams, was probably predatory, and may have been homeothermic. Is Archaeopteryx the ancestor of all modern birds? Most likely it lies on a sideline of bird evolution, although one closely related to the mainstream. The pterosaurs and pterodactyls were once considered ancestors of birds, and there are certain similarities, such as pneumatic bones, but pterosaurs had a wing membrane like bats and no feathers. Birds evolved from a group of small bipedal dinosaurs.

Evolution of Flight

We find a number of animals that are gliders or fliers. Several kinds of reptiles and amphibians developed the ability to glide with horizontal ribs stretched into sails, or with a membrane between the ribs and hand. The Malay flying frog flattens its body and stretches its limbs to glide. Even one tropical snake parachutes by flattening its body. There are also flying squirrels, bats, gliding phalangers (marsupials), flying fish, and so on. Almost all of these vertebrate forms evolved from an arboreal ancestor (except for flying fish, and pterosaurs that jumped off cliffs). So it is likely that early reptiles in the bird-to-be stock climbed trees and jumped from them, with elongated scales being used to lengthen the glide; the scales may first have evolved their lengthened shape for protection or insulation, as Archaeopteryx was probably partially homeothermic. Selection for longer glides would have selected for longer scales/feathers. Archaeopteryx probably climbed trees with its claws.

Evolution of Homeothermy

Living reptiles are poikilothermic ("cold-blooded"), and so extinct reptiles have been regarded likewise. But both mammals and birds are homeothermic ("warm-blooded"), so the transition must have occurred in the later reptiles.

Geologic History

Archaeopteryx appeared 140-160 million years ago in the Jurassic Period of the Mesozoic Era. About 100 million years ago, birds had evolved into essentially modern-looking forms. Birds probably evolved a carina (keel) for flight, and flightless birds lost it later. Roughly 1700 species of birds have been identified by fossil remains. Of this number, about 800 are extant and 900 extinct. Counting living species as well, we have been able to identify about 10,000 species in all, which is perhaps only about 6% of all species that have ever existed; we simply lack fossil evidence for the rest. The latest estimate of the total number of bird species that have ever lived is 166,000. More on evolution from PBS. Bird Evolution - Wikipedia.
https://ornithology.com/ornithology-lectures/evolution-birds/
Scale Classification Bases

Scales can be classified on the following bases.

- Subject orientation: A scale is designed either to measure the characteristics of the respondent who completes it or to estimate the stimulus object that is presented to the respondent.
- Response form: Scales can be classified as categorical or comparative. Categorical scales (rating scales) are used when a respondent scores some object without direct reference to other objects. Comparative scales (ranking scales) are used when the respondent is asked to compare two or more objects.
- Degree of subjectivity: Scale data are based either on subjective personal preferences or on non-preference judgements. In the former case, the respondent is asked to select which person or solution he favors; in the latter, he is simply asked to judge which person or solution will be more effective, without reflecting any personal preference.
- Scale properties: Scales can be classified as nominal, ordinal, interval and ratio scales (illustrated in the sketch after this list). Nominal scales merely classify, without indicating order, distance or unique origin. Ordinal scales indicate magnitude relationships of 'more than' or 'less than', but indicate no distance or unique origin. Interval scales have both order and distance values, but no unique origin. Ratio scales possess all these features.
- Number of dimensions: Scales are classified as 'uni-dimensional' or 'multi-dimensional'. In the former, only one attribute of the respondent or object is measured; multi-dimensional scaling recognizes that an object might be described better by using the concept of an attribute space of 'n' dimensions rather than a single-dimension continuum.
- Scale construction techniques: Scales can be developed by the following five techniques.
  - Arbitrary approach: Scales are developed on an ad hoc basis. This is the most widely used approach.
  - Consensus approach: A panel of judges evaluates the items chosen for inclusion in the instrument regarding whether they are relevant to the topic area and unambiguous in implication.
  - Item analysis approach: A number of individual items are developed into a test that is given to a group of respondents. After administering the test, total scores are calculated, and the individual items are analyzed to determine which items discriminate between persons or objects with high and low total scores.
  - Cumulative scales: Items are chosen on the basis of their conforming to some ranking of items with ascending and descending discriminating power.
  - Factor scales: Constructed on the basis of inter-correlations of items, indicating that a common factor accounts for the relationship between items.
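To make the four scale properties above concrete, the following sketch contrasts which comparisons are meaningful on each scale type. It is a minimal illustration with invented variable names and data, not part of the original text.

```cpp
#include <iostream>
#include <string>

int main() {
    // Nominal: categories only -- equality is the sole meaningful comparison.
    std::string blood_a = "AB", blood_b = "O";
    std::cout << "Same blood group? " << (blood_a == blood_b) << '\n';  // 0 (false)

    // Ordinal: order is meaningful, but differences are not.
    int rank_gold = 1, rank_silver = 2;
    std::cout << "Gold ahead of silver? " << (rank_gold < rank_silver) << '\n';
    // (rank_silver - rank_gold) == 1 does NOT quantify the gap in performance.

    // Interval: differences are meaningful, ratios are not (no unique origin).
    double temp_today_c = 30.0, temp_yesterday_c = 15.0;
    std::cout << "Difference: " << (temp_today_c - temp_yesterday_c) << " C\n";
    // "Today is twice as hot" is invalid: 0 degrees C is an arbitrary origin.

    // Ratio: unique zero point, so ratios are meaningful as well.
    double height_a_cm = 180.0, height_b_cm = 90.0;
    std::cout << "Ratio: " << (height_a_cm / height_b_cm) << '\n';  // 2.0 is meaningful
    return 0;
}
```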
https://www.reseapro.com/blog/tag/factor-scales/
By Alegria Olmedo, 7th April 2021. The minibus stopped outside the veterinary wing of Save Vietnam's Wildlife's premises in northern Viet Nam. There was no time to waste: aboard the bus were 23 pangolins, in crates. This last leg of their journey was from a nearby police station, but they had been shipped all the way from Malaysia a few days earlier. Whether their odyssey would terminate in Viet Nam, destined for local sale and consumption, or continue on to China in a few days' time, was unclear. It was all hands on deck: vets, education officers, accountants and the Director of Save Vietnam's Wildlife all rushed to unload the crates and bring the pangolins into the clinic. Initially I hung back, unsure if I would be allowed to enter the clinic and assuming I would be in the way. But it quickly became clear that whether I knew how to resuscitate pangolins or not was irrelevant. The 23 new arrivals needed to be removed from the crates, receive urgent care and be put in quarantine. Unsure but determined, I soon found myself next to the in-house vet, holding a pangolin's tail down, keeping it from curling to protect itself, looking away from the blood smeared across its belly so the vet could inject a tranquiliser. The pangolins were dehydrated and physically exhausted; some were sick, others injured. Only 18 survived. Pangolins are traded within South-east Asia, and from Africa to Asia, for consumption of their body parts. In Viet Nam and China, their meat is considered a delicacy, and their scales are used as medicine. My research on pangolin consumption in the southern Vietnamese city of Ho Chi Minh City had started only 3 months before this encounter with the 23 Sunda pangolins from Malaysia. Two months after my clumsy attempt to help in Save Vietnam's Wildlife's clinic, I was in Ho Chi Minh City training a group of local research assistants in a surveying technique for uncovering sensitive behaviours. Our aim was to survey 1,200 Ho Chi Minh City residents to assess how many had consumed pangolin meat, scales and pangolin wine (a whole pangolin or pangolin parts or fluids soaked or mixed in rice wine) in the previous year. However, pangolins are protected in Viet Nam and selling them is illegal, so we knew people might not be willing to admit they had consumed these products. We therefore used the unmatched count technique, which allows researchers to enquire about people's behaviour in a way that means they do not have to admit directly to having done something illegal. Although previous studies had investigated the consumption of pangolin products, none had used a technique appropriate for asking questions about a sensitive behaviour, which means earlier research may have underreported the number of people who consume pangolin products. Back at my desk at the University of Oxford, I compared our unmatched count technique results with those from our direct questions and found our suspicions were correct: only a small number of people openly admitted to having consumed pangolin products, and the unmatched count technique elicited a higher number of consumers. Our results provided a more accurate picture of how many people consume pangolin meat, scales and wine. Unfortunately, this number is very high: at least 2% and 4% of Ho Chi Minh City residents had consumed pangolin meat and scales, respectively, in the previous year.
Considering that populations of pangolins native to Viet Nam are in serious decline, it is likely the 23 pangolins trafficked from Malaysia were brought in to meet this high demand. Our findings add to the evidence that pangolin consumption in Viet Nam is probably unsustainable. This research also identified demographic characteristics associated with consumption and has informed further research into the contexts of pangolin meat consumption, as our study provided evidence that consumption of meat is less sensitive than consumption of scales and takes place in restaurants, where it can be researched more readily. It is our hope that other researchers and practitioners will make use of our study to tackle this unsustainable consumption. The open access article Uncovering prevalence of pangolin consumption using a technique for investigating sensitive behaviour is available in Oryx—The International Journal of Conservation.
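For readers unfamiliar with the unmatched count technique, the sketch below shows its basic estimator: a control group reports how many items on a neutral list apply to them, a treatment group sees the same list plus the sensitive item, and the difference in mean counts estimates the prevalence of the sensitive behaviour. This is a minimal illustration with invented data, not the survey instrument or analysis code used in the study.

```cpp
#include <iostream>
#include <numeric>
#include <vector>

// Mean of reported item counts for one group.
double mean(const std::vector<int>& counts) {
    return std::accumulate(counts.begin(), counts.end(), 0.0) / counts.size();
}

int main() {
    // Control group: number of items (out of, say, 4 neutral items) each
    // respondent says apply to them. The treatment group saw the same 4 items
    // plus the sensitive one ("I consumed pangolin meat in the past year").
    std::vector<int> control   = {2, 1, 3, 2, 2, 1, 2, 3, 1, 2};  // invented data
    std::vector<int> treatment = {2, 3, 2, 2, 3, 1, 3, 2, 2, 3};  // invented data

    // No respondent admits to the sensitive item directly; prevalence is
    // estimated as the difference of the group means.
    double prevalence = mean(treatment) - mean(control);
    std::cout << "Estimated prevalence: " << prevalence * 100 << "%\n";
    return 0;
}
```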
https://www.oryxthejournal.org/blog/pangolin-consumption-is-more-prevalent-than-past-studies-have-shown/
The X-axis comprises 14 scales. The first four 'validity scales' judge the validity of the test attempt; the 10 remaining scales, known as 'clinical scales', are designed to measure the presence of psychiatric syndromes. The Y-axis statistically standardizes the grading received on each scale in a range of T-scores from 0 to 120. The mean score is 50, and 82% of respondents are considered to be the normal population, falling between 30 and 70. A T-score greater than 70 indicates psychopathology in that category. The existence of the MMPI has been concurrent with vast reforms in societal convention and increased understanding of behavioral health, and the instrument has been adapted to reflect such changes. Overarching criticisms of the original test center on its disparity in addressing psychopathology in social and ethnic minorities. This has been attributed to the original sample being a small group, mainly consisting of young rural Caucasian subjects from the Midwestern United States. Studies have established biases in which misunderstanding of, or failure to culturally identify with, the content of questions has led to underreporting or overreporting of mental illness. These shortcomings led to the release of the MMPI-2 by James N. Butcher, W. Grant Dahlstrom, John R. Graham, Auke Tellegen, and Beverly Kaemmer in 1989. This assessment retains the original total of 567 items and the same corresponding 14 scales. Test items were revised based upon a larger and more diverse sample of 2600 people and attuned to a 6th-grade reading level. Gendered differences were replaced with nongendered standardized scoring. Despite further advancements, the MMPI-2 is still the most commonly administered version and has been translated into over 40 languages. In 2003, 9 restructured clinical or 'RC' scales were introduced as a prospective replacement for the original clinical scales. The RC scales were devised to provide a streamlined interpretation and less overlap, with an increased focus on the growth in understanding within psychiatry over the past 70 years. Combinations of high-scoring categories are more representative of distinct psychiatric constructs than the nebulous findings of the original clinical scales tying the patient to a specific diagnosis. Arguments also exist that this information is limited in that it categorizes the responder rather than providing data on an individual patient within a personalized spectrum of behavior. The RC scales have been incorporated into the most current form of the MMPI, known as the Minnesota Multiphasic Personality Inventory-2 Restructured Form, or the MMPI-2-RF, which was released in 2008 by Yossef Ben-Porath and Auke Tellegen of the University of Minnesota. The MMPI-2-RF is composed of 338 items measured by 51 scales broken into 9 validity scales, 3 higher-order scales, the 9 RC scales, 23 specific problem scales, 2 interest scales, and 5 revised personality psychopathology scales.
The 9 validity scales assess incongruent answering or deceptive test-taking. The 3 higher-order scales broadly categorize psychopathological presentation. The specific problem scales highlight responses consistent with the presence of particular psychopathological and psychosomatic presentations. The interest scales are designed to assess cognitive skill aptitudes and learning preferences. The revised personality/psychopathology five scales are based on 107 distinct items. It has been suggested that while the MMPI-2-RF has many additional metrics, the reduction in the number of questions limits the amount of information about psychiatric disease to about 60% of the original test. There has also been considerable debate over whether the new metrics are inaccurate in detecting psychopathology. In separating genuine psychopathology from attempts to feign a diagnosis for personal incentive, some studies have noted the new validity scales to be overly sensitive to overreporting of symptoms intended to achieve a specific result. Conversely, the L-r and K-r scales have been found particularly reliable at detecting underreporting of mental illness. Overall, the literature has been supportive of the MMPI-2-RF in identifying the accuracy of reporting of psychiatric information by those who complete it. The MMPI maintains an enduring presence in the field of mental health, and its current adaptation has been widely evaluated by the standard of modern behavioral health practices. It continues to receive widespread application as a threshold for determining the presence of psychopathology, as a means of constructing a differential diagnosis for mental health problems, and as a versatile test for obtaining transferable psychological data. These data points are themselves the indication of a category, giving behavioral health professionals a starting point to explore plausible diagnoses and initiate appropriate treatment. Completion may also offer therapeutic benefits to patients in reflecting upon their issues and improving personal understanding of their psychology. In addition to its predominant clinical application, an extensive body of research exists assessing the MMPI in all its versions for use in criminology, population studies, and prediction of aptitude for a particular role. Several studies on the MMPI-2-RF have compared those with a criminal history to those who have undergone rehabilitation and found that high scores on externalizing scales were predictive of violent behavior. The MMPI-2-RF has also been used in prescreening of applicants for law enforcement, for obtaining baseline mental health or flagging aggressive tendencies. It has also been used in evaluating parenting suitability in custody disputes over children and in predicting the course of domestic disputes in couples. Interpretations of the test have also been used to establish criminal intent in defendants. A major consensus on the MMPI in its current form is that increased accessibility of testing improves retention without compromising outcomes. Prevailing criticisms of the original format were the extensive span of questioning and the difficulty of paper administration for both completion and grading, with efforts to provide a more efficient medium well documented since the 1980s.
When the use of tablet devices was compared with conventional electronic administration on a home computer or laptop for taking the MMPI-2-RF, the difference in the reliability of results between the two mediums was insignificant. There have also been motions to assess psychopathology with the MMPI-2-RF using an algorithm that takes a high score on a higher-order scale and then tailors the remaining assessment to questions related to the indicated higher-order domain. To aid administration to pediatric patients, an adolescent form exists, known as the Minnesota Multiphasic Personality Inventory-Adolescent-Restructured Form (MMPI-A-RF). Clinical Pearls: The completion of the MMPI holds value in determining care throughout a variety of treatment considerations. The test should be administered by a licensed psychotherapist, usually a psychiatrist or clinical psychologist, with informed consent obtained through discussion of the risks and benefits of completion. Analysis of the results by the psychotherapist interpreting the scoring should be attached to a working diagnosis to assess treatment response. The presence of conditions associated with high-scoring categories will ultimately guide the necessity for pharmacological or non-pharmacological treatment options. This will, in turn, outline the need for referral to appropriate mental healthcare, from continuing outpatient follow-up to institutionalization with fully staffed nursing and rehabilitative care. Transfer of care should involve appropriate discussion of MMPI data correlated with a summary of interventions. High scoring on concerning scales such as suicidality highlights an existing need for acute observation or placement. The validity of symptoms should also be corroborated to determine whether a patient is malingering or suffering from organic disorders requiring medical management by a treatment team. The bio-ethical implications of the MMPI should also be identified if the patient is completing the test in connection with legal charges, and they should be counseled on what findings might mean concerning criminality. The basis for the use of test data in determining adherence has also been documented in a sample of 471 psychiatric patients, with externalizing scales predictive of whether a patient will be more likely to terminate treatment. [Level 3] This illustrates the need for multi-level involvement in facilitating outreach and patient compliance.
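As background for the T-score standardization described at the start of this article, the transformation is the standard psychometric one onto a distribution with mean 50 and standard deviation 10; the formula below is general background, not anything specific to a particular MMPI version:

```latex
T = 50 + 10\,\frac{x - \bar{x}}{s}
```

where x is the raw scale score and x̄ and s are the mean and standard deviation of the normative sample. A raw score two standard deviations above the norm thus yields T = 70, the conventional threshold for clinical significance mentioned above.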
https://www.statpearls.com/articlelibrary/viewarticle/25179/
Many proteins of the outer membrane of Gram-negative bacteria and of the outer envelope of the endosymbiotically derived organelles mitochondria and plastids have a β-barrel fold. Their insertion is assisted by membrane proteins of the Omp85-TpsB superfamily. These proteins are composed of a C-terminal β-barrel and a varying number of N-terminal POTRA domains, three in the case of cyanobacterial Omp85. Based on structural studies of Omp85 proteins, including the five-POTRA-domain-containing BamA protein of Escherichia coli, it is predicted that anaP2 and anaP3 bear a fixed orientation, whereas anaP1 and anaP2 are connected via a flexible hinge. We challenged this proposal by investigating the conformational space of the N-terminal POTRA domains of Omp85 from the cyanobacterium Anabaena sp. PCC 7120 using pulsed electron-electron double resonance (PELDOR, or DEER) spectroscopy. The pronounced dipolar oscillations observed for most of the double spin-labeled positions indicate a rather rigid orientation of the POTRA domains in frozen liquid solution. Based on the PELDOR distance data, structure refinement of the POTRA domains was performed using two different approaches: 1) treating the individual POTRA domains as rigid bodies; and 2) using an all-atom refinement of the structure. Both refinement approaches yielded ensembles of model structures that are more restricted than the conformational ensemble obtained by molecular dynamics simulations, with only a slightly different orientation of the N-terminal POTRA domains anaP1 and anaP2 compared with the X-ray structure. The results are discussed in the context of the native environment of the POTRA domains in the periplasm.

50 years of amino acid hydrophobicity scales: revisiting the capacity for peptide classification (2016)

Background: Physicochemical properties are frequently analyzed to characterize protein sequences of known and unknown function. The hydrophobicity of amino acids in particular is often used for structural prediction or for the detection of membrane-associated or embedded β-sheets and α-helices. For this purpose many scales classifying amino acids according to their physicochemical properties have been defined over the past decades. In parallel, several hydrophobicity parameters have been defined for the calculation of peptide properties. We analyzed the performance of separating sequence pools using 98 hydrophobicity scales and five different hydrophobicity parameters, namely the overall hydrophobicity, the hydrophobic moment for detection of α-helical and of β-sheet membrane segments, the alternating hydrophobicity, and the exact β-strand score. Results: Most of the scales are capable of discriminating between transmembrane α-helices and transmembrane β-sheets, but assignment of peptides to pools of soluble peptides of different secondary structures is not achieved at the same quality. The separation capacity, as a measure of the discrimination between different structural elements, is best when using the five different hydrophobicity parameters, although adding the alternating hydrophobicity does not provide a large benefit. An in silico evolutionary approach shows that scales have limited separation capacity, with a maximal threshold of 0.6 in general. We observed that scales derived from the evolutionary approach performed best in separating the different peptide pools when the values for arginine and tyrosine were largely distinct from the value of glutamate.
Finally, the separation of secondary structure pools via hydrophobicity can be supported by specific detectable patterns of four amino acids. Conclusion: It can be assumed that the separation capacity of a certain scale depends on the spacing of the hydrophobicity values of certain amino acids. Irrespective of the wealth of hydrophobicity scales, no scale exists that separates all different kinds of secondary structures, or soluble from transmembrane peptides, reflecting that properties other than hydrophobicity affect secondary structure formation as well. Nevertheless, application of hydrophobicity scales allows distinguishing between peptides with transmembrane α-helices and β-sheets. Furthermore, the overall separation capacity score of 0.6 using different hydrophobicity parameters could be assisted by pattern search on the protein sequence level for specific peptides with a length of four amino acids.

"A place where people from all over the world want to study, teach and do research": the new university president Prof. Enrico Schleiff on his first weeks in office and on perspectives for Goethe University (2021)

The peptidoglycan-binding protein SjcF1 influences septal junction function and channel formation in the filamentous cyanobacterium Anabaena (2015)

Filamentous, heterocyst-forming cyanobacteria exchange nutrients and regulators between cells for diazotrophic growth. Two alternative modes of exchange have been discussed, involving transport either through the periplasm or through septal junctions linking adjacent cells. Septal junctions and channels in the septal peptidoglycan are likely filled with septal junction complexes. While possible proteinaceous factors involved in septal junction formation, SepJ (FraG), FraC, and FraD, have been identified, little is known about peptidoglycan channel formation and the anchoring of septal junction complexes to the peptidoglycan. We describe a factor, SjcF1, involved in regulation of septal junction channel formation in the heterocyst-forming cyanobacterium Anabaena sp. strain PCC 7120. SjcF1 interacts with the peptidoglycan layer through two peptidoglycan-binding domains and is localized throughout the cell periphery, but at higher levels in the intercellular septa. A strain with an insertion in sjcF1 was not affected in peptidoglycan synthesis but showed an altered morphology of the septal peptidoglycan channels, which were significantly wider in the mutant than in the wild type. The mutant was impaired in intercellular exchange of a fluorescent probe to a similar extent as a sepJ deletion mutant. SjcF1 additionally bears an SH3 domain for protein-protein interactions. SH3 binding domains were identified in SepJ and FraC, and evidence for interaction of SjcF1 with both SepJ and FraC was obtained. SjcF1 represents a novel protein involved in structuring the peptidoglycan layer, which links peptidoglycan channel formation to septal junction complex function in multicellular cyanobacteria. Nonetheless, based on its subcellular distribution, this might not be the only function of SjcF1.

Hope for a summer semester with more in-person teaching: University President Prof. Enrico Schleiff looks ahead optimistically (2022)

The Arabidopsis 2'-O-ribose-methylation and pseudouridylation landscape of rRNA in comparison to human and yeast (2021)

Eukaryotic ribosome assembly starts in the nucleolus, where the ribosomal DNA (rDNA) is transcribed into the 35S pre-ribosomal RNA (pre-rRNA).
More than two hundred ribosome biogenesis factors (RBFs) and more than two hundred small nucleolar RNAs (snoRNAs) catalyze the processing, folding and modification of the rRNA in Arabidopsis thaliana. The initial pre-ribosomal 90S complex is formed already during transcription by association of ribosomal proteins (RPs) and RBFs. In addition, small nucleolar ribonucleoprotein particles (snoRNPs), composed of snoRNAs and RBFs, catalyze the two major rRNA modification types, 2′-O-ribose-methylation and pseudouridylation. Besides these two modifications, rRNAs can also undergo base methylation and acetylation; however, the latter two modifications have not yet been systematically explored in plants. The snoRNAs of these snoRNPs serve as targeting factors that direct modifications to specific rRNA regions via antisense elements. Today, hundreds of different modification sites in the rRNA have been described for eukaryotic ribosomes in general. While our understanding of the general process of ribosome biogenesis has advanced rapidly, the diversity appearing during plant ribosome biogenesis is only beginning to emerge. To date, more than two hundred RBFs have been identified by bioinformatic or biochemical approaches, including several plant-specific factors, and more than two hundred snoRNAs have been predicted based on RNA sequencing experiments. Here, we discuss the predicted and verified rRNA modification sites and the corresponding identified snoRNAs, using the model plant Arabidopsis thaliana as an example. Our summary presents the plant modification sites in comparison to the human and yeast modification sites.

"Frankfurt Alliance is intended to better network research excellence and make the science hub internationally more attractive" (2021)

Reprogramming of tomato leaf metabolome by the activity of heat stress transcription factor HsfB1 (2020)

Plants respond to high temperatures with global changes of the transcriptome, proteome, and metabolome. Heat stress transcription factors (Hsfs) are the core regulators of transcriptome responses, as they control the reprogramming of expression of hundreds of genes. The thermotolerance-related function of Hsfs is mainly based on the regulation of many heat shock proteins (HSPs). In contrast, the Hsf-dependent reprogramming of metabolic pathways and its contribution to thermotolerance is not well described. In tomato (Solanum lycopersicum), manipulation of HsfB1, either by suppression or overexpression (OE), leads to enhanced thermotolerance and coincides with a distinct profile of metabolic routes, based on metabolome profiling of wild-type (WT) and HsfB1 transgenic plants. Leaves of HsfB1 knock-down plants show an accumulation of metabolites with a positive effect on thermotolerance, such as the sugars sucrose and glucose and the polyamine putrescine. OE of HsfB1 leads to the accumulation of products of the phenylpropanoid and flavonoid pathways, including several caffeoylquinic acid isomers. The latter is due to enhanced transcription of genes coding for key enzymes in both pathways, in some cases in both non-stressed and stressed plants. Our results show that beyond the control of the expression of Hsfs and HSPs, HsfB1 has a wider activity range by regulating important metabolic pathways, providing an important link between stress response and physiological tomato development.
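As an aside to the hydrophobicity scales paper above, one of the parameters it evaluates, the hydrophobic moment, is easy to compute. The sketch below implements the classic Eisenberg formulation with a handful of Kyte-Doolittle hydropathy values (a standard published scale); the test peptide and the restriction to a few residues are illustrative assumptions, not the paper's actual pipeline.

```cpp
#include <cmath>
#include <iostream>
#include <map>
#include <string>

const double kPi = 3.14159265358979323846;

// Kyte-Doolittle hydropathy values for the residues used below
// (subset of the standard scale, for brevity).
const std::map<char, double> kKyteDoolittle = {
    {'A', 1.8}, {'L', 3.8}, {'K', -3.9}, {'E', -3.5}};

// Eisenberg hydrophobic moment: the magnitude of the vector sum of the
// residue hydrophobicities, each rotated by n * delta around the axis.
// delta = 100 degrees probes an alpha-helix, 180 degrees a beta-strand.
double hydrophobicMoment(const std::string& peptide, double deltaDeg) {
    const double delta = deltaDeg * kPi / 180.0;
    double sumSin = 0.0, sumCos = 0.0;
    for (std::size_t n = 0; n < peptide.size(); ++n) {
        double h = kKyteDoolittle.at(peptide[n]);
        sumSin += h * std::sin(delta * n);
        sumCos += h * std::cos(delta * n);
    }
    return std::sqrt(sumSin * sumSin + sumCos * sumCos);
}

int main() {
    const std::string peptide = "LKALEEKL";  // invented amphipathic test peptide
    std::cout << "Helical moment (100 deg): " << hydrophobicMoment(peptide, 100.0) << '\n';
    std::cout << "Strand moment (180 deg):  " << hydrophobicMoment(peptide, 180.0) << '\n';
    return 0;
}
```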
https://publikationen.ub.uni-frankfurt.de/solrsearch/index/search/searchtype/authorsearch/author/Schleiff%2C+Enrico
Project Description: In genetic epidemiology, genome-wide association studies (GWAS) have identified thousands of single nucleotide polymorphisms (SNPs) associated with complex diseases such as lung cancer or rheumatoid arthritis. Genome chips, sequence data, and other -omics data (transcriptome, methylome, metabolome) are on highly different technological and biological scales, both within and across studies. These other -omics data will generally be closer to the phenotype than genomic data. Many current research developments are driven by the integration of high-dimensional genotyping or sequencing data with information derived from other data on the same samples, or from sources external to the study samples to be analyzed. We will investigate genes, biological pathways, and even whole genomic regions. In particular, we will consider networks and interactions, also across other -omics scales. As preparatory work, SNPs have to be assigned to genes, and genes have to be assigned to the considered pathways. Then all SNPs assigned to a gene, or ultimately to a network, will be analyzed together; a toy sketch of this aggregation step follows below. This greatly reduces the high dimensionality of GWAS and enhances their power by using biological network information. The goal is to integrate gene- and pathway-level information with previous knowledge, or simultaneously across scales, via kernels, GAMLSS, or other adjustable methods. On the transcriptome and methylome scales, expression or methylation quantitative trait data (eQT, meQT), measuring the abundance of transcripts (for genes) or methylation (at specific genome sites), are usually correlated with disease. They are repeatedly associated with SNPs, yielding so-called eQTL or meQTL for a given locus. Combining such data directly with the SNP data may improve the power to detect causally relevant loci influencing, e.g., the development of cancer. The most likely application for the statistical methods developed in this project is lung cancer. In this field, strong collaboration already exists with the "Transdisciplinary Research in Cancer of the Lung" (TRICL) Consortium and the International Lung Cancer Consortium (ILCCO). Data are available upon request across several GWAS worldwide. Thus, a particular focus of this project will be on considering networks and interactions using information on other -omics scales, also in the context of the meta-analysis of these studies.
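The aggregation step mentioned above can be pictured as a simple interval lookup. In the sketch below, SNPs are grouped by genomic position into gene intervals so that all SNPs of a gene could then be tested jointly. The positions, gene boundaries, and names are invented for demonstration; real pipelines use annotation databases and proper association statistics.

```cpp
#include <iostream>
#include <map>
#include <string>
#include <vector>

struct Gene { std::string name; long start, end; };  // 1-based, inclusive

int main() {
    // Invented mini-annotation for one chromosome.
    std::vector<Gene> genes = {
        {"GENE_A", 1000, 5000}, {"GENE_B", 7000, 9000}, {"GENE_C", 12000, 20000}};
    // Invented SNP positions from a genotyping chip.
    std::vector<long> snpPositions = {1200, 4800, 7500, 8100, 15000, 25000};

    // Assign each SNP to the gene whose interval contains it.
    std::map<std::string, std::vector<long>> geneToSnps;
    for (long pos : snpPositions)
        for (const Gene& g : genes)
            if (pos >= g.start && pos <= g.end) geneToSnps[g.name].push_back(pos);

    // All SNPs mapped to a gene would now be analyzed together, reducing
    // dimensionality from per-SNP tests to per-gene (or per-pathway) tests.
    for (const auto& [gene, snps] : geneToSnps)
        std::cout << gene << ": " << snps.size() << " SNP(s)\n";
    return 0;
}
```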
https://www.uni-goettingen.de/en/421243.html
Author: Poortinga, Ype H.; Klieme, Eckhard. Title: The history and current status of testing across cultures and countries. Source: In: Leong, Frederick T. L.; Bartram, Dave; Cheung, Fanny M.; Geisinger, Kurt F.; Iliescu, Dragos (eds.): The ITC international handbook of testing and assessment. New York, NY: Oxford University Press (2016), 14-28. Language: English. Document type: contribution in an edited volume (no special category). Keywords: educational research, empirical research, history, international comparison, conception, cultural difference, achievement measurement, achievement testing, psychometrics, quality, student achievement, test construction, test procedures, comparative education research. Abstract (original): There are a few hundred countries and many more culturally distinct groups identified by cultural anthropologists (Human Relations Area Files; Murdock et al., 2008). In many of these groups, psychometric instruments have been applied for at least half a century. There are hundreds of psychological tests and scales, of which dozens are widely used. The permutation of peoples, tests and historical time defines the global success of the testing enterprise. It also defines the scope of this chapter, making clear that our treatment of the topic can only be scant. We will focus on a few general themes, leaving aside, for the most part, specific countries or cultures and specific tests. In the first section we look at the early use of tests cross-culturally and the growing awareness that test scores are not likely to have the same meaning across cultures. In the second section we address the resulting issues of inequivalence or incomparability and the methods of analysis that were devised to deal with these issues. The third section covers more recent history, in our view characterized by concern about inequivalence, but also by a pragmatic approach to the use of tests with test takers who differ in the language they speak and in cultural background. The chapter ends with a section in which we address some inherent difficulties in international test use and point to important achievements over the period of almost a century. (DIPF/Orig.) DIPF department: Educational Quality and Evaluation
https://www.dipf.de/en/research/publications/publications-data-base/detail?string=dld_set.html%3FFId%3D36618
Forest Plot Network / Forest Dynamics / Botanical Collections / Functional Traits / Soil Properties & Topography

A major effort of The Madidi Project has been to survey an extensive network of forest plots. The network consists of 50 permanent 1-ha plots and 442 temporary 0.1-ha plots, and covers an extensive elevational gradient ranging from 250 to 4,350 m. In temporary plots, all woody plants with a diameter at breast height (DBH) equal to or greater than 2.5 cm have been measured and identified; in permanent plots, the DBH cut-off is 10 cm. Additionally, trees in permanent plots have been mapped. These data provide a detailed knowledge of the distribution of tree species at various spatial scales in the Madidi region. This dataset is unparalleled in the tropics in terms of its elevational range, spatial extent, replication, and exceptionally high taxonomic resolution. [Photo: Daniel Alanes re-measuring a tree in one of the Madidi Project's forest plots.] In 2011, we started to re-sample all permanent plots 7-8 years after their establishment. Ten plots have already been re-surveyed, with another 8 expected for 2014. Preliminary data show significant rates of mortality and recruitment (~12%), providing enough statistical power to detect dynamics in our data. [Photo: Leaf variation in Pourouma guianensis from the Project's Tintaya plot.] During plot re-surveys, the Madidi Project is measuring plant traits known to be important for the distribution and co-existence of species along environmental gradients, at both large spatial scales (i.e., across elevations) and within local communities. For 5 individuals of each species in each plot, we are collecting replicated measures of leaf area, leaf size, leaf nitrogen content, leaf number, wood density, vessel diameter, vessel density, and growth rate. At the population level, we are measuring mortality and recruitment rates. Finally, we are using field and herbarium data to obtain information on seed size and dispersal mode. We have already collected trait information in the field for 1,640 individuals and 385 species. [Figure: Principal component analysis and elevational variation in soil resources among temporary plots.] The Madidi Project has collected information on the variation of various soil properties among plots along the elevational gradient. These soil variables reflect various dimensions of variation in texture and resources. Soils show considerable variation across and within elevations in Madidi. We plan to expand these data with complementary information on soil variation at smaller spatial scales. Within each permanent plot, we will create high-resolution (~10 m) maps of soil resources using CTFS-SIGEO soil protocols. We plan to take 20 soil samples and measure Al, Fe, K, Mg, Mn, Na, total N (ammonium + nitrate), P, base saturation, electrical conductivity, and pH. We have already collected information about topographic variation within each permanent plot. [Photo: Tatiana (left) and Leslie (right) working with dried herbarium specimens.] In addition to the surveys of forest plots, the Madidi Project has made an extensive number of additional collections to document the woody and non-woody flora of the Madidi region.
https://www.missouribotanicalgarden.org/plant-science/plant-science/south-america/the-madidi-project/data-madidi.aspx
Special Issue: Large-scale behavioural models of land use change Guest editors: Calum Brown, Karlsruhe Institute of Technology; Tatiana Filatova, University of Twente; Birgit Müller, Helmholtz Centre for Environmental Research – UFZ; Derek Robinson, University of Waterloo SESMO is launching a call for a special issue on large-scale behavioural models of land use change. We are calling for contributions of scholars intending to present their latest research results on this topic. Submissions can be sent at any time, up until the end of December 2021. Outline: Human activity is fundamentally reshaping the dynamics of the Earth System, with consequences that pose existential challenges to societies and ecosystems. Efforts to address these challenges of the Anthropocene era increasingly rely on computational models that simulate the cross-scale interactions of social, economic and environmental processes. For instance, land use and land use change, from field to global scales, result from decisions taken by individuals and shaped by social institutions, which rely on natural systems dynamics at various scales. However, today's analyses of future changes in the Earth System provide scant detail about the basic processes underlying these changes. Human agency is reduced to economic determinism and scenario-based assumptions in selecting between land use options, while ecosystem dynamics are approximated at highly aggregate levels that obscure crucial interactions. These shortcomings seriously undermine the search for realistic, robust strategies to mitigate or adapt to global environmental change. A number of different approaches have been proposed for better understanding and modelling of cross-scale dynamics in coupled social-ecological systems. In particular, a clear need has been identified by the research community for a new generation of large-scale (continental to global) land use models that are based on human behavior, agency and behaviorally-rich representation of decision-making processes. Such models could be linked with large-scale biophysical models as well as mechanistic ecological models, but must first overcome the difficulties of identifying and simulating key cross-scale socio-ecological processes. Particularly challenging is the question of how to upscale locally-based models of human decision-making or whether to try and create "models of everywhere". Only once these methodological challenges have been overcome will we be able to identify realistic pathways to sustainability that account for fundamental processes in human and natural systems in uncertain future conditions. For this Special Issue, we welcome contributions dedicated to the better understanding and modelling of temporal or spatial scales in land use dynamics. These contributions can present theoretical or empirical analyses, methodological contributions, or relevant model developments, and will together build towards a robust agenda for future research in this field.
Articles in the Special Issue could focus on: - Case study-based empirical research on land use dynamics, explicitly tackling different social scales; - Methodological contributions on the investigation and modelling of cross-scale dynamics (up-scaling and down-scaling methods); - Modelling of land use dynamics across scales and at large (continental-global) scales accounting for human agency; - New methods to link models covering different scales and human or natural systems; - Approaches to integrate behaviourally rich representation of human agency in large-scale models; - Representation of an interplay between individual decisions and social institutions (formal or informal) in land use change models; - Up-scaling of heterogeneity of individual decision strategies and local institutional contexts from case studies to larger geographical scales. The Special Issue is supported by the joint GLP/AIMES Working Group on large scale behavioural models of land use change (https://glp.earth/how-we-work/working-groups/large-scale-behavioural-models-land-use-change) and the Human Dimensions Focus Research Group of the CSDMS (https://csdms.colorado.edu/wiki/Anthropocene_Focus_Research_Group) Timeline: The deadline for the submission of papers has been extended to the end of December 2021. Note that any papers submitted before the deadline will be processed immediately.
https://sesmo.org/announcement/view/21
The absolute value is the positive magnitude of a particular number or variable and is indicated by two vertical lines: \(\left|-5\right| = 5\). In the case of a variable absolute value (\(\left|a\right| = 5\)), the value of a can be either positive or negative (a = -5 or a = 5). The greatest common factor (GCF) is the greatest factor that divides two integers. The least common multiple (LCM) is the smallest positive integer that is a multiple of two or more integers. For example, GCF(12, 18) = 6 and LCM(12, 18) = 36.
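To make these definitions concrete, here is a brief sketch using the C++ standard library: std::gcd and std::lcm (from <numeric>, C++17) compute the GCF and LCM directly, and the last line checks the handy identity GCF(a, b) × LCM(a, b) = a × b.

```cpp
#include <cstdlib>   // std::abs
#include <iostream>
#include <numeric>   // std::gcd, std::lcm (C++17)

int main() {
    int a = 12, b = 18;
    std::cout << "|-5| = " << std::abs(-5) << '\n';           // absolute value: 5
    std::cout << "GCF(12, 18) = " << std::gcd(a, b) << '\n';  // 6
    std::cout << "LCM(12, 18) = " << std::lcm(a, b) << '\n';  // 36
    // The product of GCF and LCM equals the product of the two numbers.
    std::cout << (std::gcd(a, b) * std::lcm(a, b) == a * b) << '\n';  // 1 (true)
    return 0;
}
```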
https://www.asvabtestbank.com/arithmetic-reasoning/t/74/p/practice-test/837591/5
In this example, you will learn about a C++ program to find the LCM (Lowest Common Multiple) of two numbers using two different methods. You will also learn to find the LCM using the GCD. The LCM (Lowest Common Multiple) of two numbers is the smallest positive integer that is perfectly divisible by both given numbers. The maximum of the two numbers is computed using the conditional operator and stored in maxValue. maxValue is then repeatedly checked to see whether it is perfectly divisible by both numbers. If maxValue is perfectly divisible, it is the LCM; otherwise maxValue is increased by 1. Let's implement this in a C++ program.
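The tutorial's own code listing did not survive extraction, so here is a sketch matching the description above: the first method searches upward from the larger number, and the second uses the relation LCM(a, b) = a / GCD(a, b) × b, with the GCD computed by Euclid's algorithm. It assumes positive integer input.

```cpp
#include <iostream>

int main() {
    int n1, n2;
    std::cout << "Enter two positive integers: ";
    std::cin >> n1 >> n2;

    // Method 1: start from the larger of the two numbers (conditional
    // operator) and increase maxValue until it is divisible by both.
    int maxValue = (n1 > n2) ? n1 : n2;
    while (maxValue % n1 != 0 || maxValue % n2 != 0)
        ++maxValue;
    std::cout << "LCM (method 1) = " << maxValue << '\n';

    // Method 2: LCM via GCD, using Euclid's algorithm.
    int a = n1, b = n2;
    while (b != 0) {  // after the loop, a holds GCD(n1, n2)
        int t = b;
        b = a % b;
        a = t;
    }
    std::cout << "LCM (method 2) = " << (n1 / a) * n2 << '\n';
    return 0;
}
```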
http://www.trytoprogram.com/cpp-examples/cplusplus-program-to-find-lcm/
# Riesel number

In mathematics, a Riesel number is an odd natural number k for which k × 2^n − 1 is composite for all natural numbers n (sequence A101036 in the OEIS). In other words, when k is a Riesel number, all members of the following set are composite:

{ k × 2^n − 1 : n ∈ ℕ }

If the form is instead k × 2^n + 1, then k is a Sierpinski number.

## Riesel Problem

In 1956, Hans Riesel showed that there are an infinite number of integers k such that k × 2^n − 1 is not prime for any integer n. He showed that the number 509203 has this property, as does 509203 plus any positive integer multiple of 11184810. The Riesel problem consists of determining the smallest Riesel number. Because no covering set has been found for any k less than 509203, it is conjectured to be the smallest Riesel number. To check if there are k < 509203, the Riesel Sieve project (analogous to Seventeen or Bust for Sierpinski numbers) started with 101 candidate k. As of April 2021, 57 of these k had been eliminated by Riesel Sieve, PrimeGrid, or outside persons; the remaining 44 values of k have yielded only composite numbers for all values of n tested so far. The most recent elimination was in April 2021, when 206039 × 2^13104952 − 1 was found to be prime by Ryan Propper. This number is 3,944,989 digits long. As of August 2022, PrimeGrid has searched the remaining candidates up to n = 12,800,000.

## Known Riesel numbers

The sequence of currently known (proven) Riesel numbers begins with: 509203, 762701, 777149, 790841, 992077, ...

## Covering set

A number can be shown to be a Riesel number by exhibiting a covering set: a set of prime numbers that will divide any member of the sequence, so called because it is said to "cover" that sequence. The only proven Riesel numbers below one million have covering sets as follows:

- 509203 × 2^n − 1 has covering set {3, 5, 7, 13, 17, 241}
- 762701 × 2^n − 1 has covering set {3, 5, 7, 13, 17, 241}
- 777149 × 2^n − 1 has covering set {3, 5, 7, 13, 19, 37, 73}
- 790841 × 2^n − 1 has covering set {3, 5, 7, 13, 19, 37, 73}
- 992077 × 2^n − 1 has covering set {3, 5, 7, 13, 17, 241}

## The smallest n for which k · 2^n − 1 is prime

Consider the sequence a(k) for k = 1, 2, ..., defined as follows: a(k) is the smallest n ≥ 0 such that k · 2^n − 1 is prime, or −1 if no such prime exists. Related sequences are OEIS: A050412 (not allowing n = 0); for odd k, see OEIS: A046069 or OEIS: A108129 (not allowing n = 0).

## Simultaneously Riesel and Sierpiński

A number may be simultaneously Riesel and Sierpiński. These are called Brier numbers. The five smallest known examples are 3316923598096294713661, 10439679896374780276373, 11615103277955704975673, 12607110588854501953787, 17855036657007596110949, ... (A076335).

## The dual Riesel problem

The dual Riesel numbers are defined as the odd natural numbers k such that |2^n − k| is composite for all natural numbers n. There is a conjecture that the set of these numbers is the same as the set of Riesel numbers. For example, |2^n − 509203| is composite for all natural numbers n, and 509203 is conjectured to be the smallest dual Riesel number.
The source article also tabulates the smallest n for which 2^n − k is prime (for odd k; this sequence requires 2^n > k), the odd k for which k − 2^n is composite for all 2^n < k (the de Polignac numbers), and the unknown values of k (for which 2^n > k).

## Riesel number base b

One can generalize the Riesel problem to an integer base b ≥ 2. A Riesel number base b is a positive integer k such that gcd(k − 1, b − 1) = 1. (If gcd(k − 1, b − 1) > 1, then gcd(k − 1, b − 1) is a trivial factor of k × b^n − 1; for the conjectures, a trivial factor is one shared by every n-value.) For every integer b ≥ 2, there are infinitely many Riesel numbers base b.

Example 1: All numbers congruent to 84687 mod 10124569 and not congruent to 1 mod 5 are Riesel numbers base 6, because of the covering set {7, 13, 31, 37, 97}. Besides, these k are not trivial, since gcd(k − 1, 6 − 1) = 1 for these k. (The Riesel base 6 conjecture is not proven; it has 3 remaining k, namely 1597, 9582 and 57492.)

Example 2: 6 is a Riesel number in all bases b congruent to 34 mod 35, because if b is congruent to 34 mod 35, then 6 × b^n − 1 is divisible by 5 for all even n and by 7 for all odd n. Besides, 6 is not a trivial k in these bases, since gcd(6 − 1, b − 1) = 1 for these bases b.

Example 3: All squares k congruent to 12 mod 13 and not congruent to 1 mod 11 are Riesel numbers base 12, since for all such k, k × 12^n − 1 has algebraic factors for all even n and is divisible by 13 for all odd n. Besides, these k are not trivial, since gcd(k − 1, 12 − 1) = 1 for these k. (The Riesel base 12 conjecture is proven.)

Example 4: If k is between a multiple of 5 and a multiple of 11, then k × 109^n − 1 is divisible by either 5 or 11 for all positive integers n. The first few such k are 21, 34, 76, 89, 131, 144, ... However, all these k < 144 are also trivial k (i.e. gcd(k − 1, 109 − 1) is not 1). Thus, the smallest Riesel number base 109 is 144. (The Riesel base 109 conjecture is not proven; it has one remaining k, namely 84.)

Example 5: If k is a square, then k × 49^n − 1 has algebraic factors for all positive integers n. The first few positive squares are 1, 4, 9, 16, 25, 36, ... However, all these k < 36 are also trivial k (i.e. gcd(k − 1, 49 − 1) is not 1). Thus, the smallest Riesel number base 49 is 36. (The Riesel base 49 conjecture is proven.)

We want to find and prove the smallest Riesel number base b for every integer b ≥ 2. It is a conjecture that if k is a Riesel number base b, then at least one of the following three conditions holds:

- All numbers of the form k × b^n − 1 have a factor in some covering set. (For example, for b = 22 and k = 4461, all numbers of the form k × b^n − 1 have a factor in the covering set {5, 23, 97}.)
- k × b^n − 1 has algebraic factors. (For example, for b = 9 and k = 4, k × b^n − 1 can be factored as (2 × 3^n − 1) × (2 × 3^n + 1).)
- For some n, numbers of the form k × b^n − 1 have a factor in some covering set, and for all other n, k × b^n − 1 has algebraic factors. (For example, for b = 19 and k = 144: if n is odd, then k × b^n − 1 is divisible by 5; if n is even, then k × b^n − 1 can be factored as (12 × 19^(n/2) − 1) × (12 × 19^(n/2) + 1).)

In the following list, we only consider those positive integers k such that gcd(k − 1, b − 1) = 1, and all integers n must be ≥ 1.
Note: k-values that are a multiple of b and where k − 1 is not prime are included in the conjectures (and shown among the remaining k, in red, if no primes are known for these k-values) but excluded from testing (thus, they are never the k of the "largest 5 primes found"), since such k-values will have the same prime as k / b. Conjectured smallest Riesel numbers base b are (starting with b = 2)
https://en.wikipedia.org/wiki/Riesel_number
AI’s white guy problem isn’t going away — from technologyreview.com by Karen Hao
A new report says current initiatives to fix the field’s diversity crisis are too narrow and shallow to be effective.

Excerpt:
The numbers tell the tale of the AI industry’s dire lack of diversity. Women account for only 18% of authors at leading AI conferences, 20% of AI professorships, and 15% and 10% of research staff at Facebook and Google, respectively. Racial diversity is even worse: black workers represent only 2.5% of Google’s entire workforce and 4% of Facebook’s and Microsoft’s. No data is available for transgender people and other gender minorities — but it’s unlikely the trend is being bucked there either.

This is deeply troubling when the influence of the industry has dramatically grown to affect everything from hiring and housing to criminal justice and the military. Along the way, the technology has automated the biases of its creators to alarming effect: devaluing women’s résumés, perpetuating employment and housing discrimination, and enshrining racist policing practices and prison convictions.

Along these lines, also see:

‘Disastrous’ lack of diversity in AI industry perpetuates bias, study finds — from theguardian.com by Kari Paul
Report says an overwhelmingly white and male field has reached ‘a moment of reckoning’ over discriminatory systems

Excerpt:
Lack of diversity in the artificial intelligence field has reached “a moment of reckoning”, according to new findings published by a New York University research center. A “diversity disaster” has contributed to flawed systems that perpetuate gender and racial biases, found the survey, published by the AI Now Institute, of more than 150 studies and reports. The AI field, which is overwhelmingly white and male, is at risk of replicating or perpetuating historical biases and power imbalances, the report said. Examples cited include image recognition services making offensive classifications of minorities, chatbots adopting hate speech, and Amazon technology failing to recognize users with darker skin colors. The biases of systems built by the AI industry can be largely attributed to the lack of diversity within the field itself, the report said.
http://danielschristian.com/learning-ecosystems/2019/04/20/ais-white-guy-problem-isnt-going-away-hao/
Machine learning holds the possibility of improving racial health inequalities by compensating for human bias and structural racism. However, unanticipated racial biases may enter during model design, training, or implementation and perpetuate or worsen racial inequalities if ignored. Pre-existing racial health inequalities could be codified into medical care by machine learning without clinicians being aware. To illustrate the importance of a commitment to antiracism at all stages of machine learning, we examine machine learning in predicting severe sepsis in Black children, focusing on the impacts of structural racism that may be perpetuated by machine learning and difficult to discover. To move toward antiracist machine learning, we recommend partnering with ethicists and experts in model development, enrolling representative samples for training, including socioeconomic inputs with proximate causal associations to racial inequalities, reporting outcomes by race, and committing to equitable models that narrow inequality gaps or at least have equal benefit.

- Original language: English (US)
- Pages (from-to): 129-132
- Number of pages: 4
- Journal: Journal of Pediatrics
- Volume: 247
- State: Published - Aug 2022

Bibliographical note: Publisher Copyright © 2022 Elsevier Inc.
https://experts.umn.edu/en/publications/the-risk-of-coding-racism-into-pediatric-sepsis-care-the-necessit
Google’s AI for mammograms doesn’t account for racial differences

A short article examining Google’s new AI for mammograms. It has hopes of replacing human radiologists with faster and more accurate diagnosis. However, there are worries over its accuracy in spotting cancer in diverse racial and ethnic populations, both due to white-focused data sets and inherent biases within the healthcare system.

AI reveals differences in appearance of cancer tissue between racial populations

Artificial Intelligence technologies are being used to understand potential differences in prostate cancer tissues between racial populations; cancer tissues manifest differently in black and white patients. This research is revealing the racial bias in AI systems used to diagnose prostate cancer. Algorithmic models are trained on data from majority white populations, which means that prostate […]

Limiting racial disparities and bias for wearable devices in health science research

Consumer wearables are devices used for tracking activity, sleep and other health-related outcomes, intended to help people reach their wellness goals. However, these wearables are less accurate for people with darker skin tones; this is especially worrying as these devices and their data are being utilised in health-related research, since skin tones affect algorithmic output.

Studies find bias in AI models that recommend treatments and diagnose diseases

Machine learning models for healthcare hold promise in improving medical treatments by improving predictions of care and mortality; however, their black-box nature and bias in training data sets leave them vulnerable to hindering the effectiveness of critical care instead. This article explores various research studies to examine bias within AI healthcare technologies.

Dissecting racial bias in an algorithm used to manage the health of populations

Research article examining racial bias in health algorithms. The research shows that a widely used algorithm, which affects millions of patients, exhibits a significant racial bias. There is evidence that Black patients, who are assigned the same level of risk as white patients by the algorithm, are actually much sicker. This racial discrimination is reducing […]

New Study Blames Algorithm For Racial Discrimination

This article examines a tool created by Optum, which was designed to identify high-risk patients with untreated chronic diseases, in order to redistribute medical resources to those who need them most. Research has shown this algorithm to be biased; it was less likely to admit black people than white people who were equally sick to […]

Artificial Intelligence in Healthcare: The Need for Ethics

The advent of AI promises to revolutionise the way we think about medicine and healthcare, but who do we hold accountable when automated procedures go awry? In this talk, Varoon focuses on the lack of affordable medicines within healthcare and the concerns over racial bias being brought into the healthcare system.

AI, Medicine, and Bias: Diversifying Your Dataset is Not Enough

Using machine learning in medicine as an example, Rachel Thomas examines instances of racial bias within the AI technologies driving modern-day medicines and treatments.
Rachel Thomas argues that whilst the diversity of your data set and the performance of your model across different demographic groups are important, this is only a narrow slice […]

Bias + Artificial Intelligence (in Medicine)

Talk by Rachel Thomas on the prevalence of bias within AI-based technology used in medicine. AI has the potential to remove human biases in the healthcare system; however, its integration within medicine could also amplify the existing biases.

Racial Bias in Science and Medicine: Who’s Included?
https://weandai.org/tag/toolkit-resource/page/4/
A new MIT-wide effort launched by the Institute for Data, Systems, and Society uses social science and computation to address systemic racism.

Fotini Christia is the Ford International Professor in the Social Sciences in the Department of Political Science, associate director of the Institute for Data, Systems, and Society (IDSS), and director of the Sociotechnical Systems Research Center (SSRC). Her research interests include issues of conflict and cooperation in the Muslim world, and she has conducted fieldwork in Afghanistan, Bosnia, Iran, the Palestinian Territories, Syria, and Yemen. She has co-organized the IDSS Research Initiative on Combatting Systemic Racism (ICSR), which works to bridge the social sciences, data science, and computation by bringing researchers from these disciplines together to address systemic racism across housing, health care, policing, education, employment, and other sectors of society.

Q: What is the IDSS/ICSR approach to systemic racism research?

A: The Research Initiative on Combatting Systemic Racism (ICSR) aims to seed and coordinate cross-disciplinary research to identify and overcome racially discriminatory processes and outcomes across a range of U.S. institutions and policy domains. Building off the extensive social science literature on systemic racism, the focus of this research initiative is to use big data to develop and harness computational tools that can help effect structural and normative change toward racial equity.

The initiative aims to create a visible presence at MIT for cutting-edge computational research with a racial equity lens across societal domains, one that will attract and train students and scholars.

The steering committee for this research initiative is composed of underrepresented minority faculty members from across MIT’s five schools and the MIT Schwarzman College of Computing. Members will serve as close advisors to the initiative as well as share the findings of our work beyond MIT’s campus. MIT Chancellor Melissa Nobles heads this committee.

Q: What role can data science play in helping to effect change toward racial equity?

A: Existing work has shown racial discrimination in the job market, in the criminal justice system, as well as in education, health care, and access to housing, among other places. It has also underlined how algorithms could further entrench such bias — be it in training data or in the people who build them. Data science tools can help not only to identify racially inequitable outcomes that result from implicit or explicit biases in governing institutional practices in the public and private sectors, and more recently from the use of AI and algorithmic methods in decision-making, but also to propose fixes for them. To that effect, this initiative will produce research that explores and collects the relevant big data across domains, while paying attention to the ways such data are collected, and focus on improving and developing data-driven computational tools to address racial disparities in structures and institutions that have reproduced racially discriminatory outcomes in American society.

The strong correlation between race, class, educational attainment, and various attitudes and behaviors in the American context can make it extremely difficult to rule out the influence of confounding factors.
Thus, a key motivation for our research initiative is to highlight the importance of causal analysis using computational methods, and to focus on understanding the opportunities of big data and algorithmic decision-making to address racial inequities and promote racial justice — beyond de-biasing algorithms. The intent is also to codify methodologies on equity-informed research practices and produce tools that are clear on the quantifiable expected social costs and benefits, as well as on the downstream effects on systemic racism more broadly.

Q: What are some ways that the ICSR might conduct or follow up on research seeking real-world impact or policy change?

A: This type of research has ethical and societal considerations at its core, especially as they pertain to historically disadvantaged groups in the U.S., and will be coordinated with and communicated to local stakeholders to drive relevant policy decisions. This initiative intends to establish connections to URM [underrepresented minority] researchers and students at underrepresented universities and to directly collaborate with them on these research efforts. To that effect, we are leveraging existing programs such as the MIT Summer Research Program (MSRP).

To ensure that our research targets the right problems, bringing a racial equity lens and an interest in effecting policy change, we will also connect with community organizations in minority neighborhoods, who often bear the brunt of the direct and indirect effects of systemic racism, as well as with local government offices that work to address inequity in service provision in these communities. Our intent is to directly engage IDSS students with these organizations to help develop and test algorithmic tools for racial equity.
https://oge.mit.edu/oge_news/3-questions-fotini-christia-on-racial-equity-and-data-science/
A new and unorthodox approach to dealing with discriminatory bias in Artificial Intelligence is needed. As is explored in detail, the current literature is a dichotomy, with studies originating from the contrasting fields of either philosophy and sociology or data science and programming. SwissCognitive Guest Blogger: Lorenzo Belenguer

It is suggested that there is a need instead for an integration of both academic approaches, one that is machine-centric rather than human-centric and applied with a deep understanding of societal and individual prejudices. This article presents a novel approach developed into a framework of action: a bias impact assessment to raise awareness of bias and why it occurs, a clear set of methodologies (shown in a table comparing them with the four stages of pharmaceutical trials), and a summary flowchart. Finally, this study concludes that a transnational independent body with enough power to guarantee the implementation of those solutions is needed.

Biases leading to discriminatory outcomes are gaining attention in the AI industry. The let's-drop-a-model-into-the-system-and-see-how-it-goes approach is no longer viable. AI has grown so ubiquitous in our daily lives, and can and does have such dramatic effects on society, that an effective framework of actions to mitigate bias should be compulsory. The most disadvantaged groups tend to be the most affected. If we aim for a more equal and fairer society, we need to stop looking the other way and standardise a set of methodologies.

As I explore in more detail in my paper published in the Springer Nature journal AI and Ethics, industries with a long history of applied ethics, such as the pharmaceutical industry, can greatly assist. The reader will grasp a better understanding by starting from the flowchart included in this article. The model is inspired by the four stages that a pharmaceutical company will conduct before launching a new medicine, and its regulatory follow-up. Finally, the whole process is monitored by an independent body, like the FDA in the US, before the product is allowed to reach the market. Harm is minimised and, as soon as it is detected, removed. It includes a compensatory scheme if negligence is proven, as we are witnessing with the overprescription of opioid drugs in the US.

Before we start, an awareness of individual and societal prejudices is paramount. I would add a good understanding of the protected groups' concept. Machines can be biased because we are, and we live in a society that is biased. This is one of the reasons why anthropology is gaining predominance in AI ethics – especially since historical data is one of the main sources of data to feed ML models.

The first phase would consist of testing the system in a closed environment while checking the quality of the data used to train the models, and how it has been collected, together with a first round of detecting bias by specialised algorithms such as FairTest or AIF360. In the second phase, the system is tested in a secure open environment, and a second round of detecting bias is conducted, again by specialised algorithms such as FairTest or AIF360. By the second stage, we are better positioned to unearth the system's possible flaws and discriminatory outcomes. With the first and second phases complete, we are ready to conduct a bias impact assessment in the third phase. Impact assessments are as old as humankind, going back to when a hunter would assess an environment to spot any risks and benefits.
They can be very helpful in clearly identifying the main stakeholders, their interests, their power to block or enable necessary changes, and the short- and long-term impacts. If we want to mitigate bias in an algorithmic model, the first step is to be aware of the biases and why they occur. The bias impact assessment does exactly that, which is why it is relevant. It is helpful to work from a list of essential values that facilitates a robust analysis to detect bias, as provided by the EU white paper on Trustworthy AI (2019, p. 14). They are respect for human autonomy, prevention of harm, fairness and explicability. Those values are further explained in my paper (link provided in the first paragraph).

Once the tests are passed, the AI system can be deployed, fully accessible to its users – either a specific group of professionals, such as an HR department, or the general public, for example for credit rating when applying for a mortgage. Finally, the fourth phase is implemented by closely monitoring the four values, gathering rapid-assessment feedback from users, and operating a compensation scheme where caused harm can be proven. At this stage, an independent body, ideally on a transnational level, is needed with enough power to guarantee the implementation of those safeguarding methodologies, and their enforcement where they are not implemented. The time for voluntary cooperation is over, and the time for action is now.

About the Author: Lorenzo Belenguer is a visual artist and an AI Ethics researcher. Belenguer holds an MA in Artificial Intelligence & Philosophy, and a BA (Hons) in Economics and Business Science.
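To make the "detecting bias by specialised algorithms" step in phases one and two concrete, here is a minimal illustration in plain Python of one metric such toolkits report, the statistical parity difference. This is deliberately not the FairTest or AIF360 API, and the data and group labels are hypothetical:

```python
# Sketch of a single fairness metric; toy data, hypothetical group labels.
def statistical_parity_difference(outcomes, groups, privileged):
    """P(favourable | unprivileged) - P(favourable | privileged)."""
    priv = [o for o, g in zip(outcomes, groups) if g == privileged]
    unpriv = [o for o, g in zip(outcomes, groups) if g != privileged]
    rate = lambda xs: sum(xs) / len(xs)
    return rate(unpriv) - rate(priv)

outcomes = [1, 0, 1, 1, 0, 1, 0, 0]             # 1 = favourable decision
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(statistical_parity_difference(outcomes, groups, privileged="A"))  # -0.5
```

A value near zero suggests parity on this metric; a strongly negative value, as here, flags the kind of disparity a second-phase audit is meant to surface.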
https://swisscognitive.ch/2022/04/26/what-can-ai-learn-from-the-pharmaceutical-industry-to-solve-bias-five-solutions-that-might-help/
Implicit and explicit biases are among the many factors that contribute to disparities in health and health care.1 Explicit biases, attitudes and assumptions that we recognize as part of our personal belief systems, can be assessed directly through self-reports. Explicit, openly racist, sexist and homophobic attitudes often underlie discriminatory actions. Implicit biases, on the other hand, are attitudes and beliefs about race, ethnicity, age, ability, gender or other characteristics that operate outside of our awareness and can only be measured indirectly. Implicit bias surreptitiously influences judgment and can, without intention, contribute to discriminatory behavior.2 A person may hold explicit egalitarian beliefs while harboring implicit attitudes and stereotypes that contradict their conscious beliefs. Moreover, our individual biases operate within larger social, cultural and economic structures whose biased policies and practices perpetuate systemic racism, sexism and other forms of discrimination. In medicine, discriminatory practices and policies based on bias not only have a negative effect on patient care and the medical training environment, but also limit the diversity of the health workforce, lead to an inequitable distribution of funding for research and can hinder career advancement.

A review of studies involving doctors, nurses, and other healthcare professionals found that implicit racial bias among healthcare providers is associated with diagnostic uncertainty and, for black patients, with negative assessments of their clinical interactions, less patient-centeredness, poor communication with providers, insufficient pain treatment, views of black patients as less medically adherent than white patients, and other adverse effects.1 These biases are learned from cultural exposure and internalized over time: in one study, 48.7% of American medical students surveyed said they had been exposed to negative comments about black patients by attending physicians or residents, and these students demonstrated significantly greater implicit racial bias in Year 4 than they had in Year 1.3

A review of the literature on implicit bias reduction, which looked at the evidence for many approaches and strategies, found that methods such as exposure to counter-stereotypical examples, acknowledgment and understanding of others' perspectives, and appeals to egalitarian values have not led to a reduction in implicit biases.2 Indeed, no intervention aimed at reducing implicit bias has been shown to have lasting effects. Therefore, it makes sense for healthcare organizations to forego bias reduction interventions and instead focus on eliminating discriminatory behaviors and other harms caused by implicit bias.

Although pervasive, implicit biases are hidden and hard to recognize, especially in ourselves. It can be assumed that we all have implicit biases, but individual and organizational actions can combat the damage caused by these attitudes and beliefs. Awareness of biases is a step towards behavior change. There are a number of ways to increase our awareness of personal biases, including taking the Harvard Implicit Association Tests, paying close attention to our own faulty assumptions, and thinking critically about the biased behavior we engage in or that we suffer.
Gonzalez and her colleagues offer 12 tips for teaching the recognition and management of implicit bias; these include creating a safe environment, presenting the science of implicit bias and evidence of its influence on clinical care, using critical thinking exercises, and engaging learners in exercises and skill-building activities in which they have to accept their discomfort.4 Education about implicit biases and ways to manage their harms should be part of health system-wide efforts to standardize knowledge in this area and to recognize and manage biases.

Research conducted at the Center for Health Workforce Studies at the University of Washington (UW) School of Medicine (where I work) evaluated whether a short online course on implicit biases in the clinical and learning environment would increase bias awareness in a national sample of academic clinicians. The course was found to significantly increase bias awareness among clinicians, regardless of their personal or practice characteristics or the strength of their implicit racial and gender biases.5 Evaluation of the course's lasting effects on clinicians' awareness of bias and their reports of subsequent behavior change is ongoing.

Beyond awareness, examples of actions that clinicians can take immediately to manage the effects of implicit bias include acting as a positive, mindful role model in formal and informal settings; undergoing active-bystander training to learn how to manage or interrupt microaggressions and other harmful incidents; and training to eliminate negative patient descriptions and stigmatizing words in case notes and direct patient communications. Academic medical center faculty may develop educational materials with inclusive and diverse images and examples and may strive to use inclusive language in all written and oral communications.

At the organizational level, the cornerstone of institutional bias management initiatives should be a comprehensive and ongoing program of interactive Diversity, Equity and Inclusion (DEI) skills-building education that incorporates recognition and management of implicit biases for all employees and trainees throughout a health system. Organizations need to collect data to monitor equity. Organizations can also implement best practices to increase workforce diversity (https://diversity.nih.gov/); recognize commitment to antibias education and practices as necessary and meritorious criteria in their policy of professionalism; and create hiring, evaluation, and promotion policies that recognize and credit candidates for their DEI activities. Many US health agencies have codified these practices, but not all have.

Some healthcare organizations have developed bias reporting systems. For example, UW School of Medicine and UW Medicine have established an online tool for the target or observer of a biased incident to report concerns (https://depts.washington.edu/hcequity/bias-reporting-tool/). These incidents are then assessed by a trained incident response team who gather more information and escalate the issue to an existing system, such as the Human Resources department, or refer the incident for further investigation and appropriate follow-up.
Because transparency is key, UW Medicine publishes a quarterly report on the number of incidents of bias that have occurred; the groups (faculty, patient, caregiver, staff, student, trainee, visitor, or some combination) that were affected by the incidents; the groups that committed them; the locations of the incidents; and the themes or types of incidents reported. An initial assessment of the data collected by the reporting tool identified four priority areas for immediate institutional intervention: biases affecting pain management, responses to microaggressions and implicit biases, biased comments or actions by patients towards members of the medical team, and opportunities to make our institution more inclusive. These elements are now priorities in our bias management action plan.

Innovative research is underway on strategies to interrupt the effects of implicit biases in healthcare. Indiana University researchers are developing objective blood biomarkers of pain severity to open the door to accurate pain management (https://pubmed.ncbi.nlm.nih.gov/30755720/). These objective measures hold promise for reducing subjectivity and the intrusion of implicit bias in pain assessment. Harvard researchers have proposed methods to minimize unintended biases embedded in artificial intelligence algorithms that lead to health inequities (https://www.hsph.harvard.edu/ecpe/how-to-prevent-algorithmic-bias-in-health-care/). Researchers from UW (biomedical informatics and medical education) and the University of California, San Diego (computer science) are collaboratively developing technology to help address implicit biases in clinical care; the tool being developed will automatically detect non-verbal social cues that convey clinicians' implicit bias in real-time interactions with patients and provide accurate feedback to the clinician or clinician-in-training so that an individualized program to develop communication skills can be designed (https://www.unbiased.health/).

American health care organizations vary widely in the extent to which they have embraced the need to address the effects of implicit bias. The steps outlined here can help health care systems and clinicians begin or continue the process of reducing, and ultimately eliminating, the harm caused by implicit biases in health care.
https://edwardsbrandtiowarealty.com/addressing-implicit-biases-in-health-care/
I am an assistant professor at CMU's LTI department. My research focuses on endowing NLP systems with social intelligence and social commonsense, and understanding social inequality and bias in language. Before this, I was a Postdoc/Young Investigator at the Allen Institute for AI (AI2), working on project Mosaic. I received my PhD from the University of Washington, where I was advised by Noah Smith and Yejin Choi, and have interned at AI2 working on social commonsense reasoning, and at Microsoft Research working on deep learning models for understanding human cognition. [bio for talks]

July 2022 👨🏼🏫: I'll be attending NAACL and giving a talk about Annotators with Attitudes during session 5A: "Ethics, Bias, Fairness 1" between 14:15 – 15:45 PST Tuesday July 12

April 2022: Giving a keynote talk at the UserNLP: User-centered Natural Language Processing Workshop, collocated with the WebConf 2022, on my research! Video coming soon.

April 2022 👨🏼🏫: I gave a talk at UPenn's Computational Linguistics Lunch (CLunch) on Detecting and Rewriting Social Biases in Language.

April 2022 📄: Excited that we have two papers accepted to NAACL 2022 in ☔ Seattle 🏔: our preprint on annotator variation in toxicity labelling: Annotators with Attitudes: How Annotator Beliefs And Identities Bias Toxic Language Detection, and our new work on steering agents to do the "right thing" in text games with reinforcement learning: Aligning to Social Norms and Values in Interactive Narratives

February 2022 📄: Got two papers accepted to ACL 2022 in 🍀 Dublin 🍀: our paper on generating hate speech datasets with GPT-3: TOXIGEN: Controlling Language Models to Generate Implied and Adversarial Toxicity, and our paper on distilling reactions to headlines to combat misinformation: Misinfo Reaction Frames: Reasoning about Readers' Reactions to News Headlines

Language can perpetuate social biases and toxicity against oppressed or marginalized groups. I want to investigate new ways of representing and detecting such harmful content in text (e.g., Social Bias Frames) or in conversations (e.g., with ToxiChat). Additionally, I want to harness NLP systems to combat stereotypical or harmful statements in language, through controllable text generation (e.g., with DExperts) or controllable text debiasing (e.g., with PowerTransformer). In the future, I want to make this technology more context-aware and human-centric, e.g., by incorporating power differentials between speaker and listener, and studying human-in-the-loop methods for toxicity detection or text debiasing.

Through theory-of-mind, humans are trivially able to reason about other people's intents and reactions to everyday situations. I am interested in studying how AI systems can do this type of social commonsense reasoning. For example, this requires giving models knowledge of social commonsense (e.g., with Event2Mind or ATOMIC, and methods like CoMET) or social acceptability (Social Chemistry). Additionally, this requires creating benchmarks for measuring models' social commonsense abilities (e.g., with Social IQa, or Story Commonsense). In the future, I want to keep investigating this elusive goal of machine social commonsense. Additionally, I want to explore positive applications of this research, e.g., for therapeutic settings or for helping people with cognitive disabilities.

AI and NLP systems unfortunately encode social biases and stereotypes. I'm passionate about analyzing and diagnosing the potential negative societal impacts of these systems.
For example, I've uncovered severe racial bias in hate speech detection datasets and models, and subsequently analyzed whether robustness methods for NLP can mitigate them, as well as understanding the psychological attitudes that cause over- and under-detection of content as toxic. Additionally, I've scrutinized recent pretrained language models and their training data with respect to biases, toxicity, and fake news (e.g., measuring GPT-2 and GPT-3's neural toxic degeneration, and documenting the English C4 Webtext Crawl). In the future, I plan to keep diagnosing and mitigating the ethical, fairness, and representation issues in AI systems, especially from a human-centric perspective of end-users and other stakeholders.
http://maartensap.com/
Position Statement on Online Algorithmic Bias and Federal Anti-Hacking Law

Submitted by Baldeep Singh Gill and Ritansha Lakshmi, Research Interns, on Online Algorithmic Bias and Federal Anti-Hacking Law.

Introduction

British data scientist and mathematician Clive Humby coined the phrase "Data is the new oil" and affirmed that data, if left unrefined, cannot be used effectively. Humans may deplete the oil reservoirs one day, but that is not the case with data. We believe that data is an indispensable resource for the digital economy and the Fourth Industrial Revolution. To process data and get the desired results, algorithms are used, which result in smart automated decisions. The increased use of algorithms has raised profound concern that such automated choices will produce discriminatory results and algorithmic bias.

Computer Fraud and Abuse Act (CFAA)

Enacted in 1986, the Computer Fraud and Abuse Act (CFAA) is the United States' federal legislation for cybersecurity and cyber fraud. The CFAA has been amended several times and is now widely considered unwieldy. Its exceedingly broad and ambiguous provisions are deeply troubling: under the Act, a wide range of activities can be identified as a crime, leading to overcriminalisation. Internet activist Aaron Swartz's death illustrates the harsh impact of the federal law.

Sandvig v. Barr

In a recent development on the CFAA, the American Civil Liberties Union (ACLU), on behalf of computer science academicians, challenged the constitutionality of 18 U.S.C. §1030 of the CFAA as a restriction on the right to free speech (First Amendment of the United States Constitution). The District Court for the District of Columbia ruled that academic research into online algorithms, aimed at uncovering whether they result in racial, gender, age or other discrimination, doesn't breach America's anti-hacking law, the CFAA. The federal court further approved the proposed research plans and concluded that such research does not attract felony charges under the CFAA. ACLU Speech, Privacy, and Technology Project staff attorney Esha Bhandari aptly stated that "This decision helps ensure companies can be held accountable for civil rights violations in the digital era".

We believe investigative journalism is a public service and should not fear the threat of federal prosecution under the CFAA. Investigating online websites' algorithms would help uncover the discriminatory and rights-violating data practices of such websites. The future of AI requires independent research, and this ruling allows people to learn about new forms of discrimination, helps ensure that companies can be held accountable, and keeps civil rights protections effective in the digital era. Algorithms can overcome the harmful effects of cognitive biases and put employers in an unbiased position towards potential employees. Equivalent academic studies will provide constructive insights for developing non-biased algorithms that avoid racial and other discrimination.
https://www.isail.in/post/position-statement-on-online-algorithmic-bias-and-federal-anti-hacking-law
(Image source: PixLoger from Pixabay)

A lot has been made of the potential of AI algorithms to exhibit racially biased outcomes based on the data given to them. Now, a new report from New York University's AI Now Institute has also highlighted diversity issues among the engineers themselves who are creating AI.

The AI Now Institute, an interdisciplinary institute focused on researching the social implications of artificial intelligence, focuses on four key domains: rights and liberties, labor and automation, bias and inclusion, and safety and critical infrastructure. The report, "Discriminating Systems: Gender, Race, and Power in AI," highlights what its authors call a "diversity crisis" in the AI sector across gender and race. According to the AI Now Institute, the report "is the culmination of a year-long pilot study examining the scale of AI's current diversity crisis and possible paths forward," and "draws on a thorough review of existing literature and current research working on issues of gender, race, class, and artificial intelligence."

"The review was purposefully scoped to encompass a variety of disciplinary and methodological perspectives, incorporating literature from computer science, the social sciences, and humanities. It represents the first stage of a multi-year project examining the intersection of gender, race, and power in AI, and will be followed by further studies and research articles on related issues."

Among the report's key findings:

- Black and Latinx workers are substantially underrepresented in the tech workforce. Black workers are substantially underrepresented in the AI sector. For example, only 2.5% of Google's workforce is black, while Facebook and Microsoft are each at 4%. Latinx workers fare only slightly better, making up 3.6% of Google's workforce, 5% at Facebook, and 6% at Microsoft.
- Women are underrepresented as AI research staff. Women comprise only 15% of AI research staff at Facebook and 10% at Google. There is no public data on trans workers or other gender minorities as AI research staff.
- Women authors are underrepresented at AI conferences. Only 18% of authors at leading AI conferences are women.
- Computer science as a whole is experiencing a historic low point for diversity. The report found that women make up only 18% of computer science majors in the US as of 2015 (down from 37% in 1984). Women currently make up 24.4% of the computer science workforce, and their median salaries are 66% of those of their male counterparts. The numbers become even more alarming when race is taken into account. The proportion of bachelors degrees in engineering awarded to black women declined 11% from the year 2000 to 2015.

"The number of women and people of color decreased at the same time that the tech industry was establishing itself as a nexus of wealth and power," the report states. "This is even more significant when we recognize that these shocking diversity figures are not reflective of STEM as a whole: in fields outside of computer science and AI, racial and gender diversity has shown a marked improvement."

The Pipeline Isn't Enough

Based on its findings, the AI Now Institute is calling for the AI sector to drastically shift how it addresses diversity. However, it also cautions that fixing the school-to-industry pipeline and focusing on "women in tech" efforts will not be enough. "The overwhelming focus on 'women in tech' is too narrow and likely to privilege white women over others," according to the report.
"Despite many decades of 'pipeline studies' that assess the flow of diverse job candidates from school to industry, there has been no substantial progress in diversity in the AI industry," the report reads. "The focus on the pipeline has not addressed deeper issues with workplace cultures, power asymmetries, harassment, exclusionary hiring practices, unfair compensation, and tokenization that are causing people to leave or avoid working in the AI sector altogether."

On the technological side, the report also calls for the AI sector to reevaluate the use of AI for the "classification, detection, and prediction of race and gender."

"Systems that use physical appearance as a proxy for character or interior states are deeply suspect, including AI tools that claim to detect sexuality from headshots, predict 'criminality' based on facial features, or assess worker competence via 'micro-expressions,'" the report states. "Such systems are replicating patterns of racial and gender bias in ways that can deepen and justify historical inequality. The commercial deployment of these tools is cause for deep concern."

The Need for Transparency

For its part, the AI Now Institute makes several recommendations on addressing both diversity in the AI workforce and bias in the algorithms themselves. In terms of diversity, the report says transparency is key and recommends companies increase transparency with regard to salaries and compensation, harassment and discrimination reports, and hiring practices. It also calls for a change in hiring practices, including targeted recruitment to increase diversity and a commitment to increase "the number of people of color, women, and other underrepresented groups at senior leadership levels of AI companies across all departments."

Addressing bias in AI systems will be a different, but equally important, challenge. In addition to transparency around when, where, and how AI systems are deployed and why they are being used, the institute also calls for more rigorous testing across the entire lifecycle of AI systems to ensure there is continuous monitoring for evidence of bias or discrimination. Furthermore, there should be risk assessments done to evaluate whether certain systems should be designed at all.

"The field of research on bias and fairness needs to go beyond technical debiasing to include a wider social analysis of how AI is used in context. This necessitates including a wider range of disciplinary expertise," the report says.

The Feedback Loop

Evidence of both the technological and diversity issues in the AI sector has been widely reported in recent years. A 2013 study conducted by Harvard University professor Latanya Sweeney found that searching two popular search engines for names commonly associated with blacks (i.e. DeShawn, Darnell, and Jermaine) yielded Google AdSense ads related to arrests in the majority of cases. By contrast, white-associated names such as Geoffrey, Jill, and Emma yielded more neutral results.

A 2016 investigation by ProPublica found that an AI tool being used by American judges to evaluate the risk of an accused committing another crime in the future was "racially biased and inaccurate," and was almost twice as likely to label African-Americans as a higher risk than whites.
Among various studies, AI Now Institute's report also cites a 2019 study conducted by researchers at Northeastern University that found that Facebook's ad delivery service was delivering targeted ads to users based on racial and gender stereotypes. In job ads, for example, supermarket cashier positions were disproportionately shown to females, taxi driver positions were shown to blacks, and lumber industry jobs were shown disproportionately to white males.

Issues of race and gender inequality at some of the top tech companies known for creating AI technologies have also found their way into recent headlines. As of this writing, Microsoft is in the middle of a class action gender discrimination lawsuit by workers alleging that the company has failed to seriously address hundreds of harassment and discrimination complaints. Uber, which has made no secret of its autonomous car development efforts, is currently under federal investigation for gender discrimination. A 2017 audit of Google's pay practices by the US Department of Labor found a difference of up to six to seven standard deviations between pay for men and women in nearly every job category at the company. And a widely publicized 2018 blog post by Mark Luckie, a former manager at Facebook, accused the company of workplace discrimination against its black employees.

This is not to say that companies are not aware of these issues. At its 2019 I/O Developer Conference, Google announced that it was actively researching methods to eliminate potential biases in algorithms it employs for image recognition and other tasks.

According to the AI Now Institute, issues with workplace diversity and bias in AI systems commingle into what it calls a "discrimination feedback loop," in which AI continues to exhibit problematic biases and discrimination, in part due to a lack of input and consideration from engineers from underrepresented racial and gender groups.

"Discrimination and inequity in the workplace have significant material consequences, particularly for the underrepresented groups who are excluded from resources and opportunities," the report says. "...These patterns of discrimination and exclusion reverberate well beyond the workplace into the wider world. Industrial AI systems are increasingly playing a role in our social and political institutions, including in education, healthcare, hiring, and criminal justice. Therefore, we need to consider the relationship between the workplace diversity crisis and the problems with bias and discrimination in AI systems."

Chris Wiltz is a Senior Editor at Design News covering emerging technologies including AI, VR/AR, blockchain, and robotics.
https://www.designnews.com/electronics-test/theres-diversity-crisis-ai-industry/199574341060780
Investment Appraisal Techniques

The CIMA P2 paper introduces some basic investment appraisal techniques, which are later expanded on in the P3 Risk Management paper with some more advanced techniques. But first, let's look at the logic and calculations behind these more basic investment appraisal techniques.

NPV – Net Present Value

This calculation and methodology will become a staple in your P2 and P3 studies. NPV is used to evaluate long-term investment decision making and the financial viability of a proposed project. It considers the time value of money in an investment project, i.e. receiving $100k in two years' time won't be as valuable as receiving it today. Therefore, a discount rate is applied based on the company's required rate of return (also known as the cost of capital). As a result, each future cash flow is converted to its present value to aid the investment decision-making process.

The initial investment cost is offset by the future discounted cash flows to give a final result – the Net Present Value of the project. If the figure is negative, the incoming cash flows associated with the investment are not enough to cover the initial costs, and the project will not go ahead. If the figure is positive, the project will be accepted.

The NPV represents the final result in monetary terms, which is a great way to aid the decision-making process, rather than evaluating a project based on a percentage or ratio. What's more, and this is important, a project with a positive NPV will maximise shareholders' wealth, whereas a project evaluated on profit margins or other ratios carries no such guarantee.

Let's look at an example.

- Company XYZ have the opportunity to invest in a project that will cost $120,000
- The company will see cash returns of $30,000, $50,000, $35,000 and $30,000 in the following 4 years.
- The cost of capital for XYZ is 9%
- Calculate the NPV of the project and advise if it should go ahead.

On the face of the initial figures, you'd expect the project to go ahead, as it brings in $145,000 and costs only $120,000. But if we consider the time value of money and discount the inflows using the discount rate (the 9% cost of capital), the total inflows over the four years are worth only $117,870, giving a negative NPV and rejecting the project.

IRR – Internal Rate of Return

The IRR is the discount rate at which the NPV of a project breaks even (becomes 0). Why do we need to calculate the IRR? Well, it's useful for several reasons. For one, it tells you exactly at what point you break even – so if a project has an IRR of 11.76% and the company's cost of capital is 13%, then we know that the NPV will be negative and the project should be rejected; on the other side of the coin, if our cost of capital is 8%, the project will result in a positive NPV and should be accepted.

To calculate the IRR, we use a formula that requires the NPV at two different discount rates (costs of capital). The official formula, by linear interpolation, is:

IRR ≈ L + [NPV_L / (NPV_L − NPV_H)] × (H − L)

where L and H are the lower and higher discount rates, and NPV_L and NPV_H are the NPVs at those rates.

Let's use the original NPV example above to calculate the IRR of the project. We have already calculated the NPV using a cost of capital of 9%, which resulted in an NPV of −2,130, so let's also use a cost of capital of 7% so we are in a position to work out the IRR – remember, this is the point at which the NPV is zero, or breaks even.
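Before interpolating, here is a quick sanity check of both NPVs in code – a minimal sketch of my own, not from the CIMA materials. Exact discounting lands a few dollars away from the article's figures, which use three-decimal-place discount factors:

```python
# Minimal NPV sketch; cashflows[0] is the time-0 outflow.
def npv(rate, cashflows):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

project = [-120_000, 30_000, 50_000, 35_000, 30_000]
print(round(npv(0.09, project)))  # ~ -2114: negative at 9%, so reject
print(round(npv(0.07, project)))  # ~ +3167: positive at 7%
```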
A quick look at the NPVs in both scenarios shows the IRR is somewhere between 7% and 9% – 9% results in a negative NPV and 7% gives us a positive NPV – so let's find out exactly where. Using the IRR formula, we can plug in the numbers:

IRR ≈ 7% + [NPV at 7% / (NPV at 7% − NPV at 9%)] × (9% − 7%) = 8.19%

This gives us an IRR of 8.19%. If the IRR is higher than the cost of capital, the project should be accepted; if the IRR is lower than the cost of capital, it should be rejected on the basis that it will have a negative NPV – e.g. the 8.19% IRR vs the 9% cost of capital corresponds to the 2,130 NPV loss in the example above.

MIRR – Modified Internal Rate of Return

Now that we understand NPV and IRR, we have the foundation to look at the Modified Internal Rate of Return (MIRR). The problem with the IRR calculation is that it assumes the positive cash inflows are re-invested at the same rate as the IRR. The reality is that cash inflows will more than likely be re-invested at the company's cost of capital, so the MIRR was developed around this more prudent assumption. When the reinvestment rate (the cost of capital) is below the IRR, the MIRR will therefore be lower than the IRR – but the same decision logic applies. A reminder: if the MIRR is higher than the cost of capital, the project should be accepted; if the MIRR is lower than the cost of capital, the project should be rejected.

The formula for calculating the MIRR is:

MIRR = (terminal value of cash inflows ÷ present value of cash outflows)^(1/n) − 1

where n is the number of years of the project. Let's take a look at the steps involved to calculate the MIRR.

- Calculate the terminal value of the cash inflows
- Calculate the present value of the cash outflows
- Calculate the MIRR using the formula

The terminal value of the cash inflows is calculated by compounding each cash flow at the cost of capital over its remaining years, i.e. in year one, the 30k inflow will earn 7% interest for the next three years – 30,000 × 1.07 × 1.07 × 1.07 = 36,751. Compounding all four inflows in the same way and summing them gives a terminal value of 161,446.

The present value of the cash outflows is simple: it's the initial outflow, which was 120,000.

Now we can enter the numbers into the MIRR formula: we need the 4th root of (161,446 / 120,000), minus 1. The 4th root of 1.3453858 is approximately 1.0770, so the MIRR is about 0.0770, i.e. 7.70%. (Note this is below the 8.19% IRR, as expected when the inflows are re-invested at the lower 7% rate.)

As we can see, the cost of capital is 7% and the MIRR calculated is 7.70%, so we should accept the project on this basis.
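Continuing the sketch from the NPV section (same `project` list and `npv()` helper; again my own illustration), the interpolated IRR and the MIRR fall out directly:

```python
# IRR by linear interpolation between a low and a high discount rate.
low, high = 0.07, 0.09
npv_low, npv_high = npv(low, project), npv(high, project)
irr = low + npv_low / (npv_low - npv_high) * (high - low)
print(f"IRR ~ {irr:.2%}")   # ~ 8.20% (the article's rounded tables give 8.19%)

# MIRR: inflows compounded forward at the 7% cost of capital to year 4,
# then annualised against the 120,000 outflow.
inflows = [30_000, 50_000, 35_000, 30_000]
terminal = sum(cf * 1.07 ** (4 - t) for t, cf in enumerate(inflows, start=1))
mirr = (terminal / 120_000) ** (1 / 4) - 1
print(f"MIRR ~ {mirr:.2%}")  # ~ 7.70%, above the 7% cost of capital: accept
```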
https://thecimastudent.com/2018/10/05/investment-appraisal-techniques/
The discount rate is adjusted for risk based on the projected liquidity of the company, as well as the risk of default from other parties. For projects overseas, currency risk and geographical risk are items to consider.

Is the discount rate the same as the risk-free rate?

At its most basic level, the discount rate represents the rate (usually expressed as a percentage) used to determine the present value of a future cash flow. … In other words, the discount rate equals the risk-free rate + the required rate of return.

Does the discount rate include a risk premium?

A very common example of a risky investment is real estate. The risk-adjusted discount rate represents the periodic return required by investors for committing funds to the specific property. It is generally calculated as the sum of the risk-free rate and a risk premium.

What is a risk-adjusted discount rate?

A risk-adjusted discount rate is the rate obtained by combining an expected risk premium with the risk-free rate during the calculation of the present value of a risky investment. A risky investment is an investment, such as real estate or a business venture, that entails higher levels of risk.

What does a discount rate represent?

The discount rate is the interest rate used to determine the present value of future cash flows in a discounted cash flow (DCF) analysis. This helps determine if the future cash flows from a project or investment will be worth more than the capital outlay needed to fund the project or investment in the present.

What is a good discount rate?

Usually within 6-12%. For investors, the cost of capital is a discount rate used to value a business. Don't forget the margin of safety: a high discount rate is not a margin of safety.

What is a risk-free discount rate?

The risk-free rate represents the interest an investor would expect from an absolutely risk-free investment over a specified period of time. The real risk-free rate can be calculated by subtracting the current inflation rate from the yield of the Treasury bond matching your investment duration.

How do I calculate a discount rate?

There are two primary discount rate formulas – the weighted average cost of capital (WACC) and adjusted present value (APV). The WACC discount formula is: WACC = E/V × Ce + D/V × Cd × (1 − T), and the APV discount formula is: APV = NPV + PV of the impact of financing.

How do you choose a discount rate?

In other words, the discount rate should equal the level of return that similar stabilized investments are currently yielding. If we know that the cash-on-cash return for the next best investment (opportunity cost) is 8%, then we should use a discount rate of 8%.

What is the appropriate discount rate for NPV?

It's the rate of return that the investors expect, or the cost of borrowing money. If shareholders expect a 12% return, that is the discount rate the company will use to calculate NPV. If the firm pays 4% interest on its debt, then it may use that figure as the discount rate. Typically the CFO's office sets the rate.

Who sets the discount rate?

The discount rate is the interest rate on secured overnight borrowing by depository institutions, usually for reserve adjustment purposes. The rate is set by the Boards of Directors of each Federal Reserve Bank. Discount rate changes also are subject to review by the Board of Governors of the Federal Reserve System.

What is the difference between an interest rate and the discount rate?

An interest rate is an amount charged by a lender to a borrower for the use of assets.
The Discount Rate is the interest rate that the Federal Reserve Banks charge depository institutions and commercial banks on overnight loans.

What is today's discount rate?

Federal discount rate:

| | This week | Month ago |
| --- | --- | --- |
| Federal Discount Rate | 0.25 | 0.25 |

What does a higher discount rate mean?

In general, a higher discount rate means that there is a greater level of risk associated with an investment and its future cash flows. Discounting is the primary factor used in pricing a stream of tomorrow's cash flows.

How does the discount rate affect interest rates?

Setting a high discount rate tends to have the effect of raising other interest rates in the economy, since it represents the cost of borrowing money for most major commercial banks and other depository institutions. … When too few actors want to save money, banks entice them with higher interest rates.

Is a higher or lower discount rate better?

Relationship between discount rate and present value: when the discount rate is adjusted to reflect risk, the rate increases. Higher discount rates result in lower present values. This is because a higher discount rate indicates that money will grow more rapidly over time due to the higher rate of earning.
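The WACC formula quoted above translates directly into code. A minimal sketch with illustrative numbers (not from the article):

```python
# WACC = E/V * Ce + D/V * Cd * (1 - T), with V = E + D.
def wacc(E, D, Ce, Cd, T):
    V = E + D  # total financing = equity + debt
    return E / V * Ce + D / V * Cd * (1 - T)

# Hypothetical firm: 600k equity at 12%, 400k debt at 6%, 25% tax rate.
print(f"{wacc(600_000, 400_000, 0.12, 0.06, 0.25):.2%}")  # 9.00%
```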
https://simplyfrugalliving.com/sales/question-does-discount-rate-include-risk.html
Cost of early settlement

June 6, 2022 at 9:37 pm · #657596 · Ocean2k20 (Member):

Hi,

For past questions, when calculating the cost of an early settlement, we calculate the old receivables and the new receivables, then calculate the difference between the old finance cost and the new finance cost (including the reduced revenue as a cost of the discount).

I remember learning the formula (1 + x/(100 − x))^(12/n), where n is the number of months by which we receive the money earlier – does this give the percentage cost of the discount, and could/should it be used in the above calculation?

Thank you,

June 7, 2022 at 8:12 am · John Moffat (Keymaster):

If it is a simple discount (such as receivables currently taking 2 months to pay and getting a discount if they pay in 1 month), then we use the formula you quote.

If it is a more complicated situation (such as some taking 2 months to pay and some taking 3 months to pay), then the formula is not relevant and we do it the way you describe in your first paragraph.

I explain both situations in my free lectures on the management of receivables and payables.
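As an aside, the simple-discount formula discussed in this thread is easy to sanity-check in code. A minimal sketch (my own, with the conventional "− 1" appended so the result reads as an annual percentage cost):

```python
# Annualised cost of an early-settlement discount of d% for being paid
# `months` months earlier: (1 + d/(100 - d))**(12/months) - 1.
def early_settlement_cost(d, months):
    return (1 + d / (100 - d)) ** (12 / months) - 1

# e.g. a 2% discount for payment 1 month early:
print(f"{early_settlement_cost(2, 1):.1%}")  # ~ 27.4% per year
```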
https://opentuition.com/topic/cost-of-early-settlement/
In other words, it is the expected compound annual rate of return that will be earned on a project or investment. In theory, any project with an IRR greater than its cost of capital should be profitable. In planning investment projects, firms will often establish a required rate of return to determine the minimum acceptable return percentage that the investment in question must earn to be worthwhile. Internal Rates of Return shall be determined from the date the “cash in” is deemed to occur to the date the “cash out” is deemed to occur. Basically, the internal rate of return (IRR) is the discount rate at which the net present value of a project's future cash flows reaches zero. The IRR assumes that the cash flows are reinvested at the internal rate of return when they are received. During calculation, IRR uses the order in which the values appear to determine the order of the cash flow. The IRR measure is an annual percentage return that is calculated by comparing the benefits from a spend decision against the costs and expressing this result as an annual compounded % rate. A similar everyday example is the annual rate of return on a loan, or a quoted interest rate on a credit card – but in reality, the calculation of IRR is not at all straightforward.

Limitations of the IRR
IRR is often used to compare similar investments, or in capital planning and budgeting. Furthermore, IRR can be compared to prevailing return rates in the securities market. If a firm cannot find any project with an IRR surpassing the returns available in the financial market, it may simply decide to invest its retained earnings in the market. Although IRR is a metric that appeals to many, it should be used alongside NPV for a clear picture of the value represented by a prospective project a firm might undertake. IRR must be calculated through a process of iteration in which the discount rate is changed in the formula until the PV formula (left-hand side) equals 0. Spreadsheet software with solvers and financial calculators prove helpful for this analysis.

5.13 Internal Rate of Return
Halving the time to get the benefits should reasonably be expected to double the answer, but because IRR is a compounded rate it is actually even greater. A 20% yield achieved within the first 6 months must be converted into a 12-month expression, because IRR is always stated as an annual percentage equivalent return. The expectation for an annual valuation is therefore that another 20% will be achieved in the second 6 months. Not only that, but there will be an extra 20% of the first 20%, i.e. 4%, making the total 44%. Internal Rate of Return, when used for business case decisions, is a measure of the annual % rate of profitability on a project or solution compared to the original amount spent or invested. It is particularly useful in comparing the relative merits of different projects which all have different IRR values. As a rule, the higher the IRR %, the greater the return a project delivers to the investor or business owner.
- This means that ROI reflects what has already occurred, while IRR is a projection of what will happen.
- But with IRR you calculate the actual return provided by the project's cash flows, then compare that rate of return with your company's hurdle rate.
- Many companies use their weighted average cost of capital as their base hurdle rate.
- CAGR uses only an initial investment and a final payout in order to provide an estimate for the annual rate of return. While IRR is difficult to calculate manually, the CAGR is easily calculated by hand.

When viewed from the lens of net present value, the discount rate represents either the cost of capital to make an investment or the expected rate of return on an investment. Therefore, IRR is a discount rate and represents the rate required to drive the net present value of future cash flows to zero. Suppose you as the investor are weighing two different potential investments, both of which may positively help your business. You are hoping that, over a three-year period, a new piece of machinery will allow your workers to produce widgets more efficiently, but you are not sure which new machine will be best. One machine costs $500,000 for a three-year lease, and another machine costs $400,000, also for a three-year lease. You want to calculate the IRR for each project to help determine which machine to purchase.

Internal Rate of Return (IRR)
In Shark, every single benefit calculator creates its own individual monthly cash flow. As we have highlighted above, this timing information is important when gaining customer sponsorship because a delayed benefit is easier to support than one which is claimed to start immediately. A CFO will need to know the IRR answer and will be able to quickly identify the correct IRR calculation, so it is important to calculate it correctly.
- The internal rate of return indicates a project or investment's efficiency.
- For example, an annual IRR will require cash flows that occur annually and a monthly IRR will require cash flows that occur monthly.
- Finally, by Descartes' rule of signs, the number of internal rates of return can never be more than the number of changes in sign of cash flow.
- If you need help determining whether a new investment is a smart move or not, consider contacting a financial analyst or advisor.
- There are examples where the replicating fixed rate account encounters negative balances despite the fact that the actual investment did not.
- The only way to ensure client sponsorship for the business case is to value costs and benefits in the months they occur – this is handled transparently and seamlessly in Shark.

(That is, of course, assuming this is the sole basis for the decision.) One of the disadvantages of using IRR is that all cash flows are assumed to be reinvested at the same discount rate, although in the real world these rates will fluctuate, particularly with longer-term projects.

How to Calculate IRR
The actual internal rate of return obtained may vary from the theoretical value calculated. Nonetheless, the highest value will generally indicate the best growth rate among the alternatives. When comparing investments, making an implicit assumption that cash flows are reinvested at the same IRR would lead to false conclusions. The initial investment is listed as a negative in order for Excel to know to calculate it against the expected annual return. To do so, one must first determine the value of the initial investment in a project and the yield of that investment over time. It's important when investing to understand how your money works for you. Individual investors usually think about their minimum acceptable rate of return, or discount rate, in terms of their opportunity cost of capital. The opportunity cost of capital is what an investor could earn in the marketplace on an investment of similar size and risk. Corporate investors usually calculate a minimum acceptable rate of return based on the weighted average cost of capital. CAGR stands for compound annual growth rate, and it is a measure of the return on an investment over a given period of time. The difference between IRR and CAGR is that IRR is suitable for more complicated investments and projects, such as those having differing cash outflows and cash inflows. Once this hurdle is surpassed, the project with the highest IRR would be the wiser investment, all other things being equal. Internal Rate of Return means the discount rate that makes the net present value of all cash payments equal zero. For purposes of determining the Internal Rate of Return, any dividends, distributions or payments other than in cash shall be deemed to have no value. For this purpose, Capital Contributions and Distributions shall be assumed to have occurred as of the end of the month in which such Capital Contribution or Distribution takes place. For purposes of determining the Internal Rates of Return hereunder, calculations shall be denominated and calculated in US Dollars.

IRR Equation
Before we dig in and break down the IRR formula, it's important to know that no one expects you to calculate these formulas by hand. But it clarifies what the formula represents and the steps it takes to get there. Based on your answer to Note 8.17 "Review Problem 8.2", use trial and error to approximate the IRR for this investment proposal. We selected cell H28 to calculate the IRR, so this is where the IRR function is input. Notice that the resulting IRR of 10.72 percent shown in cell H28 is very close to our approximation of slightly less than 11 percent shown in Figure 8.5 "Finding the IRR for Jackson's Quality Copies". Let's use the Jackson's Quality Copies example presented at the beginning of the chapter to illustrate how Excel can be used to calculate the NPV and IRR.
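As a quick check on the half-year compounding arithmetic quoted earlier (20% over six months annualizing to 44%), here is a one-function sketch; the numbers are simply the article's own example:

```python
# Annualizing a sub-year return by compounding:
# annual equivalent = (1 + r)**periods_per_year - 1
def annualize(period_return, periods_per_year):
    return (1 + period_return) ** periods_per_year - 1

print(f"{annualize(0.20, 2):.0%}")  # 1.2**2 - 1 = 44%
```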
Internal Rate of Return (IRR) Defined with Formula and Examples
Examples of this type of project are strip mines and nuclear power plants, where there is usually a large cash outflow at the end of the project. One possible investment objective is to maximize the total net present value of projects. Let's say a company's hurdle rate is 12%, and one-year project A has an IRR of 25%, whereas five-year project B has an IRR of 15%. If the decision is solely based on IRR, this will lead to unwisely choosing project A over B. Using IRR exclusively can lead you to make poor investment decisions, especially if comparing two projects with different durations. A company is deciding whether to purchase new equipment that costs $500,000. Management estimates the life of the new asset to be four years and expects it to generate an additional $160,000 of annual profits. Consequently, MIRR can be defined as the discount rate that causes the present value of a project's terminal value to equal the present value of cost. The MIRR concept is fairly complicated and will only make more sense with examples. This is one of the main reasons IRR is used more frequently in the real world: MIRR is not well understood by many managers. Internal rate of return is the minimum discount rate that management uses to identify what capital investments or future projects will yield an acceptable return and be worth pursuing.
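To make the trial-and-error iteration described above concrete, here is a minimal sketch (not the article's spreadsheet): it bisects on the discount rate until NPV reaches zero, reusing the hypothetical $500,000 outlay and $160,000-a-year profits from the equipment example just quoted:

```python
# IRR by bisection: search for the discount rate where NPV crosses zero.
def npv(rate, cashflows):
    # cashflows[t] falls at the end of year t; t = 0 is the initial outlay.
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-9):
    while hi - lo > tol:
        mid = (lo + hi) / 2
        # Keep the sign change (and hence the root) bracketed.
        if npv(lo, cashflows) * npv(mid, cashflows) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

flows = [-500_000, 160_000, 160_000, 160_000, 160_000]
print(f"IRR = {irr(flows):.2%}")  # about 10.7%
```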
http://www.sindicatologistica.com/index.php/2022/05/17/how-to-calculate-internal-rate-of-return-irr/
LJH Bhd used the cost method to record its intangible assets and estimated the useful life of the development costs to be five years beginning year 2017. Due to the low demand in the market for the new products, LJH Bhd estimated that the recoverable amount for the development costs was only RM450,000 at the end of the year 2018. Required: a) Calculate the amount of development costs that can be capitalized as at 31 December 2017. b) Prepare the relevant journal entries in year 2017 and 2018. c) Briefly explain the appropriate accounting treatment if there is no foreseeable limit to the period over which the development cost is expected to generate net cash flows for LJH Bhd according to MFRS 138 Intangible Assets.
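For orientation only, here is a hedged sketch of the usual MFRS 138 mechanics. The question's cost schedule is not reproduced above, so the RM800,000 capitalized figure below is purely hypothetical; only the five-year life and the RM450,000 recoverable amount come from the question:

```python
# Hypothetical MFRS 138 sketch: straight-line amortization of capitalized
# development costs, then an impairment test against the recoverable amount.
capitalized = 800_000                     # hypothetical 2017 capitalization
life_years = 5                            # per the question, from 2017
amortization = capitalized / life_years   # RM160,000 charged in 2017 and in 2018
carrying_end_2018 = capitalized - 2 * amortization
recoverable = 450_000                     # per the question
# Dr Impairment loss / Cr Accumulated impairment, if carrying > recoverable:
impairment = max(carrying_end_2018 - recoverable, 0)
print(amortization, carrying_end_2018, impairment)  # 160000.0 480000.0 30000.0
```

For part (c), note that under MFRS 138 an intangible asset assessed as having an indefinite useful life is not amortized; instead it is tested for impairment at least annually.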
https://www.bartleby.com/questions-and-answers/12.-during-the-year-2017-ljh-bhd-incurred-and-paid-the-following-costs-rm-salaries-and-wages-paid-to/6516ccc6-948b-4044-8635-e4e78a55b36e
Here we will use the increase-by-percentage formula to get the new Amount for the data set. Use the formula in cell D3. As you can see in the above snapshot, the first New Amount after the increment is 7700. Copy the formula to the remaining cells to get the New Amount for the rest of the data. The New Amount can be calculated using the above procedure. Hope you understood how to calculate a new or updated amount, increased by a percentage, in Excel. Explore more articles on mathematical formulation in Excel here. Mention your queries in the comment box below. We will help you with it.
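The worksheet formula itself did not survive the page capture, so here is a reconstruction of the calculation the article describes; the cell references are assumptions, but the result matches the article's first New Amount of 7700:

```python
# Increase by percentage: new_amount = amount * (1 + percent / 100).
# In Excel this would be something like =C3*(1+D3), with D3 holding the percent.
amount, percent = 7000, 10   # hypothetical first row: 7000 increased by 10%
new_amount = amount * (1 + percent / 100)
print(new_amount)            # 7700.0
```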
https://www.exceltip.com/mathematical-functions/increase-by-percentage.html
The study of exoplanets has matured considerably in the last 10 years. During this time, the majority of the over 4,000 exoplanets that are currently known to us were discovered. It was also during this time that the process has started to shift from the process of discovery to characterization. What's more, next-generation instruments will allow for studies that will reveal a great deal about the surfaces and atmospheres of exoplanets. This naturally raises the question: what would a sufficiently-advanced species see if they were studying our planet? Using multi-wavelength data of Earth, a team of Caltech scientists was able to construct a map of what Earth would look like to distant alien observers. Aside from addressing the itch of curiosity, this study could also help astronomers reconstruct the surface features of "Earth-like" exoplanets in the future. The study that describes the team's findings, titled "Earth as an Exoplanet: A Two-dimensional Alien Map", recently appeared in the journal Science Mag and is scheduled for publication in The Astrophysical Journal Letters. The study was led by Siteng Fan and included multiple researchers from the California Institute of Technology's Division of Geological and Planetary Sciences (GPS) and the NASA Jet Propulsion Laboratory. When looking for potentially habitable planets beyond our Solar System, scientists are forced to take the indirect approach. Given that most exoplanets cannot be observed directly to learn of their atmospheric composition or surface features (AKA Direct Imaging), scientists must be satisfied with indications that show how "Earth-like" a planet is. As Fan told Universe Today via email, this reflects the limitations that astronomers and exoplanet studies are currently forced to contend with: "Firstly, current exoplanet studies have not figured out what the least requirements are for habitability. There are some proposed criterions, but we are not sure if they are either sufficient or necessary. Secondly, even with these criterions, current observation techniques are not good enough to confirm the habitability, especially on Earth-like exoplanets due to the difficulty of detecting and constraining them." Given that Earth is the only planet we know of that is capable of supporting life, the team theorized that remote observations of Earth could act as a proxy for a habitable exoplanet as observed by a distant civilization. "Earth is the only planet we know that contains life," said Fan. "Studying what the Earth looks like to distant observers would give us the direction of how to find potential habitable exoplanets." One of the most important elements of Earth's climate (and which is critical to all life on its surface) is the water cycle, which has three distinct phases. These include the presence of water vapor in the atmosphere, clouds of condensed water and ice particles, and the presence of bodies of water on the surface. Therefore, the presence of these could be considered potential indications of habitability and even indications of life (aka. biosignatures) that could be observed from a distance. Ergo, being able to identify surface features and clouds on exoplanets would be essential in order to place constraints on their habitability. To determine what Earth would look like to distant observers, the team compiled 9740 images of Earth that were taken by NASA's Deep Space Climate Observatory (DSCOVR) satellite. 
The images were taken every 68 to 110 minutes over a two-year period (2016 and 2017) and managed to capture light reflected from Earth's atmosphere at multiple wavelengths. Fan and his colleagues then combined the images to form a 10-point reflection spectrum plotted over time, which was then integrated over the Earth's disk. This effectively reproduced what Earth might look like to an observer many light-years away if they were to observe Earth over a two-year period. "We found that the second principal component of Earth's light curve is strongly correlated to the land fraction of the illuminated hemisphere (r^2=0.91)," Fan said. "Combining with the viewing geometry, reconstructing the map becomes a linear regression problem." After analyzing the resulting curves and comparing them with the original images, the research team discovered which parameters of the curves corresponded to land and cloud cover. They then picked out the parameters that most closely related to land area and adjusted them to the 24-hour rotation of the Earth, which gave them a contoured map (shown above) that represented what Earth's light curve would look like from light-years away. The black lines represent the surface feature parameter and correspond roughly to the coastlines of the major continents. These are further colored in green to provide a rough representation of Africa (center), Asia (top right), North and South America (left), and Antarctica (bottom). What lies in between represents the Earth's oceans, with the shallower sections denoted in red and the deeper ones in blue. These kinds of representations, when applied to the light curves of distant exoplanets, could allow astronomers to assess whether an exoplanet has the oceans, clouds, and icecaps – all necessary elements of an "Earth-like" (aka. habitable) exoplanet. As Fan concluded: "The analysis of light curves in this work has implications for determining geological features and climate systems on exoplanets. We found that the variation of the light curve of Earth is dominated by clouds and land/ocean, which are both crucial to the life on Earth. Therefore, Earth-like exoplanets which harbor this kind of features would be more likely to host life." In the near future, next-generation instruments like the James Webb Space Telescope (JWST) will allow for the most detailed exoplanet surveys to date. In addition, ground-based instruments that are coming online in the next decade – like the Extremely Large Telescope (ELT), the Thirty Meter Telescope (TMT), and the Giant Magellan Telescope (GMT) – are expected to enable direct imaging studies of smaller, rocky planets that orbit closer to their stars. Aided by studies that help to resolve surface features and atmospheric conditions, astronomers may be finally able to say with confidence which exoplanets are habitable and which ones aren't. With luck, the discovery of an Earth 2.0 (or several Earths for that matter) could be right around the corner! This article was originally published by Universe Today. Read the original article.
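As an illustration of the two-step analysis described above (this is not the authors' code, and the arrays below stand in for real data), the pipeline can be sketched as a principal-component decomposition followed by a linear least-squares map fit:

```python
import numpy as np

rng = np.random.default_rng(0)
curves = rng.random((9740, 10))      # 9740 epochs x 10 reflectance channels
curves -= curves.mean(axis=0)        # mean-center before PCA

# Principal components via SVD; the study ties the second component to the
# land fraction of the illuminated hemisphere (r^2 = 0.91).
_, _, vt = np.linalg.svd(curves, full_matrices=False)
pc2 = curves @ vt[1]                 # time series of the second component

# Map recovery as linear regression: pc2 ~ W @ pixels, where W encodes which
# surface pixels are illuminated and visible at each epoch (hypothetical here).
n_pixels = 180
W = rng.random((9740, n_pixels))
pixels, *_ = np.linalg.lstsq(W, pc2, rcond=None)
```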
https://www.sciencealert.com/if-aliens-examined-earth-as-an-exoplanet-here-s-what-they-d-see
Astrophysicists at the University of Copenhagen have made a discovery about distant galaxies by analysing star populations beyond the Milky Way. For as long as... Studying the Westerlund 1 star cluster could help solve space mysteries Researchers at the University of Alicante are examining Westerlund 1 – a star cluster in the inner regions of the Milky Way – to... UK and Australia collaboration: Enhancing UK space command capabilities Northern Space and Security Limited (NORSS) has announced an ARTISM contract that will create a minimum of 13 jobs and enhance the UK's space... The exoplanet missions set to revolutionise science ETH Zurich's Professor Sascha P Quanz explores the future exoplanet missions that will search for life beyond our Solar System. In June 2021, a senior... Scientific advancements made by observing two black holes colliding Observing two black holes colliding has provided astronomers with a novel tool to measure black holes in distant galaxies. How did researchers observe these two... SpaceX Crew-3 astronauts return safely to Earth NASA’s SpaceX Crew-3 astronauts have made a safe return to Earth aboard the Dragon Endurance spacecraft, completing the journey to the International Space Station. NASA’s... Groundbreaking progress made in analysing the type Ia supernovae Analysing the type Ia supernovae has led scientists from the RIKEN Cluster for Pioneering Research to discover how supernovae would evolve over thousands of... Life on Mars: Analysing extra-terrestrial samples in Europe Dr Rain Irshad, Autonomous Systems Lead at RAL Space, discusses what Europe’s first analysis facility for extra-terrestrial samples could tell us about Mars. Understanding more... Spacecraft utilised x-rays from pulsars to navigate in deep space Researchers from the University of Illinois Urbana-Champaign have developed a way that a spacecraft can use pulsar star signals to navigate in space. The remnants... Discovering exoplanets using Artificial Intelligence Astronomers have applied Artificial Intelligence to image recognition in order to predict the effect of interactions between planets, making it possible to discover exoplanets. The... Earth’s atmosphere could be a source of lunar water In a new study, scientists believe that another source of lunar water could involve the Earth’s magnetosphere. Hydrogen and oxygen ions escaping from Earth’s upper... Solar energy could be used to power a crewed mission to Mars A research team have analysed different ways of generating power, deciding that solar energy could be used to power a mission to Mars. It can... Classifying exoplanet atmospheres to open new field of study Researchers have identified and classified populations of similar exoplanet atmospheres in order to improve our knowledge of planetary formation. An international team of researchers examined... Astrophysical plasma study benefits from new soft X-ray transition energies The new standard for X-ray transition energies set for neon, carbon dioxide, and sulphur hexafluoride paves a way to achieving a high accuracy analysis... The number of UK space jobs are continuing to expand Despite global impacts from the COVID-19 pandemic, 3,000 new jobs have been created in the UK space sector in one year. According to new figures... Developing cutting-edge software for the Square Kilometre Array Observatory UK institutions have been granted £15m to develop the ‘brain’ of the Square Kilometre Array Observatory, which is the biggest radio telescope in the... 
Astronomers have observed the most distant galaxy candidate yet At 13.5 billion light-years from Earth, astronomers have discovered the most distant galaxy candidate yet. At a staggering distance of 13.5 billion light-years from Earth,... Baby universes could present as primordial black holes The Kavli Institute believe that primordial black holes, which formed in the early Universe, could be hidden ‘baby universes’. Closely analysing Jupiter’s origin A research team from University of Zurich (UZH) and the National Centre of Competence in Research (NCCR) has utilised computer modelling systems to analyse... Mars’ climate history to determine potential habitability A collaborative research team, led by Purdue University, has discovered that mounts of ice in craters provide a new insight into Mars’ climate history. Newly...
https://www.innovationnewsnetwork.com/tag/space-exploration/
Probing Alien Worlds: NASA's Pandora Mission Builds on UArizona Research Tools and methods developed at the University of Arizona will help scientists study the atmosphere of exoplanets as part of NASA's Pandora mission concept. In the quest for habitable planets beyond our own, NASA is studying a mission concept called Pandora, which could eventually help decode the atmospheric mysteries of distant worlds in our galaxy. One of four low-cost astrophysics missions selected for further concept development under NASA's new Pioneers program, Pandora would study approximately 20 stars and exoplanets – planets outside of our solar system – to provide precise measurements of exoplanetary atmospheres. One of the co-investigators on the Pandora mission is Daniel Apai, an associate professor of astronomy and planetary science who heads a major NASA-funded research program called "Alien Earths," dedicated to finding which nearby planets are likely to host habitable worlds. His group has developed powerful tools and methods to create some of the first maps of the atmospheres of exoplanets and brown dwarfs. "Equipped with the tools and methods we have developed, we will now use data from Pandora to build on advanced data analysis methods," said Apai, who leads Pandora's exoplanets science working group. "This will allow us to push the boundaries of high-precision atmospheric characterization of fascinating new worlds." The Pandora mission concentrates on studying the atmospheres of stars and their planets by surveying planets as they cross in front of – or transit – their host stars. To accomplish this, Pandora would take advantage of a proven technique called transit spectroscopy, which involves measuring the amount of starlight filtering through a planet's atmosphere, and splitting it into bands of color known as a spectrum. These colors encode information that helps scientists identify gases present in the planet's atmosphere, and can help determine if a planet is rocky with a thin atmosphere like Earth or if it has a thick gas envelope like Neptune. The Pandora mission would seek to determine atmospheric compositions by observing planets and their host stars simultaneously in visible and infrared light over long periods. Most notably, Pandora would examine how variations in a host star’s light impacts measurements of exoplanet atmospheres. This so-called stellar contamination remains a substantial problem in identifying the atmospheric makeup of planets orbiting stars covered in starspots – the equivalent of the more familiar sunspots – which can cause brightness variations as a star rotates. Stellar contamination is a sticking point that complicates precise observations of exoplanets, according to Pandora co-investigator Benjamin Rackham, who obtained his doctoral degree in Apai's research group and is now a 51 Pegasi b Postdoctoral Fellow at the Massachusetts Institute of Technology in Cambridge. "Pandora would help build the necessary tools for disentangling stellar and planetary signals, allowing us to better study the properties of both starspots and exoplanetary atmospheres,” Rackham said. "Understanding how to disentangle the signals from planetary atmospheres and from those of their host stars is a key step toward studying the atmospheres of potentially habitable worlds," Apai added. Pandora is a small satellite mission known as a SmallSat, one of three such orbital missions receiving the green light from NASA to move into the next phase of development in the Pioneers program. 
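As a toy illustration of the transit-depth physics behind the technique described above (values are for scale only, not Pandora specifications): the fraction of starlight blocked goes as (Rp/Rs)^2, and wavelengths absorbed by an atmosphere make the planet look slightly larger:

```python
# Transit depth for a planet of radius r_planet crossing a star of radius r_star.
R_SUN, R_EARTH = 6.957e8, 6.371e6   # meters

def transit_depth(r_planet, r_star):
    return (r_planet / r_star) ** 2

clear = transit_depth(R_EARTH, R_SUN)            # Earth-Sun analog, ~84 ppm
absorbed = transit_depth(1.02 * R_EARTH, R_SUN)  # 2% larger effective radius
print(f"{clear * 1e6:.1f} ppm vs {absorbed * 1e6:.1f} ppm")
```

Differences of a few parts per million between wavelength bands are what encode the atmospheric spectrum, which is why disentangling starspot-driven brightness changes matters so much.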
SmallSats are low-cost spaceflight missions that allow the agency to advance scientific exploration and increase access to space. Pandora would operate in sun-synchronous low-Earth orbit, which always keeps the sun directly behind the satellite. This orbit minimizes light changes on the satellite and allows Pandora to obtain data over extended periods. The mission is focused on trying to understand how stellar activity affects measurements of exoplanet atmospheres, which will lay the groundwork for future exoplanet missions aiming to find planets with Earth-like atmospheres. Synergy in Space Joining forces with NASA's larger missions, Pandora would operate concurrently with the James Webb Space Telescope, slated for launch later this year. Webb will provide the ability to study the atmospheres of exoplanets as small as Earth with unprecedented precision, and Pandora would seek to expand the telescope's research and findings by observing the host stars of previously identified planets over longer periods. Missions such as NASA's Transiting Exoplanet Survey Satellite, Hubble Space Telescope, and the retired Kepler and Spitzer spacecraft have given scientists astonishing glimpses at these distant worlds, and laid a strong foundation in exoplanetary knowledge. These missions, however, have yet to fully address the stellar contamination problem, the magnitude of which is uncertain in previous studies of exoplanetary atmospheres. Pandora seeks to fill these critical gaps in NASA's understanding of planetary atmospheres and increase the capabilities in exoplanet research. "Pandora is the right mission at the right time because thousands of exoplanets have already been discovered, and we are aware of many that are amenable to atmospheric characterization that orbit small active stars," said Jessie Dotson, an astrophysicist at NASA's Ames Research Center in California's Silicon Valley and the deputy principal investigator for Pandora. "The next frontier is to understand the atmospheres of these planets, and Pandora would play a key role in uncovering how stellar activity impacts our ability to characterize atmospheres. It would be a great complement to Webb's mission." A Launch Pad for Exploration NASA's Pioneers program, which consists of SmallSats, payloads attached to the International Space Station, and scientific balloon experiments, fosters innovative space and suborbital experiments for early-to-mid-career researchers through low-cost, small hardware missions. Under this new program, Pandora would operate on a five-year timeline with a budget cap of $20 million.
https://news.arizona.edu/story/probing-alien-worlds-nasas-pandora-mission-builds-uarizona-research
Sep. 16, 2020—During a summer of exoplanet discoveries, astronomer Keivan Stassun helps identify new worlds and expand understanding of our own. Vanderbilt, The Ohio State University are joint Founding Members of satellite mission ‘Twinkle’ to find potentially habitable worlds around nearby stars Jul. 23, 2020—Following the NASA Transiting Exoplanet Survey Satellite mission, Vanderbilt astronomer Keivan Stassun leads the next phase of discovery to find atmospheres like our own. Rare study of Earth-sized planet uses technique pioneered by Vanderbilt professor Aug. 19, 2019—A groundbreaking study, using data from NASA and a technique pioneered by a Vanderbilt professor, is giving humankind a glimpse at a distant exoplanet with a size similar to Earth and a surface which may resemble Mercury or Earth’s Moon. Located nearly 49 light-years from Earth, the planet known as LHS 3844b was first discovered... Vanderbilt astronomers continue international effort to map and analyze universe in greater detail than ever Nov. 16, 2017—Vanderbilt astronomers will carry out detailed studies of nearby stars orbited by planets with the potential to harbor or sustain life. Astronomers discover exoplanet hotter than most stars Jun. 5, 2017—Astronomers at Vanderbilt and Ohio State have discovered a planet like Jupiter zipping around its host star every day, boiling at temperatures hotter than most stars with a giant cometary tail. Puffy planet provides opportunity for testing alien worlds for signs of life May. 18, 2017—Astronomers from Vanderbilt, Lehigh and Ohio State universities have discovered a “puffy planet" with the density of Styrofoam that is an excellent test-bed for probing exoplanets for signs of life. Little telescope discovers metal-poor cousin of famous planet Jun. 5, 2013—A scientific team led by University of Louisville doctoral student Karen Collins has discovered a hot Saturn-like planet in another solar system 700 light years away. The discovery was made using inexpensive ground-based telescopes, including one specially designed to detect exoplanets and jointly operated by astronomers at Ohio State University and Vanderbilt University. Discovery of the smallest exoplanets: The Barnard’s star connection Jan. 11, 2012—The smallest exoplanets yet discovered orbit a dwarf star almost identical to Barnard’s star, one of the Sun’s nearest neighbors. The similarity helped the astronomers calculate the size of the distant planets.
https://news.vanderbilt.edu/tag/exoplanet/
Scientists have listed the molecules that are present in gases which they believe would be found on other planets with life as we know it. This forms part of a new approach aimed at maximizing the chances of finding exoplanets orbiting nearby stars that support life. Telescopes could remotely detect biosignature gases emitted by exoplanetary life forms. An exoplanet is a planet outside our Solar System (exoplanets are often referred to as simply 'planets').

Biosignature gases may be different elsewhere
Scientists from MIT in the USA and Rufus Scientific Ltd. in England explained in the journal Astrobiology (citation below) that these gases could well have very different compositions on other planets from those in our atmosphere.

Illustration of the functional approach to creating the list for all small, stable, volatile compounds, for molecules with N ≤ 6 non-H atoms. (Image: Astrobiology)

MIT's department of Earth, Atmospheric and Planetary Sciences wrote on its website: "Although a few biosignature gases are prominent in Earth's atmospheric spectrum (familiar among them O2, CH4, N2O), life on Earth is known to produce thousands of different molecules and scientists theorize that some may be able to accumulate at similar or higher levels on exo-Earths (e.g., dimethyl sulfide and CH3Cl) depending on the exo-Earth ecology and surface and atmospheric chemistry."

The authors propose that all stable and potentially volatile molecules should be considered as possible biosignature gases. They say they have laid the groundwork for detecting such gases by carrying out a massive search for molecules with six or fewer non-hydrogen atoms. S. Seager, J.J. Petkowski and W. Bains of MIT and Rufus Scientific Ltd. believe this exhaustive list of small molecules may help improve our understanding of the limits of the biochemistry of Earth.

Comparison of molecules produced by life forms as found in published databases and found by a manual literature search. For CNOPSH compounds (C – carbon, N – nitrogen, O – oxygen, P – phosphorus, S – sulfur, and H – hydrogen) only. (Image: Astrobiology)

Nancy Y. Kiang, PhD, Senior Editor of Astrobiology and a scientist at NASA's Goddard Institute for Space Studies, said regarding this work: "This work reminds me of Darwin's voyage aboard The Beagle, exploring the vast diversity of life by sailing around the world. In the search for life beyond our planet, we are currently at a similarly exciting, early but rapidly evolving stage of exploration as the discovery of exoplanets accelerates."

"Instead of netting strange creatures from the bottom of the sea, the authors here have searched and found thousands of curious, potentially biogenic gas molecules. These will inspire a new body of research into identifying also larger molecules, investigating their origin and fate here, and their potential expression on exoplanets as signs of life."

In an Abstract in the journal, the authors wrote: "The list can be used to study classes of chemicals that might be potential biosignature gases, considering their accumulation and possible false positives on exoplanets with atmospheres and surface environments different from Earth's."

"The list can also be used for terrestrial biochemistry applications, some examples of which are provided.
We provide an online community usage database to serve as a registry for volatile molecules including biogenic compounds.” Citation: “Toward a List of Molecules as Potential Biosignature Gases for the Search for Life on Exoplanets and Applications to Terrestrial Biochemistry,” Seager S., Bains W., and Petkowski J.J. Astrobiology. April 2016, ahead of print. DOI:10.1089/ast.2015.1404.
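For a sense of the combinatorial space behind such a search (a back-of-envelope sketch, not the authors' procedure), counting heavy-atom compositions of up to six CNOPS atoms is straightforward; a real enumeration must then add hydrogens, bonds, valence rules, and stability checks:

```python
# Count multisets of 1-6 heavy atoms drawn from C, N, O, P, S.
from itertools import combinations_with_replacement

atoms = "CNOPS"
formulas = [c for n in range(1, 7)
            for c in combinations_with_replacement(atoms, n)]
print(len(formulas))  # 461 heavy-atom compositions, before any chemistry filters
```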
https://marketbusinessnews.com/gases-increase-likelihood-life-planets-listed-scientists/134018/
Scientists have discovered 18 Earth-sized planets beyond the solar system, including one of the smallest known so far and another that could offer conditions friendly to life. The exoplanets are so small that previous surveys had overlooked them, said researchers at the Max Planck Institute for Solar System Research (MPS) in Germany. The study, published in the journal Astronomy & Astrophysics, re-analysed a part of the data from NASA's Kepler Space Telescope with a new and more sensitive method that the researchers developed. The team estimates that the new method has the potential to find more than 100 additional exoplanets in the Kepler mission's entire data set. Somewhat more than 4,000 planets orbiting stars outside our solar system are known so far. Of these so-called exoplanets, about 96 per cent are significantly larger than our Earth, most of them more comparable with the dimensions of the gas giants Neptune or Jupiter. This percentage likely does not reflect the real conditions in space, however, since small planets are much harder to track down than big ones. Moreover, small worlds are fascinating targets in the search for Earth-like, potentially habitable planets outside the solar system. The 18 newly discovered worlds fall into the category of Earth-sized planets. The smallest of them is only 69 per cent of the size of the Earth; the largest is barely more than twice the Earth's radius. Common search algorithms were not sensitive enough, the researchers said. In their search for distant worlds, scientists often use the so-called transit method to look for stars with periodically recurring drops in brightness. If a star happens to have a planet whose orbital plane is aligned with the line of sight from Earth, the planet occults a small fraction of the stellar light as it passes in front of the star once per orbit. Small planets, however, present scientists with immense challenges.
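A minimal sketch of the transit signal the article describes, using synthetic data (real pipelines, including the MPS team's more sensitive method, fit physical transit shapes rather than the sharp box used here):

```python
# Inject a periodic transit dip into a noisy light curve, then phase-fold it.
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(0, 90, 0.02)                       # 90 days of observations
flux = 1 + 5e-4 * rng.standard_normal(t.size)    # noisy flat baseline
period, depth, duration = 7.3, 1e-3, 0.15        # days, fractional dip, days
flux[(t % period) < duration] -= depth           # one transit per orbit

phase = t % period                               # fold at the trial period
in_transit = phase < duration
print(flux[in_transit].mean() - flux[~in_transit].mean())  # close to -0.001
```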
https://www.daily-sun.com/printversion/details/395363/2019/05/27/Earthsized-exoplanets-discovered
Sascha Quanz, an astrophysicist at the Swiss Federal Institute of Technology (ETH Zurich), said that humans will discover extraterrestrial life within the next 25 years. According to the scientist, more than 5,000 exoplanets are already known, dozens of which are likely to have life. At the same time, science does not stand still, and astronomers find new objects far beyond our solar system every day. Researchers will have to discover many more exoplanets to find the very "alien Earth" where there are living beings. According to scientists, there are more than 100 billion stars in the Milky Way galaxy, and each has at least one companion planet. Many of them are located at the same distance from their stars as the Earth is from the Sun, so there may well be liquid water – one of the main conditions for life. It is still difficult to say whether such planets have a habitable atmosphere, but the latest technology, crowned by the James Webb telescope, will soon make it possible to answer this question. Still, one such telescope is not enough – the instrument is optimized to search for the oldest stars, and it is difficult to detect Earth-like planets with it. To fill this gap, scientists are already developing the METIS mid-infrared spectrograph, which will become part of the Extremely Large Telescope (ELT). The ELT is under construction in Chile right now, and the project will be completed by the end of the 2020s. The ELT will be equipped with a segmented mirror 39.3 m in diameter, consisting of 798 hexagonal parts. The main purpose of the telescope is to detect terrestrial planets. The astrophysicist Quanz also noted another promising program to search for life outside the solar system, called LIFE (Large Interferometer for Exoplanets). This is a space telescope-interferometer that will study distant exoplanets and search for molecules that alien organisms could produce. The astrophysicist believes it is necessary to obtain a detailed picture of the chemical reactions and external conditions that best fit the search criteria for life in space. This will give scientists the opportunity to choose the highest-priority areas for in-depth research. LIFE is still at an early stage of development, but it is likely that it will be approved, built, and put into operation within the 25 years the astrophysicist has marked out.
https://forbes.pw/2022/09/14/astrophysicist-told-when-humanity-will-find-extraterrestrial-life/
A new study estimates the masses of the seven Earth-like planets, which provides insights into what their atmospheres and surfaces may be like. The seven Earth-size planets around the distant star TRAPPIST-1 are "tugging" on each other as they travel around their parent star. By carefully observing those tugs, scientists were able to gather information about the planets' composition and found that some of the TRAPPIST-1 worlds could have as much as 250 times more water than the amount in all of Earth's oceans, according to a new study. Intriguingly, they found that each of the five lightest planets could have about 250 times more water than the amount in Earth's oceans, according to a statement from NASA. Up to 5 percent of their composition could be water, whereas only 0.02 percent of Earth is water. TRAPPIST-1c, d, and e lie close to the star's "habitable zone," or the region where a star receives enough radiation that water might be able to exist as a liquid on its surface. TRAPPIST-1b, the innermost planet, and TRAPPIST-1c likely have rocky interiors and atmospheres denser than Earth's, according to the study. Of all the TRAPPIST-1 exoplanets, TRAPPIST-1d is the lightest, at about 30 percent Earth's mass. This may mean it has a large atmosphere, an ice layer or an ocean, but scientists cannot yet discern that. TRAPPIST-1e is likely a rocky planet with a thin atmosphere. TRAPPIST-1f, g, and h are so distant from their parent star that their surfaces are probably covered in ice. "We were able to measure precisely the density of exoplanets that are similar to Earth in terms of their size, mass, and irradiation, with an uncertainty of less than 10 percent, which is a first and a decisive step in the characterization of potential habitability," said Brice-Olivier Demory, a professor at the Center for Space and Habitability and co-author of the study, which was published in late January 2018 in the journal Astronomy and Astrophysics. The exoplanet TRAPPIST-1e yielded another interesting finding: It is the most similar to Earth in the amount of radiation it receives from its parent star, its size, and its density. And liquid water could exist on its surface. "We now know more about TRAPPIST-1 than any other planetary system apart from our own," said Sean Carey, manager of the Spitzer Science Center at Caltech/IPAC in Pasadena, California and co-author of the new study. "The improved densities in our study dramatically refine our understanding of the nature of these mysterious worlds." There is still much to learn about the TRAPPIST-1 system. Knowing a planet's density doesn't necessarily tell scientists what it's like on the surface of those planets. For example, the moon and Mars have the same density, but their surfaces are very different, according to the NASA statement. More precise findings about the TRAPPIST-1 planets' atmospheres and compositions can be obtained from upcoming projects, like NASA's James Webb Space Telescope, which is scheduled to launch in 2019.
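The bulk-density arithmetic underpinning these composition inferences is simple in outline; the sketch below uses illustrative inputs, not the study's measured values:

```python
# Bulk density = mass / (4/3 * pi * radius^3), compared against rocky Earth.
import math

M_EARTH, R_EARTH = 5.972e24, 6.371e6   # kg, meters

def bulk_density(mass_kg, radius_m):
    return mass_kg / ((4 / 3) * math.pi * radius_m ** 3)

earth = bulk_density(M_EARTH, R_EARTH)               # about 5510 kg/m^3
light = bulk_density(0.3 * M_EARTH, 0.78 * R_EARTH)  # a lighter, possibly water-rich world
print(f"{earth:.0f} vs {light:.0f} kg/m^3")
```

A density well below Earth's points toward large inventories of water or thick atmospheres rather than rock and iron.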
https://www.seeker.com/planets/trappist-1-planets-could-hold-250-times-the-amount-of-water-as-earths-oceans
Though some of our Solar System's moons don't have atmospheres and are covered in ice, they are still among the top targets in NASA's search for life beyond Earth. Saturn's moon Enceladus and Jupiter's moon Europa, which scientists classify as "ocean worlds," are good examples. "NASA's search for life in the Universe is focused on so-called Habitable Zone planets, which are worlds that have the potential for liquid water oceans," said Stephanie Olson at the University of Chicago in an earlier 2019 study of planets with oceans. "But not all oceans are equally hospitable - and some oceans will be better places to live than others due to their global circulation patterns."

Europa's Ocean: "Like a Miniature Earth"
Jupiter's moon Europa harbors a vast salty ocean beneath its icy surface that scientists believe reaches 100 kilometers - a depth 10 times greater than the Marianas Trench. The rocky bottom of Europa's ocean, suggests Caltech's Mike Brown, who was not involved with Quick's study, may be almost like a miniature Earth, with plate tectonics, continents, deep trenches, and active spreading centers. "Think about mid-ocean ridges on Earth," Brown writes on his blog, "with their black smokers belching scalding nutrient-rich waters into a sea floor teeming with life that is surviving on these chemicals. It doesn't take much of an imagination to picture the same sort of rich chemical soup in Europa's ocean leading to the evolution of some sort of life, living off of the internal energy generated inside of Europa's core. If you're looking for Europa's whales - which many of my friends and I often joke that we are - this is the world you want to look for them on."

Water Worlds Unlike Anything in Our Solar System
An extreme example was detected in 2016, when Kepler astronomers discovered planets that are unlike anything in our solar system - a "water world" planetary system orbiting the star Kepler-62. This five-planet system has two worlds in the habitable zone - their surfaces completely covered by an endless global ocean with no land or mountains in sight. "These are utterly different worlds compared to our own Earth," said Harvard University astronomer Li Zeng, who was not part of Quick's research. The chances that water worlds are a common feature of the Milky Way were heightened by research using computer simulations showing that sub-Neptune-sized planets - planets with radii about two to four times that of Earth - are likely to be water worlds, and not gas dwarfs surrounded by thick atmospheres as conventionally believed. Some of these planets, Zeng said, have oceans deep enough to exert pressures equivalent to a million times our atmospheric surface pressure. Under those conditions, fluid water gets compressed into high-pressure phases of ice, such as Ice Seven or superionic ices. "These high-pressure ices are essentially like silicate-rocks within Earth's deep mantle—they're hot and hard," he said.

Europa and Enceladus as Models
Quick, of NASA's Goddard Space Flight Center, decided to explore whether—hypothetically—there are planets similar to Europa and Enceladus in the Milky Way galaxy. And could they, too, be geologically active enough to shoot plumes through their surfaces that could one day be detected by telescopes?
A Quarter of the Exoplanets Could Be Ocean Worlds
Through a mathematical analysis of several dozen exoplanets, including planets in the nearby TRAPPIST-1 system, Quick and her colleagues learned something significant: more than a quarter of the exoplanets they studied could be ocean worlds, with a majority possibly harboring oceans beneath layers of surface ice, similar to Europa and Enceladus. Additionally, many of these planets could be releasing more energy than Europa and Enceladus. In a separate study, University of Chicago planetary scientist Stephanie Olson presented a model that predicts how the circulation patterns of oceans can impact the favorability of life on a planet. These factors can guide scientists in the search for life on other worlds, and the researchers' findings suggest that conditions on some exoplanets with favorable ocean circulation patterns could be better suited to support life that is more abundant or more active than life on Earth - meaning that looking for a planet exactly like Earth may not lead us to the most likely places where alien life exists.

Mathematical Models as Predictors
Scientists may one day be able to test Quick's predictions by measuring the heat emitted from an exoplanet or by detecting volcanic or cryovolcanic (liquid or vapor instead of molten rock) eruptions in the wavelengths of light emitted by molecules in a planet's atmosphere. For now, scientists cannot see many exoplanets in any detail. Alas, they are too far away and too drowned out by the light of their stars. But by considering the only information available—exoplanet sizes, masses and distances from their stars—scientists like Quick and her colleagues can tap mathematical models and our understanding of the solar system to try to imagine the conditions that could be shaping exoplanets into livable worlds or not. While the assumptions that go into these mathematical models are educated guesses, they can help scientists narrow the list of promising exoplanets to search for conditions favorable to life so that NASA's upcoming James Webb Space Telescope or other space missions can follow up.

Global Biospheres
"Future missions to look for signs of life beyond the solar system are focused on planets like ours that have a global biosphere that's so abundant it's changing the chemistry of the whole atmosphere," says Aki Roberge, a NASA Goddard astrophysicist who collaborated with Quick on this analysis. "But in the solar system, icy moons with oceans, which are far from the heat of the Sun, still have shown that they have the features we think are required for life."

Radiogenic Heat Sources of 53 Earth-Sized Planets
To look for possible ocean worlds, Quick's team selected 53 exoplanets with sizes most similar to Earth, though they could have up to eight times more mass. Scientists assume planets of this size are more solid than gaseous and, thus, more likely to support liquid water on or below their surfaces. At least 30 more planets that fit these parameters have been discovered since Quick and her colleagues began their study in 2017, but they were not included in the analysis, which was published on June 18 in the journal Publications of the Astronomical Society of the Pacific.

Two Primary Sources of Heat
With their Earth-size planets identified, Quick and her team sought to determine how much energy each one could be generating and releasing as heat.
The team considered two primary sources of heat. The first, radiogenic heat, is generated over billions of years by the slow decay of radioactive materials in a planet's mantle and crust. That rate of decay depends on a planet's age and the mass of its mantle. Other scientists already had determined these relationships for Earth-size planets. So, Quick and her team applied the decay rate to their list of 53 planets, assuming each one is the same age as its star and that its mantle takes up the same proportion of the planet's volume as Earth's mantle does.

This animated graph shows levels of predicted geologic activity among exoplanets, with and without oceans, compared to known geologic activity among solar system bodies, with and without oceans. (Lynnae Quick & James Tralie/NASA's Goddard Space Flight Center)

Heat from Tidal Force
Next, the researchers calculated heat produced by something else: tidal force, which is energy generated from the gravitational tugging when one object orbits another. Planets in stretched-out, elliptical orbits shift the distance between themselves and their stars as they circle them. This leads to changes in the gravitational force between the two objects and causes the planet to stretch, thereby generating heat. Eventually, the heat is lost to space through the surface. One exit route for the heat is through volcanoes or cryovolcanoes. Another route is through tectonics, which is a geological process responsible for the movement of the outermost rocky or icy layer of a planet or moon. Whichever way the heat is discharged, knowing how much of it a planet pushes out is important because it could make or break habitability. For instance, too much volcanic activity can turn a livable world into a molten nightmare. But too little activity can shut down the release of gases that make up an atmosphere, leaving a cold, barren surface. Just the right amount supports a livable, wet planet like Earth, or a possibly livable moon like Europa.

On Deck: NASA's Europa Clipper
In the next decade, NASA's Europa Clipper will explore the surface and subsurface of Europa and provide insights about the environment beneath the surface. The more scientists can learn about Europa and other potentially habitable moons of our solar system, the better they'll be able to understand similar worlds around other stars—which may be plentiful, according to today's findings. "Forthcoming missions will give us a chance to see whether ocean moons in our solar system could support life," says Quick, who is a science team member on both the Clipper mission and the Dragonfly mission to Saturn's moon Titan. "If we find chemical signatures of life, we can try to look for similar signs at interstellar distances."

When Webb launches, scientists will try to detect chemical signatures in the atmospheres of some of the planets in the TRAPPIST-1 system, which is 39 light years away in the constellation Aquarius. In 2017, astronomers announced that this system has seven Earth-size planets. Some have suggested that some of these planets could be watery, and Quick's estimates support this idea. According to her team's calculations, TRAPPIST-1 e, f, g and h could be ocean worlds, which would put them among the 14 ocean worlds the scientists identified in this study.

A Planet's Lower Density Hints at an Ocean World
The researchers predicted that these exoplanets have oceans by considering the surface temperatures of each one.
This information is revealed by the amount of stellar radiation each planet reflects into space. Quick's team also took into account each planet's density and the estimated amount of internal heating it generates compared to Earth. “If we see that a planet’s density is lower than Earth’s, that’s an indication that there might be more water there and not as much rock and iron,” Quick says. “And if the planet’s temperature allows for liquid water, you’ve got an ocean world.”
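That density heuristic is easy to make concrete. Below is a minimal sketch in Python; the 0.9 cutoff and the sample mass/radius values are illustrative assumptions, not thresholds from Quick's published analysis.

```python
# Minimal sketch of the density comparison described above. The cutoff
# and the sample planet values are illustrative assumptions only.

EARTH_DENSITY = 5.51  # mean density in g/cm^3, for reference

def density_ratio(mass_earths: float, radius_earths: float) -> float:
    """Bulk density relative to Earth's, given mass and radius in Earth units."""
    return mass_earths / radius_earths**3  # density scales as M / R^3

def could_be_ocean_world(mass_earths: float, radius_earths: float,
                         cutoff: float = 0.9) -> bool:
    """Flag planets noticeably less dense than Earth as candidate water worlds."""
    return density_ratio(mass_earths, radius_earths) < cutoff

# Hypothetical planet: 1.6 Earth masses, 1.4 Earth radii
print(density_ratio(1.6, 1.4))         # ~0.58 of Earth's density
print(could_be_ocean_world(1.6, 1.4))  # True
```

Because density falls off as the cube of the radius, even a modestly larger radius at similar mass points toward water and ice rather than rock and iron.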
https://dailygalaxy.com/2021/04/the-ocean-galaxy-many-of-the-milky-ways-4000-known-exoplanets-may-be-water-worlds/
Space agencies are continuously studying the entities that lie beyond Earth, including one of the biggest mysteries in space. A new report reveals that distant exoplanets may have sources of water. One of the biggest questions in exploring space and celestial objects is whether there is life outside Earth, and Express reports that there may be, because some exoplanets appear to contain water. Scientists analyzed the data of 19 exoplanets in the cosmos, studying the thermal and chemical compositions of each one. These exoplanets were of differing sizes, ranging from 10 times to 600 times the size of the Earth, the largest of which are referred to as “Super Jupiters.” The researchers found that the common factor all these exoplanets shared was water; however, the quantity of water in each planet was limited. Cambridge Institute of Astronomy’s Dr. Nikku Madhusudhan stated that they had discovered the “first signs of chemical patterns in extra-terrestrial worlds.” Dr. Madhusudhan and his team studied the biosignatures of these exoplanets to check the water vapor in each of their atmospheres. They found that 14 of the 19 exoplanets had water vapor, and six had abundant potassium and sodium. What the researchers also noticed was the amount of oxygen: the exoplanets lacked oxygen compared to the other elements. So the question remains whether we will be able to find life beyond Earth as soon as next year. Space reports that experts do not think so. Interviewed on the subject, Search for Extraterrestrial Intelligence Institute senior astronomer Seth Shostak said that finding aliens in 2020 is unlikely. Shostak explained that the search for extraterrestrial beings is mostly done by checking nearby star systems for flashes of light or other telltale indicators. However, he also told the outlet that the search could succeed at any time. Thus, there is no telling when we might discover other beings beyond Earth; it may never happen, or it could happen in the coming decades, with the latter being the more likely.
http://www.econotimes.com/Aliens-Distant-exoplanets-may-contain-water-sources-1570136
It's all about astrophysical false positive probability calculations. Before Tuesday, there was no shortage of theories about what NASA’s discovery announcement would entail. (Full disclosure: I was responsible for much of that speculation.) Then Tuesday hit and we found out exactly what the big news was: NASA scientists just confirmed the identity of 1,284 new exoplanets in the universe — including nine planets that have the potential to be habitable to life. It’s an announcement that has already inspired scientists and ordinary individuals around the world to ponder whether we might seriously find extraterrestrial life soon enough. But the new study raises an interesting question: what changed between the last few years and now that allowed scientists to identify so many new exoplanets all at once? Did all of these planets just show up at once? Did we develop better technology? Did the Kepler Space Telescope miraculously get better (after weirdly almost breaking down)? What gives? The answer: it all comes down to a new method of validating exoplanet candidates that provides “astrophysical false positive probability calculations” for such objects, according to a new paper published in the latest issue of The Astrophysical Journal. Basically, the new method ascribes a number to every object found by Kepler that represents the likelihood that the object is an exoplanet, and not an “imposter.” Call it a planet score. The higher the number, the more likely it’s a planet. The new method only allows an object to move from the “candidate” category to “exoplanet” if Kepler researchers can say so with 99 percent reliability or higher.

(An accompanying image shows an artist's conception of Kepler-20e, the first planet smaller than the Earth discovered to orbit a star other than the sun. A year on Kepler-20e only lasts six days, as it is much closer to its host star than the Earth is to the sun.)

We should slow down at this point and expound on exactly how astronomers find and evaluate potential exoplanets. Basically, through Kepler and a few other instruments, scientists stare at distant stars and measure the brightness of light emitting from those balls of fiery energy. When a star has a planet in orbit, its brightness will dim as that planet transits past it in relation to the telescope we’re using to watch it (a recent, albeit small, example is Mercury passing in front of the sun). As long as that dimming isn’t just a technical error, it’s a sign that something is passing through the neighborhood. A consistent dimming occurring regularly over time is further evidence it might be a planet. In the past, scientists had to pore over the brightness numbers while assessing a variety of other data that might be attainable, like radial velocity observations or high-resolution imaging. Unfortunately, doing that kind of work is extremely time consuming, and we don’t always have the resources to find what we need. So in this day and age, we turn to computers for help. Timothy Morton, a Princeton researcher who studies exoplanets, developed a new method for exoplanet validation that combines previous exoplanet observations with the current brightness measurements scientists are gathering with Kepler. There are two kinds of simulations. The first looks at how the dimming compares to that from known exoplanets and imposter objects. The second goes a step further and deduces whether the dimming is indicative of exoplanet behavior given what we already know about how exoplanets are distributed around the Milky Way.
The two simulations are used to determine the statistical likelihood that the object in question is an exoplanet. It’s a faster way of doing this work — and by all accounts, it’s even more accurate. In fact, the method is actually being used to re-examine previously confirmed exoplanets and determine whether they might actually be false positives. This is crucial for the direction of future exoplanet research. The work accomplished since Kepler’s launch in 2009 has been huge in illustrating just how many other worlds exist in the universe — and it has given humans a staggering amount of hope that we may find another habitable planet, or even alien life. NASA is already getting ready to launch the Transiting Exoplanet Survey Satellite (TESS) in late 2017, and the James Webb Space Telescope in 2018. Both will play a pivotal role in exoplanet investigations by acquiring lots more data than we’ve ever dealt with. Morton’s model will help our scientists on the ground sift through that data and identify potentially habitable exoplanets faster than we could have hoped.
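The thresholding step the article describes reduces to a simple comparison once the per-object probabilities have been computed. A toy illustration in Python; the KOI names and probability values below are made up for demonstration.

```python
# Toy illustration of the validation threshold described above: an object
# graduates from "candidate" to "confirmed exoplanet" only if the computed
# probability that it is a planet (1 - false-positive probability) is at
# least 99 percent. These entries are hypothetical, not real Kepler data.

candidates = {
    "KOI-0001.01": 0.9991,
    "KOI-0002.01": 0.9430,
    "KOI-0003.01": 0.9902,
}

THRESHOLD = 0.99  # the 99 percent reliability bar from the paper

for koi, planet_prob in candidates.items():
    status = "confirmed exoplanet" if planet_prob >= THRESHOLD else "still a candidate"
    print(f"{koi}: P(planet) = {planet_prob:.4f} -> {status}")
```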
https://www.inverse.com/article/15587-how-a-planet-score-helped-nasa-identify-1-284-new-exoplanets-in-one-fell-swoop
The quest to find habitable — and perhaps inhabited — planets and moons beyond Earth focuses largely on their location in a solar system and the nature of its host star, the eccentricity of its orbit, its size and rockiness, and the chemical composition of its atmosphere, assuming that it has one. Astronomy, astrophysics, cosmochemistry and many other disciplines have made significant progress in characterizing at least some of the billions of exoplanets out there, although measuring the chemical makeup of atmospheres remains an immature field. But what if these basic characteristics aren’t sufficient to answer necessary questions about whether a planet is habitable? What if more information — and even more difficult to collect information — is needed? That’s the position of many planetary scientists who argue that the dynamics of a planet’s interior are essential to understanding its habitability. With our existing capabilities, observing an exoplanet’s atmospheric composition will clearly be the first way to search for signatures of life elsewhere. But four scientists at the Carnegie Institution of Science — Anat Shahar, Peter Driscoll, Alycia Weinberger, and George Cody — argued in a recent perspective article in Science that a true picture of planetary habitability must consider how a planet’s atmosphere is linked to and shaped by what’s happening in its interior. They argue that on Earth, for instance, plate tectonics are crucial for maintaining a surface climate where life can fill every niche. And without the cycling of material between the planet’s surface and interior, the convection that drives the Earth’s magnetic field would not be possible, and without a magnetic field, we would be bombarded by cosmic radiation. “The perspective was our way to remind people that the only exoplanet observable right now is the atmosphere, but that the atmospheric composition is very much linked to planetary interiors and their evolution,” said lead author Shahar, who is trained in geological sciences. “If there is a hope to one day look for a biosignature, it is crucial we understand all the ways that interiors can influence the atmospheric composition so that the observations can then be better understood.” “We need a better understanding of how a planet’s composition and interior influence its habitability, starting with Earth,” she said. “This can be used to guide the search for exoplanets and star systems where life could thrive, signatures of which could be detected by telescopes.” It all starts with the formation process. Planets are born from the rotating ring of dust and gas that surrounds a young star. The elemental building blocks from which rocky planets form (silicon, magnesium, oxygen, carbon, iron, and hydrogen) are universal. But their abundances and the heating and cooling they experience in their youth will affect their interior chemistry and, in turn, defining factors such as ocean volume and atmospheric composition. “One of the big questions we need to ask is whether the geologic and dynamic features that make our home planet habitable can be produced on planets with different compositions,” Carnegie planetary scientist Peter Driscoll explained in a release. In the next decade, as a new generation of telescopes comes online, scientists will begin to search in earnest for biosignatures in the atmospheres of rocky exoplanets.
But the colleagues say that these observations must be put in the context of a larger understanding of how a planet’s total makeup and interior geochemistry determine the evolution of a stable and temperate surface where life could perhaps arise and thrive. “The heart of habitability is in planetary interiors,” concluded Carnegie geochemist George Cody. Our knowledge of the Earth’s interior starts with these basic contours: it has a thin outer crust, a thick mantle, and a core the size of Mars. A basic question that can be asked, and to some extent answered now, is whether this structure is universal for small rocky planets. Will these three layers be present in some form in many other rocky planets as well? Earlier preliminary research published in The Astrophysical Journal suggests that the answer is yes – they will have interiors very similar to Earth’s. “We wanted to see how Earth-like these rocky planets are. It turns out they are very Earth-like,” said lead author Li Zeng of the Harvard-Smithsonian Center for Astrophysics (CfA). To reach this conclusion, Zeng and his co-authors applied a computer model known as the Preliminary Reference Earth Model (PREM), which is the standard model for Earth’s interior. They adjusted it to accommodate different masses and compositions, and applied it to six known rocky exoplanets with well-measured masses and physical sizes. They found that the other planets, despite their differences from Earth, all should have a nickel/iron core containing about 30 percent of the planet’s mass. In comparison, about a third of the Earth’s mass is in its core. The remainder of each planet would be mantle and crust, just as with Earth. “We’ve only understood the Earth’s structure for the past hundred years. Now we can calculate the structures of planets orbiting other stars, even though we can’t visit them,” adds Zeng. The model assumes that distant exoplanets have chemical compositions similar to Earth’s. This is reasonable based on the relative abundances of key chemical elements like iron, magnesium, silicon, and oxygen in nearby systems. However, planets forming in more or less metal-rich regions of the galaxy could show different interior structures. While thinking about exoplanetary interiors — and some day finding ways to investigate them — is intriguing and important, it’s also apparent that there’s a lot more to learn about the role of the Earth’s interior in making the planet habitable. In 2017, for instance, an interdisciplinary group of early career scientists visited Costa Rica’s subduction zone (where the ocean floor sinks beneath the continent) to find out if subterranean microbes can affect geological processes that move carbon from Earth’s surface into the deep interior. According to their new study in Nature, the answer is yes. The study shows that microbes consume and trap a small but measurable amount of the carbon sinking into the trench off Costa Rica’s Pacific coast. The microbes may also be involved in chemical processes that pull out even more carbon, leaving cement-like veins of calcite in the crust. In all, microbes and calcite precipitation combine to trap about 94 percent of the carbon squeezed out from the edge of the oceanic plate as it sinks into the mantle during subduction. This carbon remains naturally sequestered in the crust, where it cannot escape back to the surface through nearby volcanoes in the way that much carbon ultimately recycles.
These unexpected findings have important implications for how much carbon moves from Earth’s surface into the interior, especially over geological timescales. The research is part of the Deep Carbon Observatory’s Biology Meets Subduction project. Overall, the study shows that biology has the power to affect carbon recycling and thereby deep Earth geology. “We already knew that microbes altered geological processes when they first began producing oxygen from photosynthesis,” said Donato Giovannelli of the University of Naples, Italy (whom I knew from time spent at the Earth-Life Science Institute in Tokyo). He is a specialist in extreme environments and researches what they can tell us about early Earth and possibly other planets. “I think there are probably even more ways that biology has had an outsized impact on geology, we just haven’t discovered them yet.” The findings also show, Giovannelli told me, that subsurface microbes might have a similarly outsized effect on the composition and balancing of atmospheres — “hinting to the possibility of detecting the indirect effect of subsurface life through atmosphere measurements of exoplanets,” he said. This idea that subsurface life on distant planets could be identified by its byproducts in the atmosphere has just taken on a new immediacy with findings from the Curiosity rover that high levels of the gas methane had recently been detected on Mars. Earlier research had suggested that Mars had some subsurface methane, but the amount appeared to be quite minimal — except as detected once back in 2003 by NASA scientists. None of the researchers, now or in the past, have claimed that they know the origin of the methane — whether it is produced biologically or through other planetary processes. But on Earth, some 90 percent of methane comes from biology — bacteria, plants, animals. Could, then, these methane plumes be a sign that life exists (or existed) below the surface of Mars? It’s possible, and it highlights the great importance of what goes on below the surface of planets and moons. Marc Kaufman is the author of two books about space: “Mars Up Close: Inside the Curiosity Mission” and “First Contact: Scientific Breakthroughs in the Search for Life Beyond Earth.” He is also an experienced journalist, having spent three decades at The Washington Post and The Philadelphia Inquirer. He began writing the column in October 2015, when NASA’s NExSS initiative was in its infancy. While the “Many Worlds” column is supported and informed by NASA’s Astrobiology Program, any opinions expressed are the author’s alone.
https://manyworlds.space/2019/06/23/the-interiors-of-exoplanets-may-well-hold-the-key-to-their-habitability/
What is an exoplanet and how many planets have been discovered outside our solar system? Planets outside our solar system are called exoplanets or extrasolar planets. For centuries, scientists, philosophers, and science fiction writers believed they existed, but there was no way of knowing. When was the first planet outside our solar system discovered? The first confirmed detection of planets outside our solar system came in 1992, when several terrestrial-mass planets were discovered orbiting a pulsar. The landmark discovery of a world around a star like our own came in 1995, when two scientists found 51 Pegasi b orbiting a Sun-like star. As of May 2020, there are over 4,000 confirmed exoplanets in over 3,000 systems, with almost 700 of those systems having more than one planet. The nearest exoplanet discovered, Proxima Centauri b, is located four light-years from Earth, orbiting the closest star to the Sun.

(An artist’s impression of 51 Pegasi b (center) and its star (right). Source: ESO/M. Kornmesser/Nick Risinger (skysurvey.org) via Wikimedia)

How to find a planet outside our solar system: There are many methods of detecting planets outside our solar system, but Doppler spectroscopy and transit photometry have found the most. Astronomers can detect exoplanets indirectly by measuring their gravitational influence on the motion of their host star; the star appears to wobble slightly. More extrasolar planets were later detected by observing the variation in a star’s apparent luminosity as an orbiting planet transited in front of it. About 97% of all confirmed exoplanets have been discovered using indirect techniques like these. Almost all the planets detected so far are in the Milky Way. There is, however, evidence that extragalactic planets do exist; researchers in 2018 found signs of them in a distant galaxy, estimating approximately 2,000 such planets for every one star beyond the Milky Way.

(An artist’s impression of orbiting planets and their stars in the Milky Way. Source: M. Kornmesser / ESO)

Most known planets outside our solar system orbit stars roughly similar to the Sun. Some have been found orbiting binary star systems. Only a few planets in triple star systems are known, and there is one in a quadruple system. Planets may form within a few million years to tens of millions of years after their parent star. When planets form in a gaseous protoplanetary disk, they acquire hydrogen-helium envelopes. These envelopes cool and contract over time and, depending on the mass of the planet, some or all of the hydrogen and helium is eventually lost to space. An example is Kepler 51b, which has only about twice the mass of Earth but is almost the size of Saturn, which is 100 times the mass of Earth. The different types of exoplanets: There are many different types of exoplanets, but today we will only be covering three: Hot Jupiters, Super-Earths, and Rogue Exoplanets. Hot Jupiters are gas giants orbiting extremely close to their stars. Some complete a single orbit (their "year") in as little as a few Earth days. Astronomers were surprised by these hot Jupiters, because planetary formation theory indicates that giant planets should only form at large distances from their stars, but eventually more planets of this sort were found, and it is now clear that hot Jupiters make up a minority of exoplanets.
Super-Earths are one of the most common types of exoplanets discovered so far, with masses between those of Earth and Neptune. The properties of these planets are largely unknown. If a Super-Earth has more than 80 times as much water as Earth does, it becomes an ocean planet with all land completely submerged.

(An artist’s concept of Earth in comparison to 55 Cancri e, a super-Earth exoplanet. Source: NASA/JPL-Caltech/R. Hurt (SSC))

Some exoplanets are so far away from their star that it’s difficult to tell whether they are gravitationally bound to it. Unlike Earth, these exoplanets have only a weak relationship with a host star, wandering through space or loosely orbiting between stars. Then there are the complete bachelors, called Rogue Exoplanets, which do not orbit any star at all. The rogue exoplanets in the Milky Way possibly number in the billions or more. Are planets outside our solar system habitable? One of NASA’s ultimate goals in the exoplanet program is to find unmistakable evidence of current life. In January 2020, scientists announced the discovery of the first Earth-sized planet in the habitable zone detected by TESS, NASA's Transiting Exoplanet Survey Satellite. There is special interest in capturing evidence of a distant hospitable world where it’s possible for liquid water, a prerequisite for life here on Earth, to exist on the surface. About one in five Sun-like stars has an Earth-sized planet in the habitable zone. This confirmed that planets like ours can exist elsewhere in the universe. Assuming there are 200 billion stars in the Milky Way, it can be hypothesized that there are 11 billion potentially habitable Earth-sized planets in our galaxy alone. As more planets are discovered, we will ultimately tackle the prospect of life on planets beyond the solar system. Then, there’s life as we don’t know it. While it makes sense to search for something like ourselves first, we don’t yet know if that’s really what should be expected.
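The 11-billion figure is back-of-envelope arithmetic. Here is a sketch of it; the fraction of Milky Way stars that are Sun-like is an assumed input (the article does not state it), chosen so the result lands on the same order as the quoted estimate.

```python
# Back-of-envelope version of the estimate quoted above. The Sun-like
# fraction is an assumption for illustration, not a figure from the article.

stars_in_milky_way = 200e9   # "assuming there are 200 billion stars"
frac_sun_like      = 0.25    # assumed fraction of stars that are Sun-like
frac_with_hz_earth = 1 / 5   # "about one in five Sun-like stars"

habitable_earths = stars_in_milky_way * frac_sun_like * frac_with_hz_earth
print(f"{habitable_earths:.2e}")  # 1.00e+10, the same order as 11 billion
```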
https://spaceandbeyondbox.com/the-discovery-of-planets-outside-our-solar-system-exoplanets/
The idea of the moon crashing into Earth may sound like something straight out of a Hollywood blockbuster or a doomsday scenario. However, according to new research published in the journal Monthly Notices of the Royal Astronomical Society, collisions between exoplanets and their moons, also known as exomoons, may actually be quite common in other star systems. The study, which utilized computer simulations, found that these collisions could have disastrous consequences for any potential alien life forms that may be present on these planets. While scientists have yet to make a confident detection of an exomoon, they expect them to be plentiful in the vast expanse of the universe. “We know of lots of moons in our own solar system, so naturally we’d expect to see moons in exoplanet systems,” said Jonathan Brande, a University of Kansas astrophysicist. Astronomers and theorists are interested in exploring how alien moons and exoplanets may interact and how these interactions affect the potential for life in distant star systems. Brad Hansen, an astronomer at the University of California, Los Angeles, and author of the new study, has calculated that collisions between unstable moons and their host planets could occur within the first billion years of their formation, particularly around exoplanets that are much closer to their stars than Earth is to the sun. Gravity plays a significant role in the interactions between a planet and its moons, leading to effects like tides and the slow recession of our own moon. Earth’s moon, for instance, creeps a little over an inch farther away from our planet each year, while Earth’s spin slows ever so slightly. In some exoplanet systems, however, moons that have wandered away from their host planets often return with a bang, smashing into the planet and creating huge dust clouds that glow in the infrared as they are illuminated and warmed by the star’s light. According to Hansen’s simulations, these dust clouds last only about 10,000 years before fading away. Observations from NASA’s Wide-field Infrared Survey Explorer space telescope suggest that every star will undergo at least one such event at some point in its lifetime. Hansen believes that these dust emissions represent the collisions between planets and their moons. However, because these dust clouds are so short-lived, astronomers have only observed about a dozen of them, and some astronomers are still not convinced that the dust clouds come from exomoons, suggesting that they may result from collisions between two planets. While moons are often considered helpful, since they stabilize a planet’s axis and make for gentler seasons that are more conducive to life, a collision like those in Hansen’s simulations would destroy any chance of life in a fiery explosion. Therefore, more observations are needed to figure out the role of exomoons in an exoplanet’s evolution and to determine whether these collisions may affect alien life.
https://entflame.com/could-unstable-exomoons-be-the-reason-we-havent-found-extraterrestrial-life/
Scientists have for the first time detected an atmosphere around an Earth-like planet beyond our solar system. The planet, GJ 1132b, which orbits the dwarf star GJ 1132, is reported to be 39 light years away from Earth. The discovery is a significant step towards finding life beyond Earth. The study was published in the Astrophysical Journal on 31 March 2017.

Key Highlights
• The planet is reported to have a radius of about 1.4 times that of the Earth and a mass of about 1.6 times that of the Earth.
• The researchers initially called the planet a potential Venus twin, considering its rocky surface and high surface temperature.
• The recent discovery shows that the planet also has a thick atmosphere. Though Venus has a thick atmosphere as well, the atmospheres of the two planets may have different compositions.
• While Earth's atmosphere is mostly made up of nitrogen with a large oxygen component, Venus' is a thick veil of carbon dioxide.
• According to the researchers, the new planet’s atmosphere is likely to be rich in water vapour or methane.

While this is not the first instance of scientists discovering an atmosphere around a planet, as they have previously detected atmospheres around large Jupiter-like gaseous bodies and on even larger super-Earths, this is the first time that they have detected one around an exoplanet that is this close to Earth's size. Scientists can use a planet’s atmosphere to identify potential traces of life on it or to determine whether it is suitable for life as we know it on Earth.
https://competitive-exam.in/post/scientists-have-detected-atmosphere-around-an-earth-like-planet
Find the prime factorization of the number 9,227,460. Use a factor tree to assist with solving.

Factor tree (divide out the smallest prime factor at each step):
9,227,460 = 2 × 4,613,730
4,613,730 = 2 × 2,306,865
2,306,865 = 3 × 768,955
768,955 = 5 × 153,791
153,791 = 11 × 13,981
13,981 = 11 × 1,271
1,271 = 31 × 41

The prime factorization in exponential form is: 2² × 3¹ × 5¹ × 11² × 31¹ × 41¹

Set up the equation for determining the number of factors or divisors: d(n) = (a + 1)(b + 1)(c + 1)(d + 1)(e + 1)(f + 1), where d(n) is the number of divisors of the number and a, b, etc. are the exponents of the prime factorization. Substituting the exponents and solving gives d(9,227,460) = (2 + 1)(1 + 1)(1 + 1)(2 + 1)(1 + 1)(1 + 1) = 144 divisors.
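A short script can confirm both the factor tree and the divisor count:

```python
# Verify the factorization of 9,227,460 and its divisor count.
from collections import Counter

def prime_factors(n: int) -> Counter:
    """Trial-division prime factorization; fine for numbers this size."""
    factors, d = Counter(), 2
    while d * d <= n:
        while n % d == 0:
            factors[d] += 1
            n //= d
        d += 1
    if n > 1:          # whatever remains is itself prime
        factors[n] += 1
    return factors

f = prime_factors(9_227_460)
print(f)  # Counter({2: 2, 11: 2, 3: 1, 5: 1, 31: 1, 41: 1})

num_divisors = 1
for exponent in f.values():
    num_divisors *= exponent + 1
print(num_divisors)  # (2+1)(1+1)(1+1)(2+1)(1+1)(1+1) = 144
```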
https://www.integers.co/questions-answers/what-is-the-total-number-of-factors-of-the-number-9227460.html
Before viewing this page, it would be helpful to learn how to Solve Simultaneous Equations By Graphing. The purpose of solving simultaneous equations is to find the same x-value and the same y-value that satisfies both equations. To solve, one term from one equation is substituted into the other equation. The x value and the y value are the same in both equations. In the second equation, x is equal to –2y, so we will substitute –2y for x into the first equation. Now, we'll find the x value by substituting y = 2 into either equation. The simultaneous solution for both equations is x = –4 and y = 2. In the second equation, x is equal to (y + 2), so we will substitute (y + 2) for x into the first equation. Be careful to use brackets. Now, we'll find the x value by substituting y = –1 into either equation. The second equation looks the easiest. The simultaneous solution for both equations is x = 1 and y = –1. In the first equation, y is equal to 2x + 1. In the second equation, y is equal to x + 3. Since both are equal to y, they are equal to each other. Now, we'll find the y value by substituting x = 2 into either equation. The simultaneous solution for both equations is x = 2 and y = 5.
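The worked examples above originally displayed their systems as images, so only the substitution x = –2y and the solution (x = –4, y = 2) survive in the text. Here is a sketch of the same substitution steps in Python, with an assumed first equation (x + 3y = 2) chosen to be consistent with that substitution and solution:

```python
# Substitution method, mirroring the first worked example. The first
# equation (x + 3y = 2) is an assumed stand-in; only "x = -2y" and the
# answer (x = -4, y = 2) appear in the surviving text.
from sympy import symbols, Eq, solve

x, y = symbols("x y")
eq1 = Eq(x + 3*y, 2)   # assumed first equation
eq2 = Eq(x, -2*y)      # "x is equal to -2y"

# Substitute -2y for x in the first equation, solve for y, then back-solve x.
y_val = solve(eq1.subs(x, -2*y), y)[0]   # y = 2
x_val = (-2*y).subs(y, y_val)            # x = -4
print(x_val, y_val)                      # -4 2
```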
https://www.mathsaccelerator.com/algebra/simultaneous-equations-substitution
substituting into the Slope-Intercept equation. We will now look at how to use the point-slope form to get the equation of a line given its slope and a point on the line. Find the equation of a line with slope –3 and passing through (–2, 1). Step 2: Substitute the slope –3 and the coordinates of the point (–2, 1) into the point-slope form. This video looks at writing linear equations in point-slope form, given a point and a slope, or two points. It includes four examples. Find the equation of a line with slope – and passing through (–3, 1). Step 1: Substitute m = – , x = –3 and y = 1 into the equation y = mx + c to obtain the value of c. This will show you how to write an equation of a line that has a given slope and passes through a given point.
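The first example above is complete enough to check by hand: with m = –3 and the point (–2, 1), c = y – mx = 1 – 6 = –5, so the line is y = –3x – 5. A minimal sketch of that calculation:

```python
# Slope-intercept form through a given point: solve c = y0 - m*x0.
def slope_intercept(m: int, x0: int, y0: int) -> str:
    c = y0 - m * x0
    sign = "+" if c >= 0 else "-"
    return f"y = {m}x {sign} {abs(c)}"

print(slope_intercept(-3, -2, 1))  # y = -3x - 5
```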
https://www.onlinemathlearning.com/equation-of-a-line.html
The method of solving "by substitution" works by solving one of the equations for one of the variables, and then plugging this back into the other equation, "substituting" for the chosen variable and solving for the other. Then you back-solve for the first variable. Step 1. Solve one of the equations for one of the variables. Step 2. Substitute the expression for the variable chosen in step 1 into the other equation. Step 3. Solve the resulting equation in one variable. Step 4. Substitute the value obtained in step 3 into the equation obtained in step 1 and solve to obtain the value of the other variable. Step 5. Check the solution in both equations. Step 6. Write the solution as an ordered pair. Putting x = -2 in y = x + 1, we get y = -1. Hence, the solution of the given system of equations is x = -2, y = -1.
http://www.winpossible.com/lessons/Elimination_by_Substitution_Method.aspx
Solving absolute value inequalities is similar to solving absolute value equations; however, there are some additional things to remember. It helps to be comfortable solving absolute value equations first, but it's not a problem if you're learning these in tandem!

Definition of Absolute Value Inequality
First of all, an absolute value inequality is an inequality that contains an absolute value expression. For instance, |5 + x| – 10 > 6 is an absolute value inequality, since it contains an inequality sign, >, along with an absolute value expression, |5 + x|.

How to Solve an Absolute Value Inequality
The steps to solve an absolute value inequality are similar to the steps needed to solve an absolute value equation:
Step 1: Isolate the absolute value expression on one side of the inequality.
Step 2: Solve the positive "version" of the inequality.
Step 3: Solve the negative "version" of the inequality by multiplying the number on the other side of the inequality by –1 and flipping the inequality sign.
That is a lot to consider in one go, so here's an example to guide you through the steps.

Find the solution to the inequality for x: |5 + 5x| – 3 > 2.
To solve this, you need to get |5 + 5x| by itself on the left-hand side of the inequality. All you need to do is add 3 to each side, giving |5 + 5x| > 5. Now there are two "versions" of the inequality that we have to resolve: the positive "version" and the negative "version."
In this step, we'll assume the facts are the way they appear: 5 + 5x is greater than 5, that is, 5 + 5x > 5. This is a very simple inequality; you solve for x the same way you would normally. Subtract 5 from both sides, then divide each side by 5, giving x > 0. That isn't bad! So one solution to our inequality is any value of x greater than zero. Since there are absolute values in play, though, it's time to think about a second alternative.
To comprehend this part, it helps to recall the meaning of absolute value. Absolute value measures a number's distance from zero, and that distance is always positive: 9 is nine units from zero, but –9 is also nine units from zero. So |9| = 9, but also |–9| = 9.
Back to the problem. The work above showed that |5 + 5x| > 5. In other words, the absolute value of "something" is greater than five. Any positive number greater than five is farther from zero than five is, so the first possibility is that the "something," 5 + 5x, is greater than 5: 5 + 5x > 5. This is the scenario that was handled above. Now think about it a bit more. What else is five units from zero? Negative five is, and anything farther from zero in the negative direction is less than negative five: it is moving in the negative direction on the number line. So the "something," 5 + 5x, might instead be less than –5: 5 + 5x < –5.
The simplest way to get there is to multiply the number on the other side of the inequality, 5, by negative one, then turn the inequality sign around: |5 + 5x| > 5 becomes 5 + 5x < –5. Solving gives 5x < –10, so x < –2.
The two answers to this inequality are x > 0 or x < –2. Try plugging in a couple of sample values to verify that the inequality holds.

Absolute Value Inequalities With No Solution
There's a situation where an absolute value inequality has no solutions. Since an absolute value is never negative, it can never be less than a negative number. Therefore, an inequality such as |x| < –2 has no solution, since the result of an absolute value expression must be positive or zero.

Interval Notation
To write the answer to our main example in interval notation, consider what the solution looks like on a number line. The solution we came up with was x > 0 or x < –2. On a number line, that's an open dot at 0 with a line stretching to positive infinity, and an open dot at –2 with a line stretching toward negative infinity. These two pieces point away from each other, so consider each piece individually.
For x greater than 0, there's an open dot at zero and a line extending out to infinity. In interval notation, an open dot is shown with parentheses, ( ), while closed dots, for inequalities containing ≥ or ≤, use brackets, [ ]. So for x > 0, write (0, ∞).
The other half, x < –2, has an open dot at –2 and an arrow extending toward –∞. In interval notation, that's (–∞, –2).
"Or" in interval notation is the union symbol, ∪. Therefore, the solution in interval notation is (–∞, –2) ∪ (0, ∞).
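As a quick numerical spot-check of the worked example, a few sample values confirm that |5 + 5x| – 3 > 2 holds exactly when x < –2 or x > 0:

```python
# Spot-check the solution set of |5 + 5x| - 3 > 2 at sample points.
def satisfies(x: float) -> bool:
    return abs(5 + 5 * x) - 3 > 2

for x in (-3, -2, -1, 0, 1):
    print(x, satisfies(x))
# -3 True   (in x < -2)
# -2 False  (endpoint excluded)
# -1 False
#  0 False  (endpoint excluded)
#  1 True   (in x > 0)
```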
https://myblogtime.com/education/how-do-you-solve-absolute-value-inequality/
Lines are illustrations of the relationship between two points in a given plane, which means that a line's shape and direction are directly determined by the location of those two points. You can easily illustrate this relationship through linear equations and lines. Linear equations are equations that illustrate the proportional relationship between two variables and points. Simple linear equations are equations with a single variable, while there are also linear equations with two variables, and even linear equations with three variables. Each kind has its own systems of linear equations and ways of solving them. A linear equation can come in many different forms. The standard form of a linear equation with two variables is Ax + By = C, where x is the X coordinate, y is the Y coordinate, and C is the constant. A one-variable linear equation comes in the form Ax + B = C. Begin by writing down the equation in a physical note or digital note-taking software. This will help you visualize the equation without needing to backread the question. When you have finished writing down the equation, check whether the equation has one variable or two. You will be able to tell by the presence of X and Y coordinates in the equation. If the equation follows the Ax + By + C = 0 format, it will require specific X and Y values that you can obtain from a different formula. Only use this how-to with equations that use the standard form of the linear equation or the standard one-variable linear equation. When you have finished examining the equation, you must simplify it to its simplest form. This means that if there is a common factor among the three terms, you must divide them all by that common factor. After you have simplified the equation, you must isolate and move one of the variables to the other side of the equals sign. The equation will come in the form of x = C – B, Ax = C – By, or By = C – Ax. After isolating one of the variables, you must substitute it with specific values. For example, if 2x = 2 – Y is the linear equation, we can set x = 0. The equation will then become 2(0) = 2 – Y, which gives Y = 2. Repeat these steps until you have up to four sets of X and Y coordinates; note that each set will be a pair. Following the example above, one set of X and Y coordinates is (0, 2). Once you have four sets of these coordinates, graph the linear equation on the Cartesian plane using these coordinates as your reference points. Afterward, you will connect all the reference points to create a straight line. Linear equations are equations that can describe how one variable affects another variable in a straight line, hence the word linear. This means that the effect is stable and works at a steady, predictable rate, which is very important, as there are effects that are unstable and unpredictable. Linear equations allow scientists, engineers, and everyday people to run quick calculations on specific objects, events, and phenomena. Without linear equations, we would not be able to predict, establish, and study specific phenomena. Linear equations have many uses in everyday life, by taking the standard form of a linear equation Ax + By = C and substituting it with different variables that we can find in real life. For example, suppose you rent an apartment for an unknown amount of time that requires you to pay a base 500 USD rent plus an increment of 25 USD per month.
You can make a linear equation of 500 + 25m = X, where m is the number of months you have stayed in the apartment and X is the total cost of the rent. With this, you can easily predict and estimate the overall cost of the rent you will have to pay using simple substitution. Yes, you can graph irrational numbers on a number line. Irrational numbers are real numbers that, unlike rational numbers, cannot be written in fraction form. We can graph these numbers on a line because irrational numbers have specific values attached to them, even though their decimal expansions never end. If we take π, whose value is 3.1415926…, and graph it on a line that runs from one to five, then π will rest between the numbers three and four. To be even more specific, π will be located between 3.1 and 3.15, drawing close to 3.15. Linear equations are equations and solutions that describe the direct relationship between two variables or values. These two values are often represented by the letters X and Y, which can be solved to obtain four points that will create a straight line.
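The rent example above translates directly into code; the function below is just the article's model X = 500 + 25m evaluated at a few month counts:

```python
# The rent example as a linear model: total cost X = 500 + 25*m.
def total_rent(months: int, base: float = 500.0, monthly: float = 25.0) -> float:
    """Base rent plus a steady per-month increment."""
    return base + monthly * months

for m in (1, 6, 12):
    print(m, total_rent(m))  # 525.0, 650.0, 800.0
```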
https://www.examples.com/business/linear-equations.html
Many important reactions in chemistry and biochemistry are pH-dependent, meaning that the pH of the solution can play an important role in determining whether and how rapidly a reaction takes place. Consequently, buffers (solutions that help keep the pH stable) are important for running many experiments. Sodium acetate is a weakly basic salt and the conjugate base of acetic acid, or vinegar. A mixture of sodium acetate and acetic acid makes a good buffer for weakly acidic solutions. A few different ways exist to prepare an acetate buffer, but one method in particular is straightforward and relatively safe.

You will need:
- 1 molar solution of acetic acid
- 1 molar solution of sodium acetate
- Paper
- Calculator
- Pencil
- Beaker
- Calibrated pH meter
- Graduated cylinder

Another way to make an acetate buffer is by adding sodium hydroxide to an acetic acid solution until you reach the desired pH. Sodium hydroxide is a strong base and therefore more dangerous to work with, however, so the above procedure is preferable. Vinegar and sodium acetate are eye irritants and mild skin irritants. Do not bring either in contact with eyes or skin.

Determine how much buffer you need and what molarity you need for your buffer. The molarity of the buffer is the number of moles of solute, or substance dissolved in solvent, divided by the total volume of the solution. Sodium acetate will dissociate into sodium ions and acetate ions when it dissolves in water. Consequently, the molarity of the acetate plus the molarity of the acetic acid is the total molarity of the buffer. The molarity you'll need depends on the kind of experiment you're trying to perform and will vary for different experiments. The amount of buffer you need will vary as well, so check with your instructor or check the protocol to see what you need.

Determine the ratio of acetic acid concentration to acetate concentration using the Henderson-Hasselbalch equation, pH = pKa + log (acetate concentration / acetic acid concentration). The pKa of acetic acid is 4.77, while the pH you need will vary depending on your experiment. Since you know both pH and pKa, you can plug these values in to find the ratio of the concentrations. For example, assuming that you need a pH of 4, you could write the equation as 4 = 4.77 + log (acetate/acetic acid), or –0.77 = log (acetate/acetic acid). Since log base 10 of x = y can be rewritten as 10^y = x, acetate/acetic acid = 0.169.

Use the ratio of the concentrations and the buffer molarity to find the molarity you need of each chemical. Since molarity of acetate + molarity of acetic acid = buffer molarity, and since you know the ratio of acetate to acetic acid from Step 2, you can substitute this value into the buffer molarity equation to find the molarity of each component. For example, if the ratio of the concentrations is 0.169, then 0.169 = acetate/acetic acid, so (0.169) x acetic acid concentration = acetate concentration. Substitute (0.169) x acetic acid concentration for acetate concentration in the buffer molarity equation and you have 1.169 x acetic acid concentration = buffer molarity. Since you know the buffer molarity, you can solve this to find the acetic acid concentration, then solve for the acetate concentration.

Calculate how much acetic acid and sodium acetate you need to add. Remember that when you're diluting a substance, M1 x V1 = M2 x V2, meaning that the original volume times the original molarity = the final volume times the final molarity.
In Step 3, you found the molarity of acetic acid that you need, so you have M2. You know how much buffer you need, so you have V2. You know the molarity of the acetic acid (1 M), so you have M1. You can use these numbers to solve for V1, the amount of acetic acid solution you should add, then do the same for the sodium acetate, which is also a 1 M solution. Using the graduated cylinder, measure out the volume of sodium acetate you calculated in Step 4 and add it to the beaker. Do the same for the acetic acid. Now add enough water to bring the total volume of the solution up to the total buffer amount you need (the V2 amount from Step 4). Stir or gently swirl the solution to ensure it's well-mixed. Test the pH with your pH meter to ensure you have the right pH for your experiment.
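Here is the whole calculation as a script, using an example target of one liter of 0.1 M buffer at pH 4; the target molarity and volume are assumed values for illustration, since the article leaves them up to your protocol.

```python
# Acetate buffer recipe from the Henderson-Hasselbalch equation, for an
# example target of 1 L of 0.1 M buffer at pH 4 from 1 M stocks. The
# target molarity and volume are assumptions, not values from the article.

pH, pKa = 4.0, 4.77
buffer_molarity = 0.1    # mol/L, example target
total_volume_l  = 1.0    # L, example target
stock_molarity  = 1.0    # both stocks are 1 M

ratio = 10 ** (pH - pKa)                      # acetate / acetic acid
acid_molarity    = buffer_molarity / (1 + ratio)
acetate_molarity = buffer_molarity - acid_molarity

# Dilution: M1*V1 = M2*V2  ->  V1 = M2*V2 / M1
acid_volume_l    = acid_molarity * total_volume_l / stock_molarity
acetate_volume_l = acetate_molarity * total_volume_l / stock_molarity
water_volume_l   = total_volume_l - acid_volume_l - acetate_volume_l

print(f"ratio = {ratio:.3f}")                             # 0.170
print(f"acetic acid stock:    {acid_volume_l*1000:.1f} mL")    # ~85.5 mL
print(f"sodium acetate stock: {acetate_volume_l*1000:.1f} mL") # ~14.5 mL
print(f"water to volume:      {water_volume_l*1000:.1f} mL")   # ~900 mL
```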
https://sciencing.com/prepare-acetate-buffers-7197677.html
To begin this lesson, I will ask the students to read a Chocolate Chip Cookie Recipe. Who doesn't enjoy chocolate chip cookies? This will engage the students immediately. As they read this recipe, they will notice missing ingredients and directions on baking the cookies, which will make the recipe impossible to follow. My goal in this is to get the students to understand that, just as in a successful batch of chocolate chip cookies, certain components and elements are required to make a successful fiction story. They will see that without each "element" you can and will have a bad batch! I will have the students work independently to read the recipe and answer the questions. After the students have had the opportunity to answer the questions, or after about 3 minutes, I will have the students share their responses with their Shoulder Partner. This will allow them to discuss and add to their thoughts. Next, I will ask a few students to share their responses aloud with the class. I will use this time to discuss the recipe with the class and ask why it is important to have all the "ingredients" when making a recipe. Finally, I will connect it to fiction by asking the students to explain what I mean when I say "A good story is like a recipe, you need all the ingredients to make it right." What ingredients are in a story? I will give the students think time to recall the "ingredients" in a story. This will lead into our instruction. If you would like to see how I used the advanced organizer, click the link: Recipe For Fiction Advanced Organizer. To really involve the students and to activate background knowledge, I will have the students work with their Shoulder Partner to brainstorm all the "parts" of a story that they can recall. While the students are doing this, I am informally assessing their knowledge and current understanding of the elements of fiction. This will inform me of which "parts" I need to spend more time on. I will circulate the room, listen to conversation, and assess their knowledge. Some students may need a little prompting to recall the information. I would prompt the students by saying any of the following: What do we call where the story takes place? Who is in the story? What is the problem of the story? Allow the students 3 minutes to complete this brainstorm. Three minutes should be long enough to recall the information but not so long that they will get bored! Then, as a class, share and discuss. I would create my list on the board, leaving space for further definitions and examples. Next, I will pass out the Elements of Fiction Guided Notes sheet and have the students follow along with the Recipe for Fiction PowerPoint. Sixth grade is a big transition year, and the guided notes help the students learn how to take notes. They give the students some guidance and teach them what is important to write down. Using PowerPoint is very visual and allows for easy review for all students. As we are taking notes, I will ask the students to give examples of each element. At this point, I will have them work with their shoulder partner. This will provide them more support. I will only have the students spend time on the elements I assessed during the brainstorm as needing more attention; there is no need to review the elements they have mastered. Finally, I will complete the guided notes and PowerPoint with the students. Now it's time for the students to see how the elements of fiction all come together to create a story.
Pass out the story "Eleven" by Sandra Cisneros to the students. I display the story using the Smart Board so I can interact with the text. I will read the story aloud to the students. As I am reading, I will stop to identify the elements of fiction within the story. This modeling is important for them to see and understand. I will underline the text to demonstrate how I am working with the text to make the inferences needed to understand the plot, character traits, setting, and conflict. I will have the students underline their text as well. Through observation from the beginning of the year, it appears that the students are not used to interacting with the text closely. Not only will this help the students see how to identify all the elements of fiction, it will allow them to practice the skill of interacting with the text. This will prove to be important once we start doing more reading response writing. As I read through the entire story, I underline the parts of plot and discuss their importance to the development of the story. I also model how I identify the climax by locating the conflict. I tell the students the conflict is the key to identifying the climax. Once we know the conflict, we have to locate where that conflict changes, and that is usually our climax. Climax is often difficult for students to figure out in more complex text, so it is important to give them that key to assist them when they are working on their own. For this part, I usually partner the students up with their shoulder partners. I supply each group with the story "The Three Little Pigs." I use this story because it is very easy and familiar. The students are able to focus on identifying the elements of fiction versus focusing on comprehending. I would review the elements of fiction and how we used the text to identify all the elements in the story "Eleven." Next, I will give each pair of students the Elements of Fiction Independent Practice. This handout has the students go through and identify all the elements of fiction within the story. It forces the students to interact with the text. As the students are working, I can informally assess their abilities to identify all the elements. This will allow me to see which elements I may need to spend more time teaching. To assess the students and to allow them to process their learning, I will ask them to complete a Closure Slip.
https://betterlesson.com/lesson/490107/recipe-for-fiction?from=breadcrumb_lesson
Many of us have grown accustomed to checking nutrition labels to see how much of a certain food constitutes a serving. But when we turn to our own tried-and-true home recipes, we may find ourselves at a bit of a loss. How do we know how many servings the recipe yields, and how many calories are in each? No worries: You can still whip up your scrumptious from-scratch favorites without tossing out your healthy-eating plan. To sort out the stats on any recipe, you just need to follow four straightforward steps.

1. Weigh the Food. Before you can accurately divide your recipe into servings, you need to know how much it weighs in total. Weigh the casserole dish or platter the food will be served in before you start cooking. That way, you can subtract the weight of the empty dish from the total to get the weight of the food. Keep in mind that careful calculations when weighing foods can be especially important for those with certain health conditions, especially diabetes, according to the Diabetes Teaching Center at the University of California, San Francisco.

2. Divide the Recipe into Servings. Once you know how much the entire finished dish weighs, divide the weight by the number of servings, which is usually listed in the recipe ("serves six," or "serves eight," for example). Round the result to an easy-to-remember number to find the average serving size. For example, if your lobster mac and cheese weighs 92.86 ounces and yields eight servings, the first number divided by the second would be 11.607, which you would then round to 11.5 ounces per serving. (Divide up the food into individual servings if you don't want to use the food scale every time you'd like a helping.)

3. Calculate Calories. If your grandma's famous tuna casserole recipe does not tell you how many calories it contains, you will need to determine this yourself. One way to do this is by using a calorie counter. (Also, the calorie content of many ingredients and individual foods can be found in the USDA National Nutrient Database.) Add the calories from each ingredient to find the dish's total calorie count. The Tufts University Human Nutrition Research Center on Aging cautions against forgetting often-overlooked ingredients used in cooking, like broth, butter or oil, which could boost the calorie count considerably. Next, you can decide how many calories you want in each serving. For example, if you'd like to keep each serving to 250 calories, divide the calories in the recipe by 250. Round the resulting number to a whole number for how many servings the recipe yields. If the recipe has 1,289 calories, divide it by 250 to get 5.15, round that down to five, and you have the number of servings in the recipe.

4. Save the Information. Calculating serving size information for a recipe takes a bit of time and trouble, so you'll want to save your calculations for every go-to recipe in your arsenal. That way, you will have it on hand when you make it again. All you have to do the next time around is weigh individual servings according to the predetermined weight.
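Both calculations reduce to one-line formulas. A sketch using the article's own numbers:

```python
# Serving weight from total weight, and serving count from a calorie target.

def serving_weight(total_oz: float, servings: int) -> float:
    return total_oz / servings

def servings_for_target(total_calories: float, per_serving: float) -> int:
    return round(total_calories / per_serving)

print(serving_weight(92.86, 8))        # 11.6075 -> round to ~11.5 oz each
print(servings_for_target(1289, 250))  # 1289 / 250 = 5.156 -> 5 servings
```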
https://www.livestrong.com/article/510434-how-to-calculate-the-serving-size-in-recipes/
The recipe editor allows for the creation, editing, and publishing of recipe data. It is arguably the most important tool on the site. The editor is broken up into five main areas: base recipe data, ingredients, preparation, images, and notes, tips, and tricks. Access to the editor is controlled by a token, which can be requested and is then associated with your foodious account. The base recipe data is what you might expect: recipe title, description, prep and cook times, meal type(s), and whether the recipe is considered to be kid-friendly. The recipe status controls who can see the recipe. The current status codes are imported, public, private, and deleted. If a recipe is private, access to view it can be granted on a per-user basis via your profile. The base data also includes a yields note for recipes whose yield is unclear (e.g.: if a recipe makes 24 fig bites, the "yields" might be 4 if each serving is 6 bites and the yields note would say "One serving is 6 bites"). The next set of base data consists of Attribution, Dependent Recipes, and Pairs Recipes. Attribution is for the case where foodious needs to provide attribution for a particular recipe (in this case a link back to the original site). Dependent Recipes are recipes that need to be made in order to make the recipe you're viewing. Instead of including all of the ingredients in a sauce or dressing (that's shared among many recipes), a dependent recipe is specified, which "includes" the shared recipe. Dependent Recipes are added to the nutritional calculations for a given recipe, and are also used to determine if a given recipe is paleo, vegan, etc., as well as whether it contains gluten, soy, dairy, etc. Pairs Recipes are recipes that do not need to be made in order to make the recipe you're viewing, but that you may want to check out to see if you'd like to pair them with the recipe you're making. Ingredients are an extremely important part of foodious, as so much data is derived from them. Each ingredient in a recipe is broken out into four distinct parts: (1) amount, (2) measure, (3) ingredient name, and (4) preparation meta-data. The controls for each ingredient allow the editor to add ingredients, delete them, or make an ingredient substitution. The ingredient data that's editable here is only that which relates to an ingredient being in a particular recipe (e.g.: whether an ingredient contains gluten or is considered vegan is specified using another tool, but how much of a particular ingredient should be in a given recipe is specific to the recipe). Preparation is a set of controls to specify the steps necessary to prepare the recipe. Each step can be formatted as a header, footer, or just normal text. There's a real-time preview so you can get an idea of what this section will look like when you make your recipe public. There's a control to insert a step in between two existing steps, as well as controls for deleting existing steps and adding a new one. Recipes can have multiple images but require only one in order for the recipe to be made public. Each image can have a caption, which is optional. One image can be specified as the main image when a recipe is viewed, as well as the specific image to be used when a recipe shows up in search results. If a recipe has multiple images, there's a small carousel of image thumbnails on the main recipe view page. Notes, Tips, and Tricks works a lot like recipe steps. This is the section where the recipe owner can add tips & tricks that make the recipe easier to prepare, make it come out better, or save time.
An example might be "You can make this the night before to save time" or "If you like your food spicy, double the amount of white pepper". If you want to see the entire recipe editor as a single image, here's a link to a screen capture of an entire recipe in the editor. On the mobile web version of foodious, each section has it's own tab to make it easier to navigate.
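To make the data model described above easier to picture, here is a minimal sketch of what a single recipe record might look like, written as a Python dict. Every field name here is an illustrative assumption inferred from the description above, not foodious's actual schema.

```python
# Hypothetical sketch of a foodious recipe record.
# All field names are assumptions; the real schema is not published.
recipe = {
    "title": "Fig Bites",
    "description": "Bite-sized fig snacks.",
    "prep_minutes": 15,
    "cook_minutes": 20,
    "meal_types": ["snack"],
    "kid_friendly": True,
    "status": "private",           # one of: imported, public, private, deleted
    "yields": 4,
    "yields_note": "One serving is 6 bites.",
    "attribution_url": None,       # link back to the original site, if required
    "dependent_recipe_ids": [],    # recipes that must be made to make this one
    "pairs_recipe_ids": [],        # suggested pairings; not required
    "ingredients": [
        # each ingredient: amount, measure, name, preparation meta-data
        {"amount": 24, "measure": "whole", "name": "dried fig", "prep": "stemmed"},
    ],
    "steps": [
        {"text": "Trim the figs.", "format": "normal"},  # header | footer | normal
    ],
    "images": [
        {"url": "fig-bites.jpg", "caption": None, "is_main": True},
    ],
    "notes": ["You can make these the night before to save time."],
}
```

A structure along these lines would also explain how dependent recipes feed the derived data: nutrition and dietary flags can be computed by walking dependent_recipe_ids and merging the results into the parent recipe.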
https://www.foodious.com/blog/read/03baf4-038335
This giant cinnamon roll recipe is made with the most delicious soft dough, cinnamon-sugar filling, and a tangy cream cheese icing. Ready to go in one hour from start to finish!

The only thing more delightful than a pan of cinnamon rolls? One giant cinnamon roll. ♡ Seriously, we’ve been having such a fun time serving up this variation on my 1-hour cinnamon rolls recipe over the past year. People are downright delighted each time this giant roll emerges from the oven. And when topped with my favorite cream cheese icing, sliced into wedges, served up nice and warm, and drizzled with a little extra icing for good measure (I insist), this giant cinnamon roll will definitely be stealing the show at your next brunch! The great news for all of us is that this recipe is pretty simple to make and comes together easily in just one hour. There’s also no stand mixer required (although you are welcome to use yours if you own one), and the giant roll can be baked in a skillet, pie plate, or any other oven-safe pan that you prefer. We’ve also included detailed step-by-step instructions in the video below for exactly how to wrap this giant roll, so I recommend giving it a quick watch before you begin. If you love cinnamon rolls as much as we do in our house, you simply must give this one a try. Enjoy!!

Giant Cinnamon Roll Video

Giant Cinnamon Roll Ingredients

Before we get to the giant cinnamon roll recipe below, here are a few notes about the ingredients you will need:
- Milk: I typically use 2% cow’s milk in cinnamon roll dough. But any plain plant-based milk could also work well here.
- Butter: We will use butter in the dough, cinnamon-sugar filling and cream cheese icing. Be sure to set it out in advance so that it can be room temperature, especially in order to help it spread easily for the cinnamon-sugar filling.
- Flour: This recipe has only been tested with basic all-purpose flour.
- Sugars: We will use white granulated sugar, light brown sugar, and powdered sugar in the various elements of this cinnamon roll recipe.
- Salt: Just a hint of fine sea salt will help the flavors in the cinnamon roll dough to shine.
- Yeast: I recommend using instant (a.k.a. “rapid rise”) yeast to help the dough rise quickly.
- Egg: One large egg will help the dough be extra moist and fluffy.
- Cinnamon: Starring ingredient, of course, in the cinnamon-sugar filling.
- Cream cheese: I always make my cinnamon roll recipes with a cream cheese icing, as its tanginess pairs so well with the sweet rolls. But if you prefer a basic icing (without cream cheese), see notes in the recipe below.
- Vanilla extract: Finally, we will use a touch of vanilla to round out all of those delicious flavors in the cream cheese icing.

Giant Cinnamon Roll Tips

Full instructions for how to make this giant cinnamon roll are included in the recipe below, but here are a few extra tips to keep in mind:
- Use a thermometer: I always highly recommend using a cooking thermometer to ensure that your milk mixture is the proper temperature (around 110°F) before adding it to the yeast. If the milk mixture is too hot, it will kill the yeast. Too cold, and it will not activate the yeast.
- Use room temperature butter: As mentioned above, it’s important that your butter be room temperature so that it can be easily spread over the dough when adding the cinnamon-sugar filling.
If you didn’t have time to set the butter out in advance, you can microwave the butter unwrapped on a plate in 10-second intervals, flipping the butter 90 degrees for each interval so that it can heat evenly on all sides, until softened.
- Watch the video: I really recommend watching the video before making this recipe to get a clear visual on my recommended method for wrapping the dough, since it can be a bit unwieldy trying to wrap a cinnamon roll this large. Also, be sure to wrap the dough fairly loosely so that it will have room to expand while baking. And don’t stress about perfection…it will be delicious however it looks!
- Multitask: If you legit want to make this recipe happen in one hour, you’ll need to do some multitasking and prep the cinnamon-sugar filling while the dough is rising, prep the cream cheese frosting while the rolls are baking, etc.

More Cinnamon Roll Recipes

Looking for more cozy homemade cinnamon roll recipes to try? Here are a few of my faves:

Description

This giant cinnamon roll recipe is made with the most delicious soft dough, cinnamon-sugar filling, and a tangy cream cheese icing. Ready to go in one hour from start to finish!

Dough:
- 1 cup milk (I used 2% milk)
- 1/4 cup butter, softened
- 3 cups all-purpose flour
- 1/4 cup granulated sugar
- 1/2 teaspoon salt
- 1 envelope (2 1/4 teaspoons) instant (“rapid rise”) yeast
- 1 egg

Cinnamon-Sugar Filling:
- 1/4 cup granulated white sugar
- 1/4 cup packed brown sugar
- 2 tablespoons ground cinnamon
- 1/4 cup butter, softened

Cream Cheese Icing:
- 4 ounces (1/2 cup) cream cheese, softened
- 3 tablespoons butter, softened
- 1 teaspoon vanilla extract
- 1 1/2 cups powdered sugar
- 1–2 tablespoons milk (if needed)

Instructions:
- Heat the milk and butter. Combine milk and butter in a microwave-safe bowl. Microwave* on high for 1 minute, then remove and stir. Continue heating in 20-second intervals, pausing after each to stir, until the butter is melted and the milk is warm to the touch but not hot. (It should be around 110°F; I recommend measuring the temperature with a cooking thermometer.) If the mixture is too hot, just wait a few minutes for it to cool.
- Combine dry ingredients. In a separate bowl, whisk together flour, granulated sugar and salt until combined.
- Mix the dough. In the bowl of a stand mixer fitted with the dough-hook attachment (or see note below to mix the dough by hand), add the warm milk mixture and sprinkle the yeast on top, then give the mixture a brief stir. Add the flour mixture and egg, and beat on medium-low speed until combined. If the dough is sticking to the sides of the bowl, add more flour (up to an additional 1/2 cup), until the dough begins to form a ball and pulls away from the sides of the bowl. (Use no more than 3 1/2 cups of flour total.) Continue beating for 5 minutes on medium-low speed. Remove dough and form it into a ball with your hands. Place it in a greased bowl and cover with a damp towel. Let the bowl rest for 10 minutes.
- Mix the cinnamon-sugar filling. While the dough rests, make your filling by whisking the granulated sugar, brown sugar and ground cinnamon together in a small mixing bowl until combined.
- Roll out the dough. Once the dough is ready, turn it out onto a floured work surface. Use a floured rolling pin to roll the dough out into a large rectangle, about 12 x 18 inches in size. Use a knife or an offset spatula to spread 1/4 cup of softened butter out evenly over the entire surface of the dough.
Then sprinkle the dough evenly* with the cinnamon and sugar filling, and gently press it into the dough.
- Wrap the giant cinnamon roll. Use a pizza cutter (or a knife) to slice the dough into six 2 x 18-inch strips. Gently roll up the first strip and place it in the center of a greased 9-inch cast-iron pan or pie plate.* Matching up the ends, add the second strip of dough and loosely wrap it around the first to create one large cinnamon roll. Repeat with the remaining strips, transferring the cinnamon roll to the pan once it gets too large to handle, and then finish wrapping the final strips in the pan. (See the video for a demonstration. Also, don’t worry if some of the cinnamon-sugar filling spills out.)
- Let the dough rise. Cover the dish with a damp towel, and leave it in a warm place to rise for 25 minutes. Heat the oven to 350°F.
- Make the icing. While the dough is rising, whisk together the cream cheese icing ingredients in a mixing bowl until smooth. If it seems too thick, add an extra tablespoon of milk at a time until it reaches your desired consistency. If it is too thin, add extra powdered sugar.
- Bake. Once the cinnamon roll has risen, uncover the dish. Place it on the center rack of the oven and bake for 30 to 35 minutes, or until the cinnamon roll is golden and cooked through. Remove and let cool on a wire rack for at least 5-10 minutes.
- Frost. Spread your desired amount of icing evenly over the top of the cinnamon roll. Slice it into wedges and serve warm, drizzled with extra icing if desired.

Notes

Stovetop option: If you do not have a microwave, you can just heat the mixture on the stove over medium heat in a small saucepan.
Kneading the dough by hand: Alternatively, you can stir the ingredients together in a large mixing bowl, then turn the dough out onto a floured surface and knead by hand for 5 minutes until smooth.
Skillet/baking dish: You are welcome to use any oven-safe skillet or baking dish that you prefer here. The recipe is designed to fit well in a 9-inch round pan, but it should also work just fine in a slightly larger or smaller pan.
https://spoonfulofhealthy.com/salad/giant-cinnamon-roll-recipe-ready-in-just-1-hour/
Need a recipe to use up leftover canned pumpkin? These gluten free low carb pumpkin pancakes made with almond flour are a nice breakfast treat.

A lot of recipes don’t use a whole can of pumpkin. So, what do you do with the leftover puree? I tried putting a little in my coffee and sprinkling pumpkin pie spice on top. That was pretty good, but it didn’t use up much of the leftovers. Another way to use it up is to make a quick paleo pumpkin mug cake, but even that doesn’t use much. That’s why I’ve had to find multiple ways to use up the leftovers. When I searched for ideas on the internet, I found some pancake and waffle recipes. I wasn’t really into pulling out the waffle maker, so I made some low carb pumpkin pancakes with almond flour. It doesn’t take long to make pancakes, and they are perfect for making in larger batches so you can freeze some for later.

All you do is blend all the ingredients in a large bowl. You could probably use a blender as well to make sure the batter is well blended. But I find using a whisk is just as effective. The batter for these low carb pumpkin pancakes is pretty thick. I just thin out each batter mound on the griddle. You could also add in a bit more almond milk. I may try to make some waffles too, as they are better suited for heating up in the toaster.

To reheat the pancakes, I just pop them into the microwave for about 45 seconds. These tasty bites can be warmed up quickly before work or school. I’m always in a hurry, so I just pack them in my cooler bag and enjoy them at work. Sometimes, I don’t even bother to heat these low carb pumpkin pancakes up before eating. They are delicious right out of the fridge. The pancakes are okay by themselves, but they are really good with some sugar free maple syrup on top. I also like to spread on some grass-fed butter. I think the recipe could use a little more tweaking with the spices and sweetener. For some, the spices may be a bit much and the sweetener not quite enough in these low carb pumpkin pancakes.

With my recent discovery of being sensitive to egg whites, I’ve started making my pancakes with yolks and an egg white replacer. The best replacement I’ve found for egg whites is aquafaba and a little psyllium. If you’ve never heard of aquafaba, it’s the cooking liquid remaining from beans and other legumes. I use the liquid from canned organic chickpeas. And there are very few carbs in the liquid. Egg white sensitivity is actually very common and the symptoms are very subtle. I never would have known about it if I hadn’t taken a food sensitivity test. Do you have a favorite way to use up leftover canned pumpkin? Let me know in the post comments how you like to use it.

Need a recipe to use up leftover canned pumpkin? These gluten free pumpkin pancakes made with almond flour are a nice breakfast treat. Mix together almond flour, spices, salt, and baking powder. Stir in the rest of the ingredients (you may want to leave out the egg whites) until well combined. For light and fluffy waffles, it's best to separate the egg whites out, whip them to a stiff peak, and fold them into the batter. Drop by heaping tablespoonfuls onto the pan and cook on medium heat, flipping each pancake once to cook each side.

This just became my family’s favorite pancake recipe! So delicious and easy to whip up any time of the day.

I made these delicious pancakes and I love them! Having made regular ones for the grandkids, I adjusted this recipe by adding 1/2 tsp of baking powder and 1 tbs of cider vinegar. This made them incredibly light & fluffy!
Thank you for your efforts, all of your recipes are top notch!

Thanks so much Melanie! So glad the grandkids enjoyed the pancakes too.

We give a generous teaspoon of canned pumpkin to each of our dogs with their daily feeding. We are looking forward to making the pancakes, and would never have to worry about leftover pumpkin.

The dogs must love that!

Confused on the nutritional info for the Low Carb Gluten Free Pumpkin Pancakes. The recipe says it serves 4, making 10-12 pancakes, which would be 2 to 3 pancakes per serving. But the Nutrition Facts say a serving is 1 pancake. Does this recipe also work for waffles?

Nutritional data is per pancake, not per serving, on the pumpkin one. I can fix that if it's too confusing. Whipping the egg whites separately should work for waffles too.

Fabulous recipe! They taste amazing and you would be hard pressed to find anyone who could say they can tell they are low carb. My only issue is they are very dense, so my husband and I decided to whip the egg whites to a stiff peak and fold them in last, and it made an amazing difference. They were light and fluffy. You can also use this recipe for waffles just as easily.

That’s a great tip! Thanks for sharing the idea to whip up the egg whites and fold them in to make light and fluffy pancakes. I’m adding that as a note to the recipe.

I don’t currently have access to stevia glycerite or erythritol, but I do have granulated stevia “In the Raw” and Splenda. I’m more or less still starting out on low-carb and haven’t yet really gotten a ‘feel’ for the different sweeteners and how to use them. Can you recommend how much I would use of those to obtain the best effect as a substitute? Thanks!

I’d say about 2-3 tablespoons of Splenda or equivalent.

Looks like a good basic recipe to start with. A far cry better than my fritters, which fell apart! I’ve found that my almond flour pancakes (haven’t tried them with pumpkin yet but will very soon – yummm!!) need more than a pinch of salt – it really makes a difference; brightens up the flavors tremendously. Maybe that’s the case with this recipe?

Thanks for the suggestion, Kate! I will try that next time. Salt is a wonderful flavor enhancer so it might be what’s missing.
https://lowcarbyum.com/low-carb-gluten-free-pumpkin-pancakes/
There are practically no words to describe how good this cake is. It’s that good. This cake…oh, this cake. When we voted on which cake to make for April, I was so happy my fellow bakers chose this one. I hurried and made it as soon as I could. I used Twinings and there were plenty of tea bags left to make another cake (which I did) and even another (which I will). If I drank tea, I’d drink Chai each and every day. Man it was good in this cake. To add to the chai tea flavor, the recipe calls for adding my favorite spice of all–cardamom and some cinnamon. I might go heavier on the cardamom next time. Neither of my two cakes rose up very much. With the first cake, each layer sank horribly in the middle. It was so moist and delicious that we didn’t care. But I didn’t take any pics because it wasn’t so pretty. The second attempt was better. The cake layers still sank a bit in the middle and it was drier. I ended up brushing the layers with a simple syrup. (That was probably a mistake even though it did make it moist.) Anyway, I’m going to make it again and try to adjust the recipe for altitude. I know that’s why it was dry and didn’t rise up so much. The recipe uses a huge amount of baking powder. I just need to keep trying. :) The frosting makes the cake. It’s made in the food processor and whipped to creamy perfection. We ate the extra frosting with blackberries. And it made me think that I need to make a Honey-Ginger Ice Cream to go with blackberries. Check out the Cake Slice Bakers blogroll to see all the other lovely cakes. Oh? You want the recipe? Click here.
https://cafejohnsonia.com/2009/04/cake-slice-bakers-chai-cake-with-honey-ginger-cream.html
I made this dish last night……..It was GREAT!!!!!! Here’s a clever trick to pack veggies into these Latin-inspired stuffed potatoes: Chop the carrots, onion and tomato into small pieces that go almost unnoticed when cooked with the ground beef. I have to say it’s a very hard situation, but at the same time we value these moments more because we do not see each other every day or every week. I hate feeling rushed when I cook, so Monday through Friday I use those crockpot meals and make enough that we have lunch the next day. Do not let ground beef cook for long periods of time in its own grease. We make our version with a base of ground beef, canned tomatoes, green pepper, and onion, plus a healthy amount of umami-rich Worcestershire sauce. Giada packs sirloin patties with classic pizza flavors, offering a myriad of topping suggestions to accompany them on toasted buns. This is the one dish Ree Drummond makes ahead for a group more than any other. Try this unique recipe at home by making an easy, tasty slow-cooker chili and then serving it over spaghetti noodles. Go ahead and take that ground beef out of the freezer, because we have rounded up some of the easiest ways to cook it for dinner tonight. Try Rachael’s easy meatloaf muffins and enjoy all the flavors of a traditional meatloaf in a cute individual-portion size. Below you can find 20 Quick and Easy Ground Beef Recipes that take no time at all! The ground beef takes only minutes to brown but you still get that delicious Korean flavor in this meal. In a small bowl whisk brown sugar, soy sauce, sesame oil, ginger, red pepper flakes and pepper.
http://denniszaki.com/simple-ground-beef-recipes.html
Easy Pakora Recipe

Before we go into the details of our easy pakora recipe, you might be wondering what exactly a pakora is. That is a very good question! Pakoras are made by combining grated or finely chopped vegetables (and perhaps other ingredients) with Indian spices, then cooking the results in little balls until crispy, either frying or baking them.

Hot and Typically Indian

This crispy Indian snack is best served hot. There are different types of pakora, and you can use meat to make it or even keep it vegetarian. These treats may be served alongside other Indian favorites, such as a chicken curry and rice, or you can enjoy them as an appetizer. Some people like to order onion bhaji in the Indian restaurant as well as pakoras, because these two are nice enjoyed together, perhaps with a dipping sauce. I like to get poppadoms with the trio of sauces (mango chutney, mint raita and the onion one) and either pakoras or onion bhaji too. I always wondered how easy it was to make an easy pakora recipe at home, and now I know! I suppose there are trickier recipes you can use, but this recipe is pretty easy to follow and the results are lovely.

Easy Pakora Recipe Ingredients

First you need to make sure you have all the ingredients to make this tasty dish. Here we are going to make vegetarian pakoras, so you will need some tasty vegetables to make them. I like to use carrot, rutabaga and shallot in my easy pakora recipe for a nice sweetness, as well as spinach so you get that lovely green color and a contrasting flavor. A touch of garlic is also good, and you can saute that in the pan with the veggies. Use a nonstick pan to saute them, or a well heated stainless steel pan if you prefer, either a pot or a wok. Don’t be scared to change the ingredients in this easy pakora recipe, because all kinds of vegetables work here, and you can base the pakoras on carrot and onion for that typical flavor or experiment with different ingredients. See what you have in the refrigerator to use up. You will need to keep stirring the vegetables, so it is best to use something with medium or high sides, especially if you tend to flick things out of something with lower sides like I do! The first thing to do is switch the oven on, although if you are going to be prepping your veggies by hand (lots of grating!) then this can wait a while.

Preparing the Veggies for Your Easy Pakora Recipe

Grate or chop your vegetables using a food processor if you have one; otherwise you can use a cheese grater or do them by hand. You might want to buy some of them ready-grated if you can get those where you live, just to save time and ensure the pieces are all of a uniform size. Add some cooking spray to your pan and add the grated veggies and garlic, then keep it moving over a medium heat. Don’t worry about browning, because some browning is fine with an easy pakora recipe. Obviously don’t let the mixture burn! Burning does not equal browning! Keep the mixture moving, using a wooden spoon, and within a couple of minutes the vegetables should be crisp-tender and slightly browned, so then it is time for the next step. You are now ready to add the spices, so throw in your salt, pepper, turmeric, garam masala and chili powder and keep stirring the mixture until the spices are well incorporated. Now some people like spices more than others, and others like a spicy flavor more than others. What that means is you are the cook and it is your choice how much of each spice to add. Don’t want it too spicy? Then add no more than a tiny pinch of chili powder.
And so on.

Easy Pakora Recipe: Adding More Ingredients

Once you have blended those spices in, you need to take the mixture off the heat, because you don’t need to cook it any more. Stir in some tomato concentrate (or tomato ‘puree’ if you’re a Brit) and the spinach. You can also stir in the flour (use all-purpose flour or ‘plain white’ flour here). The mixture should be starting to look and smell really good at this point, and your mouth should be watering, but don’t taste it. It’s not ready yet! The most important thing to do at this point is just to get the ingredients right, and not just what you are adding but also the consistency. You don’t want these to be too firm or too liquid, but a happy balance between the two, and you don’t want too much of one kind of vegetable, for example, or too much tomato.

Easy Pakora Recipe: Achieve the Perfect Texture

Add a little water too. Just a splash of water ought to be enough. Don’t add too much; just add a little at a time. The consistency should be like a thick batter. The pakora mixture is going to be separated into little balls and baked like that, so the batter should be thick enough for these balls to hold together. You don’t need to over-stir the mixture, but everything should be well combined, so make sure you give it a good stir to get a nice mixture of each ingredient in each pakora.

Easy Pakora Recipe: Get Ready for Baking

If your oven isn’t already heated up, turn it on now and let it reach 400F. Grease a baking sheet with cooking spray and then arrange dollops of the batter mixture on it. The number you get depends on the size you make them, but you should get about 8 pakoras on there. Arrange them as far apart from each other as possible. You don’t want them to stick together. They should hold their shape nicely if you’ve got the consistency of the batter right.

Bake Until Golden Brown, Crisp and Irresistible!

Spray more cooking spray over the pakoras and then pop them in the oven until they are golden brown and crispy. Now, depending on your oven this might take 20 minutes or it might take 30, but 25 minutes is a good average. We live in Holland and have a ‘mini-oven’ (glorified microwave – shudder) so things usually take longer. A frittata took more than an hour in there once, but I digress. My sister Kath came up with this recipe and she lives in the UK and has a ‘normal’ oven, so you can count on the 25-minute estimate being more or less right! You will be able to see when the pakoras are done because they will look done. Wait until they are a lovely golden brown color and they look crunchy rather than soft. That’s when to serve them! And enjoy them you certainly will!

Calories in an Easy Pakora Recipe

Pakora calories vary hugely, especially if you choose to add meat to the mixture or deep fry them. The following recipe makes 2 generous servings and each serving offers 175 calories, which is very reasonable. The reason they are so low in calories is that they are mostly made of vegetables. The amount of spray oil you use can alter the calories, and in fact these might be closer to 150, but if I’m counting calories I prefer to overestimate rather than underestimate, so follow the recipe and assume you are having 175 calories to be on the safe side. Even if you aren’t counting pakora calories, this is a nice and easy pakora recipe. You aren’t sacrificing anything by baking rather than deep frying. The taste is still amazing, and the texture is still crunchy!
- 1 minced garlic clove - 1 grated carrot - 4 grated shallots - 2 oz (55g) grated rutabaga (swede) - 3 oz (90g) fresh or thawed (drained) spinach - 1 tablespoon tomato concentrate - 30 ml (1 fl oz) water - Pinch each of salt, black pepper, turmeric, garam masala and chili powder - 40g (1½ oz) all-purpose flour - Small handful cilantro (coriander) leaves - Frylight or PAM (or similar) cooking spray, as needed - Saute the garlic, shallots, carrot and rutabaga in cooking spray until beginning to brown. - Add all the spices (not the cilantro/coriander) and cook for 1 minute. - Take the pan off the heat and stir in the spinach, tomato concentrate, flour and enough water to make a batter consistency. - Add the cilantro and arrange small dollops on a greased baking sheet. - Spritz with cooking spray and bake for 25 minutes at 400 degrees F (200 degrees C) or until browned and crispy. If you would prefer a more traditional recipe, the following video shows you how to make authentic vegetable pakoras and fry them in oil. You might like to compare both recipes and see which you prefer!
http://victoriahaneveer.com/recipes/appetizers/vegetarian-appetizers/easy-pakora-recipe/
Cheese souffle is one of those dishes that is considered difficult enough to be used as a test on cooking competitions. Contestants dread them. So I thought it was high time to try one. And I stressed through it enough that my husband thought I was being a bit nutty. 🙂 The result is that it is time consuming but not overly difficult. I found a recipe on epicurious.com for this and used Jarlsberg and some garlic chives that are just starting to come up in our garden. Easy to follow, but why does it have to be another recipe that uses 4 egg yolks but 5 egg whites? Tiny pet peeve of mine.

The first part of the recipe, through adding the egg yolks, can be made a few hours ahead of time if need be. Helpful when the dogs need to be walked. 🙂 It called for a 1 1/2 quart dish, which I used, but I think in the end it was too big, so next time I’ll try something a bit smaller. Butter the dish and then coat with finely grated parmesan cheese. A tip from Alton Brown is to freeze this for about 5 minutes to set it.

To start, warm a cup of milk in a small sauce pan. Warm it to steaming but not boiling. Their recipe called for whole milk but we used 2% and it was fine. Also all we had. In a larger sauce pan melt 2 1/2 T butter. Once the butter is melted add 3 T of flour and whisk it all together. Cook this roux until it isn’t raw anymore. Don’t brown it. Mine turned beige but it was ok. Remove from the heat for about a minute, then whisk in the warmed milk. Whisk until it is blended. Put it back on the stove and reheat on low to medium while whisking the mixture. You want to keep stirring until it thickens. This will take a few minutes. And when it thickens it happens quickly. Remove from the heat once again. Add 1/2 tsp of paprika and 1/2 tsp of salt. Just a sprinkle of nutmeg, then blend it all together. Add one egg yolk at a time and blend well each time. Again, this calls for 4 egg yolks. Set this aside to come to room temperature. Then coarsely grate a cup of Jarlsberg. Like brown sugar, this should be a tightly packed cup. And chop the chives.

When you are ready to do the second part of the souffle, pre-heat the oven to 400F/204C. I did it on convection but I don’t recommend it, as it browned the top well but the very centre of the souffle was a bit undercooked. So next time I will bake it on the regular setting so the browning and the centre finish together. Take 5 egg whites and use a mixer to whip them until “stiff but not dry” according to the recipe I was following. Take 1/4 of the egg whites and gently fold into the souffle mixture that has cooled. For these final steps you don’t want a heavy hand. Gradually add the cheese, chives, and the remaining egg whites, gently folding as you go. You want a light airy mixture. Pour it into the dish and place it into the oven. Turn the temp down to 375F/190C. Set the timer for 25 minutes. As the recipe states, “Do not open the door for at least 20 minutes!” That is really hard for me not to do! 🙂

When it is golden brown remove from the oven. Again, I would use a smaller dish for this. Also, as I did it in convection, I removed it a few minutes before the 25 minutes was up, so the centre was just a bit undercooked. This recipe served 4 of us and we served it with salad. It was a delicious light meal.
https://ourgrowingpaynes.blog/2013/04/22/jarlsberg-and-chive-souffle/
Topic Modelling to Identify Themes

In this recipe we use 3 ebooks to show how topic analysis can identify the different topics each text represents. We will use the Latent Dirichlet Allocation (LDA) approach, which is the most common modelling method for discovering topics. We can then spice it up with an interactive visualization of the discovered themes. This recipe is based on Jinman Zhang's notebook found on TAPoR. NB: Any number of texts can be used; we choose 3 for this recipe.

- Python 3
- 3 texts with different themes (see How to Find Electronic Texts)
- Natural Language Toolkit (NLTK)
- Matplotlib
- NumPy
- pyLDAvis
- Corpora
- TAPoR provenance

- Extract the 3 text files and make a list containing all the texts.
- Preprocessing - we iterate through the list, and for each text we:
  - tokenize sentences into words
  - remove any punctuation and stopwords
  - lemmatize the text
- The processed corpora are then converted into a matrix by:
  - first, getting the top vocabulary items ordered by term frequency across the corpus. Since the tokenized texts can be too big, we can simply take the top 1000 results.
  - then learning the vocabulary by using the results above as the training set
  - finally, returning the transformed matrix.
- Specify the number of topics (typically the same as the number of texts, in this case 3) and a threshold for the number of top words each topic can have.
- We use an LDA modelling method and get the words from each text that most closely represent the main topic of discussion.
- Visualizations
  - Visualization is done by first getting a distribution of the topics in each text, then transforming the LDA modelling results into a document-word matrix.
  - A more advanced and interactive visualization is done using the pyLDAvis ingredient, which takes the LDA modelling results as input.

Depending on the size of the texts collected, topic modelling will give you a fairly good idea of the texts at hand. Even without the visualization ingredients, the resulting word-sets should give you insight, especially when working with large, diverse and unstructured texts. This is a common method for doing preliminary analysis on texts and identifying themes you may want to confirm and pursue. This recipe is based on Jinman Zhang's cookbook (see TAPoR).
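To make the steps above concrete, here is a minimal Python sketch of the pipeline using NLTK and scikit-learn. It is an illustration under assumptions, not the original notebook: the file names are placeholders, and the preprocessing choices (lowercasing, keeping only alphabetic tokens) are one reasonable reading of the steps listed.

```python
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from nltk.tokenize import word_tokenize
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# One-time NLTK data downloads, if not already present:
# import nltk; nltk.download("punkt"); nltk.download("stopwords"); nltk.download("wordnet")

# 1. Load the texts (file names are placeholders).
paths = ["text1.txt", "text2.txt", "text3.txt"]
texts = [open(p, encoding="utf-8").read() for p in paths]

# 2. Preprocess: tokenize, drop punctuation/stopwords, lemmatize.
stops = set(stopwords.words("english"))
lemmatizer = WordNetLemmatizer()

def preprocess(text):
    tokens = word_tokenize(text.lower())
    tokens = [t for t in tokens if t.isalpha() and t not in stops]
    return " ".join(lemmatizer.lemmatize(t) for t in tokens)

corpus = [preprocess(t) for t in texts]

# 3. Build the document-term matrix, keeping the top 1000 terms by frequency.
vectorizer = CountVectorizer(max_features=1000)
dtm = vectorizer.fit_transform(corpus)

# 4. Fit LDA with one topic per text and print the top words per topic.
n_topics, n_top_words = 3, 10
lda = LatentDirichletAllocation(n_components=n_topics, random_state=0).fit(dtm)
vocab = vectorizer.get_feature_names_out()
for i, weights in enumerate(lda.components_):
    top = [vocab[j] for j in weights.argsort()[::-1][:n_top_words]]
    print(f"Topic {i}: {', '.join(top)}")
```

From there, pyLDAvis can render the interactive view by taking the fitted model, the matrix, and the vectorizer as input; depending on the pyLDAvis version installed, the scikit-learn bridge is exposed as pyLDAvis.sklearn or pyLDAvis.lda_model.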
http://methodi.ca/recipes/topic-modelling-identify-themes
Puddings are a favorite among many, and this giant Yorkshire pudding recipe will serve many people. It is simple to make and can be tailored to your liking. Follow this recipe, and you will have a delicious pudding everyone will love! This recipe is perfect for any special occasion! Made with self-rising flour, milk, eggs, and beef or chicken stock, this dish will impress your guests. So why not give it a try today? You won’t be disappointed!

What is the Giant Yorkshire Pudding recipe?

The Yorkshire pudding is a traditional English dish that dates back to the 18th century. The dish is made by baking a batter of eggs, flour, and milk in a hot oven until it forms a puff. The original recipe called for the batter to be cooked in beef dripping, but today it is usually made with vegetable oil. Yorkshire pudding is typically served with roast beef, but it can also be enjoyed with other meats or simply on its own.

How long does it take to make this recipe?

This dish takes about 40 minutes to make. It is always better to keep an eye on the recipe while it is in the oven, to avoid overcooking.

Does this recipe need any pre-cooking marination?

No, this recipe does not require any pre-cooking or marination. Mix the ingredients, pour into a baking dish or muffin tins, and bake at 425 degrees F for 30 minutes.

Giant Yorkshire Pudding Recipe Ingredients
- 2 cups Self-rising flour
- 1/2 teaspoon Salt
- 2 cups Milk
- 3 tbsp Melted butter
- 6 Eggs (beaten)
- 1 tbsp Oil
- 2 1/2 cups Beef or chicken stock

Instructions
- Preheat the oven to 425 degrees F (220 degrees C).
- Sift flour and salt together into a large bowl. Make a well in the center and pour in the milk and melted butter.
- Gradually stir the mixture until smooth. Beat in eggs, a little at a time.
- Pour batter into a 9x13 inch baking dish or giant muffin tins greased with oil.
- Pour stock over batter. Bake in preheated oven for 30 minutes, or until golden brown.
- Serve immediately.

What else can I serve with this dish?

You can serve this dish with roast beef, other meats, or simply on its own. If you want to make a sweet version of this dish, you can add fruit or sugar to the batter.

Can I add or replace some ingredients?

Yes, you can. If you want to make a vegetarian version of this dish, you can replace the beef or chicken stock with vegetable stock. You can also add or replace some other ingredients to suit your taste. For example, you could add onions, garlic, or herbs to the batter.

Some variations of this recipe:

There are many variations of this recipe that you can try.
- You could add some chopped onions or garlic to the batter for extra flavor.
- Or, you could top the pudding with some sauteed mushrooms and onions after it comes out of the oven.
However you choose to make it, this dish will be a hit!

Tips and tricks for a better recipe

First of all, to make a lower-fat version of this recipe, you can substitute skim milk for the whole milk and use low-fat margarine or butter instead of the regular butter.
- If you want a richer flavor, you can add 1/4 cup of grated Parmesan cheese to the batter.
- For a different twist, try adding chopped fresh herbs to the batter, such as rosemary, thyme, or oregano.
- If you don’t have self-rising flour on hand, you can make your own by combining 1 cup all-purpose flour, 1 1/2 teaspoons baking powder, and 1/4 teaspoon salt.
- For a heartier dish, try adding some diced cooked ham, bacon, or sausage to the batter.
- If you want a sweeter pudding, try adding 1/4 cup of sugar to the batter.
- If you want a richer texture, you can use half and half or cream in place of the milk.
- You can also add 1/2 cup of chopped vegetables to the batter, such as onions, peppers, or mushrooms.
- To make a gluten-free version of this recipe, substitute your favorite gluten-free flour for the self-rising flour.
- If you are watching your sodium intake, you can omit the salt from this recipe.
Finally, this pudding can also be baked in individual ramekins or muffin tins. Grease the ramekins or tins with oil before adding the batter. Bake for 20-25 minutes, or until golden brown.

What drink can you serve with this recipe?

For this recipe, we recommend serving it with a classic English ale. But feel free to experiment and try something different! After all, Yorkshire pudding is all about trying new things and having fun! So go ahead and pour yourself a pint. We’re sure you’ll enjoy it! Cheers!

Is this recipe good for serving as dinner or lunch?

This recipe is typically a side dish, but it could be served as a main course for lunch or dinner. If you want to serve it as a main course, we recommend adding some additional toppings such as gravy, roasted vegetables, or mashed potatoes. Thanks for trying our Giant Yorkshire Pudding recipe!

Can this recipe be refrigerated for the next day’s use?

Yes, this recipe can be refrigerated and used the next day. Reheat in a 425-degree oven for 30 minutes or until golden brown. Serve immediately.

Does the beef used have to be fresh?

You can use any beef that you like, but we recommend using fresh beef if you want to ensure that it is cooked properly. This will help to prevent the beef from drying out during cooking.

Can I make this dish ahead of time?

You can make the Yorkshire pudding batter up to 24 hours in advance if you need to. Just store it in the fridge until you’re ready to use it. However, we don’t recommend making the entire dish ahead of time, as it may not be as crisp and fluffy when you reheat it.

Is Giant Yorkshire Pudding a summer or winter recipe?

There’s no definitive answer, since different people have different preferences. Some might say that Giant Yorkshire Pudding is best enjoyed in winter because it’s a heartier dish that can help warm you up on a cold day. Others might say that summer is the ideal time to make and enjoy Giant Yorkshire Pudding because it’s a perfect accompaniment to traditional summer BBQ fare. Ultimately, it’s up to the individual to decide when they want to enjoy this dish.

How well do salads go with Giant Yorkshire Pudding?

There’s no denying that salads and giant Yorkshire puddings go together like peanut butter and jelly. The two classic dishes seem to complement each other perfectly. Whether you’re looking for a light starter or a hearty main course, pairing a salad with a giant Yorkshire pudding is sure to please. So the next time you’re looking for an easy yet impressive meal, consider whipping up a salad and a giant Yorkshire pudding. Your guests will be impressed, and you’ll be able to enjoy a delicious meal without all the fuss.

Final Words

This Giant Yorkshire Pudding recipe is perfect for any special occasion! Made with self-rising flour, milk, eggs, and beef or chicken stock, this dish will impress your guests. So why not give it a try today? You won’t be disappointed!
https://www.hotsalty.com/everyday/giant-yorkshire-pudding-recipe/
Physiological Bases of Human Performance during Work and Exercise is a high-level physiology text for advanced students, researchers and practitioners in the fields of human physiology, exercise science and applied physiology. Eighty internationally recognised scientists from sixteen countries have written chapters within six areas: Physiological performance limits and human adaptation; The physiological bases of gender differences in performance; Age and human performance; Performance under environmental extremes; Exercise and health interactions; and Optimising performance through supplementation. Each section contains state-of-the-art reviews of the scientific literature. To stimulate critical thinking, there are thirteen debates and discussions that focus on some of the controversial topics that exist across these disciplines.
https://www.elsevier.ca/ca/product.jsp?isbn=9780443102714
This article is based on a fuller article: Debate: An Approach to Teaching and Learning

A reasoned debate allows students to explore and gain understanding of alternative viewpoints and, for the participants, develops communication, critical thinking and argumentation skills. Debates are commonly associated with disciplines such as Law, Politics, and Social Work, where practitioners are required to present and defend particular positions. They can be used in other disciplines too; for example, students in design-based subjects can use the skills they have learned through debating to defend design choices in response to a project brief.

Suggested Room Configurations

The circle or square configuration encourages wider participation in the debate by enabling the whole group to see and address each other directly. In this arrangement the lecturer skirts the outside of the room, making occasional interventions by entering the debating circle. This configuration lends itself well to a variation of the activity where people sit on one side of the room or the other based on their personal view on the issue. Participants can be allowed to physically move when their view changes. The traditional classroom layout can also be used for debates, with the advocates positioned at the front and the rest of the cohort forming the audience.

Potential Supporting Technology

An Electronic Voting System (EVS), sometimes known as a Personal Response System (PRS), is well suited for use during debates as it allows the persuasiveness of the debaters and their arguments to be recorded. For example, an EVS could be used to display which way the people in the room are leaning in ‘real-time’ at key points during the debate. This can help the group to identify the most persuasive arguments without disrupting the flow of the debate. Several different EVS products are being used at SHU, ranging from dedicated keypad-based solutions such as Turning Point, through to web and mobile tools like Responseware, Socrative and PollEverywhere.

Extending the discussion beyond the room

Technology also allows debates to incorporate a wider range of views and experiences by including participants who would otherwise be unable to attend the university, such as professional experts from around the world, patients, prisoners, or those who would simply prefer some anonymity. Tools such as Skype or Blackboard Collaborate enable these people to take part in the discussion from anywhere in the world via the internet. Similarly, social networking tools such as Twitter allow outside parties to engage in the discussions and provide alternative viewpoints. In addition, debates can be recorded, as audio or video, with common technologies such as smartphones and tablets. These recordings can then be used later to review the debate, or to enable the debaters to reflect on their performance and analyse the persuasiveness of specific arguments.
https://blogs.shu.ac.uk/learningspaces/teaching-approaches/debate/?doing_wp_cron=1575699127.7417099475860595703125
People, Place and Policy provides a forum for debate about the situations and experiences of people and places struggling to negotiate a satisfactory accommodation with the various opportunities, constraints and risks inherent within contemporary society. It aims to foster dialogue between academics engaged in research and thinking about major societal challenges and concerned with identifying problems and suggesting solutions, and the policy-makers and practitioners charged with proffering a response to these challenges.

PPP is founded on the belief that academic research has a critical role to play in the creation and assessment of policies. This is not to criticise social scientists who shy away from involvement in the messy business of policy, but to celebrate the contribution of critical and questioning applied social research to both academic knowledge and thought, and the interpretation, understanding and responsiveness of policy to contemporary social challenges. Inevitably, this focus raises some difficult questions. Applied research might strive to put an end to the perceived problems of contemporary society and promote a resolution that ensures that such problems are a thing of the past, but this agenda raises fundamental questions about what the problems are and how they can be ‘tackled’. These key questions, together with the complications of doing applied research and the potential political and ideological compromises involved, are matters of immediate concern to PPP-Online.

PPP-Online welcomes both empirically and theoretically informed discussion from different viewpoints about: the problems facing contemporary society; how they are perceived and presented by policy-makers; the appropriateness and effectiveness of the policy and practice response; the practical and political realities of policy-orientated research; perspectives on different methods and methodologies; and the conflicts and challenges encountered by the researcher and the researched. The journal will publish reflections on broad theoretical and methodological debates, as well as findings from empirical studies and policy analysis. In addition, the journal welcomes think pieces and debates between academics, policy-makers and practitioners. The journal will provide a forum through which ideas, thinking, comment and findings can be disseminated to the policy, professional and academic worlds across a broad spectrum of areas including social and economic regeneration, housing and labour market analysis. The journal welcomes articles which are multi-disciplinary, which relate to a range of policy arenas, and which may inform debates in different territorial contexts.

The range of contributions welcomed by the journal includes:
- research findings, including emerging findings from ongoing research
- methodological discussions and reflections on research and evaluative techniques and approaches
- policy reviews
- literature reviews
- opinion pieces that will stimulate debate and might straddle a number of issues

Contributions will be between 3,000 and 5,000 words. All submissions will be reviewed by a member of the editorial board. The editorial home of the journal is the Centre for Regional Economic and Social Research at Sheffield Hallam University. This centre is founded on principles of undertaking policy- and practice-oriented research informed by theoretical and methodological debate. David Robinson and Peter Wells are the joint editors of PPP-Online.
Correspondence Address: Centre for Regional Economic and Social Research, Sheffield Hallam University, Unit 10, Science Park, Howard Street, Sheffield S1 1WB. Email: [email protected].
https://extra.shu.ac.uk/ppp-online/editorial-statement/
God Incarnate: Story and Belief
Written by A. E. Harvey (ed.)
Reviewed by John Webster

One of the main themes which emerged from the debates which followed the publication of The Myth of God Incarnate (ed. J. Hick, London: SCM, 1977) was that the most telling flaws of that book were its failure to examine the kind of language which we have in Christology, and its failure to see that there are many ways in which we may talk about fundamental beliefs. Not only was the concept of ‘myth’ used in a crude manner, but its significance in telling us about the nature of the world was rejected in a thoroughly positivist manner. Some of these points were taken up by contributors to the volume Incarnation and Myth: The Debate Continued (ed. M. Goulder, London: SCM, 1979). The present, very impressive collection of essays takes these themes further. Though not directly a ‘reply’ to The Myth, the discussions at its base were prompted by that book, and it may justly be seen as a significant contribution to current British Christology. The essays seek not so much to make dogmatic affirmations of the truth of Jesus’ divinity as to explore some of what is involved in making such an affirmation. And almost all the authors conclude that many of the besetting problems about the doctrine of the incarnation come from the desire to frame it in a series of logically-coherent propositions, rather than in narrative terms. Thus Anthony Harvey in ‘Christian Propositions and Christian Stories’ argues that the uniqueness of Jesus’ relationship to God can best be expressed by telling his story, and that it is resistant to neat propositional definition. Or John Macquarrie, in his contribution on ‘Truth in Christology’, suggests that the ‘truth’ of Christological language is closer to that of aesthetic or personal truth than that of truth in the sciences, and that it is not thereby less significant in disclosing what is the case. Rachel Trickett brings professional expertise as a literary critic to bear upon the matter by suggesting in ‘Imagination and Belief’ that response to the gospel narratives is as much a matter of imagination as of critical, historical reason. A further essay by Macquarrie, ‘The Concept of a Christ-Event’, analyses the logic of the concept and argues that it refers not only to the individual Jesus but to an entire world of meaning in which past and present are involved. This he links to the Christology of Bultmann and John Knox, suggesting that both Jesus’ past and the tradition which he evoked are embraced in the Christ-event. In similar fashion, Peter Hinchliff emphasizes the significance of the tradition as a mode of access to the presence of Christ in ‘Christology and Tradition’. Others are concerned more directly with the biblical material. The editor surveys ‘Christology and the Evidence of the New Testament’ and arrives at modestly conservative conclusions about the historical value of the gospels. His essay is discussed in some detail by Geza Vermes in ‘The Gospels without Christology’. And James Barr contributes a stimulating analysis of such concepts as story, myth and history when applied to the biblical writings. The last, and very moving, piece of the collection is a Christmas day sermon by Pastor Baelz, and offers a gently insistent reminder of the claim which in their various ways the contributors attempt to analyse: that in the story of Bethlehem we are invited to find the disclosure of God to man.
All the essays (with the possible exception of Vermes’) are tentative, critical, exploratory rather than assertive; as such, they offer an eirenic voice in discussions too frequently acrimonious. There are, quite naturally, points at which questions might be raised: about the reliability of the gospel records, or about the normative role of Scripture (here the editor’s first contribution should be examined with care). But the value of the book is that it shows that if the sterility of recent debates is to be got beyond, there must be a shift of ground, into the fields of analysis of just what is involved in claims about the incarnation. In particular, theologians, of orthodox as well as radical complexion, will need to attend to the ability of the imagination in general and stories in particular to illuminate our understanding of what is the case. My biggest regret is that the collection is frustratingly brief, with many fine thoughts left half-explained. But now that the dust has settled somewhat after The Myth, this quietly eloquent volume may stimulate others to the analytical and critical task which is incumbent upon those who wish to make sense of and articulate, rather than dismiss, their belief that the ways of Jesus are nothing less than the ways of God himself.
https://www.thegospelcoalition.org/themelios/review/god-incarnate-story-and-belief/
Aga Khan University - Professional Development Centre, Gilgit (PDC’G) of the Aga Khan University – Institute for Educational Development (AKU-IED) works for quality improvement in education in the Gilgit-Baltistan (GB) region of Pakistan. PDC’G is the pioneer of the Whole School Improvement Programme (WSIP) in the GB region, which treats the school, rather than individuals, as the unit of training and development. PDC’G aims to enable teachers to stimulate new thinking about their teaching practices, teaching material and curricula, and to make the optimal use of personnel and facilities through improved management; all converging into enhanced student learning outcomes.

This is a short-term contractual position for a period of eight months, which may or may not extend beyond the current contract period. Responsibilities include coordinating with the Head of PDC’G and/or his nominee to incorporate best practices and lessons learned into the project cycle. Interested candidates should send their detailed CVs by email to [email protected]. Please mark the subject line with position number "10000103".

A second position is also open. This is a short-term contractual position for a period of ten months. Responsibilities include participating in in-house faculty meetings, forums, policy-based discussions, and various other meetings. Interested candidates should send their detailed CV by email to [email protected]. Please mark the subject with position number "10000102".
http://www.gbacademia.net/2018/04/career-opportunities-for-you-at-aku-pdc.html
Dept. of Political Science and Public Administration - Ph.D. / Sc.D.

This dissertation investigates women columnists’ narratives on feminist self-identification with the aim of disclosing the narrative lines along which feminist identity is negotiated in 2000s Turkey. In the contemporary social and political milieu, in which neoliberal, neo-conservative discourses undermine feminist demands and the poststructuralist critique makes it difficult to articulate stable identity claims, the issue of feminist self-identification comes to the forefront as a critical theme underlying the discussions on the future of feminism. These global debates also resonate at the local level with a unique tune that derives its peculiarity from the social and political context in question. Keeping this in mind, I trace the repercussions of the debates outlined above in the Turkish social and political context. It has been widely argued that the current Justice and Development Party (AKP) rule in Turkey is heavily characterized by a neoliberal, neoconservative and antifeminist political stance. Given the antifeminist ethos of the current political landscape, public negotiations of feminist self-identification in contemporary Turkey display multiple layers of complexity that are difficult to disentangle. This complexity raises the question of how feminist identity is negotiated and narrated in a discursive field in which antifeminist discourses are constantly reproduced through certain discursive opportunity structures. Against this background, this dissertation particularly focuses on the narratives of women columnists who are well-known public intellectual figures in contemporary Turkey. The study of media is especially important for a study that intends to examine the positionality of narratives on feminism in public deliberation. It is worthwhile to investigate the alternative media domains in the high-circulation mass media and map out the zones of potential that can contribute to the counter-hegemonic attempts challenging the contemporary conservative gender regime in Turkey. The study of women columnists’ narratives on feminism and feminist identity may provide us a fertile ground to delve into the discursive openings in the mainstream media through which profeminist discourses can acquire a considerable standing in public deliberation. It can provide us critical tools to nuance our reading of the public sphere by disclosing the functioning mechanisms of publics that constantly shift between hegemonic and subaltern publics, which we could name “publics in-between”. Following the research goals described above, this study intends to delve into the prominent features of the positionality of women columnists in contemporary Turkey vis-a-vis the political struggles over the gender regime, and to shed light on the intricacies, the promising aspects and the limitations in women columnists’ narratives on feminism and feminist identity. As a result, it aims to disclose how women columnists situate themselves vis-a-vis feminist subaltern publics in contemporary Turkey.
http://repository.bilkent.edu.tr/handle/11693/18609
A basic understanding of the main schools of IR and security studies. Course Description This course examines the gendered dimensions of international security and explores contemporary debates on peace, conflict and security through the lenses of critical feminist approaches. Class discussions will aim at enhancing critical thinking and challenging mainstream narratives of security. Students will use feminist analyses to investigate how gendered identities and norms affect key issues like war, militarism and peace. The course will emphasise the importance of looking at structural causes of insecurity and at the linkages between various forms of (in)security. No aspect of security can be understood fully without the integration of gender as a category of analysis. Teachers GANZ, Aurora (doctoral candidate) Course validation ASSESSMENT Participation 20% - This module will be driven by the active and thoughtful participation of all students. Students must come to class prepared to discuss the readings and ready to comment on the specifics of each case we study. Research/Creative Project 30% - A 2,000-word research paper. Students will use the concept of 'intersectionality' to describe and analyse a particular case of (in)security, e.g. by looking at specific marginalised groups or the gender effects of a particular global policy. Students may also present their findings as a creative project, such as a video, news article, blogpost, art piece, or other creative venture. A more detailed description of the assignment will be provided in class. Final essay 50% - The final assignment will consist of a 3,000-word essay question that probes students' understanding of all course materials (readings, class lectures, and additional materials such as videos shown in class and materials handed out in class). Required reading - Enloe, Cynthia. (2014). Bananas, Beaches, and Bases: Making Feminist Sense of International Politics. Berkeley and Los Angeles: University of California Press. 2nd edition, revised and updated. - Sjoberg, Laura, ed. (2010). Gender and International Security: Feminist Perspectives. Routledge. - Collins, P. H., & Bilge, S. (2016). Intersectionality. John Wiley & Sons.
http://formation.sciences-po.fr/enseignement/2018/KINT/5185
Last Monday I attended Africa Gathering London. The topic was ‘Social Media Revolutionizing Africa: How is new media changing Africa, giving voices to the voiceless, improving governance and transparency, and changing narratives?’ The event stimulated thinking and brought up some hot discussions around technology, traditional and social media, aid and development, participation and governance. (Big congratulations to Marieme Jamme for curating a great line up that brought in an interesting and engaged group of participants and to William Perrin of Indigo Trust for keeping things on track and generating good debate!) See the program, the speaker bios and some short video interviews. Some quotes, thoughts and debates from the day:
https://lindaraftree.com/2011/06/26/africa-gathering-london-is-social-media-revolutionizing-africa/
Divya Bhatia, Principal, Amity International School, Saket, emphasizes instilling curiosity in every subject and encourages discussions based on current happenings … In ancient times, India was known for its vast wealth of knowledge, which was disseminated through 'Gurukuls' that worked with the belief that knowledge gives liberation. Knowledge acts like the 'third eye', which provides insight into the world. Education involved three basic stages: 'Sravana' (acquiring knowledge through listening), 'Manana' (internalizing through thinking, analysis and assimilation) and 'Nidhyasana' (comprehending and applying knowledge in real life). Even in the age-old education system, students were encouraged to think and form their own opinions, and the art of questioning was encouraged by the gurus. At Amity, our motto is to 'blend modernity with tradition'. Therefore, in keeping with the present-day needs of a child living in a world wired to technology, we attempt to integrate our ancient philosophy into today's teaching and learning process. We give great emphasis to the 5 Cs -- Curiosity, Critical thinking, Creativity, Communication and Collaboration. Critical and creative thinking requires students to think broadly, using skills such as reason, logic, resourcefulness, imagination and innovation in all learning areas at school and in their daily lives. Thinking needs to be productive, purposeful and intentional; it is at the centre of effective learning. The 21st century, with all its challenges, requires young children to be enterprising and adaptable. The capability to think creatively stems from innate curiosity, which is part of every child. It is important to stimulate this inherent curiosity and keep it alive. From the very beginning, students are encouraged to think out of the box. All lessons are planned keeping in mind the spirit of inquiry. Every lesson is turned into a question-and-answer session, leading from one question to another. Students are encouraged to come prepared for forthcoming lessons with a list of queries. Teachers are encouraged to leave some open-ended and some unanswered questions for the children to discover themselves. The joy of discovery ignites the spark for further curiosity. In primary classes, curiosity is ignited by using 'curiosity corners' and class boards that stimulate young children to think and question. Students who show curiosity are encouraged and rewarded. Think-pair-share activities are an integral part of classroom teaching. Flipped classrooms and project-based learning further enhance students' curiosity and creativity. An 'idea box' is placed in the corridors where children can drop an innovative idea that is then discussed and worked upon with the help of teachers. Curiosity and creativity are not restricted to a particular subject. They are related to our daily lives, and discussions based on current happenings form an important part of everyday teaching. To quote an example, the recent Aadhaar judgement was used in a class discussion where students were asked to think of the fundamental rights affected, followed by a debate on whether it is constitutional. Searching the Internet for answers and bringing newspaper clippings for discussion are regular features. Students have access to a variety of open sources of information, and the role of the teacher should be to facilitate their understanding and help them filter out misinformation.
https://school.careers360.com/articles/innate-curiosity-nurtures-true-creativity
About Model United Nations Model United Nations is an educational simulation and leadership program in which students play the role of ambassador to an assigned country. Students learn about diplomacy, international relations, and the United Nations. Usually an extracurricular activity, Model UN is also offered as a class by some schools. Model UN and Education Model UN is a great way to enrich interactive learning in any classroom while complying with national educational content standards. - Common Core Content Standards (CCSS) Common Core is an American education initiative launched in 2009 to ensure that upon graduating from high school, students are ready for university and the work force. With the help of educators and extensive research on national and international high-performing schools and districts, CCSS set standardized learning goals to incorporate these findings and improve the American education system. Because CCSS only defines what students are expected to know and be able to do, not how teachers should teach, the use of Model UN addresses numerous 6th-12th grade CCSS that include: CCSS.ELA-LITERACY.SL.11-12.1.A Come to discussions prepared, having read and researched material under study; explicitly draw on that preparation by referring to evidence from texts and other research on the topic or issue to stimulate a thoughtful, well-reasoned exchange of ideas. CCSS.ELA-LITERACY.SL.11-12.1.B Work with peers to promote civil, democratic discussions and decision-making, set clear goals and deadlines, and establish individual roles as needed. Not only are MUN simulations fun, but they also build and improve academic and critical thinking skills that students will use throughout their lives; some of these include: - public speaking - research - negotiating - conflict resolution - leadership Global Citizenship - At the heart of Model UN is politics, a divisive topic that can either bring people together or tear them apart. By immersing students in the politics, geography, economics, culture and religion of a certain country, participants will better understand the reasoning behind the behavior of nation states.
https://www.unthrumun.com/model-un-
Reading & Writing – Journal of the Reading Association of South Africa is an open access, peer reviewed, inter-disciplinary and inter-professional scholarly journal that explores how literacy is defined, enacted and promoted in a range of institutional, socio-cultural and disciplinary contexts, particularly within Africa and other developing countries. The journal publishes original articles that provoke debate, explore issues and posit solutions about literacy interventions, practices and education. It focuses on and relates to transnational and translocal literacies associated with immigrants and mobile people in African settings. The aim is to design literacy practices in education that stimulate community-based socio-economic transformation and development in Africa. The journal offers the breadth of outlook to promote interdisciplinary and multi-disciplinary approaches that can stretch and invigorate our sense of what concepts and approaches are productive in the field of literacy education. The Rhodes Journalism Review is a specialist magazine for journalists aimed at heightening their contribution to democracy and development. It is based at the School of Journalism and Media Studies at Rhodes University and has been publishing since 1991. The writers consist of practitioners in the field and media experts, academics, monitors and researchers. The Review takes a strongly interventionist stance, setting agendas and promoting debates on critical media issues for journalists. It communicates these thoughts visually and with striking design. Rhodes Journalism Review seeks not so much to document the passing events and controversies involving journalists and journalism but rather to explore in a more sustained and issue-based way the methods, mechanisms and mindsets involved in a journalism that is more responsive and responsible in relation to its social and political role in democratic countries. We seek to put into practitioners' and educators' hands a tool for critical thinking, analysis and altered practice. This is done by commissioning a wide variety of points of view and experts on multiple subjects and by looking into and reporting on innovative practices and ideas which lend themselves to being more widely disseminated.
https://journals.co.za/content/collection/social-sciences-and-humanities/r
By Kini Nsom Intellectual debate in all its ramifications has returned to the University of Yaounde I after many years, in the form of monthly conferences during which lecturers and students exchange views on topical issues. According to the University authorities, the idea is to stimulate critical thinking among students and make them develop the power of analysing topical issues of public interest. The History Department of the University launched the debates last January, with a lecture on the History of Cameroon. On February 22, the French Department took the cue, organising a conference on Cameroon. The occasion brought together students and lecturers from other institutions to the amphi 700 of the school. The Rector of the University, Dr. Dorothy Limunga Njeuma, was in attendance. Prof. Andre-Marie Ntsobe, the Vice Rector in charge of Teaching, who moderated the conference, said it was an inspiring new wave of academic life in the University. He congratulated the Dean of the Faculty of Arts, Letters and Human Sciences for organizing such a debate, which had not taken place for the last 20 years.
https://www.postnewsline.com/2006/02/intellectual_de.html
This course develops an informed perspective on K-pop (Korean popular music) as a tool for a more critical understanding of contemporary South Korean society and its place in the global context. It explores elements of K-pop in relation to social, cultural, political, and technological developments in contemporary South Korea as well as in transnational contexts. This course also examines the global dissemination and prominence of K-pop and the rise of K-pop fandom. Utilizing various music and video clips, this course will incorporate discussions based on academic articles and chapters and requires students to critically analyze the key features and various aspects of K-pop in national, regional, and global contexts. Objective This course is designed to develop an understanding of key features of K-pop and its place within contemporary Korean society and to develop analytical tools for understanding contemporary South Korean society through K-pop. Assignments are designed to improve critical thinking and analytical skills in understanding K-pop as a transnational cultural phenomenon.
https://www.studiesabroad.com/destinations/asia/south-korea/seoul/korean-language-courses-in-english-and-stem-research/isou1422/k-pop-as-transnational-phenomenon-528864
A detailed illustration of some of the teaching strategies is given in KU Leuven lecturer Jozefien De Keyzer's examples. In her four videos, she explains in detail how she tries to develop students' critical thinking by making thinking explicit through problem solving. Her videos give particular tips concerning, for instance, what kind of rubric can be created in assessment to stimulate thinking, what questions can be asked during the demonstration of problem solving, and how to use a free tool, Perusall, to organize online discussions that stimulate in-depth thinking. In her presentation, Dr Jiang refers from time to time to De Keyzer's videos (subtitled in Vietnamese). Presenter: Dr Lai Jiang, KU Leuven, Belgium Co-presenters: Dr Vo Thi Nga and Dr My-Ha Le, Ho Chi Minh City University of Technology and Education, Vietnam Ms Mai Thi Bich Van, College of Technology II, Vietnam De Keyzer's examples:
https://kaltura.hamk.fi/media/Develop+students%E2%80%99+critical+thinking+ability+and+attitudes/0_5uteb080/258423?fbclid=IwAR02au9u4HBv9cgmRYmwmU1C5hv5k6tfuJeqRFLji4vqIfyTHNyWueOJiG4
Income distribution has become less equal in many countries in recent decades, and prominent economists and politicians have advocated policies that would reverse the change in inequality. Some advocates of greater equality have acknowledged a trade-off between greater equality and higher total income. However, Paul Krugman (2014) and Alan Blinder (2014) have recently denied that there are trade-offs. They argue that efficiency gains from greater equality will also increase total income. Blinder calls the argument a supply-side case for redistribution. They both use the same examples of impoverished families that are too poor to invest enough in food and education for their children. They claim that redistributing income toward poor families would allow investments in children that would increase labor productivity and total income. ANTI-POVERTY VS. REDISTRIBUTING INCOME Productive investments in children could raise total income, but Krugman and Blinder are mixing anti-poverty programs with changes in the distribution of income. Anti-poverty programs raise the incomes of poor families without necessarily lowering the incomes of more prosperous families. Raising the incomes of poor families can increase investments in their children, but these investments do not depend on a more equal distribution of income. Since the Industrial Revolution, economic growth in many countries has lifted millions of people out of poverty without major harm to groups that were initially more prosperous. More recently, economic reform in China since 1977 has moved millions of Chinese out of poverty without major redistribution of income away from other groups of Chinese. Poverty can be reduced even if the distribution of income remains constant. EQUAL OPPORTUNITY AND ECONOMIC GROWTH Krugman and Blinder have not invented a new argument in favor of redistributing income. Instead they are repeating an old, but valid, argument in favor of anti-poverty policies. Because it is difficult to borrow against future labor income, low income parents may be unable to make investments in the health and education of their children that would otherwise be profitable. Policies that provide minimum incomes or safety nets can raise total income by increasing equality of opportunity for families. Equal opportunity is important for economic growth, but equal results are not. The recent emphasis on income distribution is misleading and possibly counterproductive. The policies of China under Mao Tse Tung are an extreme example of policies intended to produce greater equality of results; they resulted in enormous sacrifices of income and lives. The Soviet Union is another example of an economic system that, under the pretext of greater equality, lowered total income and made most of the population worse off than they otherwise would have been. Economic history has demonstrated that poverty can be reduced as a result of productivity growth, whether the distribution of income becomes more equal, less equal, or remains constant. INCOME SHARE VS. TOTAL INCOME If the goal is to improve the lives of impoverished people, equal opportunity is more important than equal results. Having a smaller share of a larger total income may be more beneficial to a poor family than a larger share of a smaller income. For example, incomes per capita in 2013 (adjusted for purchasing power) were $54,000 per person in the US and $1,700 in Haiti.
If impoverished families earned 20% of the national mean income, they would receive $340 in Haiti and $10,800 in the US. Would an impoverished Haitian be better off with a more equal income distribution that raised his income to the mean income level in Haiti, or with the opportunity to move to the US and earn only 20% of the US mean? Greater equality in Haiti would result in an income of $1,700, but greater inequality in the U.S. would result in an income of $10,800. The difference between a larger share of a smaller total in Haiti and a smaller share of a larger total in the US is $9,100. Accepting greater inequality would allow substantially more spending on the education and health of children. The crucial variable is total family income, not the family's place in the distribution of total income. Emphasis on distribution is misleading, because the economic opportunities of a family depend on how many goods and services it can buy with its total income, not on how income is distributed in a country. It is also clear from the attempts by Haitians to migrate to the U.S. that they are more interested in higher total income and equal opportunity than in a more equal distribution of a lower total income. Achieving higher income per capita and faster economic growth need not result in extremely unequal incomes. Today the general pattern is that the greatest inequality within countries exists in the poorer regions of the world (Tsounta and Osueke 2014). Using standard measures (Gini coefficients), Latin America has the greatest inequality, followed by Sub-Saharan Africa, the poorest region of the world. Asia (including India and China) has the next greatest inequality, and the greatest equality of income is in the group of high income countries that includes the U.S., Canada, Australia, Japan, and Western Europe. Criticism of the U.S. has focused on greater inequality relative to some European countries, but inequality is greater in poorer countries. Higher income countries have provided greater equality of opportunity to their citizens, which has increased their productivity. Higher productivity is an important determinant of economic growth, and growth allows the reduction of poverty without necessarily redistributing income away from other citizens. REDISTRIBUTION AND ZERO-SUM THINKING Emphasis on redistributing a fixed total income is a narrow perspective that leads to unproductive zero-sum thinking. If total income is fixed, low income workers can only gain $1 by taking $1 away from high income workers. One can only gain at the expense of others. However, if one considers a broader range of choices that allows low and high income workers to cooperate by discovering mutually beneficial activities, both low and high income workers can raise their incomes. No one needs to gain at the expense of someone else.
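The arithmetic in the two passages above is easy to check. A minimal Python sketch, using only the figures quoted in the text; the base incomes of 100 and 200 in the second half are hypothetical, chosen purely to illustrate the zero-sum versus positive-sum contrast:

# Share vs. total: 20% of the national mean in Haiti vs. 20% in the US,
# using the 2013 PPP-adjusted per-capita figures quoted above.
haiti_mean, us_mean = 1_700, 54_000
share = 0.20
print(share * haiti_mean)            # 340.0
print(share * us_mean)               # 10800.0
print(share * us_mean - haiti_mean)  # 9100.0 -> smaller share of a larger total

# Zero-sum vs. positive-sum: redistribution shuffles a fixed total, while
# cooperation on a new project raises both incomes even though measured
# inequality increases. The base incomes below are hypothetical.
low, high = 100.0, 200.0
redistribution = (low + 1.00, high - 1.00)  # total unchanged at 300.0
cooperation = (low + 1.00, high + 1.05)     # total rises to 302.05
print(redistribution, sum(redistribution))
print(cooperation, sum(cooperation))

Nothing here depends on the particular numbers; the point is that any growth in the total eventually dominates reshuffling a fixed pie.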
Discovering new opportunities to cooperate is the essence of economic growth. Suppose cooperating on a new project would raise low incomes by $1 and raise high incomes by $1.05. Total income and the incomes of both groups would rise, but the distribution of income would become less equal. The project would be rejected if concern about unequal results dominated. Furthermore, if a reduction in cooperation on an existing project would reduce low incomes by $1 and reduce high incomes by $1.05, both groups would be poorer, but income distribution would be more equal. If the goal of more equal income distribution dominates, cooperation would decrease in favor of a poorer, but more equal, society. Redistribution is zero-sum by definition, and it turns attention away from cooperative policies that reduce poverty through economic growth. Instead it emphasizes the coercion that is necessary to take from one group to give to another. Attempts at redistribution lead to tension, resentment, envy, and accusations of class warfare. Politically, redistribution policies turn people against each other, whereas positive-sum policies are consistent with greater cooperation and harmony. EQUAL PAY ACROSS OCCUPATIONS Certain policies intended to decrease inequality of results are economically harmful, because they reduce the efficiency of the labor market. Attempts to increase equality of results by imposing more equal wages interfere with the allocative function of the labor market. Shortages of certain skills can be eliminated by higher wages and surpluses can be eliminated by lower wages. If equal wages are imposed, shortages in certain occupations and surpluses in others will persist. Legal minimum wages and government regulation of executive compensation produce disincentives for workers to move to where they are more productive. RAISING TAX RATES ON THE RICH Raising tax rates on the rich is a popular proposal intended to reduce inequality of results. However, attempting to "soak the rich" is subject to two limitations. First, some high income workers can move to lower tax domiciles. Saez (2014) found high-earning European soccer players to be very responsive to differences in tax rates across countries. He also found that high income workers were highly responsive to lower tax rates offered by a special Danish tax program. The recent exodus of American firms (so-called inversions) to lower tax European countries indicates that corporations are also responsive to differences in tax rates across countries. Attempts to soak the rich have also resulted in highly complex tax rules. U.S. corporate tax rates are now more than fifteen percentage points higher than in some European countries, so American companies have an incentive to spend up to $0.15 for every $1 of taxable income they shift to a lower-tax country, although the exact gain would vary with exemptions and allowances that differ by country. Their employment of clever lawyers, accountants, and investment bankers to produce legal tax gimmicks is privately profitable and favored by shareholders. However, to the world as a whole, this employment is a deadweight loss. The same bright and imaginative people could have produced useful products instead of playing a game against tax collectors. Tax-avoiding gimmicks also produce bitterness and resentment among the public, which perceives legal tax avoidance schemes to be cheating or disloyal behavior.
Complex tax schemes and loopholes are a direct result of high tax rates in countries seeking to use taxes to produce greater equality of results. INEQUALITY OF OPPORTUNITY Changes in inequality of results are difficult to interpret without knowing their source. However, inequality of opportunity is a legitimate economic problem, and reducing it can raise total income and decrease unequal results. Examples of unequal opportunity are policies that exclude people from schooling or other training because of race, ethnicity, religion, or gender. Historically, caste systems and slavery were extreme forms of institutions that denied people equal opportunity. Modern command economies in the Soviet Union and pre-reform China excluded people from certain preferred occupations. Certain radical groups in Afghanistan and Pakistan have used violence against girls who sought education. Other examples of inequality of opportunity are excluding people from entering businesses in order to protect the monopoly power of favored businesses. Carlos Slim, the Mexican billionaire, was able to become one of the richest men in the world by effective acquisition of monopoly power. Government failure to provide law and order, which allows organized gangs (Mafia) to extort money and goods from businesses and individuals, is another example of unequal opportunity. Corruption also involves the use of political power to extract money and favors from people with less political clout. Corruption reduces equal opportunity, and it is economically inefficient. It contributes to poverty, and it is most common in the poorest countries in the world, such as Afghanistan and Iraq (see data from Transparency International). ANTI-POVERTY AS EQUAL OPPORTUNITY Poverty can contribute to inequality of opportunity by preventing parents from investing in their children's health and education, as pointed out by Krugman, Blinder, and many earlier writers. This problem can be dealt with by anti-poverty programs that provide a safety net or minimum income for families. Ed Dolan (2014) has recently discussed these issues in this forum. Anti-poverty policies can deal with this issue without resorting to explicit policies of redistributing income. UNEQUAL OPPORTUNITY, UNEQUAL RESULTS, AND THE DISTRIBUTION OF POLITICAL POWER Does increasing inequality of income lead to greater political power for the rich that warps the political system in their favor? The political influence of groups is constrained by competition. Rich people are not homogeneous. Some rich individuals donate to the Democratic Party in the United States, others donate to the Republican Party, and some donate to both parties. Wealthy people can be found lobbying on both sides of specific issues. On the Keystone Pipeline, the wealthy Koch brothers spend money promoting it, while billionaire Tom Steyer spends his money opposing it. The Obama administration has opposed international corporate mergers that lower business taxes (so-called inversions), but it has received large donations from many business people who gained from inversions (Bloomberg News 2014). In U.S. Congressional elections, candidates that have spent the most money have won most recent elections, but the direction of causation between spending and winning is unclear. Incumbents have won a very high percentage of elections, and donors like to support winners. Hence, part of the correlation between spending and winning is induced by likely winners attracting donations and spending.
Also, the claim that wealthy people can reliably buy elections is questionable. There have been a number of prominent recent elections, including the upset of House Majority Leader Eric Cantor, in which the candidate who spent the most money was soundly defeated. CONCLUSION Increased economic inequality has received great attention recently, but it is important to distinguish between inequality of opportunity and inequality of results. Unequal economic opportunity that restricts people's ability to invest in the health and education of their children is economically harmful, and reducing it can raise total income and reduce poverty. However, increased inequality of results is not necessarily harmful, and certain policies intended to reduce inequality of results can be counterproductive. Higher tax rates can reduce incentives to work, although it is possible to construct minimum income programs with fewer disincentives than current programs. Policies that restrict earnings differences across occupations can lead to inefficiencies in the acquisition of skills. Pro-growth policies are the most effective solution to poverty, and concern about distribution is an unnecessary distraction. At best, emphasis on income distribution distracts from pro-growth policies that reduce poverty directly. If concern about distribution leads to anti-growth policies, it magnifies the poverty problem. REFERENCES Blinder, Alan. 2014. "The Supply-Side Case for Government Redistribution". Wall Street Journal, August 15. Bloomberg News. 2014. "Obama Won't Return Money from Tax Deals He Dislikes". August 14. Dolan, Edwin. 2014. "A Universal Basic Income and Work Incentives: Part 1: Theory". August 18. Krugman, Paul. 2014. "Time for Trickle Up Economics". New York Times, August 11. Saez, Emmanuel. 2014. "Taxes and International Mobility of Talent". NBER Reporter, 2014 Number 2, August. Tsounta, Evridiki, and Anayochukwu Osueke. 2014. "What is Behind Latin America's Declining Income Inequality?" IMF Working Paper WP/14/124, July.
http://archive.economonitor.com/blog/2014/08/income-distribution-equal-opportunity-vs-equal-results/
Bolivia is the poorest country in South America. It has the continent's largest share of indigenous people, who make up 62 percent of the population. Most of these indigenous groups suffer from poverty—over 74 percent are poor. Indigenous groups also make up most of the rural population, where the greatest amount of poverty in the region is found. The unemployment rate remains high, with 8 percent of the population without jobs, deepening poverty in rural areas. Bolivia's income distribution is one of the most uneven in the world, ranking second globally in unequal income distribution. The land is rich in minerals and resources, but an elite of Spanish ancestry dominates the economic system, while most Bolivians are low-income farmers and traders. There has been long-running tension over the exploitation and export of the country's rich natural gas resources, which continues to widen the Bolivian income gap. Social unrest is growing with the tax reform, which, even as it holds down the inflation rate, causes more tension within Bolivia's economy. These issues in the economic system are creating poverty that affects groups like the indigenous people. Poverty can lead to inequality, which limits human rights and mobility between social strata, entrenching the separation of incomes. Throughout history, indigenous people have been the poorest and most excluded from socio-economic growth. Access to basic health care and necessities is limited due to isolation. The high fertility rate among the indigenous people of Bolivia has increased their population to over 5 million, an increase made more acute by the lack of access to education and health care. Bolivia sees the highest rate of child malnutrition, particularly among indigenous cultures. World Vision estimates that over a quarter of children under the age of five are malnourished and do not have access to proper health care. Organizations like World Vision have recently formed local centers in Bolivia to help monitor the well-being of these children. This includes training local health care workers to raise awareness among children about staying safe from different forms of child maltreatment. Most of the women living in rural areas have limited education or training for employment. There is also a lack of health services and health education for women. This restricts the growth of the economy by preventing these women from bettering their futures and the economy. The rural areas continue to suffer from poverty. With deficient natural resource management and limited access to technology in rural areas, infrastructure such as roads will be neglected. Without a proper road system, the isolation of indigenous groups will increase, causing a lack of job opportunities and access to education. These regions of Bolivia face obstacles to economic development for many of the indigenous groups. Overcoming these obstacles relies on policies that protect economic growth in the rural regions where indigenous groups reside and that help increase labor productivity.
https://borgenproject.org/bolivian-income-gap-causes-extreme-poverty/
Bogota, December 9, 2019 – The demonstrations sweeping across the world today signal that, despite unprecedented progress against poverty, hunger and disease, many societies are not working as they should. The connecting thread, argues a new report from the United Nations Development Programme (UNDP), is inequality. “Different triggers are bringing people onto the streets — the cost of a train ticket, the price of petrol, demands for political freedoms, the pursuit of fairness and justice. This is the new face of inequality, and as this Human Development Report sets out, inequality is not beyond solutions,” says UNDP Administrator Achim Steiner. The 2019 Human Development Report (HDR), entitled “Beyond income, beyond averages, beyond today: inequalities in human development in the 21st Century,” says that just as the gap in basic living standards is narrowing for millions of people, the necessities to thrive have evolved. A new generation of inequalities is opening up, around education, and around technology and climate change — two seismic shifts that, unchecked, could trigger a ‘new great divergence’ in society of the kind not seen since the Industrial Revolution, according to the report. In countries with very high human development, for example, subscriptions to fixed broadband are growing 15 times faster and the proportion of adults with tertiary education is growing more than six times faster than in countries with low human development… …The 2019 Human Development Index (HDI) and its sister index, the 2019 Inequality-Adjusted Human Development Index, set out that the unequal distribution of education, health and living standards stymied countries’ progress. By these measures, 20 per cent of human development progress was lost through inequalities in 2018… Planning beyond today Looking beyond today, the report asks how inequality may change in future, looking particularly at two seismic shifts that will shape life up to the 22nd century: • The climate crisis: As a range of global protests demonstrate, policies crucial to tackling the climate crisis, like putting a price on carbon, can be mismanaged, increasing perceived and actual inequalities for the less well-off, who spend more of their income on energy-intensive goods and services than their richer neighbours. If revenues from carbon pricing are ‘recycled’ to benefit taxpayers as part of a broader social policy package, the authors argue, then such policies could reduce rather than increase inequality. • Technological transformation: Technology, including in the form of renewables and energy efficiency, digital finance and digital health solutions, offers a glimpse of how the future of inequality may break from the past, if opportunities can be seized quickly and shared broadly. There is historical precedent for technological revolutions to carve deep, persistent inequalities – the Industrial Revolution not only opened up the great divergence between industrialized countries and those who depended on primary commodities; it also launched production pathways that culminated in the climate crisis. The change that is coming goes beyond climate, says the report, but a ‘new great divergence’, driven by artificial intelligence and digital technologies, is not inevitable.
The HDR recommends social protection policies that would, for example, ensure fair compensation for ‘crowdwork’, investment in lifelong learning to help workers adjust or change to new occupations, and international consensus on how to tax digital activities – all part of building a new, secure and stable digital economy as a force for convergence, not divergence, in human development.
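A note on the "20 per cent of human development progress was lost" figure quoted above: in the UNDP's published methodology, the loss from inequality is the percentage shortfall of the Inequality-adjusted HDI (IHDI) relative to the unadjusted HDI. As a rough sketch of that relationship (my paraphrase, not text from the report):

\[ \text{loss} = 1 - \frac{\mathrm{IHDI}}{\mathrm{HDI}} \]

A reported loss of 20 per cent therefore means the IHDI is roughly 0.8 times the HDI: inequalities in health, education and income discount about a fifth of measured human development.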
https://ge2p2-center.net/2019/12/16/human-development-report-2019-beyond-income-beyond-averages-beyond-today-inequalities-in-human-development-in-the-21st-century/
What We Do | Regions We Serve MAP in the U.S. Overview The U.S. has a population of over 330 million people. Although the U.S. has the world’s largest economy by nominal GDP and net wealth, over 40 million people live below the poverty line. Income inequality in the U.S. is the highest of all the G7 nations, with a continuing increase in the wealth gap between America’s richest and poorest families. Poverty entails more than the lack of income and productive resources to ensure sustainable livelihoods. Its manifestations include hunger and malnutrition, limited access to education, health care and other basic services, as well as mental and emotional trauma. MAP works with 40+ partner schools and centers in various locations across the country to provide food, hygiene, basic provisions, and education to those in need. Additionally, MAP provides humanitarian relief to the victims of natural disasters in the U.S. MAP Hunger Relief & Basic Provisions Program in the U.S. Twelve percent of children in the U.S. live in poverty, experiencing a lack of food and other basic needs. We address these issues through our Nutrition and Outreach Programs. Research has shown that hunger impairs a child’s ability to concentrate and perform well in school, and that children who struggle with hunger are more likely to experience health and behavioral issues. MAP’s Nutrition Outreach Program helps fulfill the emergency and vital food needs of children facing food insecurity. This program provides protein and vegetables to children and their families on a regular basis. This improves children’s focus and attendance, which leads to better school performance and overall mental and emotional health. MAP’s Hygiene Program addresses both the health consequences and the embarrassment that surround poor hygiene. We provide hygiene products and access to laundry facilities at our partner centers to children in need so they can have the dignity they deserve. At no time are disparities in the community more visible than during the holidays. Most impoverished families can’t afford to spend money on gifts and celebrations. Each year, MAP provides food, toys, blankets, gift cards, and other necessities to impoverished children within our community as part of our Holiday Helping Hand Project. MAP Education Program in the U.S. Education can open doors to jobs, resources, and skills that an individual needs to thrive and become self-sufficient. Studies have shown that access to early childhood and primary education not only supports a child’s well-being, but also gives them a better chance to succeed in their future learning endeavors. MAP partners with schools and centers to provide educational opportunities to underserved children in the community. We support after-school STEM education programs to help bolster children’s understanding of these subjects for future learning. We also support early childhood learning by providing books and setting up libraries in schools and centers.
https://momsagainstpoverty.org/regions-we-serve/map-in-the-u-s/
Racial disparities create unequal access to education. Race is a determining factor in a student’s ability to access quality education, directly influencing school factors such as policy, financing and curriculum in America. Locally affected schools with high levels of poverty are usually linked to communities of minority groups. The legal basis for racial segregation in the United States was “separate but equal” between 1896 and 1954, when the Supreme Court handed down the landmark Brown v. Board of Education decision. The justices declared that segregation by law violates the 14th Amendment. Passed immediately after the Civil War, the 14th Amendment guaranteed all citizens rights irrespective of color. In 1951, on behalf of Brown and others, the NAACP filed a class action lawsuit. The following year, the Supreme Court agreed to hear this case and four other cases calling into question racial segregation in schools. The five were heard collectively under the single name Brown v. Board of Education. Thurgood Marshall, who would later be the first African American named to the Supreme Court, was among those who argued for the plaintiffs. On May 17, 1954, Thurgood Marshall and his legal team scored an historic victory in the struggle for civil rights. On that date, Chief Justice Earl Warren announced the court’s decision: separate schooling was unconstitutional. The landmark ruling was met with resistance and anger across the South, and integration was slow in coming. But Brown v. Board of Education broke the back of segregation and helped spark the civil rights movement. Initially, segregation was based on race and was a societal norm in the history of the United States of America. Because of this, academic institutions followed suit and saw no problem in racial segregation, which led to disparities in education among different races (Henderson, 2014). For example, as stated in Ford and King (2014), in the landmark Brown vs. Board of Education 2 case in 1955, the Supreme Court ruled that separating, or specifically segregating, children based on race was unconstitutional. Racial segregation in schools became illegal but, 60 years later, remains an issue. According to Ford and King (2014), “the inequitable distribution of resources and opportunities for Black students promotes and exacerbates educational disparities; thus, creating a vicious cycle” (p.3). Due to being oppressed and discriminated against, too many African American males and females fail to reach their potential in our schools or gain access to school programs. Furthermore, minority youth are further disengaged from school by the lack of cultural representation in textbooks; their culture and experience are devalued (Henderson). Eventually, situations would come to challenge the social confines of segregation and how it affected everyday life. Racism is a barrier that continues to play an active role in everyday life worldwide. Ralph Ellison (1952), as cited in Henderson (2014), states, “I am invisible; understand, simply because people refuse to see me.” Racial tensions may be lower in America, but inequality still exists, and the effects still harm low-income and minority students. School social workers are normally employed by the school district or an agency contracted to provide services to the school district. The role of a school social worker is helpful in providing teachers with resources to understand their students’ cultural backgrounds so that they avoid culture clash.
Training on cultural competency could be provided to teachers. Students can also be educated and counseled by their school social worker to explore their own culture. Finigan-Carr and Shaia indicated, “they also are dedicated to providing comprehensive supports that address many of the out-of-school needs that limit students’ learning” (p.26). There are many services that a school social worker can provide, as an educator, to better serve, support, and educate economically disadvantaged students. As mentioned in Finigan-Carr (2018), school social workers “assume leadership in providing effective quality programming that can ensure that the needs of children and families are met” (p. 29). Take the time to know the students one on one. Discover the way they think. Let the children speak first so that they won’t be nervous. Be there for educational and emotional support; listen to their needs and get to know the whole child. A social worker is there to value students and treat them with respect, to convince them that they are important, and to show them that they bring important skills to the classroom and the world. A social worker can help them understand equality and make them feel equal. Explaining to the children and family that they are not alone, and that you are there to advocate for them and help provide family services, is crucial. A school social worker is involved in committees and researches what is going on around them to better provide for young clients in the school system. According to Finigan-Carr (2018), “social workers work with parents in these situations to access school and community resources that may help families reduce these stressors and improve the family’s outcomes” (p. 27). School social workers can link parents to unemployment resources to help them gain new training or find a job and become financially stable. Resources are everywhere, and activities can be provided to children who cannot afford services from school. Another approach is to engage the school’s partners and local businesses when finances are necessary. A difference can be made for schools with dedication, time and a willingness to help the less fortunate. As a social worker in the school system, the vision should be to nurture young children’s curious minds, to create lasting trust and to inspire joy in learning.
https://studydriver.com/segregation-was-based-on-race/
One in every ten Indians is an adolescent girl. Consequently, India hosts nearly 20 percent of the world’s population of adolescent girls, and each and every one of them has the potential to contribute to India’s future economy, says the report ‘Best Foot Forward: Enhancing the Employability of India’s Adolescent Girls’ by Dasra, an NGO. The report, commissioned by Bank of America, points out that India’s disadvantaged adolescent girls, stifled by social and economic challenges, struggle to claim their place in one of the fastest growing economies in the world and to become self-sufficient. For many marginalised girls in India, there is a lack of control over decisions that determine the course of their lives, restriction on mobility and limited access to public spaces, interrupted education, inability to challenge social norms or resist early marriage and pregnancies, vulnerability to violence at home and in the labour market, and increasingly limited opportunities to acquire the skills needed to build financial security or independence. The report’s findings show that school curricula in India are not designed to help marginalised girls access or create income-generating opportunities, and that employability programmes help plug this gap by supplementing school education with training in hard and soft skills. It adds that programmes initiated under the Skill India Campaign and those run under the Ministry of Skill Development and Entrepreneurship focus purely on hard-skills training for youth, without accounting for challenges such as restrictions on mobility and social pressure to get married that young women face. It also points out that few non-profit and government interventions are currently working with economically marginalised adolescent girls, including those living in areas of conflict, those belonging to scheduled castes and scheduled tribes, or those with disabilities. The findings recommend employability programmes that enable adolescent girls to understand and articulate their needs with parents, relatives, community members and other key decision makers in their lives and to successfully negotiate for their rights, and employability programmes that influence a cultural change in perceptions around women’s roles, with a consolidated long-term focus on overturning gender norms that negate a young woman’s earning potential and ability to participate as a decision maker. It calls for government schools to make the curriculum more relevant by establishing clear links between school education and higher economic returns for adolescent girls and their families. The report also calls for employability programmes to create open spaces for girls to interact with their peers and learn from each other, allowing isolated girls to draw strength from these social networks; to connect girls struggling to articulate their concerns with mentors they can confide in and learn from; and to teach skills like computer training, book-keeping, accounting and driving that defy stereotypes and provide a wider range of income-generating opportunities. “Women are most likely to invest their assets in their children and improve inter-generational development outcomes,” it says.
It also calls for the effective implementation of policies on sexual harassment, maternity benefits and crèche facilities for young mothers, and for promoting gender equality at workplaces to create a protective and supportive work environment, including focused modules on entrepreneurship and skills training within the curricula of employability programmes to build girls’ capacity for problem solving and management of finances. It also asks for people and resource mobilisation, and for evaluating non-profit interventions and government programmes so that gaps and best practices are identified. “Hence there is a clear policy-level need for investing in girls and providing them with economic alternatives that allow them to build identities apart from their roles as future wives and mothers,” says the report.
https://www.governancenow.com/news/regular-story/indian-girls-have-untapped-potential-report-
Women make an enormous contribution to the Moldovan economy and to society overall. However, they face disproportionate barriers that limit their employment options and result in inequality on the labour market. These include significant wage disparities, segregation into lower-paying occupations, traditional expectations about their career choices, unequal sharing of work and family responsibilities, overprotective maternity leave policies, and limited access to childcare. Women are under-represented in highly paid and in-demand sectors and are mostly employed in lower-paid jobs and in the most “feminized” sectors of the economy, which include public administration, education, health and social assistance, and trade, hotels and restaurants. For example, women represent over 80% of the labour force in the health sector, while earning on average 13.5% less than men. Limited economic opportunities for women affect the entire society. UN Women partners with the Government, civil society organizations, academia and the private sector in Moldova to remove barriers to women’s economic empowerment and to empower women to have income security, decent work and economic autonomy. It also promotes the Women’s Empowerment Principles (WEPs) with private companies in Moldova. Through the GirlsGoIT initiative, UN Women, together with government institutions, international organizations, businesses, and civil society, seeks to advance the digital literacy of women and girls, especially those from disadvantaged groups.
http://moldova.unwomen.org/en/munca-noastra/economic-empowerment
Curriculum change might be the solution to youth unemployment Youth employability is one of the most pressing issues in the present world economy, with over 70 million people under the age of 25 without a job. Sadly, over 40% of the world’s young people are either unemployed or have a job but live in poverty. In South Africa, approximately 5.9 million youth across the nation are bearing the brunt of the high unemployment rate. Although government has made tremendous efforts in addressing youth unemployment, much more remains to be done. The quality and relevance of education is at the heart of youth employability. Our education curriculum has been criticized as not adequately tailored to the needs of the labour market. Many young people are unable to find jobs, and employers are not able to hire them, because they lack the essential skills employers need. The changing economic landscape requires young people to become entrepreneurs who can take initiative and organise a team to get work done. It will take a holistic approach from government, academic institutions, civil society, labour and the private sector to address these issues and collectively design interventions affordable to all young people irrespective of their socio-economic realities. Presently, many young people are unable to enter the labour market due to conditions beyond their control. For instance, apartheid’s development architecture limited access to better education opportunities for the majority of African people, restricting their options to pursue the qualifications necessary for an employable future. Fundamentally, education should not be limited to basic skills such as literacy and numeracy but should be comprehensive, including civic, social, personal leadership and financial skills, to equip young people not only to cope but to thrive in a complex, highly interconnected and globalizing world. Until the education curriculum is revised to redress these structural effects and provide meaningful skills to young people, they will remain trapped in poverty cycles. Curriculum revision is urgently needed so that young people who are still in school are better prepared for the world of work. The curriculum should include vocational, business management, entrepreneurial, social and money management skills. These skills can help young people with income generation, finding jobs or setting up their own businesses. Furthermore, these skills should be linked to targeted programmes that engage them in activities based on their future livelihood, their interest in other people and the environment, as well as their desire to develop and lead enterprise activities. Financial management aspects should also be added to the new curriculum. As young people become consumers, workers and producers, it is crucial they understand the money and markets that increasingly affect them. For example, decisions to spend, save or borrow money influence their ability to access services such as education or health care. Unless they are adequately equipped with essential financial skills, young people will continue to struggle. Linked to financial management, additional entrepreneurial activities should be added to the curriculum so that young people can develop essential entrepreneurial skills, incorporating practical activities such as forming savings clubs and self-help groups led by young people, aimed at stimulating community-based enterprises and supported locally by their own communities.
These self-help groups would be based on peer support, so that young people can develop essential personal leadership skills that are vital in career or business development. Ultimately, empowered youth equipped with essential critical skills and exposure to useful market opportunities can reduce poverty through their own entrepreneurial efforts while promoting sustainable livelihoods in their own communities, thereby increasing both individual and community well-being. They will become agents of their own future, with enhanced confidence to tackle some of the most pressing societal problems. In addition to fixing our education curriculum, we must remember that democracy must be lived to be celebrated. It is difficult to enjoy the fruits of democracy when we are living in a highly unequal society. Young people are a critical demographic in our nation whose plight must be addressed urgently. It is incumbent upon all of us, as citizens, to recognize our common responsibility and act speedily. As patriotic citizens, those of us with access to opportunities can do something to offer a young person within our reach a real chance to gain skills and knowledge that will help them secure employment or create opportunities for themselves. We can all do something to create possibilities for young people in the world of work. * Dr Paul Kariuki is programmes director at the Democracy Development Programme (DDP), a national NGO based in Durban. He also serves on the board of the Greater Durban YMCA.
https://www.timeslive.co.za/sebenza-live/features/2018-06-26-how-changing-the-education-curriculum-can-help-with-youth-employment/
Xenophobia in South Africa: Unequal opportunities drive violence

These are my speaking notes from an event organised by The Star newspaper on xenophobia in South Africa, held in June 2008. Unfortunately, nine years later the same argument applies. All data in the article is correct for 2008, and I have not updated it. The point being that the fundamental challenges unfortunately and devastatingly remain the same.

Opportunity

South Africa’s transition to democracy carried with it a promise: that no longer will life opportunities be dictated by initial conditions of race, gender and class. The xenophobic attacks tell us that that promise remains unfulfilled. It remains unfulfilled because the violent mobs that carried out the xenophobic attacks represent a significant section of our society that has access to a very limited set of opportunities. Young unemployed South Africans have neither a strong prospect of getting that elusive first job, nor are they provided relief through government’s social security system, even though this system is very extensive. How then do we create a society with opportunities for the poor? Let us first explore the problem a little more deeply, before moving on to solutions.

Why Xenophobia?

Let us take a household called ‘Mzansi’. This household has young adults that cannot find that elusive first job. Statistically, young people between 15 and 30 represent 76% of all unemployed people who have never worked. This household has received a child support grant, as well as housing, water and electricity. In other words, they have benefited from government’s service delivery programme. Yet these transfers have mattered little as this household attempts to access job opportunities, possibly start a micro enterprise, or save for a rainy day. Should this household get into a small business – let us say baking bread – they will find large cartels that set prices and have sewn up the distribution channels. Even if this household somehow manages to enter the market, they will find that there is a lack of demand in their area, or rather that incomes in the areas where they live might be insufficient to run a business profitably. In other words, this household is unable to break out of its current path of dependence into a path which provides hope for a better future.

This household is part of a system of distribution that is unequal, and this inequality is structural. Some reminders of these significant disparities are:

• Over 60% of those employed earn less than R2,500 per month, whilst the highest-paid CEOs earn in excess of R10 million.
• In shorthand this translates to workers earning around R1 for every R333 that a CEO earns.
• Since democracy the distribution of income has remained more or less constant, with the bottom 10% receiving less than 1% of total national income, and the top 10% in excess of 50% of national income.
• In 2004, the Minister of Trade and Industry provided data showing that 70% of BEE deals went to four companies – the so-called ‘usual suspects’. Despite some widening in the beneficiaries of BEE deals, it is vital to remember that they do not result in the creation of new jobs in the economy.

Towards solutions

Let us now turn to solutions. South Africa needs to continue and expand its poverty alleviation measures, but must focus much more strongly on inequality. This means focussing on the entire distribution of income and assets, not just on those at the bottom.
Whilst this does not imply confiscating assets from the top to transfer to the bottom, it does entail significant changes in government policies.

First, it is not merely a matter of government reprioritising its spending allocations in the national budget. Lack of money is not the only problem. Far more important in many ways is the need for government to shift the allocation of its own human and organisational resources. Putting its best people to addressing the problems in our school system, our health care, our housing and community infrastructure delivery would be a strong signal as to where government’s priorities lie and would surely lead to performance improvements.

At a household level, policy must create conditions for the poor to build assets. Moreover, for inequality to decrease and poor households to make the huge transition out of poverty, their asset base must grow faster than the assets of those at the top end of the wealth distribution. The consequences of such a strategy would be improved social cohesion, participation in the economy, and longer-run economic growth. Reducing inequality makes economic sense.

Economic policy has focussed on macroeconomic stability and increased business competitiveness. The assumption has been that ramping up economic growth will bolster employment and in turn reduce poverty. The blind spot in economic policy making is that the quality of economic growth matters, not only the rate of growth. In South Africa, too much of our recent growth has been in sectors where the labour force is small and high-skill – finance, communications, metals processing. This reinforces inequality and the exclusion of the poor. In fact, international evidence indicates that more equal societies have better prospects not only for economic growth that makes a difference to job creation and poverty, but also for sustaining growth, from which everybody will benefit.

The sort of re-thinking of policy and growth priorities which is needed will not happen easily. In countries where inequality has been successfully lowered, it has often been because the poor were part of a political alliance with the middle class. With support from the latter, lower inequality and poverty were pushed to the top of the policy agenda. Is this likely or possible in South Africa today? It may seem remote, but there are rays of hope. The recent whistle-blowing on bread price-fixing by a small Cape Town bakery is one. The bread cartel is a good example of how inequality can become a self-reinforcing process in society. Though perhaps acting from self-interest to increase its own sales, the whistle-blower also positively affected the lives of the poor. If such examples of common interest between the middle classes, especially small business, and the poor could proliferate, South Africa would be well on the way to lower inequality.

However, the solution also lies with us as individuals. We must ask what we can do in our capacities as individuals. A key message must be that, as a minimum, we must transfer resources and time. At its most basic level, I would like to propose the creation of an ‘opportunity fund’ that would focus on providing small-scale venture capital to micro enterprises. It would be built up from small debit orders by individuals, and larger ones by companies. The fund would focus not on charitable functions but rather on providing risk capital to our economy. Simply stated, it would provide a means for households to break out of current paths.
For the household that we described, it might entail the following:

• Access to employment through an expanded national youth service programme
• Government and private households creating systems for venture capital for small businesses
• A significant change in the quality of schooling through an educational reform programme

In doing this, the assets of the poor would be enhanced, their participation in the economy would improve, and with it we may finally move a little closer to realising a central goal: a nation of opportunity. Unless we do that, we can expect a recurrence of xenophobic attacks, and also higher levels of violent service delivery protests.
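A quick check of the “R1 for every R333” shorthand used earlier, assuming (as the cited figures suggest) a CEO package of R10 million a year set against a worker’s wage of R2,500 a month:

\[
\frac{R\,10\,000\,000 / 12}{R\,2\,500} \approx \frac{R\,833\,333}{R\,2\,500} \approx 333
\]

That is, for every R1 such a worker earns in a month, the highest-paid CEO earns roughly R333.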
https://unequal.blog/2017/02/25/xenophobia-in-south-africa-unequal-opportunities-drive-violence/
ACCORDING to the latest poverty report by ZimStat and the World Bank, Zimbabwe has become poorer over the last three years.

Tafara Mtutu, Investment analyst

Extreme poverty rose to 38% in 2019, while general poverty went up eight percentage points to 51%. Of note is the country’s latest Gini coefficient of 50,4%, up from 44,7% in 2017. The Gini coefficient is a measure of income or wealth distribution in a country. A Gini coefficient that is close to 0% implies that there is perfect or equal distribution of wealth. The other end of the spectrum — a Gini coefficient of 100% — indicates gross inequality, where only one person earns all the income. Various developed countries have a Gini coefficient ranging between 27% and 45%, while developing countries range higher than that.

Southern African countries boast some of the highest Gini coefficients in the world. South Africa has the highest level of inequality in the region and the world, at a Gini coefficient of 65%. Other countries in the region have Gini coefficients of 59,1% (Namibia), 57,1% (Zambia), 54,2% (Lesotho), 54% (Mozambique), 53,3% (Botswana), and 51,5% (Eswatini).

The unequal distribution of income in these countries compromises the efficiency of policies that use averages such as gross domestic product (GDP) per capita (GDP divided by population) to map a way forward or to measure the population’s well-being. For example, country A and country B could have the same GDP per capita of US$10, but if country A’s income is significantly skewed towards the top percentile of income earners relative to country B, the policy on income tax could differ between the two countries. Country A is better served by a progressive tax regime compared to country B.

A nation’s prospects also hinge on the distribution of income. Inequality in human development is often a function of the inequality of income distribution. Data from the United Nations Educational, Scientific and Cultural Organisation (Unesco) and the United Nations Department of Economic and Social Affairs (UN Desa) shows that more children born in 2000 in very high human development countries will move into higher education compared to children of the same age born in low development countries. This can be linked to economic growth through empirical evidence showing that a country’s economic growth is most positively affected by investments in post-secondary education compared to similar investments in primary and secondary education. This is largely because an economy’s innovation and growth are driven by strides in post-secondary institutions such as universities.

Developed countries have made significant investments in post-secondary education and their institutions consistently dominate top spots in global university rankings. Universities in developing countries, on the other hand, are hardly in the top 100. The best university in southern Africa (and on the continent) is the University of Cape Town in South Africa, which is ranked 220th by the QS World University Rankings. Egypt’s American University in Cairo also features on the rankings at 411, eight positions above South Africa’s University of the Witwatersrand. The top spots, however, are dominated by institutions in developed countries that include the United States, United Kingdom, Switzerland and Singapore. Zimbabwe’s top institution, the University of Zimbabwe, is ranked 1 451 by the Centre for World University Rankings.
The inverse relationship between post-secondary education development and income inequality serves to cement the importance of quality education and relevant skills in addressing inequality, and paving the way for a developed and equal Zimbabwe.

Inequality in Zimbabwe has also been perpetuated by the lack of access to international capital and a concentration of remittances among the few wealthy individuals in the country. Zimbabwe’s remittances are largely skewed towards the wealthy few, whose families often have members in the Diaspora. These members have afforded themselves a better life and regularly send money back home. Over time, the wealthy individuals in Zimbabwe perpetually and exclusively take advantage of the opportunities available because they have access to a stream of international capital that the rest of the country cannot tap into. These opportunities often entail massive capital outlays that over 80% of Zimbabweans cannot raise without external support, and therein lies the issue of the lack of access to external capital.

Zimbabwe’s external debt stood at about US$8 billion at the end of last year, and 74% of this debt is in arrears. The international lending community has made it clear that it will not support Zimbabwe until it clears its arrears. FDI inflows into Zimbabwe by private equity investors have also waned, and 2020 was marked by an international investor exodus through fungible stocks on the Zimbabwe Stock Exchange and the interbank auction system. The decreased appetite of international investors for Zimbabwean investments and the skewed distribution of the only other significant source of international capital (remittances) mean that a single eventuality emerges: the rich get richer and the poor get poorer. So strong are Diaspora remittances that, while global FDI is expected to fall by 40% in 2020, Diaspora remittances into Zimbabwe surged by 45% in the first nine months of 2020.

Zimbabwe currently uses a progressive tax regime to redistribute wealth from the rich to the poor, but the country is extensively porous, given that it has evolved into the second largest shadow economy in the world after Bolivia. Further measures to support the productive minds in the country with no access to capital, such as an SME stock exchange and allocation efficiencies in the distribution of arable land and other resources, remain ever critical in addressing the inequality gap and the dismal economic growth of the last two years. These issues, if not addressed, could see Zimbabwe’s current economic situation worsen after the African Continental Free Trade Agreement (AfCFTA) becomes effective in 2021. The prospect of opening our borders without the capacity to export more than we import holds discomforting implications for the country.
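To make the Gini measure described above concrete, here is a minimal illustrative sketch (toy data, not drawn from the ZimStat/World Bank report; numpy assumed) that computes a Gini coefficient from a list of incomes using the standard rank formula on sorted data:

```python
# Illustrative sketch of the Gini coefficient discussed above (toy data).
import numpy as np

def gini(incomes):
    """Gini of non-negative incomes: 0 = perfect equality, ->1 = one earner takes all."""
    x = np.sort(np.asarray(incomes, dtype=float))   # sort ascending
    n = x.size
    ranks = np.arange(1, n + 1)                     # ranks 1..n
    return (2 * np.sum(ranks * x)) / (n * np.sum(x)) - (n + 1) / n

print(gini([10, 10, 10, 10]))   # 0.0   -> perfectly equal distribution
print(gini([0, 0, 0, 100]))     # ~0.75 -> highly unequal (approaches 1 as n grows)
```

Multiplying the result by 100 expresses it as a percentage, matching the 50,4% style quoted in the article.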
https://www.theindependent.co.zw/2020/12/11/zims-inequality-the-rich-get-richer-and-the-poor-get-poorer/
Last week, writing about how we should review the outcomes of the World Summit on the Information Society, I emphasised its vision – especially its opening call for a ‘people-centred, inclusive and development-oriented Information Society’ (or ‘digital society’ as I think we should now call it). You can take each of these three notions and analyse them for their meaning now and in the future. What, for example, will ‘people-centred’ mean when decisions that matter are mainly taken by computer algorithms? What is development, or ‘sustainable development’, as now preferred? This week, what is inclusion?

Inclusion and exclusion

Let’s get three things clear. First, there’s a big difference between ‘inclusion’ and ‘exclusion’. They are not counterparts. The extent to which people are ‘included’ in economies and societies is a continuum. Too often, policies designed to address ‘exclusion’ focus on ‘the poorest’, ‘the socially excluded’ or ‘most marginalised’, at the expense of those that are merely ‘poor’ or have limited (but less limited) access to decision-making power, goods and public services.

Policies that focus on ‘the poorest’ or ‘most marginalised’ are palliative rather than transformative. They’re concerned to lift the life experience of those at ‘the bottom of the pyramid’, say the 10 per cent with lowest incomes. This is, of course, highly desirable from the point of view of social and economic welfare, but its aim is to alleviate the impact of exclusion rather than address inequalities across society. To do that, you’d need to focus on the whole community, particularly the inequalities experienced by those who struggle to get by above that ‘bottom’ 10 per cent.

Inclusion and digital inclusion

Second, there’s a big difference between ‘inclusion’ in what society has to offer, and ‘digital inclusion’. All societies, political and economic systems, have power structures. Political, social and economic inclusion are about the ability to participate in the decisions that affect one’s life and livelihood, one’s family, one’s neighbourhood. They’re about equitable access to many things that people value and that they can leverage to access opportunities – jobs, education, health services, financial security, decent housing, clean water, the ability to afford to get about by public or private transport. Access to ICTs may help people to access opportunities, but it’s not more important than lack of access to jobs or medicine or childcare.

Intersectionality

Third, inclusion and exclusion tend to be (but are not necessarily) intersectional. Those at the top of economic and social hierarchies tend to share some commonalities – inherited wealth, expensive education, gender, ethnicity, membership of the same social clubs, participation in the same pastimes (ski holidays, golf clubs), fluency in international languages (especially English: check out the language environment at this year’s IGF if you happen to be there). Marginalisation is often – but by no means always – also cumulative, involving the inverse of some of the same factors as well as others (disability, poor health, breakdown of law and order in local communities, family breakdown and abusive relationships). You can’t fix this with Netflix.

‘Real access’ …

It’s always been a problem that the easiest way to measure ‘digital inclusion’ has been to measure access and connectivity.
Almost all surveys of where we are in meeting WSIS targets or achieving an inclusive Information Society begin with data on who’s got access to the Internet and where. I’ve just, after some thought, been guilty of this again myself, because it is the easy way to go. But we should always bear in mind that it is very partial.

As early as the Summit, back in the first years of the century, civil society groups including APC were already talking about ‘real access’ which, alongside connectivity, included the affordability of (then mobile phone, now online) access, the availability of ‘relevant’ (often identified as ‘local’) content, and the skills (from literacy through to research and analytical skills) required to make use of the new services that were becoming available. More recently, from survey evidence, we’ve added barriers based on fear and insecurity: ‘the Internet won’t help me’ has been joined by ‘it might even harm me’ in inhibiting some potential users of the Internet from joining.

The importance of these ‘real access’ factors is recognised more widely now. The pace of growth in Internet access is slowing because it’s not affordable to the lower income groups that make up almost all the unconnected. The gender gap appears to be growing because, in lower income countries, men are more likely to be above the financial threshold that enables them to buy a phone and use it. Yet we still tend to start our measurement of access with raw data.

… and power structures

But there’s another factor, too, that matters here, which is to do with access to power structures. The most powerful are best equipped to take advantage of new digital resources. Most people don’t have the time, resources or, importantly, the contacts to maximise the value of the opportunities that digital technologies might offer. Those who are most marginalised have least opportunity to do so. This is another reason why ‘digital inclusion’ doesn’t necessarily reduce social exclusion or inequality but may, as several recent studies have suggested, actually increase it.

This is, in practice, where the impact on those who are most marginalised is most significant. Some governments, my own among them, are keen to make as many public services as possible ‘digital by default’ – ostensibly because it improves the quality of service (which experience suggests, at best, is arguable), but also because it cuts the cost (and so enables popular tax cuts). The problem is that many people aren’t digital by default, particularly those who are most marginalised. The evidence suggests that there’s a proportion of most populations that can’t or won’t be. Making public services or welfare benefits dependent on digital access makes life more difficult for those most in need. The impact of requiring elderly, digitally-inexperienced job-seekers and benefit-claimants to seek jobs and claim benefits online was vividly illustrated, for my country, in Ken Loach’s film I, Daniel Blake a couple of years ago.

So, three conclusions

Three closing points, therefore, on digital inclusion and its relationship with wider public policy.

First, digital inclusion/exclusion and digital inequality result from and reflect inclusion/exclusion and inequality in other areas. If you have problems affording or accessing healthcare or childcare, you’ll have problems affording or accessing broadband. If your life’s consumed with coping, making ends meet, just scraping by, you won’t have time or energy to browse the web and take advantage of all it offers.
Second, digital inclusion’s not a solution to economic and social inequality, as some enthused in the years round WSIS and some still do. It may provide resources which people can use to support themselves, address aspects of exclusion, and overcome economic and social inequalities, but it will do so effectively only for those with the time, resources and capacities to take advantage of them. It can lift exclusion for some, but it’s doubtful that it will have much impact on relative inequalities and power structures across societies as a whole. It’s least likely to lift those who are most excluded out of marginalisation.

Third, as a result, policies to address digital exclusion should be seen as integral to policies to address exclusion generally. This does not (emphatically not) mean they should lead them. On the contrary. Too much thinking about digital inclusion has been built round ICT solutions – but the problems that bring it about are societal, not digital: affordability, illiteracy, poor or limited access to education, gender inequality, discrimination against ethnic and social groups, the marginalisation of those with disabilities, high levels of criminality, lack of opportunities for people to lift themselves out of poverty. These are the longstanding priorities of civil society organisations, and should be its priorities in the digital context as well.

Next week: some thoughts from this year’s Internet Governance Forum.
https://www.apc.org/en/blog/inside-digital-society-so-who-are-you-including
March 29, 2019

NORTH ADAMS, MA—MCLA announces that it is adding a new bachelor’s degree program in communications, plus a concentration in digital media innovation, to be offered starting Fall 2019. Current MCLA students will also be able to transition into this major if they so desire. This new degree will be offered through the MCLA Department of English/Communications.

The Communications major and its Digital Media Innovation concentration will prepare students within the English/Communications Department who want to pursue careers in the communications professions, including (but not limited to) journalism, public relations (PR) and corporate communications, broadcast, radio, film, and digital media. The Massachusetts Board of Higher Education approved the new program in March.

The curriculum in this major incorporates courses in communications research, a range of media production skills, writing, English, and media and cultural studies. It will allow students to seek jobs in a number of communications fields that are increasingly in demand, and it will give them the overall communication skills that are highly sought after by companies. Students will have the ability to adapt to and thrive in a rapidly changing technological environment. Furthermore, it will prepare those wishing to continue with graduate study in media and cultural studies, journalism, or related fields within the context of a liberal arts education.

“Communications skills are in high demand,” said MCLA President James F. Birge, Ph.D. “A Harvard Business Review study of 22 million job postings from 2014-15 identified communications as one of the skills employers look for in potential new hires. This new program, with its expanded course offerings, will prepare our students to work in a variety of related fields, and give them the digital media literacy skills needed in today’s workplace.”

The Communications major will build upon resources that are unique to the Berkshire region, including the growing creative economy and the relatively underserved media market in the county, especially its northern part. Partly because of Berkshire County’s location “on the edges” of the bigger media markets of Albany and Springfield, there is a paucity of locally produced media outlets, especially digital, video, and audio media organizations. Students have been starting to fill this void with independent study projects and internships, local election coverage on public access television, and expanded coverage in student media of the towns of North Adams, Adams, Cheshire, and Williamstown. This presents a unique opportunity for students to interact with the local community in developing stories that also help local citizens stay informed of relevant happenings.

Furthermore, the growing creative economy has created a need for public relations expertise to promote the many events and opportunities for tourists and local residents alike. Everything from website development to printed brochures, newsletters, and press releases is needed by the new organizations being created throughout the county. This need, in turn, encourages students to remain in the county after graduation, something many graduates have said they wish to do, which could be a factor in slowing the general population decline in Berkshire County.

Massachusetts College of Liberal Arts (MCLA) is the Commonwealth's public liberal arts college and a campus of the Massachusetts state university system.
MCLA promotes excellence in learning and teaching, innovative scholarship, intellectual creativity, public service, applied knowledge, and active and responsible citizenship. MCLA graduates are prepared to be practical problem solvers and engaged, resilient global citizens. For more information, go to www.mcla.edu.
https://mcla.edu/mcla-in-the-community/press-release/2019March/mcla-adds-new-bachelors-degree-in-communications-plus-concentration-in-digital-media-innovation.php
Offers career development and training programs to help job seekers acquire the education and skills needed to succeed in today’s labor market. If you’re a resident of Broward County, you may be eligible to receive a scholarship of up to $12,000 to help cover the cost of tuition, books, supplies, and more. Scholarship recipients are also given access to a variety of resources and workshops to help with direct job placements, résumé writing, interview skills, employment readiness, and career guidance.

Broward UP

Through Broward UP, Broward College offers FREE educational opportunities, workforce training, and support services directly in neighborhoods throughout Broward County. Their goal is to help individuals get the training needed to find a good job, make more money, and get the skills required to thrive in the workforce. Tamarac is proud to partner with Broward College to provide FREE educational opportunities to our residents. From business, health care, information technology, manufacturing and more, find what interests you. Visit www.Tamarac.me/BrowardUP to register.
https://tamaracedo.com/incentives/workforce-training/
"We followed the map as best we could, periodically checking our bearings using the chronograph and the sextants that the seer had given us. Eventually we found the deserted location that corresponded to the coordinates on the rapidly disintegrating map. And we began digging... "We started a trench that went down about fifteen feet into the baking sand and headed due South. After a few hours our spades rang with the sound of steel on stone and as it did so the group gathered round to see what we had hit. Some hand digging revealed a dark black stone that had been carved with a strange texture on it's surface like a series of overlapping layers of petrified tendrils frozen for perhaps a thousand years. It looked and felt utterly alien, and yet our goal lay in the centre of this forbidding artefact. Captain Blackthorn grimaced against the salt air that sandblasted his face. His men were weary, his ship was falling apart and the hold was replete with treasures beyond counting. It was time to head home and enjoy the bounty that years at sea had brought them. As he braced himself against the pressing squall he considered the conundrum of converting said bounty into a transferable asset that could easily be spent without arousing suspicion of the local militia or the jealousy of rival pirates. If only large amounts of wealth could be represented on something as light and unobtrusive as a piece of paper. But then Blackthorn had a idea: "I know what we'll do! We'll bury it!…" Meta energy...lent by lunos of the seven skys, father of the dragongods, has the power to alter events, rippling through time and space, no other form of energy is stronger, with the ability to even destory planets.It rips apart sheilds by going back in time before they were brought up. No weapon can withstand the force of such It is a power usable only by families bloodline, and those we have chosen to gift with miniscule amounts of its power. It not being of this plane, isn't even subjective to the so called "gods" here. Different worlds, different levels of power. It is a power of change and manipulation, essentially leaving us to create with it what we will, if our mind is strong enough to do so. Large amounts of this inserted into any one being/energy will cause them to implode, ripping themselves apart and sucked into another dimension entirely. The limitations of meta are only set by ones mind, an open mind has no limits to the powers of meta.
http://strolen.com/browse_author/meteorit/0
IV.A.8 Change Management

Management should implement and align a consistent change management process throughout the entity, making sure to include BCM. As changes are made to production systems and business processes during the normal course of business, recovery systems and documentation at alternate locations should similarly be updated to reflect production and primary system changes. The change management process should allow for expedient implementation of emergency changes during an event, such as changing an access control list to provide rapid access for troubleshooting and analysis. Change tickets and corresponding activity should be reviewed for appropriateness once the event has been resolved. Even during events, changes should still be properly authorized, monitored, and documented. Poorly administered emergency changes can result in further disruption. Additionally, the interrelated nature of systems can compound disruptions to previously unaffected systems. After an emergency event, systems documentation should be updated for any changes made. Change management elements are addressed in more detail in the IT Handbook’s “Development and Acquisition” and “Operations” booklets.
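As a purely hypothetical illustration (not drawn from the handbook; all field names invented), an emergency change record might capture the authorization, documentation, and post-event review obligations described above:

```python
# Hypothetical sketch: a minimal emergency change ticket that preserves
# authorization, documentation, and post-event review, per the guidance above.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class EmergencyChangeTicket:
    ticket_id: str
    description: str                       # e.g. ACL change for rapid troubleshooting access
    authorized_by: str                     # changes must still be authorized during events
    implemented_at: datetime
    systems_affected: list = field(default_factory=list)
    post_event_reviewed: bool = False      # reviewed for appropriateness after the event
    recovery_docs_updated: bool = False    # alternate-site documentation kept in sync

ticket = EmergencyChangeTicket(
    ticket_id="CHG-0001",
    description="Temporary access control list change for incident analysis",
    authorized_by="Emergency change approver",
    implemented_at=datetime.now(timezone.utc),
    systems_affected=["core-firewall"],
)
```

The two boolean flags mirror the handbook's points that emergency changes are reviewed once the event is resolved and that recovery documentation at alternate locations is updated afterwards.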
https://ithandbook.ffiec.gov/it-booklets/business-continuity-management/iv-business-continuity-strategies/iva-resilience/iva8-change-management.aspx
On the banks of the Karnaphuli River, in the Bay of Bengal, the port city of Chittagong is Bangladesh’s second largest city, with a population of two and a half million people, many of them refugees from parts of the countryside that have become uninhabitable due to the floods that hit Bangladesh each year during monsoon season. The rains also affect the city as streets are submerged, marooning those living in the low-lying areas. Local rickshaws navigate the waist-deep water, the pullers and drivers taking advantage of the bad weather by demanding excess fares from the commuters. As these annual weather events become more extreme, causing fatal landslides and widespread disruption, can the city ever return to business as usual?

Bangladesh is a country extremely prone to the effects of climate change. The monsoon season is sandwiched between two cyclone seasons - a period of uncertainty that runs from March to December. Even relatively subtle changes in the weather patterns are already having a big impact on the country. According to a 2017 report by the Food and Agriculture Organisation of the United Nations, Bangladesh emits only 0.3% of the world’s emissions but is the victim of some of the worst effects of climate change. And those changes are leading people to leave their homes.

A 2011 report by the UK government found that “rural-urban migration can be a coping strategy for households affected by environmental events”. The report quotes a survey from the island of Hatia, off the coast of Bangladesh, which found that 22% of households used migration to cities as a coping strategy following tidal surges, and 16% following riverbank erosion. The same report found that the population of Dhaka, the capital city, increased from 1.4 million in 1970 to 14 million in 2010, and is expected to rise to 21 million by 2025. A UN-Habitat report in 2010 found that around 30% of the population of Dhaka were living in slums or informal settlements.

The Intergovernmental Panel on Climate Change estimates that, by 2050, Bangladesh’s population at risk of sea level rise will increase to 27 million. But the issue is not confined to Bangladesh. British environmentalist Norman Myers predicted that as global warming takes hold there could be as many as 200 million people affected by disruptions of monsoon systems and rainfall, severe droughts, sea-level rise and coastal flooding.
https://wheretheleavesfall.com/explore/pictures/the-breaking-loose-of-the-elements/
Background: Functional connectivity detected by resting-state functional MRI (R-fMRI) helps to discover the subtle changes in brain activities. Patients with end-stage renal disease (ESRD) on hemodialysis (HD) have impaired brain networks. However, the functional changes of brain networks in patients with ESRD undergoing peritoneal dialysis (PD) have not been fully delineated, especially among those with preserved cognitive function. Therefore, it is worth knowing about the brain functional connectivity in patients with PD by using R-fMRI.

Methods: This case-control study prospectively enrolled 19 patients with ESRD receiving PD and 24 age- and sex-matched controls. All participants without a history of cognitive decline received mini-mental status examination (MMSE) and brain 3-T R-fMRI. Comprehensive R-fMRI analyses included graph analysis for connectivity and seed-based correlation networks. Independent t-tests were used for comparing the graph parameters and connectivity networks between patients with PD and controls.

Results: All subjects were cognitively intact (MMSE > 24). Whole-brain connectivity by graph analysis revealed significant differences between the two groups with decreased global efficiency (Eglob, p < 0.05), increased betweenness centrality (BC) (p < 0.01), and increased characteristic path length (L, p < 0.01) in patients with PD. The functional connections of the default-mode network (DMN), sensorimotor network (SMN), salience network (SN), and hippocampal network (HN) were impaired in patients with PD. Meanwhile, in DMN and SN, elevated connectivity was observed in certain brain regions of patients with PD.

Conclusion: Patients with ESRD receiving PD had specific disruptions in functional connectivity. In graph analysis, Eglob, BC, and L showed significant connectivity changes compared to the controls. DMN and SN had the most prominent alterations among the observed networks, with both decreased and increased connectivity regions. Our study confirmed that significant changes in cerebral connections existed in cognitively intact patients with PD.
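For readers unfamiliar with the graph metrics reported above, the sketch below (synthetic data, not the authors' pipeline; numpy and networkx assumed, with an arbitrary threshold) shows how global efficiency (Eglob), betweenness centrality (BC), and characteristic path length (L) can be computed from a thresholded correlation matrix of regional time series:

```python
# Toy example of the graph metrics named in the abstract (not the study's pipeline).
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
ts = rng.standard_normal((120, 30))          # 120 time points x 30 brain regions (synthetic)
corr = np.corrcoef(ts.T)                     # region-by-region functional connectivity
np.fill_diagonal(corr, 0.0)

adj = (np.abs(corr) > 0.15).astype(int)      # arbitrary threshold -> binary adjacency matrix
G = nx.from_numpy_array(adj)

eglob = nx.global_efficiency(G)                                   # Eglob
mean_bc = np.mean(list(nx.betweenness_centrality(G).values()))    # mean BC across nodes
# Characteristic path length L is defined on connected graphs, so
# restrict to the largest connected component for this toy example.
largest = G.subgraph(max(nx.connected_components(G), key=len))
L = nx.average_shortest_path_length(largest)

print(f"Eglob={eglob:.3f}, mean BC={mean_bc:.4f}, L={L:.3f}")
```

Intuitively, lower Eglob and higher L (as reported in the PD group) both indicate longer average "routes" between brain regions, i.e. less efficient whole-brain integration.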
https://tmu.pure.elsevier.com/en/publications/changes-of-brain-functional-connectivity-in-end-stage-renal-disea
Adjustment Disorder is a psychological condition that happens when a person cannot deal with a stressor or a recurrent event causing stress. It may manifest with depressed mood, anxiety, and maladaptive behaviours causing significant impairment in social, occupational or personal functioning. It is also associated with a high risk of suicide. For this reason, assessment of Adjustment Disorder should include careful monitoring of both symptomatology and suicidal thinking.

According to the UK Club, in identifying stressors on board, many seafarers report separation from their families and friends, loneliness, high workloads, shift work, long working hours, limited recreational time, and multicultural and multilingual crews as some of the greatest challenges. Stress and anxiety are normal responses to major life changes, but when the discomfort, distress, turmoil and anguish to the person are significant, Adjustment Disorder may be the source of the distress.

Symptoms

Symptoms can vary and include, among others, the following, usually taking place within three months of the introduction of the stressor, and can cause significant impairment in social, occupational or personal functioning:

- Depressed mood;
- Sadness;
- Worry and anxiety;
- Poor concentration;
- Anger and disruptive behaviour;
- Insomnia;
- Physical complaints (headaches, stomach aches, palpitations, chest pain, etc.);
- Low self-esteem, sense of hopelessness, feeling trapped or lonely, or showing signs of withdrawal.

Common stressors

- Disruptions in close relationships;
- Occupational losses or failures;
- Major life changes such as leaving home, getting married, being separated from family and friends or loved ones;
- Failure – disappointments or losses;
- Changing jobs or adjusting to new jobs;
- Being diagnosed with a chronic illness;
- Bullying.

Coping with Adjustment Disorder

- Stay positive;
- Find a routine;
- Immerse yourself in the culture of the company;
- Take notes – observe what is going on around you. Identify problem areas for you, and seek solutions to possible problems;
- Set personal goals aligned with the vision of the organisation;
- Be open and engage with others on board by sharing information, ideas and thoughts, and, when appropriate, your emotions and thoughts;
- Share and cooperate;
- Be trustworthy. When people trust you, do your best to provide positive outcomes;
- Be accepting;
- Be supportive of others;
- Understand both your personal strengths and weaknesses;
- Seek assistance when needed;
- Be courteous and respectful to your colleagues;
- Increase personal and interpersonal insight;
- Apply active listening skills, empathy and validation when communicating with others;
- Communicate appropriately and responsibly, and resolve conflicts when they arise in socially proactive ways.

Adjustment Disorder is an excessive reaction to an identifiable life stressor or recurrent stressors. A substantial amount of research suggests that psychosocial risk factors such as high job demands, low job control, high effort with low reward, and low social support may contribute to the development of depression and anxiety. Organisations can implement interventions that aim to prevent exposure to psychological and physical risk factors and thus reduce the risk of mental disorders.

Adjustment Disorder can be prevented via proactive methods and strategies, the UK Club explains. To address this problem, it is important to encourage culture and team cohesiveness on board, empower employees and promote well-being at sea.
Educating employees in appropriate communication skills, building supportive social networks on board, and encouraging behavioural well-being can lessen the distress arising from stressors. Companies can empower employees by encouraging active coping skills and communication skills, building supportive social networks on board, and implementing resiliency training. In fact, workplace health initiatives can enhance employee well-being, performance and safety on board.
https://safety4sea.com/how-to-deal-with-and-prevent-adjustment-disorder-for-crew-members/
Submitted by kamal • March 28, 2016 • articledunia.com

Many businesses across the globe make use of office partitions to alter or expand the workplace. As compared to constructing a permanent wall, partition systems are quick to install and less expensive. You can create work zones as per your requirements, without causing disruptions to the workflow inside the office.
https://www.socialbookmarkzone.info/why-is-it-a-good-idea-to-invest-in-glass-office-partitions/
This morning, the Superior Court of the State of Washington granted the motion filed by Cooke Aquaculture Pacific, LLC (“Cooke”) against the Washington State Department of Natural Resources (“DNR”) and Hilary Franz, the Commissioner of Public Lands, to extend the deadline to April 14, 2023, to safely harvest steelhead trout and remove equipment at the Rich Passage and Hope Island fish farms in Puget Sound.

Cooke operates its farm sites according to carefully coordinated farm management plans, with employee safety being its top priority. Significant changes in harvest schedules can increase both safety risks for employees and disruptions for customers. The arbitrary timelines originally set forth by DNR were impossible to meet without exposing Cooke employees to dangerous winter working conditions, increasing perceived environmental risks, and causing significant financial harm.

Cooke sought this preliminary injunction to protect its employees and ensure safe working conditions. We are grateful that the Court granted our request, as this extension gives our employees the flexibility required in a marine environment to ensure safe working conditions. Cooke can now remove the fish on its original harvest schedule and properly remove our equipment without subjecting employees to unnecessary risk.

-30-

Joel Richardson
Vice President Public Relations
Cooke Aquaculture Inc. / for Cooke Aquaculture Pacific, LLC.
https://www.cookeseafood.com/2023/01/06/cooke-granted-preliminary-injunction-for-safe-harvest-and-equipment-removal-of-washington-state-steelhead-trout-farms/
The purpose of the Critical Infrastructure Protection and Recovery (CIPR) Working Group (WG) is to provide a forum for the application, development and dissemination of systems engineering principles, practices and solutions relating to critical infrastructure protection and recovery against manmade and natural events causing physical infrastructure system disruption for periods of a month or more.

Critical infrastructures provide essential services underpinning modern societies. These infrastructures are networks forming a tightly coupled complex system cutting across multiple domains. They affect one another even if not physically connected. They are vulnerable to manmade and natural events that can cause disruption for extended periods, resulting in societal disruptions and loss of life. The inability of critical infrastructures to withstand and recover from catastrophic events is a well-documented global issue. This is a complex systems problem needing immediate coordinated attention across traditional domain and governmental boundaries. For example, the US President issued Presidential Policy Directive PPD-21, which addresses “a national unity of effort to strengthen and maintain secure, functioning, and resilient critical infrastructure.” This includes an imperative to “implement an integration and analysis function to inform planning and operations decisions regarding critical infrastructure.” This working group will seek to support this and other policies with international reach. INCOSE, as the premier professional society for systems engineering, can provide significant contributions toward critical infrastructure protection and recovery.

Roles and responsibilities

WG Lead Chair: Daniel Eisenberg ([email protected]) – Be the primary POC for all WG activities, communications and actions. This role includes relationships both internal and external to INCOSE. Responsible for the annual budget and other financial activities. Responsible for operating process development and approval.

Co-Chair: John Juhasz – Convene monthly member meetings; develop communications, programs, planning and arrangements for workshops, conferences, symposiums and special public meetings; and act as WG Lead when appropriate.

Co-Chair: Anthony Adebonojo – Ensure that facilities and other resources are available for special meetings, manage material and knowledge collection and distribution, maintain a list of technical projects and products, monitor progress on technical tasks, and maintain the WG Connect site and external site.
https://www.incose.org/incose-member-resources/working-groups/Application/critical-infrastructure
BACKGROUND: This report presents the case of a woman with no known coagulopathy, use of anticoagulants, or history of trauma who spontaneously developed an epidural hematoma of the spine. This is an uncommon condition, with the potential for missed diagnosis and potential harm to the patient.

CASE REPORT: The patient was an elderly woman with a history of Type 2 diabetes mellitus and hyperlipidemia. Of note, she had recently recovered from COVID-19. Because the woman presented with right-sided weakness and pain in the back of her neck, the stroke team was activated. A computed tomography (CT) scan of her neck revealed a very subtle hyperdensity, which on further investigation was found to be an acute epidural hematoma extending from the C2-C3 space through the C6 vertebra. While awaiting surgery, the patient had spontaneous improvement of her right-sided weakness and her condition eventually was managed conservatively.

CONCLUSIONS: Spontaneous spinal epidural hematoma is an uncommon condition, and a high index of suspicion is required to accurately diagnose and appropriately manage it. In the case presented here, the hematoma was subtle on the CT scan, and the patient’s weakness easily could have been misdiagnosed as an ischemic stroke. That may have resulted in administration of thrombolytics, potentially causing significant harm. In addition, the patient had recently recovered from COVID-19 disease, which may or may not be incidental. Further observation will be required to determine if there is a spike in similar cases, which may be temporally associated with the novel coronavirus.
https://www.amjcaserep.com/abstract/index/idArt/926784
How governments are addressing climate change

Faced with the adverse effects of climate change, governments around the world are prioritizing climate resilience by investing in infrastructure, societal resilience, and data analytics to predict and prepare for future disruptions.

The drought-fueled bushfires that ravaged much of southern Australia in 2019 and 2020 not only darkened skies and destroyed wildlife, they also damaged critical energy infrastructure, leaving tens of thousands of homes without power during the disaster. Months later, a different kind of weather event on the other side of the world left another government unable to provide electricity to its citizens: In February 2021, unusually cold temperatures in Texas froze natural gas wells, wind turbines, and coal piles, causing the state’s power grid to collapse and leaving millions to face harsh conditions without power.

As extreme weather exacerbated by climate change continues to disrupt the delivery of water, power, and other services, government agencies around the world are prioritizing climate resilience—the ability to respond, recover, and adapt to the adverse effects of climate change. Agencies are institutionalizing climate resiliency by linking climate action to their missions, future-proofing critical infrastructure, embedding environmental justice in their programs, collaborating with public and private partners to unlock collective action, and enhancing their data analytics capabilities to prepare for future climate disruptions.

Trend drivers

- Lessons learned from the COVID-19 response have underscored the need for greater resilience in the face of disruption, whether it comes from climate, public health, or other causes.
- The increasing frequency and severity of extreme weather events has instilled a sense of urgency within the public sector.
- Frequent disruptions to operations, supply chains, and human lives are compelling broader climate action.
- The cost of inaction is too high from an economic, social, and continuity of operations perspective.
- Investments in climate adaptation can create jobs and spur significant economic growth.

Trend in action

A climate-resilient agency has a greater ability to pursue its mission in the face of climate-related disruptions and to protect individuals and communities from the adverse effects of climate change. Consider the mobility sector. Disruptions to the transportation network during extreme weather events not only affect the movement of goods and people but also limit access to employment and critical services such as health care. To mitigate future disruptions, Great Britain’s national railway manager, Network Rail, is working to improve its climate resilience. In response to projections of increased rain and flooding over time, Network Rail has implemented an integrated drainage management policy and is investing in drainage systems along key routes to protect the infrastructure from flooding and to minimize climate-related disruptions to passenger transport.

Linking climate to the mission

Climate change is increasingly shaping agency missions at all levels—central, regional, and local. In the coming decades, it could significantly alter the operational landscape and may compel some agencies to rethink entire programs. Government entities must understand and embrace how climate change affects their missions—and act in a way that both aligns with and advances their objectives.
The US Department of Defense (DoD) has linked climate resilience to its mission, noting that temperature extremes, rises in sea levels, and extreme weather events increasingly damage military installations, impair military capabilities, create harsher operational conditions, and fuel global instability and conflict. Acknowledging climate change as an existential threat to national security, the DoD has released a climate adaptation plan to future-proof military installations, build a climate-ready force, secure supply chains against extreme weather events, and inculcate climate-informed decision-making.

Investing in societal resilience

Governments are also increasingly investing in resilient infrastructure, enhancing the capacity of communities to withstand extreme weather events, and ensuring that disadvantaged communities aren’t left to face climate-related risks on their own. The cost of waiting can be extreme; note the US$32 billion cost that Indonesia is expected to incur to move its capital away from Jakarta, one of the world’s fastest-sinking cities.

In September 2021, Deloitte’s State of the Consumer Tracker surveyed 23,000 people across 23 countries. Nearly half of respondents had directly experienced at least one climate event in the past six months.

Data analysis will play a key role in understanding and mitigating these risks. To aid decision-making, in 2021, the US Federal Emergency Management Agency introduced the National Risk Index, a web-based tool that maps the nation’s vulnerability to 18 different risk factors at the county and census tract levels. The tool is designed to help agencies and communities direct their resources and actions where they’re needed most.

Government investment in large infrastructure projects to build resilience against climate change’s disruptive effects is most obvious in coastal cities, which face the greatest risk from rising sea levels and extreme weather. Across the world, these cities are turning to hard engineering solutions such as sea walls or surge barriers; Venice (Italy), one of the world’s most flood-prone cities, has built a system of flood barriers—Modulo Sperimentale Elettromeccanico (MOSE)—to protect against rising sea levels and high tides.

To build truly resilient societies, however, such investments must protect everyone, including those with few resources to deal with failing power or water systems. Adopting an equity lens can help governments evaluate not only the environmental impact of their actions but the broader social and economic outcomes. One example of this approach is the US federal Justice40 initiative, which aims to address historic underinvestment by delivering “40% of the overall benefits from relevant federal climate investments to disadvantaged communities.” In Deloitte’s September 2021 State of the Consumer Tracker, two-thirds of respondents want their national governments to do more to fight climate change.

Building data-driven anticipatory capabilities

Resilience begins with information—understanding and weighing specific climate threats and their likelihood, potential impact, and community vulnerability to those threats. Governments need this level of specificity to take effective and meaningful action while minimizing waste. The US National Oceanic and Atmospheric Administration (NOAA), for example, recently launched an interactive map providing county-level information on various locations’ susceptibility to catastrophic climate disasters such as wildfires, floods, droughts, and heat waves.
It is intended to help state and local agencies develop action plans. Data analytics tools can help agencies anticipate forces and events that could complicate or even alter their missions. Governments are collaborating with industry and academia to add artificial intelligence (AI) and machine learning to their arsenal, using them to parse vast troves of weather data to identify patterns and plan mitigation strategies. The UK Meteorological Office is currently partnering with Google to see how AI might enhance its ability to predict the weather.

Moving forward

As agencies draft ambitious climate resilience plans, a few steps can help them achieve long-term success:

- Install climate leadership. Leadership is key to any large-scale transformation. Agencies should create positions such as chief climate officer or chief sustainability officer to lead resilience efforts and coordinate intra- and intergovernmental action.
- Create a climate-ready workforce. Agencywide climate education can raise awareness among the workforce about the climate crisis and climate resilience strategies.
- Build public-private climate innovation ecosystems. Groundbreaking technological innovation is key to climate-change resilience. Governments should build and nurture collaborative public-private ecosystems to take advantage of shared knowledge and resources while ensuring that the broader community supports their actions.
- Link climate action to economic opportunities. Climate action has the potential to be the next big economic opportunity. Agencies should encourage private-sector participation by using their authority to set favorable regulations, create new standards, and make seed investments. Linking climate action to economic opportunities can make the private sector a willing participant in the low-carbon future.

According to one Deloitte estimate, for instance, climate action could add AUD$680 billion to the Australian economy and create more than 250,000 jobs across its regions and industries by 2070, while inaction could curtail GDP by AUD$3.4 trillion and result in 880,000 job losses over the same period.
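As a loose illustration of the kind of county-level risk scoring that tools like the National Risk Index perform: the sketch below uses invented weights and data and is not FEMA's published methodology, though it mirrors the general idea that hazard exposure and social vulnerability raise risk while community resilience offsets it.

```python
# Toy risk scoring: expected loss and social vulnerability raise risk,
# community resilience offsets it. All numbers here are invented.
counties = {
    # name: (expected_annual_loss, social_vulnerability, community_resilience), each scaled 0-1
    "County A": (0.80, 0.60, 0.40),
    "County B": (0.30, 0.90, 0.20),
    "County C": (0.50, 0.30, 0.70),
}

def risk_score(loss: float, vulnerability: float, resilience: float) -> float:
    return loss * vulnerability / max(resilience, 1e-9)   # guard against division by zero

ranked = sorted(counties.items(), key=lambda kv: risk_score(*kv[1]), reverse=True)
for name, factors in ranked:
    print(f"{name}: risk={risk_score(*factors):.2f}")
```

Ranking counties this way is one simple method for directing resources and action plans to where modeled risk is highest.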
https://ceo-na.com/ceo-life/environment/climate-resilient-government/