At two important events, leaders of the EU and its member states will discuss the European model, focusing on whether successful models such as the Nordic one can be transferred to countries in distress.
There is widespread agreement that there is no such thing as one European social model, but rather a variety of models with some common features. As some of those models are evidently doing better than others in dealing with unemployment, poverty and the financing of healthcare, the question arises as to what lessons can be learnt from those more successful models.
The recent study “Globalisation and the reform of the European social models”, prepared by André Sapir for the think-tank Bruegel and presented at the ECOFIN Informal Meeting in Manchester on 9 September 2005, argued that there is not one European social model, but rather four – the Nordic, the Anglo-Saxon, the Continental and the Mediterranean.
• The Nordic model (welfare state, high level of social protection, high level of taxation, extensive intervention in the labour market, mostly in the form of job-seeking incentives)
• The Anglo-Saxon system (more limited collective provision of social protection merely to cushion the impact of events that would lead to poverty)
• The Continental model (provision of social assistance through public insurance-based systems; limited role of the market in the provision of social assistance)
• The Mediterranean social welfare system (high legal employment protection; lower levels of unemployment benefits; spending concentrated on pensions)
It has been argued that the social models of the EU-10, though transitory, must be added to this schema. Controversially, the Sapir study concludes that only the Nordic and the Anglo-Saxon models are sustainable.
The Assembly of European Regions’ Committee on Social Cohesion, Social Policy and Public Health has provided a set of common denominators which, in their entirety, define the European social model as “a set of principles and values, common to all European regions”, and it has declared these principles to be:
a. Solidarity
b. Social Justice
c. Social Cohesion
d. Equal access to employment, in particular for the young and the disabled
e. Gender equality
f. Equal access to health and social protection
g. Universal access to education
h. Universal access to health and social services
i. Equal opportunities for everybody in society, in particular the elderly, the young, the disabled, the socially excluded and minority groups
j. Universal access to, development of and implementation of knowledge in health and social services.

Source: https://www.euractiv.com/section/social-europe-jobs/news/eu-debates-european-social-model/
In Reply to: what exactly is a radial pull posted by jack on June 20, 2000 at 01:22:50:
High-performance tires are known to be made of a softer compound rubber and have a different construction than normal passenger tires. That makes it more likely the tire will flatten somewhat while sitting and set up a vibration when driven, until the tires return to roundness again. This usually doesn't take too long, but it depends on temperature, speed and, to some degree, I guess, lateral pressure on the tires. Also, it is usually assumed that only the front tires have this symptom, but the rears have it just the same; we just feel the vibration more in the steering wheel because we're connected to it by touch.
Radial pull...I don't know what that is unless it's a defective tire that has been manufactured or built incorrectly.
Bob ///M3
: I just bought new avs sports and i had my 96 m3 aligned and they said it was still pulling to right after the alignment. They said that they guessed the front right tire had a radial pull. What exactly is that. I took it to where i bought the tires and they ordered me a new one. Would this cause a vibration around 60 mph because since i got the new tires I have had some vibration especially when i haven't driven the car for awhile.
: Thanks
Source: http://bimmer.roadfly.com/m3/messages/archive/msgsy2000w25/97728.html
Black Informant responds; I respond to his response; soon, he will respond once more; then, it will be over.
Welcome back to the HIV Blame Game, wherein blogger Black Informant and I debate the root causes of the HIV epidemic in Washington, D.C. Yesterday, I wrote a post arguing why the gay and lesbian community shouldn’t be a scapegoat for D.C.’s HIV problem, that HIV/AIDS affects us all, and that we need to create strong educational initiatives to help stop the epidemic.
Black Informant responded—read his post here. The crux of his argument is that HIV/AIDS is spread by people, both gay and straight, “who know how to avoid STDs, yet insist on doing the total opposite,” and that public initiatives to further educate the public on the risks of HIV would force “society at large to foot the bill for their bad choices”:
For those of us in society who not only understand the importance of practicing safe sex, but actually abide by those guidelines, HIV/AIDS is not a problem. It is only a problem for those who live by the rule “If it feels good, do it”, all the while expecting society at large to foot the bill for their bad choices. That applies to both homosexuals and heterosexuals. But as I contend in both this post and my previous one, the homosexual community, by definition has regulated itself to high risk sexual behavior and should stop hiding behind what the heterosexual community is or isn’t doing. As far as “blaming” goes, you can start with the people who know how to avoid STDs, yet insist on doing the total opposite.
I couldn’t agree with Black Informant more that practicing safe sex is the best thing we can do to stop the spread of HIV/AIDS. But I have to disagree with him when he says that HIV is “only a problem for those who live by the rule ‘If it feels good, do it.'”
By that logic, the largest population of irresponsible hedonists is concentrated in sub-Saharan Africa. Eleven percent of new HIV cases are reserved for irresponsible hedonistic newborn babies. Fifteen million children have been orphaned by AIDS, surely payment for their irresponsible hedonism. In D.C., most irresponsible hedonists reside in the city’s poorest areas, where even condoms are locked up to keep irresponsible hedonists from stealing them to engage in more irresponsible hedonism.
The reality is this: Stating that those affected by HIV were infected through irresponsible hedonism is a way to willfully ignore a problem that does not affect you directly. Understanding the “importance of practicing safe sex” and being able to actually “abide by those guidelines”—like Black Informant can—is a privileged position. It’s not a coincidence that low-income African-Americans are most affected by HIV. Protecting yourself costs money. It costs money to educate adolescents on the importance of safe sex. It costs money to distribute free condoms to people in poor areas who are most at risk. It costs money to offer free HIV testing to an entire urban area. It costs money to come out with comprehensive studies every year that try to pinpoint the populations at highest risk and arm them with the knowledge to protect themselves. It costs money to change the decades-old attitude that if you’re not gay, you don’t have to worry about HIV.
But what’s more expensive—-pushing for comprehensive sex education, testing, and condom distribution for all citizens, or watching an entire segment of your population waste away from a crippling affliction? Hey, let’s not pay to find out how many people have HIV, how they got it, and what we can do to prevent it. Once we declare that HIV is “not our problem,” all we have to do is sit back, relax, and wait for the next generation of Washingtonians to enjoy even higher AIDS rates because we couldn’t be bothered with the tax burden.
“Understanding” how to prevent HIV is not common sense. New information on HIV/AIDS comes forth all the time, and even educated, safe-sex-practicing citizens like Black Informant could use a refresher course. He writes:
Risky sexual behavior is the cause behind this jump, not “gay sex” as you are calling it. If a person chooses not to use a condom, that is a risk. If a person chooses to ingest fecal matter during sex (rimming), that is a risk.
Actually, it’s not. A sex act, like rimming, isn’t an HIV risk just because straight people associate it with the gay community. HIV can be transmitted through four fluids—-semen, vaginal fluid, breastmilk, and blood. It’s okay—-I didn’t know that either until I got my free rapid HIV test a couple months ago, and I’m a privileged college-educated white girl.
Educating people takes time, but changing attitudes takes even longer. We need to change attitudes that encourage unsafe sex, but we also need to change attitudes that blanket moral judgments over an entire population of sufferers. Yesterday, the Pope arrived in Africa to tell the continent that condoms are making the AIDS problem worse. Categorize those who use condoms as immoral, or categorize those who don’t use them as irresponsible hedonists—either way, you’re washing your hands of your responsibility to help those in need.

Source: https://washingtoncitypaper.com/article/395934/the-hiv-blame-game-the-real-problem-is-money/
Despite the existence of well-defined relationships between cold gas and star formation, there is evidence that some galaxies contain large amounts of H I that do not form stars efficiently. By systematically assessing the link between H I and star formation within a sample of galaxies with extremely high H I masses (log M_HI/M_⊙ > 10), we uncover a population of galaxies with an unexpected combination of high H I masses and low specific star formation rates that exists primarily at stellar masses greater than log M*/M_⊙ ∼ 10.5. We obtained H I maps of 20 galaxies in this population to understand the distribution of the H I and the physical conditions in the galaxies that could be suppressing star formation in the presence of large quantities of H I. We find that all of the galaxies we observed have low H I surface densities in the range in which inefficient star formation is common. The low H I surface densities are likely the main cause of the low specific star formation rates, but there is also some evidence that active galactic nuclei or bulges contribute to the suppression of star formation. The sample's agreement with the global star formation law highlights its usefulness as a tool for understanding galaxies that do not always follow expected relationships. © 2014. The American Astronomical Society. All rights reserved.

Source: https://research-repository.uwa.edu.au/en/publications/resolved-hi-imaging-of-a-population-of-massive-hi-rich-galaxies-w
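The "global star formation law" the abstract refers to is commonly expressed as the Kennicutt–Schmidt relation, Σ_SFR ∝ Σ_gas^1.4. A minimal sketch, using the standard Kennicutt (1998) normalization rather than the paper's own fit, shows why low gas surface densities imply disproportionately low star formation rates:

```python
def sfr_surface_density(sigma_gas):
    """Kennicutt-Schmidt relation: returns Sigma_SFR in Msun/yr/kpc^2
    given a gas surface density Sigma_gas in Msun/pc^2.
    Normalization (2.5e-4) and slope (1.4) follow Kennicutt (1998);
    this is an illustrative relation, not the paper's fitted one."""
    return 2.5e-4 * sigma_gas ** 1.4

# Because the slope is super-linear, halving the gas surface density
# cuts the predicted star formation rate by more than half:
ratio = sfr_surface_density(5.0) / sfr_surface_density(10.0)  # ~0.38
```

This super-linearity is consistent with the abstract's finding that galaxies with low H I surface densities sit in the regime where star formation is inefficient.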
PURPOSE: To attain a diversified charging system by registering a charge per unit time that changes with the length of the speech time, and calculating the call charge based on that per-unit-time charge and a toll rate corresponding to the distance.
CONSTITUTION: A call control means 11 detects the start/end of a call and calculates the distance between the caller's exchange and the called party's exchange. Meanwhile, a speech time analysis means 12 collects the start and end times of speech from the means 11 to calculate the speech time. A charge accounting means 13 then retrieves the charge corresponding to the speech time from a unit-time toll management memory M1, based on the speech time calculated by the means 12, and retrieves toll-ratio data corresponding to the distance data from a distance-dependent toll rate management memory M2, based on the distance data calculated by the means 11. The charge for each block of speech time is multiplied by the toll ratio corresponding to the distance data, the values for all blocks are summed to obtain the call charge, and the result is registered in a subscriber-dependent charge service register memory M3.
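The block-wise calculation described above can be sketched as follows. This is a hedged illustration, not the patent's implementation: the table values and the names `UNIT_TIME_TOLL`, `DISTANCE_TOLL_RATIO`, `toll_ratio` and `compute_charge` are invented for the example; the two lookup tables play the roles of memories M1 and M2.

```python
# M1 analogue: per-minute charge that changes with elapsed speech time.
# Each entry is (block length in minutes, charge per minute in that block);
# a block length of None means "the remainder of the call".
UNIT_TIME_TOLL = [
    (3, 10.0),    # first 3 minutes: 10 units/min
    (7, 8.0),     # next 7 minutes:   8 units/min
    (None, 5.0),  # remainder:        5 units/min
]

# M2 analogue: toll ratio keyed by distance threshold (km).
DISTANCE_TOLL_RATIO = [
    (20, 1.0),
    (100, 1.5),
    (float("inf"), 2.5),
]

def toll_ratio(distance_km):
    """Look up the distance-dependent toll ratio (role of memory M2)."""
    for threshold, ratio in DISTANCE_TOLL_RATIO:
        if distance_km <= threshold:
            return ratio

def compute_charge(speech_minutes, distance_km):
    """Sum, over speech-time blocks, (block charge) x (toll ratio)."""
    ratio = toll_ratio(distance_km)
    remaining = speech_minutes
    total = 0.0
    for block_len, per_min in UNIT_TIME_TOLL:
        if remaining <= 0:
            break
        used = remaining if block_len is None else min(remaining, block_len)
        total += used * per_min * ratio
        remaining -= used
    return total
```

For a 12-minute call over 150 km, the three blocks contribute (3 × 10 + 7 × 8 + 2 × 5) = 96 units, scaled by the 2.5 long-distance ratio to 240 units.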
COPYRIGHT: (C)1995, JPO
War exists in actuality, but not as distinctly as our beliefs indicate. The propaganda surrounding war convinces those involved that they themselves are the "good guys" and their enemies the "bad guys." This applies almost universally; a brief look at the Nazis reveals the horrific phenomenon of how subjectivity can effectively drive war. Further, from the outside, it's easy to believe that soldiers are all politically charged, passionate, and energized by the feel of bullets in their fingertips, though real, honest accounts of war reveal that everyone with a gun is scared. Soldiers do have political notions, but we're kidding ourselves if we think they're thinking politics each time they send a bullet to the other side. Soldiers are often cold, hungry, and uncomfortable, unlike what the war propaganda posters suggest, with their own basal survival instincts surfacing and taking priority--and rightfully so--over political doctrines. If you've ever been in a fist fight, a similar phenomenon occurs: when engaged, you're not intellectually weighing the whys and hows; you're simply fighting to win, or running away. Soldiers have thus been used to levy their government's ideology, rather than placed on a battlefield to convey their authentic beliefs, as propaganda misleads us to think. No matter how much indoctrination exists, on the battlefield--as many first-hand accounts have demonstrated--it's chaos, with a thin and easily corruptible good-guy/bad-guy veil spanning all corners.
Page 433: "A louse is a louse and a bomb is a bomb, even though the cause you are fighting for happens to be just." Modern war has resolved this through technologically dislocating enemy combatants, preserving their higher functioning at the expense of almost automatic dehumanizing of their enemies. Point is, war is ugly no matter how enlightened or ignorant you are. It's the universal equalizer.
Political propaganda is dangerous because of its wide audience. Not only are soldiers and investors targeted, but also the general population, who, if we remember, aren't the ones facing the bleak frontline horrors. Hence, they're the most gullible. They really do believe, thanks to multiple forms of media, that the other side must be dysfunctional if it doesn't agree with them--that it has a defect preventing a truer appraisal of the good-guys/bad-guys paradigm. Orwell places a lot of blame on the British Left-wing intelligentsia because they swung from "war is hell" to "war is glorious" without much reasoning. (Politics isn't about reasoning, though; it's about positioning.)
Page 434: "As far as the mass of the people go, the extraordinary swings of opinion which occur nowadays, the emotions which can be turned on and off like a tap, are the result of newspaper and radio hypnosis. In the intelligentsia I should say they result rather from money and mere physical safety...We have become too civilised to grasp the obvious. For the truth is very simple. To survive you often have to fight, and to fight you have to dirty yourself."
The function of war is to redefine social truths, and unfortunately for humanity, ideals are still passed on as truths even when they're false. Atrocities thus aren't perceived as atrocities if you commit them, but are seen as instituting justice, and "obviously" reasonable. Again, go back to the Nazis; their ideologies were so ubiquitous that, combined with a desire to regain the German identity lost in the Great War, they truly believed their actions saved humanity. To them, killing was a function of saving. At this level, blaming propaganda isn't enough; it's simply a device to market socially approved values as products. The Nazis were very civilized, in that the approved beliefs (read: truths) were everywhere; National Socialism was totalitarian at heart. Public values determined your private values, and the only capitalism encouraged was that which benefitted the public sector. So you were the business manager, not the private marketeer. The Spanish Civil War was similar because Franco was unapologetically totalitarian.
Orwell learned a lot about civilization through his altercation with a boy falsely accused of stealing; when the boy was stripped naked to check for the goods--finding none--he had no issue standing there naked. Page 438: "One of the effects of safe and civilized life is an immense over-sensitiveness which makes all the primary emotions seem somewhat disgusting. Generosity is as painful as meanness, gratitude as hateful as ingratitude." The boy wasn't civilized; how can that be good? Civilization's inherently beneficial, right? This event turns the cube on the power mechanisms of civilization, exposing it as a mutator of primal, innocuous behaviors and truths, with subjective, exploitative leanings.
The question "Does truth exist?" is really better asked, "Does objective truth exist?" That one word is so often dropped from the conversation that no one recognizes it is the key to it. Orwell mentions how objective truth doesn't exist for the Nazis--only their theories and beliefs about purity do. Each discipline--Orwell uses Science--gets a qualifier: German Science, Jewish Science, etc. "The implied objective of this line of thought is a nightmare world in which the Leader, or some ruling clique, controls not only the future but the past." This manipulation contorts abstract ideals into tools, mutating objectivity into subjectivity, and in the Nazis' case, into merciless, often gleeful destruction which, by relinquishing the reflective intellect, relinquishes our ability to understand that things weren't always, and don't have to be, this way.
Totalitarianism is thus a superstition, a religion, guaranteeing nothing but fear and instability within individuals, and resulting in the decay of our very ability to believe we can be better.
Source: https://www.trperri.com/orwell-project/looking-back-on-the-spanish-war-p431
Temporal stability of individual differences is an important prerequisite for accurate tracking of prospective relationships between neurocognition and real-world behavioral outcomes such as substance abuse and psychopathology. Here we report age-related changes and longitudinal test-retest stability (TRS) for the Neurocognition battery of the Adol...
Background Cognitive training interventions appear capable of improving alcohol-associated neurobehavioral deficits in recently detoxified individuals. However, efficacy remains incompletely characterized in alcohol use disorder (AUD) and available data address only non-affective cognitive outcomes; enhancement of social cognition remains uninvesti...
The human toll of disasters extends beyond death, injury and loss. Post-traumatic stress (PTS) can be common among directly exposed individuals, and children are particularly vulnerable. Even children far removed from harm’s way report PTS, and media-based exposure may partially account for this phenomenon. In this study, we examine this issue usin...
Objective: Difficulties identifying emotional facial expressions are commonly observed in alcohol use disorder (AUD). Critically, this work utilizes single-race stimulus sets, although study samples are not similarly constrained. This is particularly concerning given evidence among community samples showing the impact of racial incongruity, giving...
Background The Adolescent Brain Cognitive Development ™ Study (ABCD StudyⓇ) is an open-science, multi-site, prospective, longitudinal study following over 11,800 9- and 10-year-old youth into early adulthood. The ABCD Study aims to prospectively examine the impact of substance use (SU) on neurocognitive and health outcomes. Although SU initiation t...
Cortisol profiles are known to vary across phases of alcohol use disorder (AUD; e.g. chronic use, withdrawal and early/sustained recovery). These patterns have largely been established through between-subjects contrasts. Using a segmental hair cortisol concentrations (HCC) approach, retrospective longitudinal analyses are feasible. Here, we examine...
Alcohol use disorder (AUD) commonly is associated with compromise in neurobiological and/or neurobehavioral processes. The severity of this compromise varies across individuals and outcomes, as does the degree to which recovery of function is achieved. This narrative review first summarizes neurobehavioral, neurophysiological, structural, and neuro...
Deficits in emotion processing among individuals with AUD are well accepted, however the potential impact of polysubstance use in this population remains uninvestigated. The current work begins to fill this gap by analyzing affective perception and processing in community controls (CCs) and two AUD subgroups differentiated by presence (Alc-Drug) or...
Background Individuals with alcohol use disorder (AUD) often display compromise in emotional processing and non-affective neurocognitive functions. However, relatively little empirical work explores their intersection. In this study, we examined working memory performance when attending to and ignoring facial stimuli among adults with and without A...
As natural disasters increase in frequency and severity (1,2), mounting evidence reveals that their human toll extends beyond death, injury, and loss. Posttraumatic stress (PTS) can be common among exposed individuals, and children are particularly vulnerable (3,4). Curiously, PTS can even be found among youth far removed from harm's way, and media...
Background Acute alcohol intoxication has wide‐ranging neurobehavioral effects on psychomotor, attentional, inhibitory, and memory‐related cognitive processes. These effects are mirrored in disruption of neural metabolism, functional activation, and functional network coherence. Metrics of intraregional neural dynamics such as regional signal varia...
Objective: Despite increased attention to risks and benefits associated with moderate drinking lifestyles among aging adults, relatively few empirical studies focus on acute alcohol effects in older drinkers. Using electroencephalographic indices of early attention modulation (P1 and N1) and later stimulus processing (P3), we investigated whether...
The N-methyl-D-aspartate receptor (NMDAr) system is critically involved in the pathogenesis and neurobehavioral sequelae of alcohol use disorder (AUD), and constitutes a potential pharmacotherapeutic target. Memantine (Namenda) is an FDA-approved NMDAr antagonist with suggested utility in AUD, however its safety and tolerability during long-term ad...
Background: Individuals with alcohol use disorder (AUD) display deficits across a range of cognitive processes. Decrements in social cognition may be particularly important for interpersonal functioning and post-treatment adaptation. Although social cognitive deficits are associated with chronic use of numerous substances, the role of polysubstanc...
Background: Prescription opioid non-medical use (NMU) and its associated consequences have been of concern in the US in recent years. Objective: We examined peer influence and parental guidance, in addition to peer and parental sources of alcohol, on patterns of prescription opioid use, including NMU, among males and females separately. We hypothes...
Objective: Cognitive training is an effective means of improving performance in a range of populations. Whether it may serve to facilitate cognitive recovery and longer-term outcomes in persons with alcohol use disorders (AUDs) is unclear. Here, we review historical and current literature and offer perspectives for model development and potential...
Background: A growing literature suggests deficient emotional facial expression (EFE) processing among recently abstinent individuals with alcohol use disorders (AUDs). Further investigation is needed to clarify valence-related discrepancies and elucidate neural and psychosocial correlates. We examined neurobehavioral indices of EFE processing and...
The Adolescent Brain Cognitive Development (ABCD) Study is an ongoing, nationwide study of the effects of environmental influences on behavioral and brain development in adolescents. The main objective of the study is to recruit and assess over eleven thousand 9-10-year-olds and follow them over the course of 10 years to characterize normative brai...
Background: Individuals in treatment for alcohol use disorder (AUD) display deficits across a broad range of cognitive processes. Disruptions in affective processing are understudied, but may be particularly important for interpersonal functioning and post-treatment adaptation. In particular, the role of sex in AUD-associated emotion processing de...
Objective: Despite the substantial number of older adult drinkers, few studies have examined acute alcohol effects in aging samples. We have explored these interactions across a variety of neurobehavioral domains and modalities and have consistently observed age-contingent vulnerabilities to alcohol-associated decrements in neurobehavioral functio...
Epidemiological estimates indicate not only an increase in the proportion of older adults, but also an increase in those who continue moderate alcohol consumption. Substantial literatures have attempted to characterize health benefits/risks of moderate drinking lifestyles. Not uncommonly, reports address outcomes in a single outcome, such as cardio...
The Adolescent Brain Cognitive Development (ABCD) study is poised to be the largest single-cohort long-term longitudinal study of neurodevelopment and child health in the United States. Baseline data on N= 4521 children aged 9–10 were released for public access on November 2, 2018. In this paper we performed principal component analyses of the neur...
The Adolescent Brain Cognitive Development (ABCD) Study is an ongoing, nationwide study of the effects of environmental influences on behavioral and brain development in adolescents. The ABCD Study is a collaborative effort, including a Coordinating Center, 21 data acquisition sites across the United States, and a Data Analysis and Informatics Cent...
Background: Non-medical use (NMU) of prescription opioids is a public health concern and sex differences in prevalence of NMU have been observed previously. Little is known about how youth are obtaining and using these drugs. While any regular use could be problematic, NMU is particularly concerning. More information is needed on NMU patterns amon...
Background: Treatment-seeking men with alcohol use disorder (AUD) classically exhibit a blunted hypothalamic-pituitary-adrenal (HPA) axis response to pharmacologic and behavioral provocations during the early phases of abstinence from alcohol. Independent of alcohol, a significant muting of HPA axis reactivity is also observed among racial minorit...
Background: Deficits in perception of emotionality are noted among individuals with alcohol use disorders (AUDs), including identification of emotional facial expressions (EFEs). Converging evidence suggests specific differences between AUD and control participants in decoding EFEs with negative valence (e.g., anger). However, few investigations sy...
Accumulating evidence indicates pain may be an important risk factor for development of alcohol use disorder (AUD) and risk of relapse for people recovering from AUD. This study was conducted to characterize the prevalence and severity of significant recurrent pain and various chronic pain conditions in treatment-seeking alcoholics. In addition, we...
Rationale: Our previous work demonstrated differential neurobehavioral effects of low-dose alcohol consumption on older and younger adults in a driving simulator. However, the ability to enhance or suppress a response in such context has yet to be examined. Objectives: The current study contrasted older and younger drivers' responses to specific...
Adolescence is characterized by numerous social, hormonal and physical changes, as well as a marked increase in risk-taking behaviors. Dual systems models attribute adolescent risk-taking to tensions between developing capacities for cognitive control and motivational strivings, which may peak at this time. A comprehensive understanding of neurocog...
One of the objectives of the Adolescent Brain Cognitive Development (ABCD) Study (https://abcdstudy.org/) is to establish a national longitudinal cohort of 9 and 10 year olds that will be followed for 10 years in order to prospectively study the risk and protective factors influencing substance use and its consequences, examine the impact of substa...
Background: Despite high prevalence of generalized anxiety disorder (GAD) substance use disorder (SUD) comorbidity, little is known regarding demographic characteristics associated with GAD in SUD treatment seekers. Objective: To characterize demographic differences between inpatient SUD treatment seekers reporting varying levels of GAD symptoma...
We developed an Observer-Reported Outcome (ObsRO) survey instrument to be applied in a multicenter, placebo-controlled, crossover randomized controlled trial of dichloroacetate in children with pyruvate dehydrogenase complex deficiency. The instrument quantifies a subject's at-home level of functionality, as reported by a parent/caregiver, who were...
Background: Over the last two decades, U.S. rates of prescription opioid (PO) misuse have risen drastically. In response, federal and state governments have begun to implement new PO policies. Recent legislative changes warrant up-to-date assessments of today's misuse rates. Objective: To explore potential changes in opioid misuse trends among s...
This study examined trajectories of progression from early substance use to treatment entry as a function of race, among inpatient treatment seekers (N = 945). Following primary race-contingent analyses of use progression, secondary analyses were conducted to investigate the effects of socioeconomic status (SES) on the observed differences. African...
Patient activation, the perceived capacity to manage one’s health, is positively associated with better health outcomes and lower costs. Underlying characteristics influencing patient activation are not completely understood leading to gaps in intervention strategies designed to improve patient activation. We suggest that variability in executive f...
Background: Driver age and blood alcohol concentration are both important factors in predicting driving risk; however, little is known regarding the joint import of these factors on neural activity following socially relevant alcohol doses. We examined age and alcohol effects on brain oscillations during simulated driving, focusing on 2 region-spe...
Purpose of review Increased understanding of “how” and “for whom” treatment works at the level of the brain has potential to transform addiction treatment through the development of innovative neuroscience-informed interventions. The 2015 Science of Change meeting bridged the fields of neuroscience and psychotherapy research to identify brain mecha...
Background: Previous studies suggest older adults may be differentially susceptible to the acute neurobehavioral effects of moderate alcohol intake. To our knowledge, no studies have addressed acute moderate alcohol effects on the electrophysiological correlates of working memory in younger and older social drinkers. This study characterized alcoh...
A strong body of research has ascertained a negative correlation between spirituality and depressive symptoms, and between spirituality and alcohol consumption. Moreover, increased spirituality has been associated with recovery and long-term abstinence in alcohol and substance use disorders. The positive influence of spirituality on depressive symp...
This review addresses current literature regarding health consequences associated with of a lifestyle or pattern of moderate drinking and the neurobehavioral effects of moderate drinking episodes in older adults. Discussed studies include both large-scale epidemiological investigations of the effect of moderate alcohol use on multiple health-relate...
Background About 35 % of non-elderly U.S. adult Medicaid enrollees have a behavioral health condition, such as anxiety, mood disorders, substance use disorders, and/or serious mental illness. Individuals with serious mental illness, in particular, have mortality rates that are 2 to 3 times higher than the general population, which are due to multiple...
Objective: The purpose of this study was to clarify inconsistent findings regarding the acute cognitive effects of subintoxicating alcohol doses (i.e., <80 mg/dl) by controlling for and evaluating variables that might modulate dose-related outcomes. Method: The current study examined the effects of sex/gender and alcohol concentration on select...
A limited number of publications have documented the effects of acute alcohol administration among older adults. Among these, only a few have investigated sex differences within this population. The current project examined the behavioral effects of acute low- and moderate-dose alcohol on 62 older (ages 55-70) male and female, healthy, light to mod...
Background Available evidence indicates women with substance use disorders may experience more rapid progression through usage milestones (telescoping). The few investigations of sex differences in treatment-seeking populations often focus on single substances and typically do not account for significant polysubstance abuse. The current study exami...
Historically, health interventions directed to improving outcomes among large samples/populations have focused primarily on program characteristics rather than patient characteristics that might modulate outcomes. One potentially critical factor is the cognitive capacity of program recipients. Although cognitive engagement is a prerequisite for pro...
In this chapter, we review existing research regarding sex differences in alcohol's effects on neurobehavioral functions/processes. Drawn largely from laboratory studies, literature regarding acute alcohol administration and chronic alcohol misuse is explored focusing on commonly employed neuropsychologic domains (e.g., executive function, visuospa...
Unlabelled: ABSTRACT. Objective: Despite substantial attention being paid to the health benefits of moderate alcohol intake as a lifestyle, the acute effects of alcohol on psychomotor and working memory function in older adults are poorly understood. Method: The effects of low to moderate doses of alcohol on neurobehavioral function were inves...
Background This review incorporates current research examining alcohol's differential effects on adolescents, adults, and aged populations in both animal and clinical models.Methods The studies presented range from cognitive, behavioral, molecular, and neuroimaging techniques, leading to a more comprehensive understanding of how acute and chronic a...
Introduction: Despite an extensive literature documenting the health benefits of a moderate drinking lifestyle, the acute effects of moderate consumption and how they change across the lifespan have remained understudied. Our previous report (Sklar et al., 2013) observed an increased sensitivity to alcohol among older drivers on basic components of...
Evidence from a growing body of literature suggests that alcohol, even at moderate-dose levels, disrupts the ability to ignore distractors. However, little work has been done to elucidate the neural processes underlying this deficit. The present study was conducted to determine if low-to-moderate alcohol doses affect sensory gating, an electrophysi...
Although its rates in the general population have decreased in recent decades, cigarette smoking remains a highly comorbid condition among persons with substance use dependencies. Recent data reported by our laboratory indicate that ~ 90% of men and women seeking treatment for alcohol and other substances are current smokers. Our initial studies of...
There is a substantial body of literature documenting the deleterious effects of both alcohol consumption and age on driving performance. There is, however, limited work examining the interaction of age and acute alcohol consumption. The current study was conducted to determine if moderate alcohol doses differentially affect the driving performance...
Previous cross-sectional MRI studies with healthy, young-to-middle-aged adults reported no significant differences between smokers and non-smokers on total hippocampal volume. However, these studies did not specifically test for greater age-related volume loss in the total hippocampus or hippocampal subregions in smokers, and they did not exami...
Background: Available evidence suggests women may be more vulnerable to the effects of chronic alcohol consumption than men. The few investigations of gender differences in treatment-seeking populations have often involved study samples restricted by selection criteria (e.g., age, education). The current study examined gender differences in a hete...
Purpose of review: This article reviews recent findings regarding neurobehavioral factors which may be associated with risk for alcohol misuse, as well as those which may occur as a result of alcohol misuse during adolescence and emerging adulthood. Recent findings: Current research extends previous findings by engaging multiple assessment metho...
Although the biphasic effects of acute alcohol during ascending and descending Breath Alcohol Concentrations (BrACs) are well described, the plateau period between peak and steadily descending BrACs is generally unrecognized and under-studied by researchers. Naturalistic examinations indicate such periods persist for substantial intervals, with a t...
High ego-strength subjects, determined by the Cattell 16PF, were shown to be inferior to low ego-strength subjects in learning paired associations by the anticipation method, whereas the reverse was true when learning by the study/test method, when the list involved noncompeting materials. High ego strength is assumed to correlate with a high degre...
Objectives: The prevalence of substance abuse and other psychiatric disorders among physicians is not well-established. We determined differences in lifetime substance use, and abuse/dependence as well as other psychiatric disorders, comparing physicians undergoing monitoring with a general population that had sought treatment for substance use....
Although the environmental context effect typically refers to superior performance in the context of original learning, this result is not consistently found across all experimental paradigms or instructional sets. Thus far, a precise statement of the underlying mechanisms and boundaries of the context effect has been absent from the literature, wi...
The invention relates generally to wireless communications in wellbores. As technology has improved, various types of sensors and control devices have been placed in hydrocarbon wells, including subsea wells. Examples of sensors include pressure sensors, temperature sensors, and other types of sensors. Additionally, sensors and control devices on the sea floor, such as sand detectors, production sensors and corrosion monitors, are also used to gather data. Information measured by such sensors is communicated to well surface equipment over communications links. Control devices can also be controlled from well surface equipment over a communications link to perform predetermined tasks. Examples of control devices include flow control devices, pumps, choke valves, and so forth.
Exploring, drilling, and completing a well are generally relatively expensive. This expense is even higher for subsea wells due to complexities of installing and using equipment in the subsea environment. Running control lines, including electrical control lines, between downhole devices (such as sensor devices or control devices) and other equipment in the subsea environment can be complicated. Furthermore, due to the harsh subsea environment, electrical communications lines may be subject to damage, which would mean that expensive subsea repair operations may have to be performed.
| |
Working individuals have one thing in common on their minds as the end of each financial year approaches: income tax. Income tax is a direct tax that applies as a percentage of the income earned by an individual.
Every individual is required to declare their earnings to the government every year in their annual income return, and then file a tax return after working out the amount on an income tax calculator. Thereafter, the government uses this collective money for various purposes.
Whether you are a government employee, work in a company, or run a business, everyone has to pay taxes in India. Failing to do so every year is a punishable offense. The income tax that needs to be paid depends on various parameters such as income, source of income, age, investments, and exemptions.
India, like most other countries, has different tax slabs, where the tax rate is higher for people with higher incomes. This gets calculated automatically when you use an income tax calculator such as the one by Scipbox.
Different types of Income and sources
Income can be classified into different categories such as:
- Direct income from salary: the monthly income an individual gets from their employer. It can have the following components – basic salary, dearness allowance (DA), allowances for transportation, medical expenses, etc., annuity and gratuity received in that financial year, and any special allowances.
- Income from business/profession: the income earned through any business or a profession.
- Income from Rent on house property: any income someone earns from rental properties or commercial property.
- Income from capital gains: any income gained by sale or transfer of capital assets such as stocks, mutual funds, real estate, etc.
- Income from other sources: any kind of income that cannot be put under the above-mentioned categories.
Tax Benefits available in India
Not all income is taxable; some exemptions are available. The Income Tax Act allows certain deductions from income before it is taxed, and claiming them is a good way to save a lot of tax in a year. Sections 80C to 80U of the Income Tax Act, 1961 specify these deductions. Additionally, there are other deductions under Sections 16, 24, etc. To understand them better, calculate the payable income tax by using an income tax calculator.
1. Tax Exemptions for tax benefits
- Standard deduction: After the Union Budget of 2019-20, every salaried individual gets a standard deduction of Rs. 50,000 irrespective of their income.
- HRA exemption under Section 10: Any individual who lives in a rented property can avail of a tax exemption under Section 10. The HRA exemption is the minimum of –
- The actual HRA received from the employer
- 50% of the basic salary & dearness allowance for an individual living in a metropolitan city, 40% for individuals living in a non-metro city
- Actual rent paid less 10% of the basic salary & dearness allowance
Self-employed individuals can also claim a rent deduction of the actual rent paid or Rs. 60,000, whichever is less. To claim these exemptions, the employee needs to submit a rent agreement, rent receipts, and the PAN of the landlord.
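The minimum-of-three HRA rule described above can be sketched as a small calculation. This is a hedged illustration: the function name and the metro flag are assumptions for the example, not terms from the Act.

```python
def hra_exemption(hra_received, basic_plus_da, rent_paid, metro=True):
    """Exempt portion of HRA: the minimum of the three amounts
    described above (all figures annual, in rupees)."""
    pct = 0.50 if metro else 0.40            # 50% of basic + DA in a metro city, else 40%
    candidates = [
        hra_received,                        # actual HRA received from the employer
        pct * basic_plus_da,                 # 50%/40% of basic salary + dearness allowance
        rent_paid - 0.10 * basic_plus_da,    # actual rent paid less 10% of basic + DA
    ]
    return max(0, min(candidates))           # the exemption can never be negative

# Rs. 2,40,000 HRA, Rs. 6,00,000 basic + DA, Rs. 3,00,000 rent, metro city
print(hra_exemption(240_000, 600_000, 300_000, metro=True))
```

With these figures the exemption works out to Rs. 2,40,000, since both the actual HRA and the rent-less-10% amount fall below 50% of basic + DA.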
2. Tax Deductions under different sections
Tax deductions reduce the income on which tax is charged. One can claim a tax deduction on charity, insurance plans, medical bills, retirement schemes, or even NSCs. Following are the different sections under the Income Tax Act:
- Section 80C: Investments made for Mutual funds, tax-saving fixed deposits, Provident funds, term life insurance premiums, pension schemes, etc. are eligible for tax deductions. The maximum deduction available under this section is Rs. 1,50,000.
- Section 80D: The premium paid for the health insurance policy can be claimed under tax deduction. Premiums of health insurance for self, spouse, dependent children & parents can be claimed as a tax deduction. The maximum premium for self, spouse, and children that can be claimed is Rs. 25,000. Additionally, for parents under the age of 60, the maximum deduction that can be claimed is Rs. 25,000 and for parents above the age of 60, Rs. 50,000.
- Section 80CCD: This section especially encourages employees to invest in two pension schemes – the National Pension Scheme (NPS) and the Atal Pension Yojana (APY).
Under sub-section 80CCD(1):
- The maximum deduction that can be claimed is 10% of basic salary + dearness allowance
- An additional benefit of Rs. 50,000 is available under sub-section 80CCD(1B)
Furthermore, sub-section 80CCD(2) allows an additional tax deduction of 10% of the basic salary + dearness allowance, apart from sub-section 80CCD(1).
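The 80CCD limits just described can be combined as follows. This is a hedged sketch: the function name is invented for the example, and the exact interaction between the 80CCD(1) cap, the extra Rs. 50,000 slice, and the overall Section 80C ceiling is simplified here.

```python
def nps_deduction(basic_plus_da, own_contribution, employer_contribution):
    """Sketch of the 80CCD limits described above (annual rupee figures).
    How one's own contribution splits between 80CCD(1) and the extra
    Rs. 50,000 slice is an illustrative assumption."""
    cap = 0.10 * basic_plus_da                    # 10% of basic salary + DA
    sec_80ccd_1 = min(own_contribution, cap)      # sub-section 80CCD(1)
    extra_50k = min(own_contribution - sec_80ccd_1, 50_000)  # additional Rs. 50,000
    sec_80ccd_2 = min(employer_contribution, cap) # sub-section 80CCD(2)
    return sec_80ccd_1 + extra_50k + sec_80ccd_2
```

For basic + DA of Rs. 6,00,000, an own contribution of Rs. 1,20,000 and an employer contribution of Rs. 70,000, this sketch yields a total deduction of Rs. 1,70,000.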
Apart from the above-mentioned tax benefits, individuals can also claim an exemption on interest paid on education loans and home loans, and on donations to various charities.
To account for the exemptions and deductions, it is important to calculate the tax properly by using an income tax calculator. Following are the simple steps to calculate the income tax:
- Calculate the total income
- Deduct the allowable deductions and exemptions from income as a good tax planning activity
- Apply appropriate tax slabs as per the taxable income
- Deduct the tax rebates allowed under the Income Tax Act
- Deduct taxes already paid in the form of TDS, TCS, any advance taxes from the tax amount. The balance amount is tax payable or tax refund receivable
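The five steps above can be sketched end to end as follows. The slab boundaries and rates below are illustrative placeholders only – actual rates depend on the year and the tax regime chosen – and the function names are assumptions:

```python
# Illustrative slabs: (upper limit in rupees, rate). Real rates vary by year and regime.
SLABS = [(250_000, 0.00), (500_000, 0.05), (1_000_000, 0.20), (float("inf"), 0.30)]

def slab_tax(taxable_income):
    """Step 3: apply the marginal rate within each slab."""
    tax, lower = 0.0, 0
    for upper, rate in SLABS:
        if taxable_income > lower:
            tax += (min(taxable_income, upper) - lower) * rate
        lower = upper
    return tax

def tax_payable(total_income, deductions, rebates=0, taxes_already_paid=0):
    """Steps 1-5: total income, less deductions, slab tax, less rebates
    and TDS/TCS/advance tax. A negative result is a refund receivable."""
    taxable = max(0, total_income - deductions)   # steps 1 and 2
    tax = slab_tax(taxable)                       # step 3
    tax = max(0, tax - rebates)                   # step 4
    return tax - taxes_already_paid               # step 5

print(tax_payable(900_000, 150_000, rebates=0, taxes_already_paid=20_000))
```

For instance, a total income of Rs. 9,00,000 with Rs. 1,50,000 of deductions and Rs. 20,000 already paid as TDS leaves Rs. 42,500 payable under these illustrative slabs.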
Calculating the income tax wisely and on time can help save a lot of money. Using an income tax calculator makes all these calculations easy, even for a layman.
Annotated bibliography: the impact of trauma among undocumented
An annotated bibliography is a list of peer-reviewed sources with a brief summary (250 words minimum) accompanying each citation. The summary should highlight the valuable information within the source, while summarizing the main idea.
Please find 3 peer-reviewed articles on “The Impact of Trauma among Undocumented Immigrant Children”
For the Annotated Bibliography, do not use any font size greater than 12, and use 1-inch margins.
- Please note: the following sources will not be accepted, and should not be cited in the
annotated bibliography:
Wikipedia
Citations from newspapers/magazines (e.g. New York Times, USA Today)
Citations from unofficial reports (e.g. reports compiled by a small non-profit organization operating locally).
Unofficial internet websites (only websites of government agencies and established private organizations can be used). If you are uncertain whether or not a particular source of information is acceptable, please consult with the instructor in advance.
- In addition, you are expected to use APA format to cite your sources in the body of the Annotated Bibliography
After you find the 3 peer-reviewed articles, write the annotated bibliography. Below you will find an example of an Annotated Bibliography; please look at it so you know how to do the annotated bibliography. Please also upload the 3 articles along with the answer when you are done.
- The Annotated Bibliography should also include detailed scholarly information about the chosen topic, a summary of the article, and no “opinions” about the article or factual statements without a citation.
- Be careful with including direct quotes on the bibliography! If a quote is included, it should be less than a sentence long. Annotated bibliographies are typically less than 200 words per article. The purpose of the bibliography is for you to paraphrase, explain and reflect on the article and NOT for you to add multiple quotes interspersed by a few sentences. Points will be deducted if the bibliography does not follow this policy.
This is due on Sunday November 31st at 11:00P.M Eastern Standard Time
Visit these websites to learn more on how to do an Annotated Bibliography and to see examples:
We are living in a VUCA world. A world characterised by increasing Volatility (digital disruption and the threat of trade wars), Uncertainty (Brexit deal or no-deal), Complexity (global warming) and Ambiguity (fake news!) – just to mention a few examples.
It’s a world that requires from us greater capacity to cope in the face of new challenges. We need inspirational leaders who can settle the dust stirred up by VUCA by leading with what Professor Bill George, author of “True North”, calls VUCA 2.0 – Vision, Understanding, Courage and Adaptability. Sadly, these attributes appear to be conspicuously lacking in political leaders currently on the world stage – with a few rare exceptions. For that matter, how many corporate leaders are richly endowed with these qualities?
So, where does this leave us?
In a single word – vulnerable. This is not a world for the faint-hearted. It’s a world which requires from us reasonable levels of resilience if we hope to cope with the challenges, and significant levels of resilience if we want to be competitive and successful.
So, what is Resilience?
Resilience is not about being bullet proof and being able to handle anything that is thrown our way. That’s a dangerous aspiration as it would set us up for great disappointments in the face of setbacks – and there are always setbacks! There is no straight line between current reality and a goal or a vision in the same way as an airplane is never on a straight course between Perth and Sydney. Air traffic, turbulence, weather conditions and several other factors will cause that plane to deviate from its course but if the pilot and the navigation system are locked on to the destination, the flight path is adjusted accordingly to get there.
The more ambitious our goals and aspirations are, the more likely we are to encounter obstacles and setbacks.
Resilience is about a number of things.
It’s about:
Resilience is important because it:
So, how do we build resilience?
The good news is that resilience is not a trait that people either have or do not have – it involves behaviours, thoughts and actions that can be learned and developed in anyone.
External factors that raise resilience are:
Not everyone has access to these sources of resilience. Thankfully though, we all have access to internal factors that I will refer to as The 4 Keys to Building Resilience.
The 4 Keys are within our locus of control. We don’t need to rely on other people or on expensive tools, courses or resources to acquire and master these keys. They are well within our reach. All it takes is awareness and frequent practice, and we can significantly enhance our resilience.
What are the 4 Keys?
In future blogs, I will share with you the tangible and accessible tools and techniques that inform each of these valuable keys.
Stay tuned.
Alex Paizes
Leadership Development Specialist
If you are interested in learning more about Alex and his workshops on Essemy, please visit this link.
Learn about:- 1. Introduction to Communication 2. Definition of Communication 3. Nature and Scope 4. Elements 5. Objectives 6. Functions 7. Factors 8. Essentials 9. Types 10. Importance 11. Models 12. Guidelines 13. Barriers 14. Methods of Overcoming the Barriers.
Communication: Introduction, Definition, What is Communication, Nature, Elements, Barriers, Objectives, Functions, Importance, Types, Models and More…
Contents:
- Introduction to Communication
- Definition of Communication
- Nature and Scope of Communication
- Elements of Communication
- Objectives of Communication
- Functions of Communication
- Factors of Communication
- Essentials of Communication
- Types of Communication
- Importance of Communication
- Models of Communication
- Guidelines of Communication
- Barriers of Communication
- Methods of Overcoming the Barriers of Communication
Communication – Introduction
If you are like the majority of us, you spend more time communicating than doing anything else. Probably you spend a hefty part of each day in one-to-one speaking, writing and listening. When you are not talking or listening, you are presumably communicating in supplementary ways like reading, gesturing, and drawing. Or perhaps, you are just taking in information by seeing, feeling, or smelling. All of these activities are forms of communication, and certainly you do them right through most of your time.
Obviously, such activity, which we are engrossed in so much, has to be significant. Perhaps, it is the most important of all our activities. It is easy to make out that communication is what has enabled us to develop the civilized society. It is one activity that we human beings clearly do better than the other creatures, and it basically explains our dominant role in this universe.
Communication has enabled us to organize – to work in groups; and through organization, we have been able to overcome barriers to our existence that we could not have subjugated individually. But we need not discuss further how communication has contributed to our development as human beings. Its role is understandable to all of us. We have to articulate that communication is vital to our success and well-being in enlightened civilization.
Effective and efficient working of an organisation depends upon effective communication system. It is possible for a manager to frame good plans, take good decisions, and follow excellent organisation structure only with the help of proper link or communication with the working force. Communication is one of the most important functions of management like planning, organising, directing, control etc. Management’s responsibility to get the things done by and through the people is just possible by communication.
All the working instructions, orders reach the required destination i.e. the implementers through effective communication. Similarly all the suggestions, ideas, problems, difficulties, necessities, demands of the employees also go to the management through the communication system only.
Therefore there must be a sound and effective communication network established by the top level of management. In the absence of communication between different levels of management, nothing can be achieved. Someone has rightly said that effective directing involves effective communication. The communication system keeps the members informed about the things happening within and outside the business organisation.
Planning, organising, decisions, instructions, feedback etc., on paper, are otherwise static unless they achieve dynamism and momentum. This dynamism and momentum are breathed into them through communication. The entire organisation is activated and put on wheels by the power and energy of communication. The communication system plays a vital role in an organisation, like the nervous system in a human body. Therefore the skill of communicating becomes an essential quality for every executive.
The word “communication” has its root in the Latin word “Communis” i.e. “common”. It denotes imparting a common idea or it refers to the sharing of ideas, facts, opinions, information and understanding. The term communication refers to transmission of some information and understanding from one person to another.
Communication – Definition: Suggested by T.S. Mathews, W.H. Newman and C.F. Summer Jr
“Communication is something so simple and difficult that we can never put it in simple words,” says T.S. Mathews.
But we do need a definition to understand the term. In his book Communication in Business, Peter Little defines communication as follows-
“Communication is the process by which information is transmitted between individuals and /or organizations so that an understanding response results.”
Another very simple definition of ‘communication’ has been provided by W.H. Newman and C.F. Summer Jr.
“Communication is an exchange of facts, ideas, opinions, or emotions by two or more persons.”
‘Information’ is the keyword in the first definition: communication consists in transmitting ‘information’. But this definition does not indicate the objects about which information is to be transmitted. This is precisely what is being done in the second definition. Communication transmits information not only about tangible facts and determinable ideas and opinions but also about emotions.
When a communicator passes on or transmits some information, he may also, either deliberately or unconsciously, be communicating his attitude or the frame of his mind. And sometimes the latter may be more relevant to the reality that is being communicated.
Often we may have come across words of high praise spoken in a scoffing tone. In such a case, the words signify nothing and the tone is the real thing. Similarly, high-sounding expressions of bravery may be only a mask to conceal a person’s timidity and cowardice that may be betrayed by his facial expressions.
The following definition offered by William Scott appears comprehensive and particularly satisfying to the students of ‘business communication’ since it touches all aspects of the communication process-
“Managerial communication is a process which involves the transmission and accurate replication of ideas ensured by feedback for the purpose of eliciting actions which will accomplish organizational goals.”
This definition highlights four imperative points:
1. The process of communication involves the communication of ideas.
2. The ideas should be accurately replicated (reproduced) in the receiver’s mind, i.e., the receiver should get exactly the same ideas as were transmitted. If the process of communication is perfect, there will be no dilution, exaggeration, or distortion of the ideas.
3. The transmitter is assured of the accurate replication of the ideas by feedback, i.e., by the receiver’s response, which is communicated, back to the transmitter. Here it is suggested that communication is a two-way process including transmission of feedback.
4. The purpose of all communication is to elicit action.
It is a fairly comprehensive definition and covers almost all aspects of communication.
But two comments can be made on it:
1. The concept of ideas should be adequately enlarged to include emotions also.
2. Even in administrative communication, the purpose may not always be to elicit action. Seeking information or persuading others to a certain point of view can be equally important objectives of communication.
Communication – Nature and Scope
The role of communication in organized activities is perhaps best explained by a real-life illustration. By design, our illustration is both detailed and scant. It is detailed because it consists of examples of the minute and specific communication events that occur in business. It is scant because at best it covers only a sample of an almost infinite number of events.
For this review, we could select any organization, as communication is vital to every conceivable type. Our choice is the Typical Company, manufacturer of a line of quality whatsits. The Typical Company is moderately large, with scores of departments and hundreds of workers doing a thousand and one tasks.
It employs crews of salespeople who sell the manufactured whatsits to wholesalers all over the country. Like most companies in its field, Typical works to move its products from wholesaler to retailer and from retailer to the final consumer. And it works to keep the consumer happy with the purchase. The Typical Company is indeed typical.
Our review begins with the workday of Dan D. Worker, a clerk in Typical’s order department. (We could, of course, have selected any of Typical’s employees). Dan’s daily communication activities begin the moment he awakens. But for our purposes, we shall pick up Dan’s activities as he rides to work in a car pool with three co-workers. Of course, Dan and his car-pool companions communicate as they travel. Obviously, communication has a social use, and riding to work is a form of social occasion for Dan and his friends.
Most of their talk is about trivial matters. They talk primarily to entertain themselves and to while away the time. There is a joke or two, some comments about politics, a few words about an upcoming football game, and some talk about plans for a getaway weekend fishing trip. Such talk, of course, is of little direct concern to Typical, except perhaps as it affects the general happiness and welfare of the company’s workers.
In time, the conversation drifts to subjects more pertinent to Typical and its operations. Someone mentions a rumour about a proposed change in promotion policy. Then Dan and the others bring up their own collection of rumours, facts, and opinions on the subject. And in the process, they are giving, receiving, and handling information. Nothing that Dan does directly involves making whatsits, which, of course, is the Typical Company’s main reason for being. Yet the importance of his activities to Typical’s operations is unquestionable.
Obviously, Dan’s work assignment more directly involves communication than do many others at Typical. But there are many other communication-oriented assignments in the company, and every Typical employee’s workday is peppered with communication in one form or another. If we were to trace the workday of each Typical employee and combine our findings, we would come up with an infinitely complex picture of the communication that goes on at Typical. We would see that communication indeed plays a major role in Typical’s operations.
Communication is as necessary to an organisation as the blood-stream is to a person. It is a basic tool for motivation and an increase in the morale of the employees which largely depends upon the effectiveness of communication. Supervision and leadership are impossible without communication.
Communication is also a means of bringing about maximum production at the lowest cost by maintaining good human relations in the organisation, and by encouraging suggestions and implementing them whenever feasible. In fact, it is impossible to have human relations without communication. Many conflicts and misunderstandings can be resolved to a great extent by good communication skills on the part of the management. It becomes clear that communication has a very wide scope.
The scope of communication can be described as under:
1. One Way Communication:
In earlier times, that is, before the industrial revolution, when the size of business organisations was limited, the scope of communication was also limited. At that time, one way communication was the norm. It was considered a powerful tool in the hands of management to get things done through employees.
One way communication can be called downward communication; it usually provides no scope for the rank and file to show their reactions or forward their opinions, points of view or suggestions to the top level people. In modern times, one way communication has become outdated, obsolete, inefficient and ineffective.
In one way communication there is a transmission of ideas or information from executives to subordinates. It is generally directive in the sense that it causes action to be initiated by subordinates. In today's environment, one way communication is not suitable at all.
2. Two Way Communication:
In the global business world, two way communication is always welcomed, and there is wide scope for this type of communication. This upward and downward communication completes a circuit: communication flows in both the upward and the downward direction. The communicator and the receiver get an opportunity to interact with each other and exchange their ideas, opinions, viewpoints, suggestions, emotions, etc.
Whenever orders regarding work are issued by the top level to the bottom level of the organisation under two way communication, the employees and workers get an opportunity to freely express and convey to the top level people their reactions, feelings, opinions, ideas, suggestions, etc. It is the duty and responsibility of the supervisor to know how and what to communicate to the higher executives.
Two way communication helps in building mutual trust, co-operation, better understanding and mutual respect between the management and employees, which ultimately helps in improving industrial relations and peace. Last but not least, a good communication system should be like two way traffic, with both the transmitter and the receiver playing a joint role in making the communication effective.
3. Intra-Organisational (Internal) Communication:
In a modern business organisation there are a number of departments established to perform specialised business activities. Therefore it is necessary to integrate and co-ordinate the activities of these different departments for achieving the common objectives of the organisation.
It is necessary to establish intra-organisational (internal or interdepartmental) communication, which helps in developing a link between various departments and brings in mutuality of interest, team spirit, team work and a cooperative attitude among the personnel working in the organisation. This can be done through either formal or informal communication.
Intra-organisational communication may be called intra-scalar communication. It is communication from persons at one level in an organisation to others at the same level. It provides a means by which managers at the same level of an organisation co-ordinate their activities without referring all matters to their superiors. The main idea behind this is that many matters can be handled at the same level of the organisation, which relieves superiors of unnecessary problems and lets them devote their precious time to other important matters.
4. Extra-Organisational (External) Communication:
In the global business world, intra-organisational communication is not sufficient. Every business organisation comes into contact with different segments of society, such as bankers, financial institutions, creditors, underwriters, shareholders, customers, solicitors, auditors, chartered accountants, traders, government authorities, suppliers, investors and the community at large.
Therefore it becomes necessary to interact and communicate with these parties regularly to establish good rapport and relationships with them, thereby winning their trust and confidence. Good extra-organisational communication helps in enhancing the image and goodwill of the company. In simple words, extra-organisational communication is the communication between agencies outside the organisation and the people within it.
5 Key Elements of Communication – Communication is a Process, Communication Involves Transmitting Information and Understanding and Many More…
After studying the various definitions of communication the following key elements of communication become clear:
Element # 1. Communication is a Process:
It is called a process because it consists of a series of steps; it is not an independent event. The steps in the process are the emergence of an idea, placing it in some logical sequence, transmitting it through some medium, its receipt by someone at the other end, the reaction of that person after receiving the information or message, and the reverse journey of that reaction. Communication is thus called a process.
Element # 2. Communication Involves Transmitting Information and Understanding:
It means that the information should not only be transmitted and received but also understood.
Element # 3. Information Sender and Receiver may be Human or Non-Human Objects:
The concept of communication is quite broad. It is the wide field of human interchange of facts and opinions, and not merely media like the telephone, telex, telegraph, radio and others.
Element # 4. Communication Requires Some Channel or Medium:
Communication, i.e. transmission, can be made orally or in writing. Thus words and paper assume the nature of transmission media, but these are not the only media of communication. Even silence can communicate a message. Radios, televisions, telexes, telephones, letters, etc. are common media of communication. Apart from these, attitudes, behaviours, actions, gestures and silence are also effective media of communication. Communication can be made directly, consciously or unconsciously.
Element # 5. Communication has Three Interlocking Circuits:
Information is transmitted (i) upwards, (ii) downwards and (iii) intra-scalar. The upward circuit is aimed at knowing the ideas, comments, actions, reactions, attitudes, reports, complaints and grievances from the lower level. Such a circuit flows upwards.
The downward circuit is meant for transmitting the flow of instructions, directions, clarifications, interpretations of rules, orders, policies and procedures to the lower levels, which have to implement them. Such a circuit has a downward flow. The intra-scalar or cross-contact circuit is for the exchange of information between departmental heads, members, executives or workers of equal rank.
Redfield has given the following elements of communication:
1. A communicator – A person who passes on the information.
2. Transmission – It is actual issuing of orders, instructions, directions or information.
3. Stimuli – It is a message, order, report or information.
4. A communicatee – A receiver of the information.
5. Response – A feedback or reaction of the receiver.
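As an illustration, Redfield's five elements can be sketched as a simple message-passing loop. This is a hypothetical sketch; the names below merely mirror the list above and are not from any standard API.

```python
from dataclasses import dataclass

# Hypothetical sketch of Redfield's elements: a communicator transmits
# stimuli (a message) to a communicatee, whose response is the feedback.
@dataclass
class Stimuli:
    content: str  # the message, order, report or information

def transmit(communicator: str, stimuli: Stimuli, communicatee: str) -> str:
    """Transmission: the communicator issues the stimuli; the
    communicatee receives it and returns a response (feedback)."""
    received = stimuli.content
    return f"{communicatee} to {communicator}: received '{received}'"

# The response closes the loop back to the communicator.
feedback = transmit("Manager", Stimuli("Submit the report by Friday"), "Clerk")
print(feedback)  # Clerk to Manager: received 'Submit the report by Friday'
```

The point of the sketch is only that communication is incomplete until the response travels back to the original communicator.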
A manager should understand the above basic elements in order to make communication effective.
10 Main Objectives of Communication
The predetermined objectives of a business organisation can be achieved only through the communication network. The company's objectives, plans, policies, procedures, rules, regulations, budgets, orders, instructions, directions, programmes, top management's expectations, etc. all pass through communication.
The following are the objectives of the communication:
1. To transmit information and develop understanding among working group, which is a must for group effort.
2. To develop positive attitude which is necessary for motivating the employees and gaining their co-operation and job satisfaction.
3. To check misinformation, rumours and gossip. This helps in reducing the emotional tensions of employees.
4. Workers can be made mentally prepared for changes by communicating such information in advance.
5. Another very important objective of communication is to motivate the employees towards new ideas, suggestions, creative thinking, new methods of working, and improvements in the product and working conditions, thereby reducing time and wastage in production activity.
6. Communication develops, maintains and improves better worker and management relations.
7. Communication ensures free exchange of information and ideas so that all the employees understand and accept them by responding to the status and authority of everyone in the organisation.
8. Communication helps to satisfy employees’ basic needs such as self-respect, status, recognition, attachment, sense of belonging, and identity etc.
9. Communication helps to entertain and maintain social relations among the employees.
10. Communication ensures security and conformity of plans, policies and objectives of the business organisation.
Communication – 4 Main Functions: Communication Provides Information, Command and Instructions, Motivational Function and Integrative Functions
Actually, importance and functions cannot easily be separated from each other. The importance of anything is derived and assessed on the basis of the functions it provides and how they are used for the benefit of management. This importance is further strengthened by the following functions, which Thayer, a management thinker, has identified.
1. Communication Provides Information:
Communication provides information about the needs of employees individually, especially in respect of guidance in their performance. Along with this, the management is able to collect information about the desires of the employees and can assess their effect on employee morale and ultimately on performance. Naturally, communication enables the management to take full care of the employee.
2. Command and Instructions:
It is the communication network that conveys commands and instructions to the employees and gets feedback. Employees' obligations, duties and responsibilities are made known to them through communication. All this helps in the easy and effective attainment of objectives.
3. The Motivational Function:
This function is also known as the influence and persuasion function. It motivates the employees towards better performance and a desired behaviour. Through communication, management can convince the employees that their actions at work should be organisationally beneficial.
4. The Integrative Functions:
This function helps integrate the efforts and activities of the employees in such a way that the business organisation's objectives can ultimately be achieved. This function is possible only through effective communication.
9 Major Factors and Principles of Communication – Clarity, Attention, Adequacy, Consistency, Integration, Timeliness, Informality, Feedback and Communication Network
The principles (factors) contributing to the effectiveness of communication are:
1. Clarity
2. Attention
3. Adequacy
4. Consistency
5. Integration
6. Timeliness
7. Informality
8. Feedback
9. Communication network
1. Clarity:
The main aim of communication is that the ideas communicated are understood by the person receiving them. Hence, there should not be any ambiguity. Apart from this, the language used in the message must be clear and convey the meaning intended. First of all, the sender must be clear as to what he wants to communicate. Clarity, which is of paramount importance, develops in the thought process of the sender and leads to effective communication.
2. Attention:
In order to make the message effective, i.e. understood by the receiver, the attention of the recipient must be drawn to the idea of the message transmitted to him. The receiver's state (behaviour, emotion, attitude, etc.) at the time of receipt of the message normally decides the degree of attention. Thus, the receiver's attention is a vital factor that leads to proper action on the message.
3. Adequacy:
The information sent should be adequate in all respects, so as to enable the receiving end to take the desired action. Improper and incomplete information not only creates confusion but also leads to losses in business activities.
4. Consistency:
This principle denotes that a message should not be conflicting; rather, it should be consistent with the plans, policies and overall objectives of the organisation. Where the message itself is self-contradictory, it creates confusion and chaos and thus defeats the very purpose of communication.
5. Integration:
Communication must promote cooperation amongst the people at work place so as to achieve its goal. It needs to be understood that communication is a means to an end and not the end in itself. The integration of various activities can be achieved through proper communication.
6. Timeliness:
The time factor is quite important in communication. A message meant to be acted upon at a particular time, but received late, becomes a mere statement without any meaning, since it has already lost its importance due to the lapse of time.
7. Informality:
Formal communication is a vital line of organisational functioning. At the same time, the informal communication network has a prominent role in passing information that is helpful for efficient management. Thus, the management must make the best use of this system.
8. Feedback:
Feedback is essential for communication. Receipt of a message completes the first part of communication. The next part is whether the message has been understood by the receiver as desired. A confirmation to this effect from the receiver, indicating whether he agrees with the proposal, needs to be conveyed to the originator of the message.
In oral communication, the sender gets feedback much earlier than in written communication; the sender gets a confirmation the moment the message is acknowledged. In written communication, the receiver may take a little time to act; however, the confirmation needs to be sent forthwith on receipt of the message. Thus, the pace of oral communication is faster.
9. Communication Network:
This means the channel through which the communication reaches the destination for which it is meant. Management must take proper care in selecting the communication network according to its needs and the reliability and effectiveness of the system. The organisation should choose a communication network that is user-friendly and capable of giving adequate encouragement to its users.
If transactional analysis in communication is followed properly, it can prevent barriers and make communication effective. This will also minimise organisational problems and develop cooperation among individuals.
10 Essentials of an Effective Communication – Two Way Communication, Mutual Trust, Clarity of Message, Timely Message, Channel of Communication and More…
Much communication becomes ineffective because many administrators are poor listeners.

This drawback can be removed by the management, and listening ability can be developed in such administrators. The effectiveness of communication mostly depends upon the environment within the formal organisation structure. If the lines of authority and channels of communication are not known to the concerned employees, then miscommunication, excessive communication or lack of communication takes place.
Communication tends to become more impersonal if the span is too wide. The establishment of an ideal communication system requires planning, organising, cooperation and control. The test of successful communication is the manner of its reception and the action thereon, together with awareness of the psychology and emotions of the parties involved. Effectiveness largely depends on reciprocal understanding and the mutual exchange of ideas, facts, etc.
The following are the essentials of an effective communication:
1. Two Way Communication:
In communication there must be two parties, i.e. the sender or transmitter and the receiver; both have a joint role in making communication effective. It is two way traffic. Mere transmission of facts, ideas, opinions, etc. is not effective and meaningful communication. The channel must remain open for the receiver's views, opinions and ideas; only then can effective communication take place.
2. Mutual Trust:
For effective communication, there must be mutual understanding between the transmitter and the receiver of the message. Lack of mutual understanding between them signifies that there is a lacuna in the communication system. Presence of mutual trust between the superior and subordinates indicates healthy interpersonal relationship between them.
3. Clarity of Message:
First of all, the message must be clear to the sender himself; he should thoroughly understand it. The feedback provision in the communication system makes it a two-way traffic or process, so the sender should also try to know the reaction of the receiver of the message.

In face to face communication it is easy to get the listener's feedback, but in other cases the sender of the message has to work hard to get clues about the reactions of the receiver. The feedback principle avoids the most likely errors in the transmission of a message and invokes the effective participation of the subordinates.
4. Timely Message:
The time factor is the most important factor in a good communication system. The message should be sent and received in time. Untimely communication is worse than no communication.
5. Completeness of Message:
The message to be communicated must be adequate and complete. An incomplete message requires repeated communications, which ultimately results in delay in action and causes misunderstanding, unhealthy human relations and inefficiency.
6. Consistency of Message:
The message to be communicated should always be consistent with the objectives, plans, policies and programmes of the enterprise.
7. Good Listener:
One of the essentials of effective communication is that the executives and supervisors must be good listeners. They must be attentive and patient when others are attempting to communicate.
8. More Emphasis on Feedback:
Effective communication is a two way process. Therefore, the sender of the message should allow the receiver to express his views or reactions in response to any message transmitted. More emphasis should be given to feedback from the receiver of the message.
9. Channel of Communication:
Effective communication depends mostly on the selection of the channel of transmission of the message and the speed of transmission; at the same time, accuracy of the message is also needed.
10. Continuing Process:
The goal of communication is complete understanding. Therefore, there should be a never-ending process of listening and reading. Communication should be constant, habitual and automatic.
Top 5 Types of Communications in Organisations (or Channels)
There are various forms or types of communications in organisations.
The important types or channels are as follows:
1. Oral, Written and Nonverbal Communication
2. Formal and Informal Communication
3. Downward and Upward Communication
4. Horizontal and Diagonal Communication
5. Internal and External Communication
Type # 1. Oral, Written and Nonverbal Communication:
Oral Communication:
It is face to face communication between individuals. It may be in the form of direct talk when persons are physically present at one place. It may also include informal conversations, group discussions, meetings, telephone calls, intercom systems or formal speeches. It is the most effective and most frequently used tool of the manager to get his job done. It provides an opportunity for the exchange of information, points of view and instructions between a superior and his subordinates.
It is a powerful means of exchanging ideas because the receiver not only hears the message but also observes the physical gestures of the speaker. It is an effective way of changing attitudes, beliefs and feelings. Theo Haimann writes, “The human voice can impart the message with meaning and shading which even long pages of written words simply cannot convey”.
Written Communication:
It is communication through written words. It is generally in the form of instructions, letters, memos, formal reports, rules, policy manuals, information bulletins, office notes, notices and so on. It is more orderly and binding on subordinates. By written communication, it is possible to communicate with several persons simultaneously. Written communication is necessary when the action called for is complicated.
Non-Verbal Communication:
Non-verbal communication is not expressed orally or in writing. It is conveyed through human and environmental elements. Non-verbal expressions include facial expressions, clothes, posture, tone of voice, body movements, etc. Our non-verbal messages can show anger, frustration, arrogance, shyness, fear, indifference, mischief or intimacy. Physical movement, or body language, is very effective in communicating a message.
The important forms of non-verbal messages are: (1) sign language, which includes signs or symbols such as a flag or a nod of the head, and replaces words; (2) action language, which consists of body movements; and (3) object language, which consists of physical items such as clothes, furniture or physical possessions that convey some message.
Managers should be conscious of the nonverbal messages subordinates transmit to them whether intentionally or otherwise.
Type # 2. Formal and Informal Communication:
Formal Communication:
Formal communication refers to the flow of information through the formally established channels or chain of command. Formal channels of communication are planned and established by the organisation. The formal lines of communication most often follow the reporting relationships in the organisation.
It is official communication and travels in three directions – downward, upward, and laterally. It is associated with the superior and subordinate relationships. It is hierarchical in nature. It is generally in writing and takes the form of orders, instructions, policy manuals, handbooks, formal directives, reports etc. It is linked with formal status and positions. It is required to do one’s job.
Informal Communication – The Grapevine:
Informal communication refers to communication among people through informal contacts. It is also referred to as the “grapevine”, the “bush telegraph” or the “rumour mill”. It takes place without regard to the hierarchical structure and is related to ‘personal’ rather than ‘positional’ matters. It does not follow the formal channels established by management.
In fact, informal communication arises out of informal relations. It is the result of the social interaction of people and takes place on account of the natural desire of people to communicate with each other. Herbert Simon writes, “The informal communication system is built around the social relationship of the members of the organisation.”
Informal communication is structureless, unofficial and unplanned. It is spontaneous network of personal contacts. It crosses the barriers of status and hierarchy. It often flows between friends and intimates. It does not follow formally delegated lines of authority and responsibility.
Type # 3. Downward and Upward Communication:
Downward Communication:
It flows from individuals at higher levels of the hierarchy to those at lower levels. It is from the superior to the subordinate. From the top management it filters down to workers through the various hierarchical levels in between. It follows the organisation’s formal chain of command from top to bottom. It reflects the authority-responsibility relationships shown in the organisation chart.
According to Megginson and Mosley, most downward communication involves information in one of the following categories:
(a) Information related to policies, rules, procedures, objectives, and other types of plans.
(b) Work assignments and directives.
(c) Feedback about performance
(d) General information about the organisation, such as its progress or status.
(e) Specific requests for information from lower levels.
(f) Efforts to encourage a sense of mission and dedication to the organisational goals.
Upward Communication:

It is subordinate-initiated communication. It flows from subordinates to superiors and continues up the organisational hierarchy. It is primarily nondirective. It is usually found in participative and democratic environments.
In general, the following types of information are involved with upward communication:
(a) Problems and issues faced by employees.
(b) The level of performance and achievement of employees.
(c) Ideas and suggestions for improvement, in the organisation.
(d) Feelings of employees about their jobs, fellow employees, and the organisation.
(e) Requests for assistance or information.
(f) Expression of employee attitudes, grievances and disputes that influence performance.
Type # 4. Horizontal and Diagonal Communication:
Horizontal and diagonal communications are known as cross-wise communications.
Horizontal Communication:
Horizontal or lateral communication refers to the flow of information (a) among peers within the same work group or (b) between and among departments at the same organisational level. This kind of communication does not follow the organisational hierarchy but cuts across the chain of command.
It is used to speed up messages, to improve understanding, and to coordinate efforts. It is used not only to inform but also to request support. It is essentially co-ordinative in nature and is the result of specialisation in organisations. Horizontal communication is of three kinds: (a) intradepartmental problem solving, (b) interdepartmental co-ordination and (c) staff advice to line departments.
Diagonal Communication:
When the flow of information is among persons at different levels who have no direct reporting relationships, it is called diagonal communication. It cuts diagonally across an organisation's chain of command. Most frequently, it occurs as a result of line and staff relationships. Thus, it cuts across functions and levels in the organisation structure. For example, when a supervisor in the accounting department communicates directly with a regional advertising manager, he is engaged in diagonal communication.
It maintains efficiency and speed in working. It expedites action and prevents others from being used merely as conduits between senders and receivers. The main problem with this form of communication is that it departs from the normal chain of command.
Type # 5. Internal and External Communication:
Internal communication is that which takes place within the organisation: among different managers, among different departments, and between a superior and his subordinates. It includes vertical as well as horizontal communication and is meant for the internal units of a concern. External communication means communication with outsiders, including suppliers, customers, professional bodies, the government and the public.
As the external environment has become more dynamic and turbulent, organisations are required to make a regular exchange of information with outside groups and individuals. Szilagyi says, “Managers can no longer take an ‘avoidance’ view, hoping that the problem will blow away with the next breeze. The issues of our times will need open and straightforward information and communication if they are to be solved.”
Importance of Communication
1. Without communication-
a. Employees cannot know what their associates are doing.
b. Management cannot receive information on inputs and outputs.
c. Management cannot give instructions.
d. Co-operation and coordination also become impossible.
e. People cannot express their feelings to others.
f. People cannot satisfy their social needs.
2. Effective communication tends to encourage better performance and job satisfaction.
3. People understand their jobs better and feel more involved with the environment through communication.
4. The manager’s main instrument for operating his affairs is information.
5. The management functions like planning, organising, leading and controlling are intimately involved with, and dependent on communication.
6. Communication is the key to effective teamwork, because both are based on information, understanding, consultation and participation.
7. Communication skill is an essential skill at every level of organizational functions.
8. It is essential to communicate what the leader wants people to do, how to do it, where to do it and, more important, why to do it.
9. Communication provides the key to facilitate the exchange of ideas, information as well as meeting of minds. Hence it can aptly be described as the “ears and eyes” of the management.
10. It plays a vital role in planning. The making of a plan requires facts and figures, which can only be made available through effective communication.
11. It integrates the formal organizational structure.
12. It is responsible for holding the members of a primary social group together.
13. It also plays a pivotal role in rational decision-making, organizational control, as well as building and maintaining employee morale.
14. If organisation fails to provide careful attention to communication, a defensive climate will prevail.
7 Models of Communication – Code Model, Inferential Model, Schramm Model, Lasswell Model, Katz-Lazarsfeld Model, Westley-MacLean Model and Berlo’s SMCR Model
1. Code Model of Communication:
The code model of communication, or the Shannon-Weaver model of communication, was developed in 1948 by Claude Shannon and popularized by Warren Weaver.
It is easy to understand the code model of communication if we understand how the Morse code is transmitted from the sender to the receiver.
Morse code is a coding system in which a sender transmits to the receiver a message comprising a combination of dots and dashes that represent letters and numbers. The receiver must then decode the message. For example, the sender encodes and sends the following message to the receiver: (dotdotdot/dashdashdash/dotdotdot). The receiver, who is familiar with Morse code, would easily decode the message (SOS).
Therefore, encoding and decoding in Morse code are said to be effective if the message sent is the same as the message received. Furthermore, obstruction or noise can only occur in the communication channel.
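As a concrete illustration, the encoding and decoding just described can be sketched in a few lines. The tiny Morse table here is only illustrative and covers just the letters used in the example.

```python
# Minimal illustrative Morse table (not the full alphabet).
MORSE = {"S": "...", "O": "---", "E": ".", "T": "-"}
DECODE = {code: letter for letter, code in MORSE.items()}

def encode(message):
    """Encode a text message as slash-separated Morse symbols."""
    return "/".join(MORSE[ch] for ch in message.upper())

def decode(signal):
    """Decode slash-separated Morse symbols back into text."""
    return "".join(DECODE[code] for code in signal.split("/"))

sent = encode("SOS")
print(sent)          # .../---/...
print(decode(sent))  # SOS
```

Because encoding and decoding use the same shared table, the recovered message equals the sent message, which is exactly what the code model calls effective communication.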
Claude Elwood Shannon was a mathematician and an electronics engineer. He was working on fire-control systems and cryptography at Bell Labs during World War II, when he first conceptualized the model in 1945 (in a classified memo). In 1948, he published his two-part article “A Mathematical Theory of Communication” and coauthored the book The Mathematical Theory of Communication with Weaver in 1949.
This model was based on two critical hypotheses:
1. The obstruction to communication (or noise) only occurred in the communication channel.
2. Communication was said to be effective if the recovered message was the same as the sent message.
Clearly, this model fell short on several counts. First, it assumed that the flow of information is direct and unidirectional, which is not the case. Second, the model assumed that if the symbols of the sent message and the received message were the same, that was a necessary and sufficient condition for the communication to be successful.
In other words, the model proposed that like the Morse code, every word would have the same meaning for both the sender and the receiver. This was clearly the biggest shortcoming of the model. That is why a need for a more advanced model was felt, which led to the development of subsequent models.
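The model's two hypotheses can also be sketched as a toy noisy-channel simulation. All names here are illustrative, not part of any standard library.

```python
import random

def channel(signal, noise_rate=0.0, rng=None):
    """Pass an encoded signal through a channel that flips each
    dot/dash symbol with probability noise_rate (the model's noise,
    which occurs only in the channel, per hypothesis 1)."""
    rng = rng or random.Random(0)  # seeded for reproducibility
    flip = {".": "-", "-": "."}
    return "".join(flip[s] if s in flip and rng.random() < noise_rate else s
                   for s in signal)

sent = "...---..."  # encoded SOS
# Hypothesis 2: communication is effective iff recovered == sent.
print(channel(sent, noise_rate=0.0) == sent)   # noiseless channel: True
print(channel(sent, noise_rate=1.0) == sent)   # fully noisy channel: False
```

The sketch shows what the model can and cannot capture: it accounts for symbol corruption in the channel, but says nothing about whether sender and receiver attach the same meaning to identical symbols, which is the shortcoming the inferential model addresses.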
2. Inferential Model of Communication:
While we may like to think that communication is a straightforward exchange, unfortunately, it is not. Take, for instance, the simple phrase:
Ravi, the door is open.
Can you derive a meaning from the phrase? The answer would obviously be yes. Now see how many meanings you can derive from this simple phrase. In fact, if you ask different people, chances are you will get several distinct inferences for the same phrase. So, how many different meanings were you able to derive in total?
This exercise demonstrates that the “drawn” inference varies from one individual to another. According to the model, the message that the sender constructs, travels through the medium and reaches the receiver, who then tries to deconstruct the message. However, this deconstructed message can never be identical to the original message.
The difference in the “sent” message and “received” message arises because the sender and the receiver are two distinct individuals with their own perceptions, beliefs, and understanding. As a result, even in an ideal scenario, the constructed and the deconstructed message will never be identical, owing to the cognitive differences between the two individuals.
For example, when the boss constructs the message (instruction), he/she uses his/her own judgment, based on his/her knowledge, perception, experience, beliefs, etc., to encode the message. However, the subordinates use their individual cognitive skills to decode the instructions. Therefore, this decoding will be different for each subordinate. As a result, some subordinates grasp the instructions better, while others do not.
3. Schramm Model of Communication:
The model proposed by Wilbur Schramm (1954) also takes a linear view of the flow of communication. However, contrary to the Shannon-Weaver model, Schramm’s model describes an overlapping area, which he called the Field of Common Experience. According to Schramm, for communication to be effective, both the sender and the receiver must have a common frame of reference. In other words, communication is most effective when the sender and the receiver have things in common, such as culture, language, beliefs, education, and attitudes.
Furthermore, while the Shannon-Weaver model lays more emphasis on the channel, the Schramm model lays more emphasis on the experiences of the sender and the receiver.
Schramm, along with Osgood, also proposed another model. The model, for the first time, viewed communication as circular, instead of linear. Therefore, according to this model, communication does not end with the sender sending a message and the receiver receiving that message. The receiver must then interpret the message and respond to it, thereby making it a circular flow of communication.
The circular model, which addressed two-way communication, was clearly an improvement on the other existing models.
4. Lasswell Model of Communication:
The Lasswell model, along with the Shannon-Weaver model, is categorized as a transmission model. Lasswell’s model, proposed in 1948, describes the communication process in terms of the following five questions – Who? Says What? In Which Channel? To Whom? With What Effect? In other words, the sender says something through some channel to a receiver, which has an effect on the receiver. The model was put forth as a model of mass communication and is still popular in propaganda studies.
5. Katz-Lazarsfeld Model of Communication:
The Katz-Lazarsfeld model, which was first conceptualized by Paul Lazarsfeld in 1944, was subsequently elaborated by Elihu Katz and Lazarsfeld in 1955 in the book Personal Influence. The model, while linear, gives a two-step flow of communication. It hypothesizes that mass media messages first travel to opinion leaders, who then spread the message to passive, yet like-minded masses.
6. Westley-MacLean Model of Communication:
The Westley-MacLean model, proposed in 1957 by Bruce Westley and Malcolm MacLean, took a different view from the Katz-Lazarsfeld model’s two-step flow of communication. Instead of focusing on mass communication, it lays greater emphasis on interpersonal communication. According to the Westley-MacLean model, the sender encodes a message and sends it to the receiver.
The receiver on his part decodes, interprets, and encodes the message again and sends it back either to the sender or to other individuals in a modified form. This is quite similar to the game of Chinese Whispers or the Broken Telephone, which we played as kids.
7. Berlo’s SMCR Model of Communication:
David Berlo, in 1960, gave us the Source-Message-Channel-Receiver (SMCR) model. Berlo, who was strongly influenced by stimulus-response theory, used the concept of learning while developing the model, which considers several aspects within the Sender, the Message, the Channel, and the Receiver. Effective communication therefore depends on these sub-points contained within SMCR.
While the components themselves were not new, what made the SMCR enduring were the subcomponents contained within the components of Sender-Message-Channel-Receiver. The model viewed communication as a process that was affected by the various subcomponents of SMCR.
According to Berlo’s model, the way the sender encoded a message was dependent on his/her communication skills, attitudes, knowledge, social system, and culture. Similarly, the receiver decoded the message using his/her communication skills, attitudes, knowledge, social system, and culture.
Guidelines to Overcome Communication Barriers
Effective communication is the responsibility of all persons in the organisation, managers as well as nonmanagers, who work toward a common aim. Whether communication is effective can be evaluated by the intended results.
The following guidelines can help overcome the barriers to communication:
1. Senders of messages must clarify in their minds what they want to communicate. This means that one of the first steps in communicating is clarifying the purpose of the message and making a plan to achieve the intended end.
2. Effective communication requires that encoding and decoding be done with symbols that are familiar to the sender and the receiver of the message. Thus the manager should avoid unnecessary technical jargon, which is intelligible only to the experts in their particular field.
3. The planning of the communication should not be done in a vacuum. Instead, other people should be consulted and encouraged to participate to collect the facts, analyse the message, and select the appropriate media.
For example, a manager may ask a colleague to read an important memo before it is distributed throughout the organisation. The content of the message should fit the recipients’ level of knowledge and the organisational climate.
4. It is important to consider the needs of the receivers of the information. Whenever appropriate, one should communicate something that is of value to them, in the short run as well as in the more distant future.
At times, unpopular actions that affect employees in the short run may be more easily accepted if they are beneficial to them in the long run. For instance, shortening the workweek may be more acceptable if it is made clear that this action will strengthen the competitive position of the company in the long run and avoid layoffs.
5. There is a saying that the tone makes the music. Similarly, in communication the tone of voice, the choice of language, and the congruency between what is said and how it is said influence the reactions of the receiver of the message.
An autocratic manager ordering subordinate supervisors to practice participative management will create a credibility gap that will be difficult to overcome.
6. Too often information is transmitted without communicating, since communication is complete only when the message is understood by the receiver. And one never knows whether communication is understood unless the sender gets feedback.
This is accomplished by asking questions, requesting a reply to a letter, and encouraging receivers to give their reactions to the message.
7. The function of communication is more than transmitting information. It also deals with emotions that are very important in interpersonal relationship between superiors, subordinates, and colleagues in an organisation.
Furthermore, communication is vital for creating an environment in which people are motivated to work toward the goals of the enterprise while they achieve their personal aims. Another function of communication is control. As explained in the discussion of management by objectives (MBO), control does not necessarily mean top-down control.
Instead, the MBO philosophy emphasizes self-control, which demands clear communication with an understanding of the criteria against which performance is measured.
8. Effective communicating is the responsibility not only of the sender but also of the receiver of the information. Thus, listening is an aspect that needs additional comment.
15 Major Communication Barriers – Lack of Planning, Unclarified Assumptions, Semantic Distortion, Poorly Expressed Messages and Many More…
Perceptive managers always look for the causes of communication problems instead of just dealing with symptoms. Barriers can exist in the sender, in the transmission of the message, in the receiver, or in the feedback.
Specific communication barriers are:
Barrier # 1. Lack of Planning:
Good communication seldom happens by chance. Giving the reasons for a directive, selecting the most appropriate channel, and choosing proper timing can greatly improve understanding and reduce resistance to change.
Barrier # 2. Unclarified Assumptions:
Often overlooked, yet very important, are the uncommunicated assumptions that underlie messages.
Unclarified assumptions may result in confusion and the loss of goodwill.
Barrier # 3. Semantic Distortion:
Another barrier to effective communication is semantic distortion, which can be deliberate or accidental. An advertisement that states ‘We sell for Less’ is deliberately ambiguous; it raises the question – ‘Less than what?’ Moreover, words may evoke different responses in different people, e.g., ‘government’ or ‘police’.
Barrier # 4. Poorly Expressed Messages:
No matter how clear the idea in the mind of the sender is, the message may still be marred by poorly chosen words, omissions, lack of coherence, poor organization of ideas, awkward sentence structure, platitudes, unnecessary jargon, and a failure to clarify the implications of the message. This lack of clarity and precision, which can be costly, can be avoided through greater care in encoding the message.
Barrier # 5. Communication Barriers in the International Environment:
Communication in the international environment becomes even more difficult because of differences in language, culture, and etiquette.
Barrier # 6. Loss by Transmission and Poor Retention:
In a series of transmissions from one person to the next, the message becomes less and less accurate. Poor retention of information is another serious problem. Thus, the necessity of repeating the message and using several channels is rather obvious.
Barrier # 7. Poor Listening and Premature Evaluation:
There are many talkers but few listeners. Listening demands full attention and self-discipline. It also requires that the listener avoid premature evaluation of what another person has to say. A common tendency is to judge, to approve or disapprove of what is being said, rather than trying to understand the speaker’s frame of reference. Yet listening without making hasty judgments can make the whole organization more effective and more efficient. Listening with sympathy can reduce some of the daily frustrations of organized life and result in better communication.
Barrier # 8. Impersonal Communication:
Effective communication is more than simply transmitting information to employees. It requires face-to-face communication in an environment of openness and trust.
Barrier # 9. Distrust, Threat and Fear:
Distrust, threat, and fear undermine communication. In a climate containing these forces, any message will be viewed with scepticism. Distrust can be the result of inconsistent behaviour by the superior, or it can be due to past experiences in which the subordinate was punished for honestly reporting unfavourable but true information to the boss. Similarly, in the light of threats, whether real or imaginary, people tend to tighten up, become defensive, and distort information. What is needed is a climate of trust, which facilitates open and honest communication.
Barrier # 10. Insufficient Period for Adjustment to Change:
The purpose of communication is to effect change that may seriously concern employees: shifts in the time, place, type, and order of work, or shifts in group arrangements or skills to be used. For maximum efficiency, it is important not to force change before people can adjust to its implications.
Barrier # 11. Information Overload:
People respond to information overload in various ways. First, they may disregard certain information. Second, if they are overwhelmed with too much information, they make errors in processing it. Third, they may delay processing information either permanently or with the intention of catching up in the future. Fourth, they may filter information. Finally, they respond to information overload by simply escaping from the task of communication.
Barrier # 12. Selective Perception:
This is a tendency of people to perceive what they expect to perceive. In communication this means that they hear what they want to hear and ignore other relevant information.
Barrier # 13. Influence of Attitude:
Attitude is a predisposition to act or not to act in a certain way. It is a mental position regarding facts, people, things, or ideas. Clearly, if people have made up their minds, they cannot objectively listen to what is said.
Barrier # 14. Difference in Status and Power:
Differences in status and power between the sender and the receiver of communication constitute another barrier. Also, when information has to pass through several levels of the organisation hierarchy, it tends to be distorted.
Barrier # 15. Secrecy:
A tremendous amount of information is withheld from the people whom it concerns, ostensibly because it would cause jealousy or uncertainty. An atmosphere of secrecy creates a barrier and also has an adverse effect on communication effectiveness.
It is important to remember also that the effective reception of information is determined in part by the personality, habits, values, and mental states of those for whom it is intended. Anger, frustration, and fear tend to act as “noise” that distorts reception, so that what was designed as a simple, uncontroversial message may be transformed, within an unexpected context, into its opposite, arousing hostility and conflict. “The meanings of words are not in the words; they are in us” – incorrect decoding invariably produces distortion, the intensity and duration of which can destroy the effectiveness of communication.
Methods and Steps to Overcome Communication Barriers
Some managerial actions may minimise the effect of barriers to some extent. So the management should take necessary steps to overcome the barriers.
They are explained below:
1. The management should clearly define its policy to the employees. It should encourage the free flow of information. Then, the employees at all levels of management can realise the full significance of communication.
2. The management should set up a system through which only essential information is supplied, and supplied in a prescribed manner.
3. All the information should be supplied through a proper channel. But, it should not be insisted upon always. The reason is that in the case of emergency, proper channel process may cause a delay in the supply of information. Proper channel system can be insisted on only for routine information.
4. Every person in the management shares the responsibility of good communication. Top management people should check from time to time whether there is any barrier or not in the free flow of information. It can be achieved only if there is strong support from the top management.
5. Adequate facilities should be provided by the management. In other words, the available communication facilities should be properly utilised.
6. Communication is an inter-personal process, and each person must have confidence in the other. There should be mutual understanding. In large organisations, status disparities may be reduced by building good relationships between superiors and subordinates.
7. The communication should be in a language known to both the receiver and the communicator. Ambiguous words should be avoided while supplying the information.
Any personal information you provide, such as your name, address, telephone number, and e-mail address, will not be released, sold, or rented to any entities or individuals outside of Henderson Law.
External Sites
Remember The Risks Whenever You Use The Internet
While we do our best to protect your personal information, we cannot guarantee the security of any information that you transmit to Henderson Law. In addition, other Internet sites or services that may be accessible through the Henderson Law website have separate data and privacy practices independent of this website, and therefore we disclaim any responsibility or liability for their policies or actions.
Please contact those vendors and others directly if you have any questions about their privacy policies.
I recently received a request to write a column about the role of nutrition in health and disease of the teeth of companion animals.
Nutrition can play a role in proper formation of the immature teeth. During amelogenesis (development of the organic enamel matrix and its subsequent mineralization), lack of proper nutrition can result in enamel hypoplasia or hypomineralization.
Poor nutrition is not the most common clinical cause of enamel hypoplasia or hypomineralization, but it can contribute. Enamel hypoplasia can also be due to hereditary causes, as described in the Samoyed.1
Other causes of enamel hypoplasia include premature birth, exposure to epitheliotropic viruses (such as canine distemper virus), fever, trauma, and exposure to certain antibiotics during enamel development in the young animal.
Abnormalities
Developmental abnormalities of the enamel include hypoplasia, hypomineralization, or both. Hypoplasia results in a defect in the thickness or presence of enamel, whereas hypomineralization can result in flaky, less strong enamel that can wear easily. Consider hypoplasia as a problem of quantity (or even lack of presence of enamel in specific areas of the tooth), whereas hypomineralization is a problem with the quality of the enamel due to incomplete or lacking maturation of the enamel.
Once the enamel is formed, there is no further laydown of enamel throughout life. However, exchange of minerals through the surface of the enamel can occur to some degree.
Perhaps the aspect of nutrition most significantly affecting the teeth after eruption is a diet’s ability, or inability, to reduce plaque and/or calculus accumulation on the teeth. Most veterinarians will share their intuition that clinical patients eating only canned food will tend to have more plaque, calculus, and gingivitis.
Studies that support this popularly held belief are surprisingly limited. One review in the Australian Veterinary Journal concluded, “There is reasonable evidence that soft diets are associated with increased frequency and severity of periodontal disease.”2
However, a comprehensive study in the Journal of Veterinary Dentistry of 1,350 client-owned dogs showed few apparent differences of the association of calculus, gingival inflammation, and periodontal bone loss in dogs fed dry food only compared with those fed other than dry food only.3
Chewing materials
In the same study, there was less accumulation of calculus, less gingival inflammation, and less periodontal bone loss in dogs given access to more types of chewing materials (rawhides, bones, biscuits, chew toys) compared with dogs given access to fewer or no chewing materials.3
The benefits and risks of access to certain chew items must be weighed by pet owners since chewing on certain rawhides and bones may result in more immediate concerns, such as gastrointestinal issues or broken teeth.
Given that chewing materials have been shown to decrease calculus and gingivitis, it makes sense that the size and design of kibble may play a role in preventing periodontal disease. If chewed, kibble can provide a cleansing effect on the teeth that do the chewing.
Mechanical effects of a diet can be achieved by a larger kibble that decreases the chance of a pet swallowing the kibble whole, and when chewed, the kibble might be designed to not readily break into tiny pieces, encouraging additional chewing.
Another strategy to prevent calculus accumulation is the addition of compounds, such as sodium hexametaphosphate to prevent mineralization of plaque. Plaque is a mixture of food particles, saliva, and bacteria that creates the “fuzzy” feeling on your teeth shortly after you drink a sugar-filled beverage or eat some food. When we brush our teeth, this fuzzy layer is removed before it has a chance to mineralize and become calculus.
Though plaque is the “bad actor” that incites periodontal inflammation, calculus provides a rough surface on the teeth to allow for additional plaque accumulation. Sodium hexametaphosphate is a calcium sequestrant preventing plaque from becoming its mineralized form called calculus, also known by the lay term of “tartar.”
One study in the American Journal of Veterinary Research found anticalculus effects attributable to sodium hexametaphosphate were only significant when it was used as a surface coating instead of being incorporated into the food.4 This same study found the feeding of a single daily snack of sodium hexametaphosphate-coated plain biscuits (0.6 percent HMP) decreased calculus formation by nearly 80 percent.
Similar to the American Dental Association (ADA), the Veterinary Oral Health Council (VOHC) consists of a group of volunteer experts in veterinary dentistry who review available research to confirm whether products have the ability to decrease plaque and/or calculus formation. See www.vohc.org to find a list of products proven to decrease plaque and/or calculus accumulation in companion animals.
In next month’s column, I will discuss whether diet plays a role in development of tooth resorption in cats.
John Lewis, VMD, DAVDC, FF-OMFS practices and teaches at Veterinary Dentistry Specialists and Silo Academy Education Center, both located in Chadds Ford, Pa.
References
- Pedersen NC, Shope B, Liu H. An autosomal recessive mutation in SCL24A4 causing enamel hypoplasia in Samoyed and its relationship to breed-wide genetic diversity. Canine Genet Epidemiol. 2017; Nov 22;4:11.
- Watson AD. Diet and periodontal disease in dogs and cats. Aust Vet J. 1994;71(10):313-318.
- Harvey CE, Shofer FS, Laster L. Correlation of diet, other chewing activities and periodontal disease in North American client-owned dogs. J Vet Dent. 1996;13(3):101-105.
- Stookey GK, Warrick JM, Miller LL. Effect of sodium hexametaphosphate on dental calculus formation in dogs. Am J Vet Res. 1995;56(7):913-918.
fft question
hi,
can anyone tell me how to perform 2d fft? what i understood for 2d fft
is that for each rows we perform individual fft's and place the result
on the respective rows and similarly for the columns. what i mean is
suppose i have an array of 2d data as:
0000
0110
0110
0000
now what i am trying to do is calculate fft (4 point fft instead of 16
point fft) of the first row (0000) and place the result in first row,
calculate the fft for the second row (0110) and place the result in
second row and so on. similarly for column calculate the fft for first
column (0000) and place the result in the first column and so on. so
will this work and will i get the same result as of 16 point fft?
thanks
Posted by glen herrmannsfeldt, July 30, 2010
niyander <[email protected]> wrote:
The FFT is separable in rectangular coordinates. You separately
do the X and Y transforms, in either order, to get the appropriate
XY (2D) transform.
No, it is different from a 16 point 1D FFT on the data, but it
is the right transform for 2D data.
If you think about the normal modes for a square drum head
then you will have some idea about the results of the transform.
-- glen
> niyander <[email protected]> wrote:
>
> The FFT is separable in rectangular coordinates. =A0You separately
>
>
>
>
Actually, what he is saying will not give the same result as a 2-D
fft. After he fft's the rows, the first column is no longer (0000).
Dirk
>> The FFT is separable in rectangular coordinates. You separately
>> do the X and Y transforms, in either order, to get the appropriate
>> XY (2D) transform.
(snip)
> Actually, what he is saying will not give the same result as a 2-D
> fft. After he fft's the rows, the first column is no longer (0000).
Oh, that is what he meant? No, you can't do that.
First you transform each row, leaving the result in that row.
Then, with the row results in each row, you transform each column.
(Or columns and then rows.)
Each element will then have an X and Y frequency, with one
corner being the (0,0) or DC term. These correspond to the
possible vibrational modes of a square drum head. The (0,0)
mode is the one where the whole drum surface moves up and down
together, and has the lowest frequency. The (0,1) and (1,0)
modes are degenerate, that is, both have the same frequency,
such that any linear combination of them will also have that
frequency.
For a rectangular drum head with non-commensurate side lengths
there will be 16 different modes, all with different frequencies.
The (i,j) gives the number of X and Y nodes, respectively.
Well, you should actually use the sine transform for drum heads,
but the modes will be somewhat similar.
-- glen
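The separable procedure described in this thread is easy to check numerically. A short NumPy sketch using the original poster's 4x4 grid, confirming that row-wise then column-wise 1-D FFTs reproduce the library's 2-D FFT but not a single 16-point 1-D FFT of the flattened data:

```python
import numpy as np

# The 4x4 grid from the original post.
x = np.array([[0, 0, 0, 0],
              [0, 1, 1, 0],
              [0, 1, 1, 0],
              [0, 0, 0, 0]], dtype=float)

# Separable 2-D FFT: 1-D FFT of every row, then 1-D FFT of every column.
rows = np.fft.fft(x, axis=1)     # transform each row, result stays in its row
both = np.fft.fft(rows, axis=0)  # then transform each column

# This matches the library's 2-D FFT (order of axes does not matter)...
print(np.allclose(both, np.fft.fft2(x)))  # True

# ...but it is NOT the same as one 16-point 1-D FFT of the flattened data.
flat16 = np.fft.fft(x.ravel())
print(np.allclose(both.ravel(), flat16))  # False
```

Element `both[i, j]` carries the (i, j) X/Y frequency pair, with `both[0, 0]` being the DC term, as in the drum-head analogy above.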
Give Yourself a Break
Any time I get sick, inevitably my dad will call and ask how I am feeling, and then follow up with “you must have been doing too much and not getting your rest”. It is one of his dad’isms, one of his favorite sayings and pieces of wisdom: lots of rest and low stress = no sickness (not that he necessarily always follows his own advice). But he is right; last week, after weeks of pushing myself too hard, not getting enough rest, and not engaging in self care, I got sick. SICK: stuffy nose, cough, headache, barely-move-you-are-so-tired sick. So I rested. I rested because clearly my body was telling me I needed to. The body is an amazing thing – if you don’t take a break and rest, it will make it very clear to you when you need to.
The irony of it was earlier that week I was just talking to a friend about the need to take a break every now and then. Not just a physical restful break but an emotional, mental break. That when we are going through transition (job change/search, loss, relationship issues, general life struggles) sometimes we can’t be ‘processing all our emotions’ or reacting and acting in the most mentally healthy of ways. Sometimes we need to allow ourselves time off from life.
I remember a few years ago I heard Iyanla Vanzant speak, and she was talking about the power of re-charging, taking a break, taking care of ourselves. So when she started sharing her favorite re-charging practices, I thought I was going to hear some wonderfully soothing practices of meditation, yoga, or prayer. And then she said hands down her favorite relaxation activity was to put on her pajamas and sprawl across the bed watching Law and Order repeats.
I was amazed! I was relieved! Here is a self-help guru, a woman who teaches spiritual practices, telling us that she LOVES watching Law and Order repeats and finds great comfort in it! It was then I realized there is no right way. We all are doing the best we can with what we have, and we all need to give ourselves a break!!! Remember that change is hard, transitions are exhausting, and there is no ‘right’ way to move forward. Sometimes when we push too hard we can end up exhausted and frustrated. It is counter-intuitive, but in order to make quality decisions and to know what’s best for ourselves and others, we need to give ourselves a chance to rest, regroup, and relax. Whether that means sitting watching TV while playing Plants vs. Zombies (my personal favorite), having coffee with a friend, taking a nap, doing yoga, or going for a run – whatever allows you to relax and disconnect for a while is helpful.
What is your favorite way to regroup and relax?
UK Air Pollution Could Be Cited as the Cause of Death in a Historic Case
The reported evidence comes after the death of a 9-year-old girl in London.
Illegally high levels of air pollution could be cited as the cause of death of a 9-year-old girl in London, in what would be a historic case in the UK.
Ella Kissi-Debrah, from Lewisham, died following an asthma attack in February 2013. She had been admitted to hospital on many occasions over the three previous years.
But new evidence has reportedly suggested a “striking association” between particularly high levels of air pollution near her home, and the timings of her hospital visits.
Take action: Educating Girls Strengthens the Global Fight Against Climate Change
“The dramatic worsening of her asthma in relation to air pollution episodes would go a long way to explain the timing of her exacerbations across her last four years,” say legal documents quoted by the Guardian, and submitted to the attorney general by Kissi-Debrah’s family in a call for a new inquest into how she died.
"Illegal levels of air pollution linked to child's death - BBC News." Thinking of everyone at the Ella Roberta Family Foundation and their ongoing struggle for justice. We must stop air pollution cutting lives short. https://t.co/GGqwqnCNiqpic.twitter.com/V9JCR3SfrA— Aaron Kiely (@AaronAtFoE) July 3, 2018
Stephen Holgate, professor of immunopharmacology at the University of Southampton, has submitted new evidence, saying there was a “real prospect that without illegal levels of air pollution Ella would not have died.”
He also expressed his “firm view” that her death certificate should reflect air pollution as a causative factor. While air pollution is believed to lead to the premature deaths of 40,000 people every year in the UK, this would be the first time in the UK that it has been cited as a cause of death.
Read more: UK Makes Last-Ditch Effort to Avoid Steep Air Pollution Fines
Kissi-Debrah’s family home was reportedly 25 metres from one of London’s air pollution hotspots, the South Circular.
Human rights lawyer Jocelyn Cockburn, the family’s representative in their application for a new inquest, said that “Ella’s case illustrates the hard-hitting impact of air pollution.”
The evidence is now reportedly to be reviewed by the attorney general’s office.
The news comes as the results of a survey by Conservative think tank Bright Blue showed that the majority of younger voters would back a party working to combat air pollution.
Read more: Millions of British Children Are Breathing Toxic Air, Says Unicef
The poll — which surveyed 4,007 adults between Feb. 28 and March 5 — showed that 54% of under-40s would support a party making real effort to curb air pollution, according to the Independent. It also showed that 70% of adults are concerned about air pollution and its impacts.
“The public clearly believe national government should play a bigger role — in fact the biggest role — in introducing measures to reduce air pollution,” said Bright Blue researcher Eamonn Ives.
The World Health Organisation has labelled air pollution a public health emergency, with the UK having come under fire for repeatedly breaching legal limits.
In May, a week after being threatened with fines and legal action by the European Union, the UK announced a new plan to reduce air pollution nationwide.
The government’s new clean air strategy included efforts to reduce the number of people living in areas with harmful levels of air pollution, as well as tackling sources like wood-burning stoves, heavy industry, and farming.
The government predicted that these efforts would save the country around £1 billion in annual pollution costs, from health consequences to polluted waterways.
The UK had previously also announced a ban on the sale of new petrol and diesel cars by 2040, and is expected to publish a “Road to Zero” strategy to outline how that will happen.

Source: https://www.globalcitizen.org/en/content/uk-air-pollution-cause-of-death-inquest/
Individuals with type 2 diabetes who are night owls not only go to bed later than morning types, or larks, but also have a more sedentary lifestyle, suggests new research that points to a role for lifestyle modifications.
Looking at more than 630 patients who wore an accelerometer for a week, UK scientists found people with an evening chronotype spent nearly 30 minutes/day more being sedentary than those with a morning chronotype, and the former spent 56% less time engaged in moderate-to-vigorous activity per day.
"The link between later sleep times and physical activity is clear: go to bed late and you're less likely to be active," said second author Alex V. Rowlands, PhD, of the Sansom Institute for Health Research, University of South Australia, Adelaide, in a press release.
"As sleep chronotypes are potentially modifiable, these findings provide an opportunity to change your lifestyle for the better, simply by adjusting your bedtime," he added.
The research was published by Joseph Henson, PhD, NIHR Leicester Biomedical Research Centre and Diabetes Research Centre, University of Leicester, UK, and colleagues in BMJ Open Diabetes Research & Care.
Henson said maintaining a healthy weight and blood pressure in diabetes is vital and "makes understanding the factors that can mitigate a person's propensity to exercise extremely important."
Moreover, the findings underline the "massive need for large-scale interventions to help people with diabetes initiate, maintain, and achieve the benefits of an active lifestyle. For people who prefer to go to bed later and get up later, this is even more important," he added.
Circadian Misalignment Can Be Altered
The team note previous studies have shown that people with an evening chronotype may be more susceptible to metabolic alterations linked to obesity.
As previously reported by Medscape Medical News, a pooled analysis of more than 140,000 nurses suggested that rotating night work every 5 years increased the risk of developing type 2 diabetes by 30%.
Another study suggested that the increased risk of diabetes seen among individuals with an evening preference is likely due to a combination of poor diet, erratic eating patterns, and irregular sleeping patterns.
However, it is unclear whether physical behaviors differ in individuals with type 2 diabetes based on chronotype.
The researchers therefore examined data from the ongoing Chronotype of Patients with Type 2 Diabetes and Effect on Glycemic Control (CODEC) observational study of individuals from the UK.
Participants were asked to wear an accelerometer on their nondominant wrist for 7 days to measure physical behaviors, divided into sleep and sedentary, light, and moderate-to-vigorous physical activity.
Complete data were available for 635 individuals from the study. The average age of participants was 63.8 years and 34.6% were women. The average body mass index was 30.9 kg/m2.
Chronotype preference, as assessed using the Morningness-Eveningness Questionnaire, indicated that 25% of participants had a morning chronotype, 52% an intermediate, and 23% an evening chronotype.
Compared with participants with a morning chronotype, those with an evening preference spent an extra 28.7 minutes/day in sedentary time and 33.5 minutes/day less performing light-intensity physical activity.
Evening chronotypes also engaged in 9.7 minutes/day less moderate-to-vigorous physical activity than people with a morning chronotype — a 56% reduction.
The intensity of the most active 60 minutes of the day was also lower for individuals with an evening chronotype, as was the average acceleration and intensity gradient of exercise.
Evening chronotypes also had later sleep onset than morning chronotypes, by an average of 1 hour and 44 minutes, and the most active 30 minutes of the day consequently occurred an average of 1 hour and 42 minutes later.
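The reported 9.7 minutes/day gap and the 56% reduction can be cross-checked with a little arithmetic. The baseline figures derived below are our own rough estimates implied by those two numbers, not values taken from the paper itself:

```python
# Back-of-envelope check of the activity gap reported above. The derived
# baseline figures are estimates implied by the two reported numbers,
# not values from the paper.
difference_min = 9.7        # evening types did 9.7 min/day less MVPA
relative_reduction = 0.56   # described as a 56% reduction

# If 9.7 min/day corresponds to a 56% reduction, the implied
# morning-chronotype baseline and evening-chronotype average are:
morning_baseline = difference_min / relative_reduction
evening_average = morning_baseline - difference_min

print(round(morning_baseline, 1))  # about 17.3 min/day
print(round(evening_average, 1))   # about 7.6 min/day
```

In other words, both groups were doing fairly little vigorous activity in absolute terms, which is consistent with the authors' call for large-scale interventions.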
The team say that the lower physical activity levels seen in evening chronotypes may be influenced by social and physical environmental factors.
"Moreover, personal/socially imposed alterations in sleep, as demonstrated by the differences in sleep and physical activity timing, may result in a 'circadian misalignment,'" they note.
They suggest, for example, "An enforced early wakeup may reduce the likelihood of engaging in physical activity due to the resulting tiredness or time constraints of family responsibilities in the evening."
"This may make a natural preference for engaging in physical activity later in the day more difficult to achieve."
They nevertheless believe that physical activity could be one way of encouraging people with an evening preference to adopt a morning chronotype.
"Due to its wide-ranging health benefits, minimal cost and side effects, and accessibility, physical activity may be an attractive nonpharmacological treatment option that could also theoretically improve circadian misalignment, through alterations in temperature regulation and/or hormone levels," they conclude.
The research was supported by the National Institute for Health Research Leicester Biomedical Research Centre.
The authors have reported no relevant financial relationships.
BMJ Open Diab Res Care. 2020;8:e001375.
Medscape Medical News © 2020 WebMD, LLC
Cite this: Night Owls With Diabetes Have More Sedentary Lifestyle Than Larks - Medscape - Oct 05, 2020.

Source: https://www.medscape.com/viewarticle/938536
5 Simple Ways to Develop Your Landscape Style
This year, the world will shoot somewhere between one and two trillion photos. This is roughly equal to the total number of film images taken in the entire history of photography. In the past 24 hours alone, at least two billion of these files have been uploaded to the internet, most of which can now be accessed by anyone, anywhere at any time.
This huge explosion in the number of images we have right at our fingertips has made it immensely difficult for our own work to stand out, especially in a popular genre like landscapes. Fortunately, there is something very simple you can do to make your work more noticeable. And that is to develop your own style. We explore the five indispensable steps for defining and building your unique style.
1. Define Your Approach
Perhaps the single most important step to building your signature style is to decide what you are going to shoot and how you are going to shoot it. Landscape photography is a very broad genre, and narrowing your work down to a more specific area can help give it definition. Ansel Adams, probably the best-known landscape photographer of all time, specialised in shooting large-format black and whites in Yosemite National Park. It might have been niche, but he owned it, and that made his work remarkable. Your challenge is to do the same thing as Adams, although giving up work to live a solitary life with the bears is, of course, entirely optional.
The importance of shooting what you enjoy cannot be overstated, so it is worth doing some photographic soul-searching to decide exactly what that is. Some people are fascinated by the coast, so like the ocean to feature in their landscape images. Others prefer to work in urban or woodland landscapes, or like to shoot everything within a specific area like the Lake District or the Highlands. You might even decide to focus on a specific subject like lighthouses or windmills, or to shoot landscapes exclusively with people in them. Alternatively, you could specialise in night-time landscapes, or experiment with capturing abstract scenes.
This doesn’t necessarily mean that if you don’t narrow your approach you will dilute your photographic style – there are lots of ways to differentiate your work – but this is probably the easiest and quickest way to standardise what you shoot, and become known for it. Shooting the same subject over and over again is also a sure-fire way to get extremely skilled at it, so you can expect the overall quality of your results to improve. If you try to photograph everything, you will probably find you can never really explore any subject in the proper depth.
Introduce a technique
Once you have chosen a landscape location or subject, you can try introducing a specific photographic technique to help set your images apart. Perhaps the most obvious example is to use ND filters to achieve a slow shutter speed, allowing moving objects such as water, clouds and grasses to blur. This particular approach has become very popular in recent years, and can give your shots a really professional edge.
Mads Peter Iversen (see below) is known for his waterfall photography, and he regularly uses long exposures as part of his unique style. In other words, he has combined a specific subject with a particular technique, just like Ansel Adams did.
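For readers who want to try the long-exposure technique, the underlying arithmetic is simple: each stop of neutral density doubles the required shutter time. The base exposure and filter strengths below are illustrative examples, not recommendations from the article:

```python
# Long-exposure arithmetic: each stop of ND doubles the shutter time.
# The base exposure and filter strengths here are illustrative only.
def nd_shutter_time(base_seconds: float, stops: int) -> float:
    """Equivalent shutter time once an ND filter of `stops` is fitted."""
    return base_seconds * (2 ** stops)

base = 1 / 60  # metered shutter speed without a filter, in seconds

print(round(nd_shutter_time(base, 6), 2))   # 6-stop ND: about 1.07 s
print(round(nd_shutter_time(base, 10), 1))  # 10-stop ND: about 17.1 s
```

This is why a strong 10-stop filter is the usual choice for silky water and streaking clouds: it stretches an ordinary daylight exposure into the multi-second range where motion blurs.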
Control what you share
Developing your style is not just about what you shoot. In fact, it is much more about what you share. When it comes to your online presence, try to think of yourself as a brand, and ask yourself whether what you are posting is consistent with that brand’s style. If you have a shot that simply is not ‘you’, resist the temptation to post it, even if it is a great image.
2. Select the Right Tools
By focusing your approach to landscape photography, you have just taken an important step towards making your images more consistent and more recognisably ‘you’. But it doesn’t end there. You need to be thinking about evolving your unique style at every stage of the shooting and editing process. With this in mind, let’s take a look at the role gear plays in the look of your shots.
Choose your focal length
For most landscapers, a wide-angle lens is the go-to glass for the vast majority of scenes, as the very wide angle of view makes it possible to include both distant background and extreme foreground in the frame, taking in more of the scene.
The problem is, shooting at the same focal length as everyone else doesn’t exactly separate you from the pack, so why not consider less commonly used lenses that can give your shots a more distinctive look instead? One option is to go for a standard 50mm prime, or ‘nifty fifty’, which has a similar focal length to the human eye. These lenses aren’t often used for landscape photography, but they are cheap, portable and extremely sharp. Standard lenses also mean less visible perspective distortion than wide-angles, and the results tend to look very natural, as it is how we are used to seeing the world.
Alternatively, you could try a longer telephoto lens. These are great for isolating a very small part of a landscape, and they tend to produce a compressed perspective where the nearest and furthest parts of the frame appear to be closer together than they really are.
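The difference between these options can be quantified: a rectilinear lens's horizontal angle of view is 2·arctan(sensor width / (2·focal length)). The sketch below assumes a full-frame sensor 36 mm wide; the focal lengths are illustrative:

```python
import math

# Horizontal angle of view for a rectilinear lens, assuming a full-frame
# sensor 36 mm wide. The focal lengths below are illustrative examples.
def angle_of_view(focal_mm: float, sensor_width_mm: float = 36.0) -> float:
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

for focal_mm in (16, 50, 200):
    print(f"{focal_mm}mm lens: {angle_of_view(focal_mm):.0f} degrees")
```

Under this assumption a 16 mm wide-angle covers roughly 97° horizontally, a 50 mm about 40°, and a 200 mm about 10° — which is exactly why telephotos are so good at isolating a small slice of the landscape.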
Try a different camera
Most of us use a DSLR or CSC to capture our landscapes, but you can get a very stylised look with other cameras. If you are attracted by film, why not invest in a vintage medium-format camera? They are relatively inexpensive, and will get you very different results to a DSLR. You will get that analogue aesthetic and exceptional dynamic range, plus a very en vogue square format.
3. Master Light and Colour
From the Greek words ‘photo’, meaning ‘light’, and ‘graphia’, meaning ‘drawing’, photography is all about how you record the world with the light you have available. A landscape can look incredibly different depending on the lighting, so the time of day, the season and the weather can make all the difference between a dull, lifeless snapshot and an epic, eye-catching image.
Choose the right light
The classic time of day for shooting landscapes is around sunrise or sunset, often termed ‘golden hour’. On clear days, the low angle of the sun casts long shadows, increases contrast in the landscape and bathes everything in a warm glow. Other photographers prefer textured, stormy skies, which tend to produce emotive, dramatic images with plenty of character.
A good strategy to start with is to look out for unusual or dramatic lighting conditions and just go out hunting. You might decide that you enjoy shooting at all different times of the year and in every type of weather. Lots of landscapers work like this and it is a good strategy. If you do it well, it will definitely give your work a stylised look.
Be original
It is worth being mindful that shooting around golden hour means you are competing with 90% of the world’s landscape photographers, so it will be much harder to make your images stand out. It is unlikely, for example, that you will become nationally renowned for your long exposure coastal sunsets, as the idea has been done so many times before. But if you choose to specialise in photographing misty beech forests in autumn, which are equally as beautiful, you simply won’t have that problem.
4. Develop Your Editing Look
Over the past few years, post-processing has played an increasingly dominant role in the look of our images. In fact, many landscapers spend far more time behind a computer screen than out on location, often producing images that bear only a passing resemblance to the original RAW.
There are a million-and-one processing tools at your disposal to adjust and shape an image’s pixels in virtually any way you like. By building up editing habits, you will eventually develop a distinctive processing style common to every single shot. If you don’t do this, you risk your portfolio looking disjointed, even if there is significant variation in your subject matter or approach. A good first step is to look around at how other photographers edit their work. Try to spot processing traits that run through their entire portfolio. You will probably notice that the very best landscapers rarely overprocess their shots, so keep this in mind as your style evolves. Consider editing as working a bit like make-up for your images – if the viewer notices it is there, you have probably applied too much!
Stand out with mono
Despite the fact that colour photography has been commonplace since the 1950s, there are still many highly successful landscape pros who edit their shots in black and white. In the absence of colour, the viewer is more likely to focus on other visual properties, including pattern, tone, shape and texture. You might decide that this suits the types of landscape you photograph, and choose to make mono a part of your style. Alternatively, you might differentiate yourself by using another editing technique, such as replicating the fashionable retro look, or using the HDR technique.
The use of colour is also a really effective editing tool, either by tweaking the saturation, or by adjusting white balance for a warmer or cooler look.
Change the format
If you are not an accomplished Photoshopper, you may want to try a different approach to make your photo stand out from the crowd. Changing the shape or format of your images is a simple yet effective way of enhancing your shots, and it could be exactly what your portfolio needs. A good example of a format change is panoramas, where you can either simply crop the top and bottom off an image, or stitch several images together. Alternatively, you could crop off the sides of an image for an Instagram-friendly square, an in-fashion format that was also common in the days of film. This is really easy to do in Photoshop – just select the Crop Tool from the Tools palette, then at the top choose 1×1 (Square) from the drop-down box. You can move the position of the crop area by dragging it.
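If you shoot in volume, the same centred 1:1 crop can also be worked out programmatically rather than by hand in Photoshop. The helper below is a minimal sketch that just computes the crop box; you would pass the result to whatever editor or imaging library you use:

```python
# Work out the box for a centred 1:1 (square) crop, as described above.
# Pure arithmetic: feed the resulting box to your editor or library.
def square_crop_box(width: int, height: int) -> tuple[int, int, int, int]:
    """Return (left, top, right, bottom) for a centred square crop."""
    side = min(width, height)
    left = (width - side) // 2
    top = (height - side) // 2
    return (left, top, left + side, top + side)

# A 3:2 landscape frame, e.g. 3000 x 2000 pixels:
print(square_crop_box(3000, 2000))  # (500, 0, 2500, 2000)
```

The box trims equal amounts from both sides of the longer dimension, which matches what the Photoshop 1×1 crop does when you leave the crop centred.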
5. Do Something Different
There are only two ways to stand out from the crowd – to be better, or be different. By all means strive for both, but it is the latter where you are more likely to enjoy success – there is far less competition there!
Ideally you will come up with a concept for your landscape images that no one has thought of yet, although in reality it will probably be your twist on an existing idea. It actually doesn’t need to be original, just unusual enough to catch the viewer’s eye.
Find your niche
How you actually go about being different is something only you can decide, and it will probably develop gradually over time, rather than being a ‘eureka’ moment. Our advice would be to search the internet for great ideas then try replicating them yourself, perhaps pushing them in a slightly different direction.
Put it all into action
So that brings us to the end of our five tips for developing your own photographic style. But it is really just the beginning of your journey towards creating a more recognisable and remarkable landscapes portfolio. And if you are doing it right, it is a journey that you will never actually complete. Our advice is to use elements of every single one of these tips, and really push yourself in a particular niche. Get out there, perfect it, and really make it yours.

Source: https://www.everilluminated.com/5-simple-ways-to-develop-your-landscape-style/
Teaching British values at our school is an important way to enable students to embrace the key values that they need to be equipped for life in modern British society. Students at our school develop self-knowledge, are better able to make the right choices, and contribute to the wider school and their community by studying and promoting the British values of: democracy; the rule of law; individual liberty; mutual respect; and acceptance of those with different faiths and beliefs.
The DfE has recently reinforced the need “to create and enforce a clear and rigorous expectation on all schools to promote the fundamental British values of democracy, the rule of law, individual liberty and mutual respect and acceptance of those with different faiths and beliefs.”
The government set out its definition of British values in the 2011 Prevent Strategy, and these values have been reiterated by the Prime Minister this year. At St Paul's, these values, alongside our Christian Values that underpin our Trust Deed, are reinforced regularly and in the following ways:
Democracy:
Children learn to understand and value the democratic process right from the start of their school life. They have the opportunity to have their voices heard through our Pupil Council and Pupil questionnaires. Older children hold positions of responsibility throughout the school. These children are elected by the other children in some cases. Our school behaviour policy involves a strong sense of choice in actions and subsequent consequences of these choices.
The Rule of Law:
The importance of laws (rules that everyone must keep), whether they be those that govern the class, the school, or the country, is consistently reinforced throughout regular school days, as well as when dealing with behaviour and through school assemblies. Children are taught the value of and reasons behind laws: that they govern and protect us, the responsibilities that this involves, and the consequences when laws are broken. Visits from people who help us to carry out these laws within our school and wider community are a regular part of our curriculum and calendar and help reinforce this message.
Individual Liberty:
Within school, children are actively encouraged to make choices, knowing that they are in a safe and supportive environment. As a school, we educate and provide boundaries for young children to make choices safely, through provision of a safe environment and empowering education. Children are encouraged to know, understand and exercise their rights and personal freedoms, and are advised how to exercise these safely, for example through our E-Safety and PSHE lessons. Whether it is through choice of challenge, of how they record, or of participation in our numerous extra-curricular clubs and opportunities, pupils are given the freedom to make choices.
Mutual Respect:
Part of our school ethos and behaviour policy revolves around the value of 'Respect', and children have been part of discussions and assemblies related to what this means and how it is shown. Posters around the school promote respect for others and this is reiterated through our classroom and learning rules, as well as our behaviour policy.
Tolerance of those of Different Faiths and Beliefs:
As a Christian school we have a responsibility to promote a Christian ethos in a context of individual choice, respect and tolerance of other faiths and other beliefs. This is achieved through enhancing pupils' understanding of their place in a culturally diverse society and by giving them opportunities to experience such diversity. Assemblies and discussions involving prejudices and prejudice-based bullying have been followed and supported by learning in RE and PSHE. Members of different faiths or religions are encouraged to share their knowledge to enhance learning within classes and the school.

Source: https://stpaulswigan.org.uk/curriculum/british-values
While each classroom has listed curriculum goals, the philosophy of our program is that each child is unique and will therefore acquire the foundations for learning at his/her own pace. We create the environment and activities so that the children remain challenged and engaged, and when they are developmentally ready to absorb and apply the learning skills, they will.
Learning Goals
Faith Development
- Learn to worship and honor God
- Listen to Bible stories to know and love God
- Introduce children to chapel twice a week
- Build up children’s faith through God’s word
- Learn to recite prayers and sing chapel songs
Social/Emotional Development
- Make “play” the foundation of all learning
- Develop role playing and pretend play
- Learn to express a variety of feelings with gestures or words
- Learn to interact with care taker and other adults
- Manage emotional responses with assistance or independently
- Engage in simple interactions with other children
- Build personal independence
- Develop awareness of others’ feelings
- Learn to behave according to social expectations
Emergent Cognitive Development
- Develop understanding of how things work or move
- Learn to engage in efforts to solve conflicts
- Able to mimic, imitate and repeat actions as a way of communicating needs/wants
- Develop the ability to store information and later retrieve it
- Understand that different amounts are represented by numbers
- Learn to group, sort, connect and categorize based on the attribute
- Learn to identify the alphabet by repetitive exposure through a print-rich environment
- Develop “symbolic” or “pretend play” to understand social roles
- Grow the ability to exercise control over their attention span
- Participate in personal care routines
Language Development
- Understand and carry out one-step requests
- Use words to tell others about their needs, wants and interests
- Build up vocabulary to speak in simple sentences (3-4 words)
- Learn the basic rules of conversational turn-taking when communicating
- Listen and participate while being read to
- Grow children’s attention through songs, chants, poems, nursery rhymes and stories
- Develop interest for the print environment recognizing some letters – starting with the first letter of his/her own name
Perceptual & Motor Development
- Use the senses to change their interaction with the environment
- Develop body awareness to start toilet training
- Learn to move large muscles with basic control and coordination
- Ability to move the small muscles with one or both hands
- Learn to walk/run and swing arms for balance
- Learn awareness of own personal space and their peers’ space
Schedule
| Time | Activity |
| --- | --- |
| 7:00 – 8:15 am | Opening Class – teacher-directed activity, 2 learning centers open (change daily) |
| 8:15 – 8:55 am | Outside Play – free choice of table toys, trikes, balls, Legos (all students combined) |
| 8:55 – 9:15 am | Gathering song/prayer, divide into classrooms; potty/diapering/hand washing |
| 9:15 – 9:30 am | Snack (*Chapel is held in the Sanctuary Tues/Thur @ 9:30; snack is served after chapel on those days) |
| 9:30 – 9:40 am | 1st Morning Circle (introduce Bible stories, songs, letters, shapes, numbers and colors) |
| 9:40 – 10:30 am | Learning Centers (activities will vary) – Manipulatives: puzzles, Legos, blocks, dramatic play, numbers, etc.; Fine Motor: painting with many materials, coloring, pegboard, etc.; Sensory & Science: exploration with play-dough, sand, water, measuring, pouring, beans, mixing colors, etc. |

Source: https://school.redeemer-lutheran.net/dolphin/
Here are some key points about physiology. The study of physiology is, in a sense, the study of life. It asks questions about the internal workings of organisms and how they interact with the world around them.
Physiology tests how organs and systems within the body work, how they communicate, and how they combine their efforts to make conditions favorable for survival. Human physiology, specifically, is often separated into subcategories; these topics cover a vast amount of information. Researchers in the field can focus on anything from microscopic organelles in cell physiology up to more wide-ranging topics, such as ecophysiology, which looks at whole organisms and how they adapt to environments.
The most relevant arm of physiological research to Medical News Today is applied human physiology; this field investigates biological systems at the level of the cell, organ, system, anatomy, organism, and everywhere in between.
In this article, we will visit some of the subsections of physiology, developing a brief overview of this huge subject. Firstly, we will run through a short history of physiology. As a medical discipline, it goes back at least as far as the time of Hippocrates, the famous "father of medicine". Hippocrates coined the theory of the four humors, stating that the body contains four distinct bodily fluids: black bile, phlegm, blood, and yellow bile.
Any disturbance in their ratios, as the theory goes, causes ill health. Claudius Galenus is widely referred to as the founder of experimental physiology. It was Jean Fernel, a French physician, who first introduced the term "physiology," from Ancient Greek meaning "study of nature, origins."
Fernel was also the first to describe the spinal canal, the space in the spine where the spinal cord passes through. He has a crater on the moon named after him for his efforts - it is called Fernelius. Another leap forward in physiological knowledge came with the publication of William Harvey's book An Anatomical Dissertation Upon the Movement of the Heart and Blood in Animals. Harvey was the first to describe systemic circulation and blood's journey through the brain and body, propelled by the heart. Perhaps surprisingly, much medical practice remained based on the four humors long afterwards (bloodletting, for instance).
A later shift in thought occurred when the cell theory of Matthias Schleiden and Theodor Schwann arrived on the scene, theorizing that the body was made up of tiny individual cells. There are a great number of disciplines that use the word physiology in their title; the topics mentioned above are just a small selection of the available physiologies. The field of physiology is as essential as it is vast. Anatomy is closely related to physiology.
Anatomy refers to the study of the structure of body parts, but physiology focuses on how these parts work and relate to each other.
Opposed reviewers can be entered on a separate site. Peer review This journal operates a single blind review process. All contributions will be initially assessed by the editor for suitability for the journal. Papers deemed suitable are then typically sent to a minimum of two independent expert reviewers to assess the scientific quality of the paper.
The Editor is responsible for the final decision regarding acceptance or rejection of articles. The Editor's decision is final. More information on types of peer review. Use of word processing software It is important that the file be saved in the native format of the word processor used. The text should be in single-column format.
Keep the layout of the text as simple as possible. Most formatting codes will be removed and replaced on processing the article.
In particular, do not use the word processor's options to justify text or to hyphenate words. However, do use bold face, italics, subscripts, superscripts etc. When preparing tables, if you are using a table grid, use only one grid for each individual table and not a grid for each row.
If no grid is used, use tabs, not spaces, to align columns. The electronic text should be prepared in a way very similar to that of conventional manuscripts see also the Guide to Publishing with Elsevier. Note that source files of figures, tables and text graphics will be required whether or not you embed your figures in the text. See also the section on Electronic artwork.
To avoid unnecessary errors you are strongly advised to use the 'spell-check' and 'grammar-check' functions of your word processor.

Article Structure
The text must be clear and concise, conforming to accepted standards of English style and usage. Non-native English speakers may be advised to seek professional help with the language (see Language Polishing, below).
Manuscripts must be double spaced throughout with wide margins. Pages should be numbered in the following order:
- Title page (separate page): full title, not exceeding the stated limit of characters and spaces; list of authors, marking the corresponding author; laboratory of origin with full postal address (if more than one, indicate each author's affiliation by superscripts a, b, ...). References in the Abstract should give authors, year, journal, volume, and inclusive pages, e.g.
The Methods should be complete, but should resort to earlier publications if possible. Subdivision - numbered sections Divide your article into clearly defined and numbered sections.
Subsections should be numbered 1.1 (then 1.1.1, 1.1.2, …), 1.2, etc. Use this numbering also for internal cross-referencing: do not just refer to 'the text'. Any subsection may be given a brief heading. Each heading should appear on its own separate line.

Material and methods

Provide sufficient details to allow the work to be reproduced by an independent researcher. Methods that are already published should be summarized, and indicated by a reference. If quoting directly from a previously published method, use quotation marks and also cite the source.
Any modifications to existing methods should also be described.

Results

Results should be clear and concise.

Discussion

This should explore the significance of the results of the work, not repeat them. A combined Results and Discussion section is often appropriate. Avoid extensive citations and discussion of published literature.

Conclusions

The main conclusions of the study may be presented in a short Conclusions section, which may stand alone or form a subsection of a Discussion or Results and Discussion section.

Appendices

If there is more than one appendix, they should be identified as A, B, etc.
Formulae and equations in appendices should be given separate numbering: Eq. (A.1), Eq. (A.2), etc. Similarly for tables and figures: Table A.1; Fig. A.1, etc.

Title

Concise and informative. Titles are often used in information-retrieval systems. Avoid abbreviations and formulae where possible. Please clearly indicate the given name(s) and family name(s) of each author and check that all names are accurately spelled.
You can add your name between parentheses in your own script behind the English transliteration. Present the authors' affiliation addresses (where the actual work was done) below the names. Indicate all affiliations with a lower-case superscript letter immediately after the author's name and in front of the appropriate address.
Kerry Cathers is the mastermind behind A Curiosity of Crime, a wonderfully informative website and newsletter which any writer of historical crime – or indeed anyone with an interest in historical crime – will find fascinating. Poisons? Forensics? Head measuring? Kerry’s your person!
Welcome to By the Letter, Kerry, and thank you for sharing with us how it all came about, and what’s in store.
Update: Kerry’s book A Writer’s Guide to Nineteenth-Century Murder by Arsenic is now available – get it here!
Where the idea came from
As far back as I can remember, I’ve been fascinated by history and by crime and now I get to combine the two in my website for writers of historical crime fiction: A Curiosity of Crime.
The idea for the website came out of an interview with Ed Adach, forensic detective with Toronto Police Services. I came away from it thinking about providing a reference text geared to writers. What I found was a lot of information for mysteries set in the twenty-first century, but nothing for historical ones. I came across a comment that there are no books because there isn’t enough nineteenth-century forensics to write about; that forensics is a modern science.
My reaction was twofold. First: That can’t possibly be true! Second: There’s more to solving nineteenth-century crime (and more to historical crime fiction) than a CSI department and modern science.
So, I created A Curiosity of Crime.
What’s there
The site contains reference pages including a list of famous people, a glossary, and a timeline; these will expand into case studies, true crime snippets, and biographies in 2023. There are blog posts on topics such as toxicology, anthropometry, and the state of forensics in the nineteenth century. I am currently adding more, covering subjects such as hypnotism in the law, the battle between physicians and the law over where the line was drawn for “criminal insanity”, and others on crimes of passion and vitriol.
Though the main areas of discussion for the site are science, forensics, medicine, policing, and the judicial system, the content includes societal influences and the mindset of the nineteenth century. For example, how did social status and gender impact verdicts, how did “scientific” attitudes about evolution impact criminality and policing, and why did so many people oppose the establishment of police forces?
And a monthly newsletter too!
There is a monthly newsletter about aspects of nineteenth-century society that can enrich historical fiction; things an author might not have thought about, or would have had to read dozens of pages of research to stumble across. Each month has a theme, and so far, it’s included topics such as thievery in nineteenth-century France, coroner’s juries, and autopsies.
As well as discussion on a particular theme, the newsletter includes a review of a research source, vocabulary or slang, and a pet peeve which is an error that has become part of mass media’s presentation of history. (I consider my pet peeves section a cheap form of therapy.)
The inspiration for the newsletter comes from my own experience reading historical fiction. I love it when there’s an interesting tidbit which enriches the text, but also makes me think, “Wow, never knew that.” As for the topics, they’re pretty much things I come across when doing research: something that never occurred to me to research, or something that puts ideas, actions, or social mores into context and explains the “why” of things. Mostly, it’s what I find fascinating.
A book on arsenic is imminent!
I am in the final stages of publishing the first in a series of reference books. The first one is A Writer’s Guide to Nineteenth-Century Murder by Arsenic, and will be available soon. It’s intended to give writers the technical knowledge they need for their characters to carry out and solve the murder. There is a list of ways to kill someone with arsenic and ways you can’t, along with forms of evidence (including autopsies), and guidance on the implications each option has on your story.
(Sign up to Kerry’s newsletter to make sure you know when the book is out.)
Get in touch
If writers have questions about history, or how to fit it into their story, they can contact me at [email protected]. Starting in 2023, they’ll be able to book consultation sessions with me.
Sign up to Kerry’s newsletter here.
Read more author interviews here – and get to meet new authors in my monthly newsletter, sign up here!
CROSS REFERENCE TO RELATED APPLICATIONS
FIELD OF THE INVENTION
BACKGROUND OF THE INVENTION
PATENT LITERATURE
SUMMARY OF THE INVENTION
DETAILED DESCRIPTION OF EMBODIMENTS OF THE INVENTION
EXAMPLES
This is the U.S. National Phase application of PCT/JP2017/022134, filed Jun. 15, 2017, which claims priority to Japanese Patent Application No. 2016-126353, filed Jun. 27, 2016 and Japanese Patent Application No. 2016-126354, filed Jun. 27, 2016, the disclosures of each of these applications being incorporated herein by reference in their entireties for all purposes.
The present invention relates to a ferritic stainless steel sheet. In particular, the present invention relates to a ferritic stainless steel sheet excellent in shape of weld zone. In addition, in the preferable embodiments of the present invention, the present invention also relates to a ferritic stainless steel sheet excellent in the surface quality of a weld zone after working.
Since a ferritic stainless steel sheet is less expensive than an austenitic stainless steel sheet, which contains a large amount of expensive Ni, ferritic stainless steel sheets are used in many applications. For example, a ferritic stainless steel sheet is used in a wide range of applications, such as home electrical appliances, kitchen appliances, architectural members, architectural hardware, and structural members.
There may be a case where a stainless steel sheet is used in such a manner that the steel sheet is formed into members having predetermined shapes by performing press forming and then the several members are assembled by performing welding. Welding is important for obtaining sound products, and in particular, shape of weld zone is very important. For example, in the case where a weld zone has a shape defect such as an undercut, since there is a decrease in the strength of the welded joint, or a crack or fatigue fracture may start at the weld zone due to stress concentration, it is necessary to take appropriate countermeasures. In addition, shape of weld zone is also important in the case of members which are polished after welding. For example, in the case where there is a sag such that the weld metal is lower than the level of the butted portion of the base metals, since burning removal polishing (the removal of temper color through polishing) is not sufficiently performed, there may be a case where it is difficult to achieve sufficient corrosion resistance of the weld zone.
Moreover, since stainless steel sheets are used in applications in which sufficient corrosion resistance is required, the weld zone of a steel sheet is also required to have sufficient corrosion resistance. Since welding is performed not only between materials of the same kind but also between materials of different kinds, for example, with an austenitic stainless steel sheet, it is necessary to achieve sufficient corrosion resistance of a weld zone not only of materials of the same kind but also of materials of different kinds.
Therefore, various investigations have been conducted to achieve sufficient weldability and sufficient corrosion resistance of a weld zone of materials of different kinds.
As an example of a technique regarding weldability, Patent Literature 1 discloses a method in which sufficient ductility of a weld zone is achieved by controlling the contents of O, Al, Si, and Mn of low-Cr steel to which Ti and/or V is added to control welding penetration depth.
As an example of a technique for improving corrosion resistance of a weld zone, Patent Literature 2 discloses a method in which corrosion resistance is improved by suppressing the precipitation of Cr carbonitrides through the addition of Nb.
Patent Literature 3 discloses a technique in which the corrosion resistance and workability of a weld zone is improved by suppressing the formation of black spots in a weld zone formed by performing TIG welding as a result of optimizing the contents of Al, Ti, Si, and Ca.
PTL 1: Japanese Unexamined Patent Application Publication No. 8-170154
PTL 2: Japanese Patent No. 5205951
PTL 3: Japanese Patent No. 5489759
In the case of conventional ferritic stainless steel sheets, for various applications such as for kitchen apparatus, parts of burning appliances, refrigerator front doors, battery cases, and architectural hardware, it may not be possible to achieve good shape of weld zone. In addition, it may not be possible to achieve good corrosion resistance of a weld zone of different materials welding.
In the applications described above, it is difficult to effectively use the technique disclosed in Patent Literature 1, and there is a risk that it may not be possible to achieve excellent corrosion resistance of a weld zone of different materials. It is also difficult to use the techniques disclosed in Patent Literature 2 and Patent Literature 3, and no consideration is given to suppressing the occurrence of weld zone shape defects such as a sag and an undercut in the respective cases of a technique involving steel to which Nb is simply added and a technique in which the formation of black spots is controlled.
An object according to aspects of the present invention is to provide a ferritic stainless steel sheet excellent in shape of weld zone and corrosion resistance of a different materials weld zone by welding with austenitic stainless steel.
The present inventors conducted intensive investigations regarding the influences of the chemical composition of steel on shape of weld zone and the corrosion resistance of a weld zone to solve the problems described above and, as a result, found that it is possible to improve shape of weld zone and to inhibit a deterioration in the corrosion resistance of a weld zone with a material of a different kind by specifying constituent chemical elements of the chemical composition and by optimizing the balance among the contents of Nb, Ti, Zr, Si, and Al. It is possible to realize an improvement in weld zone shape and corrosion resistance of a weld zone with a material of a different kind by optimizing the contents of Ti, Zr, Si, and Al, which have an influence on weld metal flow in a weld zone, and by optimizing the balance among the contents of Nb, Ti, and Zr, which contribute to inhibiting sensitization as a result of forming carbonitrides.
In addition, for various applications such as for kitchen appliances, home electrical appliances, and architectural hardware, there may be a case where work such as forming is performed after welding has been performed and satisfactory designability in the worked state is required. When strain is introduced into a weld zone of a conventional ferritic stainless steel sheet, for example, in the case where the steel sheet is subjected to press forming for the purpose of obtaining a predetermined shape after welding has been performed or in the case where the steel sheet is subjected to light work for the purpose of achieving dimensional precision of parts, it may not be possible to achieve good surface quality. Moreover, in the case where there is a deterioration in surface quality after strain has been introduced into a weld zone, that is, in the case where there is an increase in surface roughness, there is a risk of a deterioration in the corrosion resistance of a weld zone after having been subjected to work. That is, there is room for improvement regarding the surface quality of a weld zone after having been subjected to work.
The present inventors diligently conducted additional investigations regarding the influence of the chemical composition of steel on the surface quality of a weld zone after having been subjected to work such as forming and, as a result, found that it is possible to suppress a deterioration in the surface quality of a weld zone after having been subjected to work such as forming by specifying the chemical composition and by optimizing the combined contents of Ti, Nb, Zr, and Al.
Hereinafter, work such as forming which is performed on a weld zone may simply be referred to as “work on a weld zone”.
The present inventors conducted additional investigations and completed the present invention. The subject matter of aspects of the present invention is as follows.
[1] A ferritic stainless steel sheet having a chemical composition containing, by mass %,
C: 0.003% to 0.020%,
Si: 0.01% to 1.00%,
Mn: 0.01% to 0.50%,
P: 0.040% or less,
S: 0.010% or less,
Cr: 20.0% to 24.0%,
Cu: 0.20% to 0.80%,
Ni: 0.01% to 0.60%,
Al: 0.01% to 0.08%,
N: 0.003% to 0.020%,
Nb: 0.40% to 0.80%,
Ti: 0.01% to 0.10%,
Zr: 0.01% to 0.10%, and
the balance being Fe and inevitable impurities,
in which relational expression (1) below is satisfied:
3.0 ≥ Nb/(2Ti + Zr + 0.5Si + 5Al) ≥ 1.5 (1),
here, in relational expression (1), each of the atomic symbols denotes the content (mass %) of the corresponding chemical element.
[2] The ferritic stainless steel sheet according to item [1], in which relational expression (2) below is satisfied:
2Ti + Nb + 1.5Zr + 3Al ≥ 0.75 (2),
here, in relational expression (2), each of the atomic symbols denotes the content (mass %) of the corresponding chemical element.
[3] The ferritic stainless steel sheet according to item [1] or [2], in which the chemical composition further contains, by mass %, V: 0.01% to 0.30%.
[4] The ferritic stainless steel sheet according to any one of items [1] to [3], in which the chemical composition further contains, by mass %, one or both of
Mo: 0.01% to 0.30% and
Co: 0.01% to 0.30%.
[5] The ferritic stainless steel sheet according to any one of items [1] to [4], in which the chemical composition further contains, by mass %, one or more of
B: 0.0003% to 0.0050%,
Ca: 0.0003% to 0.0050%,
Mg: 0.0005% to 0.0050%,
REM: 0.001% to 0.050%,
Sn: 0.01% to 0.50%, and
Sb: 0.01% to 0.50%.
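For readers checking a candidate composition against relational expressions (1) and (2), the arithmetic can be sketched as follows. The example contents are hypothetical values chosen inside the claimed ranges, not compositions taken from the patent examples.

```python
def expr1_ratio(nb, ti, zr, si, al):
    """Middle term of relational expression (1): Nb/(2Ti + Zr + 0.5Si + 5Al)."""
    return nb / (2 * ti + zr + 0.5 * si + 5 * al)

def expr2_value(ti, nb, zr, al):
    """Left-hand side of relational expression (2): 2Ti + Nb + 1.5Zr + 3Al."""
    return 2 * ti + nb + 1.5 * zr + 3 * al

# Hypothetical composition (mass %) within the ranges of item [1]
comp = dict(nb=0.60, ti=0.03, zr=0.03, si=0.20, al=0.03)

r1 = expr1_ratio(**comp)        # 0.60 / 0.34, roughly 1.76
ok1 = 1.5 <= r1 <= 3.0          # expression (1) is satisfied
v2 = expr2_value(comp["ti"], comp["nb"], comp["zr"], comp["al"])  # 0.795
ok2 = v2 >= 0.75                # expression (2) is satisfied
```

Note that expression (1) constrains a ratio from both sides, so raising Ti, Zr, Si, or Al without also raising Nb can push a composition out of range.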
In the case of the ferritic stainless steel sheet according to aspects of the present invention, it is possible to achieve excellent shape of weld zone and to significantly improve the corrosion resistance of a weld zone with a material of different kind formed by performing welding with austenitic stainless steel compared with the case of conventional ferritic stainless steel sheets.
In addition, in the case of the ferritic stainless steel sheet according to preferable embodiments of the present invention, it is possible to significantly improve the surface quality of a weld zone after having been subjected to work compared with the case of conventional ferritic stainless steel sheets. That is, in the case of the ferritic stainless steel sheet according to aspects of the present invention, it is possible to significantly decrease the degree of a deterioration in the surface quality of members which are required to have sufficient designability after having been subjected to work.
As described above, in the case of the ferritic stainless steel sheet according to aspects of the present invention, it is possible to significantly improve the properties of a product thereof, which has a significant effect on the industry.
Hereafter, the embodiments of the present invention including the most favorable embodiment will be described.
First, the reasons for specifying the chemical composition of the steel according to aspects of the present invention as described above will be described. “%” regarding the chemical composition denotes “mass %”, unless otherwise noted.
C: 0.003% to 0.020%
Since C causes a deterioration in the corrosion resistance of a weld zone due to sensitization, it is preferable that the C content be as low as possible. Therefore, in accordance with aspects of the present invention, the C content is set to be 0.020% or less, or preferably 0.015% or less. On the other hand, since steel-making costs increase by excessively decreasing the C content, the lower limit of the C content is set to be 0.003%, or preferably 0.005%.
In addition, since C is a solid-solution-strengthening chemical element which is effective for suppressing the growth of recrystallized grains, there is an increase in the diameter of crystal grains in a weld zone in the case where the C content is excessively low, which results in a deterioration in the surface quality of a weld zone after having been subjected to work. Therefore, to improve the surface quality of a weld zone after having been subjected to work, it is necessary that the C content be 0.003% or more, or preferably 0.005% or more.
Si: 0.01% to 1.00%
Although Si contributes to the deoxidation of steel, it is not possible to realize such an effect in the case where the Si content is less than 0.01%. Therefore, the Si content is set to be 0.01% or more, preferably 0.05% or more, or more preferably 0.10% or more. On the other hand, in the case where the Si content is excessively high and more than 1.00%, a large amount of Si oxides is formed when welding is performed, and the oxides are taken into a weld fusion zone, which results in a negative effect on the corrosion resistance of a weld zone. In addition, since there is an increase in the hardness of steel in the case where the Si content is high, there is a deterioration in workability. Therefore, the Si content is set to be 1.00% or less, preferably 0.50% or less, or more preferably 0.25% or less.
In addition, since Si is a solid-solution-strengthening chemical element which is effective for suppressing the growth of recrystallized grains, there is an increase in the diameter of crystal grains in a weld zone in the case where the Si content is excessively low, which results in a deterioration in the surface quality of a weld zone after having been subjected to work. Therefore, to improve the surface quality of a weld zone after having been subjected to work, it is preferable that the Si content be 0.03% or more, or more preferably 0.05% or more.
Mn: 0.01% to 0.50%
Since Mn has a negative effect on corrosion resistance as a result of forming MnS, the Mn content is set to be 0.50% or less, preferably 0.30% or less, or more preferably 0.25% or less.
Since Mn is a solid-solution-strengthening chemical element, and solid solute Mn existing in steel in a weld zone contributes to an increase in strength, Mn is effective for achieving excellent shape of weld zone by suppressing a weld fusion zone from sagging. However, it is not possible to realize such an effect in the case where the Mn content is less than 0.01%. Therefore, the Mn content is set to be 0.01% or more, preferably 0.05% or more, or more preferably 0.10% or more.
In addition, since Mn is a solid-solution-strengthening chemical element which is effective for suppressing the growth of recrystallized grains, there is an increase in the diameter of crystal grains in a weld zone in the case where the Mn content is excessively low, which results in a deterioration in the surface quality of a weld zone after having been subjected to work. Therefore, to improve the surface quality of a weld zone after having been subjected to work, it is preferable that the Mn content be 0.03% or more, or more preferably 0.05% or more.
P: 0.040% or less
Since there is a negative effect on corrosion resistance in the case where the P content is more than 0.040%, the P content is set to be 0.040% or less, or preferably 0.030% or less. Since it is desirable that the P content be as low as possible, there is no particular limitation on the lower limit of the P content.
S: 0.010% or less
Since S has a negative effect on corrosion resistance as a result of forming inclusions, that is, MnS, it is desirable that the S content be as low as possible. Therefore, in accordance with aspects of the present invention, the S content is set to be 0.010% or less, preferably 0.0050% or less, or more preferably 0.0040% or less. Since it is desirable that the S content be as low as possible, there is no particular limitation on the lower limit of the S content.
Cr: 20.0% to 24.0%
Cr is a chemical element which improves corrosion resistance and which is indispensable in a ferritic stainless steel sheet. Since such an effect becomes marked in the case where the Cr content is 20.0% or more, the Cr content is set to be 20.0% or more, or preferably 20.5% or more. On the other hand, in the case where the Cr content is more than 24.0%, there is a significant decrease in elongation. Therefore, the Cr content is set to be 24.0% or less, preferably 22.0% or less, or more preferably 21.5% or less.
Cu: 0.20% to 0.80%
Cu contributes to an improvement in corrosion resistance. In addition, since solid solute Cu existing in steel in a weld zone contributes to an increase in strength, Cu is effective for achieving excellent shape of weld zone by suppressing a weld fusion zone from sagging. Such effects are realized in the case where the Cu content is 0.20% or more. Therefore, the Cu content is set to be 0.20% or more, preferably 0.30% or more, or more preferably 0.40% or more. On the other hand, since there is a decrease in elongation in the case where the Cu content is excessively high, the Cu content is set to be 0.80% or less, preferably 0.60% or less, or more preferably 0.50% or less.
Ni: 0.01% to 0.60%
Ni contributes to an improvement in corrosion resistance, and such an effect is realized in the case where the Ni content is 0.01% or more. Therefore, the Ni content is set to be 0.01% or more, preferably 0.05% or more, or more preferably 0.10% or more. On the other hand, since there is a decrease in elongation in the case where the Ni content is excessively high and more than 0.60%, the Ni content is set to be 0.60% or less, or preferably 0.40% or less.
Al: 0.01% to 0.08%
Although Al contributes to the deoxidation of steel, it is not possible to realize such an effect in the case where the Al content is less than 0.01%. Therefore, the Al content is set to be 0.01% or more. On the other hand, in the case where the Al content is excessively high and more than 0.08%, a large amount of Al oxides is formed when welding is performed, and the Al oxides are taken into a weld fusion zone, which results in a negative effect on the corrosion resistance of a weld zone. Therefore, the upper limit of the Al content is set to be 0.08%. It is preferable that the Al content be 0.06% or less, more preferably 0.05% or less, or even more preferably 0.04% or less.
In addition, since Al is a chemical element which suppresses the growth of crystal grains in a weld zone through a pinning effect caused by Al-based precipitates, Al is effective for improving the surface quality of a weld zone after having been subjected to work in the case where the Al content is 0.01% or more. Therefore, to improve the surface quality of a weld zone after having been subjected to work, the Al content is set to be 0.01% or more, or preferably 0.02% or more. On the other hand, in the case where the Al content is excessively high, since Al-based inclusions are locally concentrated in a weld zone, inhomogeneous growth of crystal grains occurs. As a result, since an inhomogeneous microstructure, in which coarse crystal grains and fine crystal grains coexist, is formed, there is a deterioration in the surface quality of a weld zone after having been subjected to work. Therefore, to improve the surface quality of a weld zone after having been subjected to work, the upper limit of the Al content is set to be 0.08%, or preferably 0.06%.
N: 0.003% to 0.020%
Since N causes a deterioration in the corrosion resistance of a weld zone due to sensitization, it is desirable that the N content be as low as possible. Therefore, in accordance with aspects of the present invention, the N content is set to be 0.020% or less, or preferably 0.015% or less. On the other hand, since steel-making costs increase by excessively decreasing the N content, the lower limit of the N content is set to be 0.003%, or preferably 0.005%.
In addition, since N is a solid-solution-strengthening chemical element which is effective for inhibiting the growth of recrystallized grains, there is an increase in the diameter of crystal grains in a weld zone in the case where the N content is excessively low, which results in a deterioration in the surface quality of a weld zone after having been subjected to work. Therefore, to improve the surface quality of a weld zone after having been subjected to work, it is necessary that the N content be 0.003% or more, or preferably 0.005% or more.
Nb: 0.40% to 0.80%
Since Nb is a carbonitride-forming chemical element, Nb suppresses a deterioration in the corrosion resistance of a weld zone due to sensitization as a result of fixing C and N. In addition, since solid solute Nb existing in steel in a weld zone contributes to an increase in strength, Nb is effective for achieving excellent shape of weld zone by suppressing a weld fusion zone from sagging. The effects described above are realized in the case where the Nb content is 0.40% or more. Therefore, the Nb content is set to be 0.40% or more, preferably 0.45% or more, or more preferably 0.50% or more. On the other hand, since there is a decrease in elongation in the case where the Nb content is excessively high, the Nb content is set to be 0.80% or less, preferably 0.75% or less, or more preferably 0.70% or less.
In addition, Nb is a chemical element which is effective for suppressing the growth of crystal grains in a weld zone through a pinning effect caused by Nb-based precipitates. Such an effect is realized in the case where the Nb content is 0.40% or more. Therefore, to improve the surface quality of a weld zone after having been subjected to work, the Nb content is set to be 0.40% or more, or preferably 0.55% or more.
Ti: 0.01% to 0.10%
Since Ti is, like Nb, a carbonitride-forming chemical element, Ti suppresses a deterioration in the corrosion resistance of a weld zone due to sensitization as a result of fixing C and N. In addition, since solid solute Ti existing in steel in a weld zone contributes to an increase in strength, Ti is effective for achieving excellent shape of weld zone by suppressing a weld fusion zone from sagging. The effects described above are realized in the case where the Ti content is 0.01% or more. Therefore, the Ti content is set to be 0.01% or more. On the other hand, since surface defects due to inclusions occur in the case where the Ti content is more than 0.10%, the upper limit of the Ti content is set to be 0.10%. It is preferable that the Ti content be 0.05% or less, or more preferably 0.04% or less.
In addition, Ti is a chemical element which is effective for suppressing the growth of crystal grains in a weld zone through a pinning effect caused by Ti-based precipitates. To improve the surface quality of a weld zone after having been subjected to work, the Ti content is set to be 0.01% or more, or preferably 0.02% or more. On the other hand, in the case where the Ti content is excessively high, since Ti-based inclusions are locally concentrated in a weld zone, inhomogeneous growth of crystal grains occurs. As a result, since an inhomogeneous microstructure, in which coarse crystal grains and fine crystal grains coexist, is formed, there is a deterioration in the surface quality of a weld zone after having been subjected to work. Therefore, to improve the surface quality of a weld zone after having been subjected to work, the Ti content is set to be 0.10% or less, preferably 0.08% or less, more preferably 0.06% or less, or even more preferably 0.04% or less.
Zr: 0.01% to 0.10%
Since Zr is, like Nb and Ti, a carbonitride-forming chemical element, Zr suppresses a deterioration in the corrosion resistance of a weld zone due to sensitization as a result of fixing C and N. In addition, since solid solute Zr existing in steel in a weld zone contributes to an increase in strength, Zr is effective for achieving excellent shape of weld zone by suppressing a weld fusion zone from sagging. The effects described above are realized in the case where the Zr content is 0.01% or more. Therefore, the Zr content is set to be 0.01% or more. On the other hand, since surface defects due to inclusions occur in the case where the Zr content is more than 0.10%, the upper limit of the Zr content is set to be 0.10%, or preferably 0.05%.
Zr is a chemical element which is important for achieving good surface quality of a weld zone. Zr suppresses crystal grains from excessively growing as a result of being finely precipitated in a cooling process starting at the time of solidification in a weld fusion zone. With this, Zr contributes to achieving good surface quality of a weld zone after having been subjected to work. To realize such an effect, the Zr content is set to be 0.01% or more, or preferably 0.02% or more. On the other hand, in the case where the Zr content is excessively high, since Zr-based inclusions are locally concentrated in a weld zone, inhomogeneous growth of crystal grains occurs, which results in an inhomogeneous microstructure, in which coarse crystal grains and fine crystal grains coexist, being formed. As a result, not only surface defects occur after welding has been performed, but also there is a deterioration in the surface quality of a weld zone after having been subjected to work. Therefore, the Zr content is set to be 0.10% or less, preferably 0.08% or less, or more preferably 0.06% or less.
Ti and Zr are chemical elements which form carbonitrides in steel and which improve the corrosion resistance of a weld zone with a material of a different kind formed by performing welding with austenitic stainless steel. Therefore, to achieve sufficient corrosion resistance of a weld zone, it is preferable that the contents of Ti and Zr be equal to or more than certain amounts. Moreover, by adding Ti and Zr not separately but in combination, since it is possible to finely disperse precipitates in weld metal by suppressing the formation of coarse Ti-based precipitates through the formation of Zr-based precipitates, it is possible to achieve good corrosion resistance. Since Nb is also important regarding the corrosion resistance of a weld zone with a material of a different kind formed by performing welding with austenitic stainless steel, it is necessary that Nb be added in a predetermined amount. In particular, to achieve unprecedentedly excellent corrosion resistance of a weld zone of materials of different kinds, Nb, which forms carbides later than Zr and Ti in the cooling and solidification process of weld fusion metal, is important.
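The element-by-element limits given above can be collected into a simple range check. The sketch below assumes a composition given as a mass-% dict; the dict keys and the example composition are illustrative, with P and S carrying upper limits only.

```python
# Claimed ranges (mass %) for the base chemical composition of item [1];
# P and S have upper limits only, so their lower bounds are set to 0.
RANGES = {
    "C": (0.003, 0.020), "Si": (0.01, 1.00), "Mn": (0.01, 0.50),
    "P": (0.0, 0.040), "S": (0.0, 0.010), "Cr": (20.0, 24.0),
    "Cu": (0.20, 0.80), "Ni": (0.01, 0.60), "Al": (0.01, 0.08),
    "N": (0.003, 0.020), "Nb": (0.40, 0.80), "Ti": (0.01, 0.10),
    "Zr": (0.01, 0.10),
}

def out_of_range(comp):
    """Return the elements of `comp` whose contents violate the claimed ranges."""
    return [el for el, x in comp.items()
            if el in RANGES and not (RANGES[el][0] <= x <= RANGES[el][1])]

# Hypothetical composition (mass %) chosen inside all of the claimed ranges
comp = {"C": 0.010, "Si": 0.20, "Mn": 0.20, "P": 0.030, "S": 0.005,
        "Cr": 21.0, "Cu": 0.40, "Ni": 0.20, "Al": 0.03, "N": 0.010,
        "Nb": 0.60, "Ti": 0.03, "Zr": 0.03}
```

A check like this only screens the individual contents; relational expressions (1) and (2) still have to be evaluated separately, since they constrain the balance among Nb, Ti, Zr, Si, and Al rather than any single element.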
The basic chemical composition is described above, and the chemical elements described below may be further added in accordance with aspects of the present invention.
V: 0.01% to 0.30%
Since V is a carbonitride-forming chemical element, V suppresses a deterioration in the corrosion resistance of a weld zone due to sensitization. To realize such an effect, it is preferable that the V content be 0.01% or more. On the other hand, since there is a deterioration in workability in the case where the V content is excessively high, it is preferable that the upper limit of the V content be 0.30%, or more preferably 0.20%.
Mo: 0.01% to 0.30%
Mo is effective for improving corrosion resistance. In addition, since solute Mo in the steel of a weld zone contributes to increased strength, Mo is effective for achieving an excellent weld zone shape by suppressing sagging of the weld fusion zone. To realize these effects, it is preferable that the Mo content be 0.01% or more. On the other hand, since elongation decreases in the case where the Mo content is excessively high, it is preferable that the Mo content be 0.30% or less, more preferably 0.20% or less, even more preferably 0.15% or less.
Co: 0.01% to 0.30%
Co is effective for improving corrosion resistance. In addition, since solute Co in the steel of a weld zone contributes to increased strength, Co is effective for achieving an excellent weld zone shape by suppressing sagging of the weld fusion zone. To realize these effects, it is preferable that the Co content be 0.01% or more. On the other hand, since elongation decreases in the case where the Co content is excessively high, it is preferable that the Co content be 0.30% or less, more preferably 0.20% or less, even more preferably 0.15% or less.
B: 0.0003% to 0.0050%
B is a chemical element which improves hot workability and secondary workability, and it is preferable that the B content be 0.0003% or more, or more preferably 0.0010% or more, to realize such an effect. In the case where the B content is more than 0.0050%, there is a risk of a deterioration in toughness. Therefore, it is preferable that the B content be 0.0050% or less, or more preferably 0.0030% or less.
Ca: 0.0003% to 0.0050%
Ca is a chemical element which is effective for deoxidation, and it is preferable that the Ca content be 0.0003% or more, or more preferably 0.0005% or more, to realize such an effect. In the case where the Ca content is more than 0.0050%, there is a risk of a deterioration in corrosion resistance. Therefore, it is preferable that the Ca content be 0.0050% or less, or more preferably 0.0020% or less.
Mg: 0.0005% to 0.0050%
Mg contributes to the deoxidation of steel. To realize such an effect, it is preferable that the Mg content be 0.0005% or more, or more preferably 0.0010% or more. In the case where the Mg content is more than 0.0050%, there is a risk of a deterioration in manufacturability due to a deterioration in the toughness of steel. Therefore, it is preferable that the Mg content be 0.0050% or less, or more preferably 0.0030% or less.
REM (rare-earth metal): 0.001% to 0.050%
REM (rare-earth metal: one of the chemical elements having atomic numbers of 57 through 71 such as La, Ce, and Nd) is a chemical element which improves high-temperature oxidation resistance. To realize such an effect, it is preferable that the REM content be 0.001% or more, or more preferably 0.005% or more. In the case where the REM content is more than 0.050%, there is a risk that surface defects may occur when hot rolling is performed. Therefore, it is preferable that the REM content be 0.050% or less, or more preferably 0.030% or less.
Sn: 0.01% to 0.50%
Sn is effective for suppressing surface roughening due to work from occurring by promoting the formation of a deformation zone when rolling is performed. To realize such an effect, it is preferable that the Sn content be 0.01% or more, or more preferably 0.03% or more. In the case where the Sn content is more than 0.50%, there is a risk of a deterioration in workability. Therefore, it is preferable that the Sn content be 0.50% or less, or more preferably 0.20% or less.
Sb: 0.01% to 0.50%
Sb is, like Sn, effective for suppressing surface roughening due to work from occurring by promoting the formation of a deformation zone when rolling is performed. To realize such an effect, it is preferable that the Sb content be 0.01% or more, or more preferably 0.03% or more. In the case where the Sb content is more than 0.50%, there is a risk of a deterioration in workability. Therefore, it is preferable that the Sb content be 0.50% or less, or more preferably 0.20% or less.
In the chemical composition, the balance is Fe and inevitable impurities.
For aspects of the present invention, it is not sufficient that each constituent chemical element satisfy the content range described above; relational expression (1) below must also be satisfied. Here, in relational expression (1), each atomic symbol denotes the content (mass %) of the corresponding chemical element.
3.0 ≥ Nb/(2Ti + Zr + 0.5Si + 5Al) ≥ 1.5 (1)
Relational expression (1) above relates to the condition necessary for achieving excellent shape of weld zone without shape defects such as a sag and an undercut in a weld fusion zone by controlling the balance among the contents of Nb, Ti, Zr, Si, and Al. The coefficients in relational expression (1) above are empirically derived.
Although a detailed reason is not clear, there is a tendency for a weld fusion zone to sag in the case where the Nb content is low.
Solid solute Nb existing in steel in a cooling process starting at the time of solidification in a weld fusion zone contributes to an increase in strength.
Therefore, it is considered that, in the case where the Nb content is low, a sag occurs in a weld fusion zone due to the high-temperature strength of the weld fusion zone being low. In addition, Ti, Zr, Si, and Al are chemical elements which tend to form oxides. In the case where the contents of Ti, Zr, Si, and Al are excessively high, formed oxides may cause shape defects in a weld fusion zone by deteriorating the fluidity of fusion metal. In particular, there may be a case where an undercut occurs at the interface between an austenitic stainless steel sheet and fusion metal when welding is performed between materials of different kinds. Therefore, to achieve excellent shape of weld zone, it is preferable that the total content of Ti, Zr, Si, and Al be low with the Nb content being high. In the case where the calculated value in relational expression (1) is less than 1.5, the occurrence of the shape defects of a weld zone becomes marked. In contrast, in the case where the calculated value in relational expression (1) is 1.5 or more, excellent weld zone shape is achieved. Therefore, the calculated value in relational expression (1) is set to be 1.5 or more, or preferably 1.6 or more.
In the case where the contents of Ti, Zr, Si, and Al are excessively low, there is a decrease in the amount of precipitates formed in a cooling process starting at the time of solidification in a weld fusion zone. That is, there is coarsening of crystal grains due to a decrease in the amount of precipitates, which have a pinning effect. Moreover, since there is a decrease in the amount of solid solute Nb in steel due to an increase in the amount of Nb precipitates, there is a decrease in the high-temperature strength of a weld fusion zone. It is considered that a sag occurs in a weld fusion zone for the reasons described above. In addition, in the case where the Nb content is excessively high, there may be a case where the shape defects of a weld fusion zone occur. In particular, there may be a case where an undercut occurs at the interface between an austenitic stainless steel sheet and fusion metal when welding is performed between materials of different kinds. Although a detailed reason is not clear, it is considered that, since there are influences on fusion metal flow and wettability with a base metal through the surface tension of molten steel and the stability of arc in a weld pool, shape defects in weld fusion zone occur. Therefore, to achieve excellent weld zone shape, it is preferable that the total content of Ti, Zr, Si, and Al be appropriately high without the Nb content being excessively high. In the case where the calculated value in relational expression (1) is more than 3.0, the occurrence of the shape defects of a weld zone becomes marked. In contrast, in the case where the calculated value in relational expression (1) is 3.0 or less, excellent shape of weld zone is achieved. Therefore, the calculated value in relational expression (1) is set to be 3.0 or less, preferably 2.9 or less, or more preferably 2.8 or less.
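As a sketch, the check on relational expression (1) can be written as follows. This is only an illustration of the arithmetic described above; the function names are our own, and all contents are in mass %.

```python
def expression1(nb, ti, zr, si, al):
    """Value of relational expression (1): Nb / (2Ti + Zr + 0.5Si + 5Al)."""
    return nb / (2 * ti + zr + 0.5 * si + 5 * al)

def satisfies_expression1(nb, ti, zr, si, al):
    # An excellent weld zone shape requires 1.5 <= value <= 3.0.
    return 1.5 <= expression1(nb, ti, zr, si, al) <= 3.0

# Steel A1 from Table 1: Nb 0.55, Ti 0.01, Zr 0.01, Si 0.13, Al 0.03
value_a1 = expression1(0.55, 0.01, 0.01, 0.13, 0.03)  # about 2.2, within [1.5, 3.0]
```

Applied to steel A1 of Table 1, the expression evaluates to about 2.2, matching the value listed in the table.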
In accordance with aspects of the present invention, by satisfying relational expression (2) below after having satisfied relational expression (1) above, it is possible to realize excellent surface quality of a weld zone after having been subjected to work. Here, in relational expression (2), each of the atomic symbols denotes the content (mass %) of the corresponding chemical element.
2Ti + Nb + 1.5Zr + 3Al ≥ 0.75 (2)
Relational expression (2) above is effective for achieving good surface quality in a weld zone after having been subjected to work. In the case where the calculated value in relational expression (2) above is less than 0.75, there is an insufficient improvement in the surface quality of a weld zone after having been subjected to work. In contrast, in the case where the calculated value in relational expression (2) above is 0.75 or more, excellent surface quality of a weld zone after having been subjected to work is achieved. It is preferable that the calculated value in relational expression (2) be 0.80 or more. On the other hand, to suppress hardness from excessively increasing and to achieve good elongation, it is preferable that the upper limit of the calculated value in relational expression (2) be 1.00.
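The corresponding check on relational expression (2) can be sketched in the same way (again an illustration only, with hypothetical function names and contents in mass %):

```python
def expression2(ti, nb, zr, al):
    """Value of relational expression (2): 2Ti + Nb + 1.5Zr + 3Al."""
    return 2 * ti + nb + 1.5 * zr + 3 * al

def satisfies_expression2(ti, nb, zr, al):
    # Excellent post-work surface quality requires a value of 0.75 or more;
    # an upper value of 1.00 is preferable to keep hardness and elongation good.
    return expression2(ti, nb, zr, al) >= 0.75

# Steel B1 from Table 2: Ti 0.01, Nb 0.65, Zr 0.01, Al 0.05 -> 0.835
value_b1 = expression2(0.01, 0.65, 0.01, 0.05)
```

For steel B1 of Table 2 the value is 0.835, consistent with the 0.84 listed in the table after rounding.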
Ti, Nb, Zr, and Al may be precipitated in steel in the form of carbonitrides and oxides. The precipitates improve the homogeneity of a microstructure in a weld zone because of a pinning effect.
However, in the case of steel to which Ti is simply added, the following problems may occur in a weld fusion zone. That is, Ti-based precipitates which start to be precipitated at a high temperature and then combine with each other to have a large diameter, and Ti-based precipitates which are precipitated at a low temperature during a cooling process to have a small diameter, coexist. Since the Ti-based precipitates combined to have a large diameter and the Ti-based precipitates having a small diameter have different influence on grain growth, a mixed-grain microstructure having variations in crystal grain diameter, in which grains having a large diameter and grains having a small diameter coexist, is formed, which results in a deterioration in the surface quality of a weld zone after having been subjected to work.
In addition, in the case of steel to which Nb is simply added, Nb starts to be precipitated at a lower temperature than that at which Ti does. Therefore, it is expected that a pinning effect caused by Nb-based precipitates having a small diameter is realized in a lower temperature range than that in which Ti starts to be precipitated. However, since it is not expected that the pinning effect caused by the precipitates is realized in a high temperature range in which Nb is not precipitated, a certain amount of crystal grains having a large diameter is formed, which results in a deterioration in the surface quality of a weld zone after having been subjected to work.
In the case of steel to which Zr is simply added, Zr, like Ti, starts to be precipitated at a high temperature. Therefore, as in the case of steel to which Ti is simply added, steel to which Zr is simply added has a mixed-grain microstructure having variations in crystal grain diameter, in which grains having a large diameter and grains having a small diameter coexist, which results in a deterioration in the surface quality of a weld zone after having been subjected to work.
In the case of steel to which Al is simply added, Al starts to be precipitated at a lower temperature than that at which Ti does as in the case of steel to which Nb is simply added. Therefore, also in the case of steel to which Al is simply added, since it is not expected that a pinning effect caused by precipitates is realized in a high temperature range, a certain amount of coarsened crystal grains is formed, which results in a deterioration in the surface quality of a weld zone after having been subjected to work.
Moreover, in the case where predetermined amounts of Ti, Nb, Zr, and Al are not added and, accordingly, the amount of precipitates is very small, since a certain amount or more of precipitates are not homogeneously dispersed and precipitated in steel, there are regions in which precipitates are locally concentrated. As a result, a mixed-grain microstructure having variations in distribution of precipitates and in crystal grain diameter is formed.
In the case where a weld zone has an inhomogeneous mixed-grain microstructure, there are regions having many crystal grain boundaries and regions having only a few crystal grain boundaries. In this case, since strain introduced by work is concentrated at crystal grain boundaries and within some of the crystal grains, homogeneous deformation does not occur, which makes it difficult to achieve good surface quality.
On the other hand, by adding Ti, Nb, Zr, and Al in combination, it is possible to more homogeneously disperse a certain amount or more of precipitates in a cooling process of a weld zone. As a result, it is possible to form a microstructure having relatively uniform distribution of precipitates and relatively uniform crystal grain diameter. The coefficients in relational expression (2) above are derived from experimental results and in consideration of the affinities of these chemical elements for oxygen and nitrogen.
The ferritic stainless steel sheet according to aspects of the present invention can suitably be used in applications involving various kinds of work such as tensile work, bending work, drawing, and bulging. Although there is no particular limitation on the thickness of the steel sheet, the thickness may usually be 0.10 mm to 6.0 mm.
In addition, the ferritic stainless steel sheet according to aspects of the present invention can suitably be used in applications involving welding. There is no particular limitation on the welding conditions, which may be determined as needed, although it is preferable that welding be performed by using a TIG welding method. A welded member formed by combining a ferritic stainless steel sheet and an austenitic stainless steel sheet is manufactured by performing TIG welding; therefore, the above-mentioned TIG welding may also form part of a method for manufacturing a welded member according to aspects of the present invention. Although the TIG welding conditions may be appropriately decided, an example of preferable conditions is as follows.
welding voltage: 8 V to 15 V
welding current: 50 A to 250 A
welding speed: 100 mm/min to 1000 mm/min
electrode: tungsten electrode having a diameter of 1 mmϕ to 5 mmϕ
shielding gas (Ar gas) on the back and front sides: 5 L/min to 40 L/min
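The preferable condition ranges above can be captured in a small sketch for checking a candidate welding setup. The dictionary keys are our own naming, not terminology from the source:

```python
# Preferable TIG welding condition ranges listed above (key names are hypothetical).
PREFERRED_RANGES = {
    "voltage_V": (8.0, 15.0),
    "current_A": (50.0, 250.0),
    "speed_mm_min": (100.0, 1000.0),
    "electrode_diameter_mm": (1.0, 5.0),
    "shield_gas_L_min": (5.0, 40.0),
}

def within_preferred_ranges(cond):
    """Return True if every condition lies inside its preferable range."""
    return all(lo <= cond[name] <= hi for name, (lo, hi) in PREFERRED_RANGES.items())

# Conditions used in the examples below: 10 V, ~100 A, 600 mm/min, 1.6 mmϕ, 20 L/min
example_conditions = {"voltage_V": 10, "current_A": 100, "speed_mm_min": 600,
                      "electrode_diameter_mm": 1.6, "shield_gas_L_min": 20}
```

The conditions actually used in the examples fall inside every preferable range.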
It is preferable that, for example, SUS304, SUS304L, SUS316, or SUS316L be used as the austenitic stainless steel sheet for the TIG welding described above. SUS304 is used in the examples below. Since SUS304 has weldability similar to that of the other three grades, it is reasonably presumed that the effects of aspects of the present invention realized by using SUS304 are also realized by using the other kinds of austenitic stainless steel sheets.
Here, the ferritic stainless steel sheet according to aspects of the present invention may be used for welding with a material of the same kind or a material of a different kind, that is, stainless steel such as austenitic stainless steel, martensitic stainless steel, precipitation hardening stainless steel, or duplex stainless steel.
There is no particular limitation on the method used for manufacturing the ferritic stainless steel sheet according to aspects of the present invention. Hereafter, a preferable method for manufacturing the ferritic stainless steel sheet, in particular, the cold-rolled ferritic stainless steel sheet, according to aspects of the present invention will be described.
After molten steel having the chemical composition described above has been prepared by using a known method such as one using a converter, an electric furnace, or a vacuum melting furnace, secondary refining is performed by using, for example, a VOD (Vacuum Oxygen Decarburization) method. Subsequently, a steel material (slab) is manufactured by using a continuous casting method or an ingot-casting-slabbing method. This steel material is heated to a temperature of 1000° C. to 1250° C., and then hot-rolled to have a thickness of 2.0 mm to 8.0 mm with a finishing delivery temperature of 700° C. to 1050° C. The hot-rolled steel sheet manufactured as described above is annealed at a temperature of 850° C. to 1100° C., pickled, cold-rolled, and then subjected to cold-rolled-sheet annealing at a temperature of 800° C. to 1050° C. After cold-rolled-sheet annealing has been performed, pickling is performed to remove scale. The cold-rolled steel sheet which has been subjected to scale removal may be subjected to skin pass rolling.
Hereafter, the present invention will be specifically described on the basis of examples. The scope of the present invention is not limited to the examples below.
Molten steels having the chemical compositions (with the balance being Fe and inevitable impurities) given in Tables 1 through 3 were prepared by using a small vacuum melting furnace and made into 50-kg steel ingots. These ingots were heated to a temperature of 1200° C. and hot-rolled into hot-rolled steel sheets having a thickness of 4.0 mm. Subsequently, the hot-rolled steel sheets were subjected to hot-rolled-sheet annealing in which the hot-rolled steel sheets were held at a temperature of 1050° C. for 60 seconds, pickled, cold-rolled into cold-rolled steel sheets having a thickness of 1.0 mm, and subjected to cold-rolled-sheet annealing in which the cold-rolled steel sheets were held at a temperature of 950° C. for 30 seconds. After having been subjected to polishing to remove scale on the surface, the cold-rolled steel sheets were polished to a #600 finish by using emery paper and used as sample materials.
A test piece having a side length of 200 mm in the rolling direction (L-direction) and a side length of 90 mm in the direction (C-direction) perpendicular to the rolling direction was taken from each of the steel sheets obtained as described above. The test piece was butted, along the 200 mm sides, against a SUS304 sheet having a thickness of 1.0 mm, a side length of 200 mm in the rolling direction, and a side length of 90 mm in the direction perpendicular to the rolling direction, and a butt-welded joint was formed by performing TIG welding at a welding voltage of 10 V, a welding current of 90 A to 110 A, and a welding speed of 600 mm/min, with a tungsten electrode having a diameter of 1.6 mmϕ and front and back shielding gas (Ar gas) at a flow rate of 20 L/min. The welding direction (the direction of the weld bead) was therefore parallel to the rolling direction.
(1) Shape of Weld Zone
A test piece having a thickness of 1.0 mm, a width of 15 mm, and a length of 10 mm was taken from the butt-welded joint obtained as described above so that the length direction of the test piece was parallel to the welding direction and the weld bead was at the center in the width direction, and the cross section of the test piece perpendicular to the welding direction was observed after having been etched by using aqua regia. A case where the weld fusion zone had a part lying 0.15 mm or more below the positions of the base metals butted on its right- and left-hand sides was judged as a case of a sag (refer to the figure at section (A) "SAG"). In addition, a case where the thickness of the weld fusion zone, at the position where it is in contact with the base metal, was 0.15 mm or more thinner than that of the base metal was judged as a case of an undercut (refer to the figure at section (B) "WITH UNDERCUT"). A case of a sag or an undercut was judged as a case of unsatisfactory weld zone shape "x". A case not so judged was judged as a case of good weld zone shape "O" (refer to the figure at section (C) "EXCELLENT IN SHAPE OF WELD ZONE"). The results are given in the column "Weld Zone Shape" in Tables 1 through 3.
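The shape judgment described above reduces to two 0.15 mm thresholds, which can be sketched as follows (function and parameter names are our own):

```python
def judge_weld_zone_shape(sag_depth_mm, undercut_thinning_mm):
    """Judge weld zone shape per the 0.15 mm criteria described above.

    sag_depth_mm: how far the fusion zone lies below the butted base metals.
    undercut_thinning_mm: how much thinner the fusion zone is than the base
    metal at the position where the two are in contact.
    Returns "O" (good shape) or "x" (sag or undercut present).
    """
    has_sag = sag_depth_mm >= 0.15
    has_undercut = undercut_thinning_mm >= 0.15
    return "x" if has_sag or has_undercut else "O"
```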
(2) Corrosion Resistance of Weld Zone
A test piece having a thickness of 1.0 mm, a width of 60 mm, and a length of 80 mm was taken from the butt-welded joint so that the length direction of the test piece was parallel to the welding direction and the weld bead lay along the central line in the width direction. The front surface (on the electrode side at the time of welding) of the test piece was polished by using #600 emery paper, and the whole back surface and the regions extending 5 mm from the outer circumferential edges of the test piece were sealed. The test piece was then subjected to a combined cyclic corrosion test in which a unit cycle consisting of salt spraying (35° C., 5% NaCl, 2 hours), drying (60° C., 4 hours), and wetting (50° C., 4 hours) was repeated 30 times, and the rusted area ratio was determined in a surface region having a width of 20 mm with the weld bead on its central line. A case where the rusted area ratio was 10% or less was judged as a case of good corrosion resistance of a weld zone "O". A case where the rusted area ratio was more than 10% was judged as a case of unsatisfactory corrosion resistance of a weld zone "x". The results are given in the column "Corrosion Resistance" in Tables 1 through 3.
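The cycle schedule and pass criterion above can be summarized in a short sketch (the names and data layout are our own):

```python
# One unit cycle of the combined cyclic corrosion test described above.
UNIT_CYCLE = [
    ("salt spray, 35 degC, 5% NaCl", 2),  # duration in hours
    ("drying, 60 degC", 4),
    ("wetting, 50 degC", 4),
]
CYCLES = 30

def total_test_hours():
    """Total exposure time: 30 repeats of the 10-hour unit cycle."""
    return CYCLES * sum(hours for _, hours in UNIT_CYCLE)

def judge_corrosion_resistance(rusted_area_percent):
    # A rusted area ratio of 10% or less passes ("O"); more than 10% fails ("x").
    return "O" if rusted_area_percent <= 10 else "x"
```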
(3) Surface Quality of Weld Zone after having been Subjected to Work
A JIS No. 5 tensile test piece was taken from the butt-welded joint so that the tensile direction was perpendicular to the welding direction and the weld bead was at the center in the length direction of the test piece. The surface of the test piece was polished by using #600 emery paper, the polished test piece was subjected to a tensile plastic strain of 20%, and the maximum height roughness Rz in the welding direction in the weld zone was determined. The term "weld zone" refers to the weld fusion metal zone and the welded heat affected zone.
A case where the maximum height roughness Rz in a weld zone after applying tensile stress was 10 μm or less was judged as a case of excellent surface quality “O”. A case where the maximum height roughness Rz in a weld zone after applying tensile stress was more than 10 μm was judged as a case of no marked improvement in surface quality “x”. The results of the test of surface quality are given in the column “Surface Quality” in Tables 1 through 3. Here, the maximum height roughness Rz was determined in accordance with JIS B 0601 (2013). The length of determination was 5 mm, the determination was performed three times for each sample, and the simple average value of the three determined values was defined as the maximum height roughness Rz of the sample.
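The averaging and judgment just described can be sketched as follows (function names are our own; Rz values in μm):

```python
def max_height_roughness_rz(measurements_um):
    """Simple average of repeated Rz determinations (three 5 mm traces here)."""
    return sum(measurements_um) / len(measurements_um)

def judge_surface_quality(measurements_um):
    # An averaged Rz of 10 um or less after the 20% tensile strain is judged
    # excellent ("O"); more than 10 um is judged "x".
    return "O" if max_height_roughness_rz(measurements_um) <= 10 else "x"
```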
As indicated in Tables 1 through 3, all of the example steels of the present invention had an excellent weld zone shape and excellent corrosion resistance of a dissimilar weld zone. Moreover, in the cases where relational expression (2) was also satisfied, the surface quality of the weld zone after working was also excellent. In contrast, the comparative steels, which were outside the range of the present invention, were poor in weld zone shape, in the corrosion resistance of the weld zone, or in both.
TABLE 1

Chemical Composition (mass %)

| Steel Grade | C | Si | Mn | P | S | Cr | Cu | Ni | Al | N | Nb | Ti | Zr | Other |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| A1 | 0.009 | 0.13 | 0.14 | 0.025 | 0.0027 | 21.1 | 0.42 | 0.41 | 0.03 | 0.007 | 0.55 | 0.01 | 0.01 | — |
| A2 | 0.008 | 0.08 | 0.16 | 0.027 | 0.0023 | 21.9 | 0.63 | 0.45 | 0.02 | 0.009 | 0.54 | 0.01 | 0.03 | — |
| A3 | 0.015 | 0.19 | 0.47 | 0.022 | 0.0018 | 23.8 | 0.55 | 0.33 | 0.02 | 0.015 | 0.61 | 0.01 | 0.01 | V: 0.08 |
| A4 | 0.009 | 0.17 | 0.15 | 0.014 | 0.0035 | 20.9 | 0.41 | 0.28 | 0.02 | 0.008 | 0.54 | 0.01 | 0.02 | Mo: 0.15 |
| A5 | 0.019 | 0.22 | 0.31 | 0.033 | 0.0071 | 21.3 | 0.43 | 0.55 | 0.02 | 0.006 | 0.47 | 0.01 | 0.07 | Mg: 0.0011 |
| A6 | 0.017 | 0.15 | 0.17 | 0.035 | 0.0044 | 21.5 | 0.41 | 0.32 | 0.02 | 0.012 | 0.57 | 0.02 | 0.01 | Ca: 0.0015 |
| A7 | 0.014 | 0.05 | 0.19 | 0.024 | 0.0015 | 20.4 | 0.32 | 0.21 | 0.03 | 0.009 | 0.49 | 0.04 | 0.02 | Sn: 0.19 |
| A8 | 0.011 | 0.07 | 0.21 | 0.028 | 0.0032 | 21.9 | 0.44 | 0.44 | 0.02 | 0.006 | 0.52 | 0.03 | 0.02 | Sb: 0.15 |
| A9 | 0.012 | 0.14 | 0.11 | 0.029 | 0.0057 | 21.2 | 0.36 | 0.32 | 0.01 | 0.008 | 0.47 | 0.08 | 0.01 | B: 0.0025 |
| A10 | 0.008 | 0.33 | 0.13 | 0.032 | 0.0034 | 22.4 | 0.26 | 0.41 | 0.02 | 0.011 | 0.61 | 0.01 | 0.01 | Co: 0.11 |
| A11 | 0.006 | 0.03 | 0.15 | 0.025 | 0.0025 | 22.1 | 0.33 | 0.35 | 0.01 | 0.007 | 0.41 | 0.09 | 0.01 | REM: 0.0025 |
| A12 | 0.004 | 0.15 | 0.14 | 0.021 | 0.0018 | 21.9 | 0.27 | 0.36 | 0.02 | 0.005 | 0.52 | 0.03 | 0.03 | — |
| A13 | 0.009 | 0.13 | 0.17 | 0.016 | 0.0035 | 22.3 | 0.42 | 0.18 | 0.02 | 0.009 | 0.58 | 0.02 | 0.02 | V: 0.16, Mo: 0.05 |
| A14 | 0.012 | 0.09 | 0.22 | 0.013 | 0.0061 | 20.7 | 0.38 | 0.25 | 0.03 | 0.012 | 0.57 | 0.01 | 0.01 | V: 0.05, Mg: 0.0015 |
| A15 | 0.011 | 0.18 | 0.46 | 0.027 | 0.0022 | 21.8 | 0.39 | 0.16 | 0.02 | 0.014 | 0.56 | 0.01 | 0.04 | Mo: 0.05, Mg: 0.0012 |
| A16 | 0.014 | 0.22 | 0.23 | 0.024 | 0.0014 | 21.5 | 0.43 | 0.44 | 0.01 | 0.008 | 0.57 | 0.01 | 0.05 | Mo: 0.16, Co: 0.05, Mg: 0.0024 |
| A17 | 0.008 | 0.25 | 0.15 | 0.035 | 0.0026 | 21.1 | 0.45 | 0.22 | 0.02 | 0.009 | 0.58 | 0.01 | 0.03 | V: 0.04, Mo: 0.04, Co: 0.06, B: 0.0007, Ca: 0.0008, Mg: 0.0011, REM: 0.0018, Sn: 0.07, Sb: 0.09 |
| A18 | 0.009 | 0.15 | 0.15 | 0.025 | 0.0031 | 20.9 | 0.42 | 0.21 | 0.02 | 0.008 | 0.55 | 0.01 | 0.01 | V: 0.03, Mo: 0.05, Ca: 0.0005 |

| Steel Grade | Relational Expression (1) | Relational Expression (2) | Shape of Weld Zone | Corrosion Resistance | Surface Quality | Note |
|---|---|---|---|---|---|---|
| A1 | 2.2 | 0.68 | ∘ | ∘ | x | Example Steel |
| A2 | 2.8 | 0.67 | ∘ | ∘ | x | Example Steel |
| A3 | 2.7 | 0.71 | ∘ | ∘ | x | Example Steel |
| A4 | 2.4 | 0.65 | ∘ | ∘ | x | Example Steel |
| A5 | 1.6 | 0.66 | ∘ | ∘ | x | Example Steel |
| A6 | 2.5 | 0.69 | ∘ | ∘ | x | Example Steel |
| A7 | 1.8 | 0.69 | ∘ | ∘ | x | Example Steel |
| A8 | 2.4 | 0.67 | ∘ | ∘ | x | Example Steel |
| A9 | 1.6 | 0.68 | ∘ | ∘ | x | Example Steel |
| A10 | 2.1 | 0.71 | ∘ | ∘ | x | Example Steel |
| A11 | 1.6 | 0.64 | ∘ | ∘ | x | Example Steel |
| A12 | 2.0 | 0.69 | ∘ | ∘ | x | Example Steel |
| A13 | 2.6 | 0.71 | ∘ | ∘ | x | Example Steel |
| A14 | 2.5 | 0.70 | ∘ | ∘ | x | Example Steel |
| A15 | 2.2 | 0.70 | ∘ | ∘ | x | Example Steel |
| A16 | 2.5 | 0.70 | ∘ | ∘ | x | Example Steel |
| A17 | 2.1 | 0.71 | ∘ | ∘ | x | Example Steel |
| A18 | 2.7 | 0.65 | ∘ | ∘ | x | Example Steel |
TABLE 2

Chemical Composition (mass %)

| Steel Grade | C | Si | Mn | P | S | Cr | Cu | Ni | Al | N | Nb | Ti | Zr | Other |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| B1 | 0.011 | 0.17 | 0.15 | 0.022 | 0.0018 | 21.2 | 0.38 | 0.22 | 0.05 | 0.005 | 0.65 | 0.01 | 0.01 | — |
| B2 | 0.004 | 0.08 | 0.22 | 0.033 | 0.0021 | 20.4 | 0.41 | 0.25 | 0.04 | 0.008 | 0.59 | 0.05 | 0.04 | — |
| B3 | 0.015 | 0.04 | 0.31 | 0.038 | 0.0015 | 22.5 | 0.44 | 0.31 | 0.05 | 0.011 | 0.63 | 0.05 | 0.01 | V: 0.14 |
| B4 | 0.017 | 0.09 | 0.45 | 0.027 | 0.0018 | 23.2 | 0.51 | 0.19 | 0.05 | 0.015 | 0.58 | 0.03 | 0.01 | Mo: 0.16 |
| B5 | 0.018 | 0.12 | 0.38 | 0.015 | 0.0019 | 23.8 | 0.63 | 0.28 | 0.05 | 0.012 | 0.66 | 0.02 | 0.09 | Mg: 0.0011 |
| B6 | 0.013 | 0.05 | 0.29 | 0.011 | 0.0015 | 23.5 | 0.57 | 0.37 | 0.02 | 0.009 | 0.54 | 0.09 | 0.01 | Ca: 0.0018 |
| B7 | 0.012 | 0.26 | 0.18 | 0.013 | 0.0023 | 23.3 | 0.44 | 0.46 | 0.04 | 0.008 | 0.64 | 0.01 | 0.01 | Sn: 0.08 |
| B8 | 0.008 | 0.05 | 0.04 | 0.022 | 0.0027 | 22.8 | 0.46 | 0.55 | 0.05 | 0.007 | 0.57 | 0.02 | 0.01 | Sb: 0.02 |
| B9 | 0.006 | 0.06 | 0.06 | 0.026 | 0.0029 | 22.5 | 0.32 | 0.34 | 0.04 | 0.005 | 0.69 | 0.05 | 0.01 | B: 0.0008 |
| B10 | 0.005 | 0.36 | 0.09 | 0.029 | 0.0035 | 22.1 | 0.41 | 0.21 | 0.03 | 0.003 | 0.68 | 0.02 | 0.05 | Co: 0.15 |
| B11 | 0.007 | 0.24 | 0.11 | 0.035 | 0.0046 | 21.8 | 0.39 | 0.18 | 0.04 | 0.004 | 0.69 | 0.04 | 0.02 | REM: 0.0015 |
| B12 | 0.009 | 0.11 | 0.13 | 0.032 | 0.0051 | 21.9 | 0.29 | 0.33 | 0.04 | 0.006 | 0.75 | 0.01 | 0.03 | Mo: 0.26 |
| B13 | 0.014 | 0.13 | 0.15 | 0.029 | 0.0042 | 21.5 | 0.22 | 0.51 | 0.04 | 0.009 | 0.62 | 0.02 | 0.04 | V: 0.02, Mo: 0.06 |
| B14 | 0.016 | 0.12 | 0.18 | 0.025 | 0.0037 | 21.6 | 0.25 | 0.22 | 0.04 | 0.011 | 0.61 | 0.04 | 0.04 | V: 0.06, Mg: 0.0015 |
| B15 | 0.006 | 0.09 | 0.21 | 0.022 | 0.0034 | 21.2 | 0.39 | 0.33 | 0.04 | 0.012 | 0.65 | 0.02 | 0.03 | V: 0.15, Ca: 0.0012, Mg: 0.0008 |
| B16 | 0.004 | 0.14 | 0.24 | 0.019 | 0.0031 | 21.1 | 0.41 | 0.25 | 0.03 | 0.008 | 0.61 | 0.05 | 0.01 | Mo: 0.15, Ca: 0.0015 |
| B17 | 0.009 | 0.06 | 0.12 | 0.015 | 0.0029 | 20.9 | 0.35 | 0.24 | 0.05 | 0.007 | 0.57 | 0.02 | 0.04 | V: 0.05, Mo: 0.05, Co: 0.05, B: 0.0015, Ca: 0.0007, Mg: 0.0006, REM: 0.0022, Sn: 0.09, Sb: 0.08 |
| B18 | 0.012 | 0.21 | 0.14 | 0.013 | 0.0027 | 20.8 | 0.41 | 0.17 | 0.04 | 0.005 | 0.61 | 0.03 | 0.02 | Mg: 0.0007 |
| B19 | 0.014 | 0.25 | 0.16 | 0.011 | 0.0025 | 20.5 | 0.49 | 0.19 | 0.04 | 0.003 | 0.69 | 0.02 | 0.04 | Mo: 0.08, Mg: 0.0006, Sn: 0.07 |
| B20 | 0.018 | 0.15 | 0.18 | 0.011 | 0.0015 | 20.3 | 0.56 | 0.21 | 0.05 | 0.004 | 0.58 | 0.01 | 0.03 | Mg: 0.0013, Sb: 0.03 |
| B21 | 0.019 | 0.44 | 0.33 | 0.011 | 0.0021 | 20.9 | 0.43 | 0.23 | 0.03 | 0.006 | 0.72 | 0.01 | 0.01 | Ca: 0.0004, Mg: 0.0022, Sb: 0.03 |
| B22 | 0.017 | 0.15 | 0.47 | 0.017 | 0.0016 | 21.1 | 0.41 | 0.25 | 0.05 | 0.008 | 0.65 | 0.02 | 0.02 | V: 0.22, Ca: 0.0016, Mg: 0.0018, Sb: 0.05 |
| B23 | 0.018 | 0.08 | 0.39 | 0.019 | 0.0015 | 21.6 | 0.37 | 0.15 | 0.04 | 0.011 | 0.58 | 0.04 | 0.01 | V: 0.06, B: 0.0036, Ca: 0.0022, Mg: 0.0035 |
| B24 | 0.011 | 0.11 | 0.26 | 0.022 | 0.0018 | 21.2 | 0.41 | 0.32 | 0.04 | 0.013 | 0.69 | 0.02 | 0.02 | Co: 0.25 |
| B25 | 0.003 | 0.07 | 0.17 | 0.026 | 0.0029 | 21.4 | 0.44 | 0.04 | 0.04 | 0.009 | 0.57 | 0.06 | 0.01 | Mo: 0.16, REM: 0.0045 |
| B26 | 0.011 | 0.12 | 0.12 | 0.025 | 0.0035 | 21.1 | 0.41 | 0.18 | 0.03 | 0.008 | 0.59 | 0.03 | 0.05 | V: 0.04, Mo: 0.03, Ca: 0.0006 |
| B27 | 0.008 | 0.15 | 0.13 | 0.031 | 0.0023 | 21.6 | 0.46 | 0.22 | 0.02 | 0.007 | 0.44 | 0.01 | 0.03 | — |
| B28 | 0.013 | 0.22 | 0.11 | 0.035 | 0.0024 | 20.8 | 0.36 | 0.18 | 0.01 | 0.006 | 0.51 | 0.01 | 0.02 | — |

| Steel Grade | Relational Expression (1) | Relational Expression (2) | Shape of Weld Zone | Corrosion Resistance | Surface Quality | Note |
|---|---|---|---|---|---|---|
| B1 | 1.8 | 0.84 | ∘ | ∘ | ∘ | Example Steel |
| B2 | 1.6 | 0.87 | ∘ | ∘ | ∘ | Example Steel |
| B3 | 1.7 | 0.90 | ∘ | ∘ | ∘ | Example Steel |
| B4 | 1.6 | 0.81 | ∘ | ∘ | ∘ | Example Steel |
| B5 | 1.5 | 0.99 | ∘ | ∘ | ∘ | Example Steel |
| B6 | 1.7 | 0.80 | ∘ | ∘ | ∘ | Example Steel |
| B7 | 1.8 | 0.80 | ∘ | ∘ | ∘ | Example Steel |
| B8 | 1.8 | 0.78 | ∘ | ∘ | ∘ | Example Steel |
| B9 | 2.0 | 0.93 | ∘ | ∘ | ∘ | Example Steel |
| B10 | 1.6 | 0.89 | ∘ | ∘ | ∘ | Example Steel |
| B11 | 1.6 | 0.92 | ∘ | ∘ | ∘ | Example Steel |
| B12 | 2.5 | 0.94 | ∘ | ∘ | ∘ | Example Steel |
| B13 | 1.8 | 0.84 | ∘ | ∘ | ∘ | Example Steel |
| B14 | 1.6 | 0.87 | ∘ | ∘ | ∘ | Example Steel |
| B15 | 2.1 | 0.86 | ∘ | ∘ | ∘ | Example Steel |
| B16 | 1.8 | 0.82 | ∘ | ∘ | ∘ | Example Steel |
| B17 | 1.6 | 0.82 | ∘ | ∘ | ∘ | Example Steel |
| B18 | 1.6 | 0.82 | ∘ | ∘ | ∘ | Example Steel |
| B19 | 1.7 | 0.91 | ∘ | ∘ | ∘ | Example Steel |
| B20 | 1.5 | 0.80 | ∘ | ∘ | ∘ | Example Steel |
| B21 | 1.8 | 0.85 | ∘ | ∘ | ∘ | Example Steel |
| B22 | 1.7 | 0.87 | ∘ | ∘ | ∘ | Example Steel |
| B23 | 1.8 | 0.80 | ∘ | ∘ | ∘ | Example Steel |
| B24 | 2.2 | 0.88 | ∘ | ∘ | ∘ | Example Steel |
| B25 | 1.6 | 0.83 | ∘ | ∘ | ∘ | Example Steel |
| B26 | 1.8 | 0.82 | ∘ | ∘ | ∘ | Example Steel |
| B27 | 2.0 | 0.57 | ∘ | ∘ | x | Example Steel |
| B28 | 2.6 | 0.59 | ∘ | ∘ | x | Example Steel |
TABLE 3
Chemical Composition (mass %)
Steel Grade | C | Si | Mn | P | S | Cr | Cu | Ni | Al | N | Nb | Ti | Zr | Other
A19 | 0.017 | 0.03 | 0.16 | 0.017 | 0.0012 | 21.5 | 0.38 | 0.29 | 0.02 | 0.006 | 0.55 | 0.01 | 0.02 | Mg: 0.0008
A20 | 0.014 | 0.18 | 0.08 | 0.026 | 0.0024 | 21.4 | 0.51 | 0.44 | 0.04 | 0.011 | 0.42 | 0.05 | 0.06 | —
A21 | <u style="single">0.024</u> | 0.21 | 0.15 | 0.021 | 0.0035 | 20.8 | 0.46 | 0.23 | 0.01 | 0.014 | 0.61 | 0.02 | 0.02 | —
A22 | 0.006 | 0.11 | 0.12 | 0.029 | 0.0031 | 21.1 | 0.45 | 0.22 | 0.04 | <u style="single">0.025</u> | 0.49 | 0.01 | 0.01 | —
A23 | 0.008 | 0.12 | 0.15 | 0.025 | 0.0025 | 20.9 | 0.41 | 0.21 | 0.01 | 0.007 | 0.66 | 0.01 | 0.01 | V: 0.04, Ca: 0.0006
A24 | 0.011 | 0.18 | 0.13 | 0.018 | 0.0032 | 21.3 | 0.43 | 0.18 | 0.04 | 0.009 | 0.42 | 0.02 | 0.01 | V: 0.03, Ca: 0.0011
A25 | 0.012 | 0.16 | 0.15 | 0.026 | 0.0027 | 21.2 | 0.42 | 0.22 | 0.04 | 0.008 | 0.55 | <u style="single">—</u> | 0.01 | —
A26 | 0.013 | 0.14 | 0.13 | 0.024 | 0.0035 | 20.8 | 0.41 | 0.23 | 0.02 | 0.007 | 0.58 | 0.02 | <u style="single">—</u> | —
A27 | 0.007 | 0.18 | 0.12 | 0.021 | 0.0022 | 21.5 | 0.39 | 0.42 | <u style="single">—</u> | 0.008 | 0.51 | 0.03 | 0.03 | —
B29 | 0.012 | 0.02 | 0.32 | 0.022 | 0.0022 | 21.2 | 0.31 | 0.16 | 0.04 | 0.008 | <u style="single">0.35</u> | 0.01 | 0.01 | —
B30 | 0.011 | 0.11 | 0.15 | 0.022 | 0.0018 | 22.1 | 0.38 | 0.22 | 0.04 | 0.005 | 0.59 | 0.04 | <u style="single">—</u> | —
B31 | <u style="single">0.022</u> | 0.17 | 0.22 | 0.021 | 0.0021 | 21.1 | 0.42 | 0.21 | 0.05 | 0.006 | 0.61 | 0.01 | 0.01 | —
B32 | 0.008 | 0.08 | 0.24 | 0.011 | 0.0022 | 21.5 | 0.45 | 0.25 | 0.04 | <u style="single">0.025</u> | 0.62 | 0.02 | 0.03 | —
B33 | 0.011 | 0.21 | 0.15 | 0.035 | 0.0024 | 21.1 | 0.41 | 0.22 | 0.04 | 0.008 | 0.59 | <u style="single">—</u> | 0.04 | —
B34 | 0.007 | 0.07 | 0.21 | 0.027 | 0.0038 | 20.9 | 0.43 | 0.19 | 0.02 | 0.009 | 0.66 | <u style="single">0.12</u> | 0.02 | —
B35 | 0.008 | 0.18 | 0.17 | 0.022 | 0.0015 | 21.3 | 0.42 | 0.35 | <u style="single">—</u> | 0.007 | 0.59 | 0.06 | 0.04 | —
Steel Grade | Relational Expression (1) | Relational Expression (2) | Shape of Weld Zone | Corrosion Resistance | Surface Quality | Note
A19 | <u style="single">3.5</u> | 0.66 | x | ∘ | x | <u style="single">Comparative Steel</u>
A20 | <u style="single">0.9</u> | 0.73 | x | ∘ | x | <u style="single">Comparative Steel</u>
A21 | 2.8 | 0.71 | ∘ | x | x | <u style="single">Comparative Steel</u>
A22 | 1.7 | 0.65 | ∘ | x | x | <u style="single">Comparative Steel</u>
A23 | <u style="single">4.7</u> | 0.73 | x | ∘ | x | <u style="single">Comparative Steel</u>
A24 | <u style="single">1.2</u> | 0.60 | x | ∘ | x | <u style="single">Comparative Steel</u>
A25 | 1.9 | 0.69 | x | x | x | <u style="single">Comparative Steel</u>
A26 | 2.8 | 0.68 | x | x | x | <u style="single">Comparative Steel</u>
A27 | 2.8 | 0.62 | x | ∘ | x | <u style="single">Comparative Steel</u>
B29 | 1.5 | 0.51 | x | x | x | Comparative Steel
B30 | 1.8 | 0.79 | x | x | x | Comparative Steel
B31 | 1.7 | 0.80 | ∘ | x | ∘ | Comparative Steel
B32 | 2.0 | 0.83 | ∘ | x | ∘ | Comparative Steel
B33 | 1.7 | 0.77 | x | x | x | Comparative Steel
B34 | 1.7 | 0.99 | x | ∘ | x | Comparative Steel
B35 | 2.4 | 0.77 | x | ∘ | x | Comparative Steel
BRIEF DESCRIPTION OF DRAWINGS
The FIGURE is an observation example of the cross-sectional shape of a weld zone formed by performing TIG welding in an example. A ferritic stainless steel sheet is on the right hand side, and a SUS304 steel sheet is on the left hand side. Observation examples with a sag (A), with an undercut (B), and with excellent shape of weld zone (C) are given.
A study has found that just 14% of consultant physicians in Ireland are in favour of legalising euthanasia, with a strong majority also opposing assisted suicide.
The research found that 67% of those who responded opposed legalising euthanasia with just 14% in favour, while almost 19% remained neutral. When it came to physician assisted suicide, 17% were in favour of making it legal while 56% opposed that and almost 27% remained neutral.
If those who remained neutral were stripped out, the data showed that 83% of consultant physicians who took a position opposed euthanasia with 17% in favour, while 77% opposed assisted suicide.
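Stripping out the neutral responses is a simple renormalisation of the raw percentages; the short check below (mine, not part of the study) reproduces the reported figures from the raw survey numbers.

```python
def exclude_neutral(in_favour, opposed):
    """Renormalise two percentages to sum to 100 after dropping neutral responses."""
    decided = in_favour + opposed
    return round(100 * in_favour / decided), round(100 * opposed / decided)

print(exclude_neutral(14, 67))  # euthanasia: -> (17, 83)
print(exclude_neutral(17, 56))  # assisted suicide: -> (23, 77)
```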
A table published by researchers collating responses according to medical speciality revealed some disparity in opinion amongst consultants. Doctors with most experience of working with older people or cancer patients or in the provision of palliative care were far less likely to support either assisted suicide or euthanasia.
Only 3% of consultants working in geriatrics supported assisted suicide, while just 6% of oncology specialists supported euthanasia, though 13% of the speciality did support assisted suicide. 90% of palliative care consultants remain opposed to both euthanasia and assisted suicide.
The study noted that a relatively large proportion of consultants were undecided, remaining neutral on the topic. “Most considered that even if tightly regulated, the practices of euthanasia and physician-assisted suicide would still be open to potential abuse, reflecting concerns raised by the RCPI [Royal College of Physicians of Ireland],” researchers said.
They also noted that: “the position of Irish physicians on these issues appears to stand in marked contrast to the attitudes of the Irish public,” and commented that “internationally, numerous studies have consistently demonstrated lower levels of support for euthanasia and physician-assisted suicide among physicians than among the general public.”
“The reasons for this divergence of opinion between physicians and the general public is important to understand. It is possible that the difference relates to physicians’ greater experience with end-of-life care. Studies outside of Ireland have frequently found stronger opposition to euthanasia and physician-assisted suicide among physicians with greater experience of caring for terminally ill patients,” the researchers said.
“While the matter was not directly addressed in this study, it is nevertheless noticeable that opposition was particularly strong among specialties such as Palliative Care, Geriatric Medicine and Medical Oncology that would be expected to have a greater proportion of older and dying patients.”
The research, published in the Irish Medical Journal, used data collected by distributing a questionnaire to all consultant physicians listed in the Irish Medical Directory under general internal medicine specialties. Some 238 consultants responded: an overall response rate of 28.7% (238/830).
The virtual Democratic National Convention that kicks off tonight won’t just be missing the applause and crowds that viewers are used to seeing at home. It’ll be without the unparalleled quadrennial opportunity to build relationships among political activists from across the country.
The real action for those attending wasn’t on the convention floor, but in the parties, side meetings, casual encounters and skybox celebrations that literally go from dawn-to-dawn alongside the speeches and the nominations that make the headlines. For the delegates, journalists, office seekers and sponsors, convention days were always long on networking and short on sleep. Who wants to rest with endless opportunities to make such valuable connections in a short few days?
As someone who has been attending political conventions since 1996, for decades as a journalist and in the last cycle as a speech coach, I’ve built countless lasting relationships while socializing on the sidelines. Some of those I met as a 20-year-old cub reporter at my first convention embedded with the Michigan delegation in a hotel outside San Diego were among my best sources throughout my career. They were the local politicians and delegates I could call on nearly two decades later when I was a White House reporter seeking the country’s real political pulse instead of presidential spin.
The truth is there wasn’t much in the way of real news that happened at these gatherings. I pre-wrote most of my articles before ever touching down in the convention city, based on the broad themes I knew to expect – then just filled in a few quotes and color from the ground. I wanted to minimize the time I was stuck in front of my computer filing an article for the day and maximize my time making new friends.
The other purpose of the convention is firing up the grassroots attendees to aggressively sell the ticket in the final push – another lost opportunity when the ground game can win or lose elections.
But with so many Americans still stuck at home, marquee speeches could still see big broadcast numbers.
The Wangzhou and OCAT Xi’an Pavilion is located in Fengdong New District, Xi’an, Shaanxi Province. The project is surrounded by tourist attractions and natural sites, including Happy Valley, The Book of Songs, Fenghe Wetland Park, Fengdong Agricultural Expo Park, Kunming Pool, and the large-scale scenic spot on Gaojing Site. These locations together form a regional cultural tourism group.
The main function of the Wangzhou and OCAT Xi’an Pavilion is to be a cultural exhibition center. The hall covers an area of 1632.5 square metres, 62.5 metres long from east to west, 42.5 metres wide from north to south, with a total construction area of 2875 square metres. The overall building height is 13.25 metres. The first floor is a multi-functional lecture hall and the second floor is an exhibition hall to display OCAT contemporary art.
In our design concept, the Wangzhou and OCAT Xi’an Pavilion is not only a building that carries the history and culture of the Zhou dynasty, but also a building that displays contemporary art and culture. The building is square in form: a modern building with a concise facade. Through our interpretation of Zhou dynasty culture, we designed a building that sits on a sloped grassy hill and represents the concept of a historical monument or stele, signifying the history and culture of the “Zhou” people.
The design has three entrances; the main entrance is located on the west. Approaching the building from the west, a set of gentle steps leads visitors to the main entrance, which consists of a gently upward-sloping courtyard. The secondary entrance faces north and connects the inner commercial street to the multi-functional hall. The entrance towards the east is for logistics. The design connects and organises space through three natural courtyards. The exhibition hall and the courtyards permeate each other, blurring the spatial boundary between indoors and outdoors. The space provides an amazing experience for the visitor, whether it’s for a scenic stroll or a moment of relaxation and meditation.
While the building remains still, the movement of natural light and shadow expresses the character and dynamism of the building. The material choice of plain white Italian travertine, black slate and hi-light glass emphasises how light affects the space and shape of the building. The large facade is not just a functional segmentation interface but carries with it an expression of architectural language. The minimalist facade catches light, reflects the natural landscaping, and provides a canvas upon which natural beauty can express itself.
The design of the internal layout is open and flexible. It not only provides the capacity for hosting various exhibitions, but also gives visitors a unique viewing experience. The Wangzhou and OCAT Xi’an Pavilion captures the cultural imprints of the Zhou dynasty. The splendid culture of Zhou is embedded within the architectural language of lines, shades and reflections, forming once again an ‘imprint’ of that culture.
IAPA PTY. LTD. as one of the leading Australian practitioners in architectural design, urban design, landscape design, interior design, we provide internationally standard services to any leading developers anywhere in the world. After years of committed practice, IAPA core members with international educational backgrounds have established cooperation and partnerships with internationally design institutes. Through local knowledge and international support, IAPA is constantly advancing in the Chinese design arena through strong architectural design statements. The team has extensive experience in commercial, residential and retail developments and prides itself in ensuring that each individual client’s requirements are met, and their expectations exceeded.
Michael Twohig, a professor of psychology in the Emma Eccles Jones College of Education and Human Services, recently presented his inaugural lecture in the home of Noelle and John Cockett, president and first gentleman of Utah State University, to commemorate and celebrate Dr. Twohig’s promotion to full professor.
Dr. Twohig works in the Combined Clinical/Counseling Ph.D. program within the Psychology Department, focusing his research on developing effective treatments for anxiety disorders—primarily obsessive-compulsive disorder and other OC-spectrum disorders. He is licensed as a psychologist in the state of Utah and works with graduate and undergraduate students at USU to train the next generation of psychologists. “I’m trying to move the field of clinical psychology forward in a science-based way,” Dr. Twohig said.
While working toward a Ph.D. in clinical psychology at the University of Nevada, Reno, Dr. Twohig was introduced to Dr. Steven Hayes, a researcher at the university who is recognized for developing Acceptance and Commitment Therapy (ACT). While still relatively unknown at the time, the ACT method of using mindfulness and acceptance techniques to manage psychological disorders intrigued Dr. Twohig. As his educational career advanced, Dr. Twohig began applying ACT techniques to OC-spectrum disorders.
Following the completion of his doctoral degree, Dr. Twohig moved to British Columbia to complete his clinical internship at a premiere OCD treatment and research center in Vancouver. A year later, Dr. Twohig moved to Logan to begin his first academic position.
“I really appreciate where I work,” Dr. Twohig said. “I appreciate what people at all levels of USU have done for me, whether that’s my department head, dean, the provost, or the president. They’ve all done significant things to help me.”
Dr. Twohig has seen the ACT method grow significantly since he began his research in the field 15 years ago. While at a conference Dr. Twohig attended recently, a presenter stated that ACT is now the third most-used method of therapy in North America.
Outside of his time at the university, Dr. Twohig enjoys the outdoor lifestyle that life in Cache Valley provides. He and his family can often be found bicycling, skiing, snowboarding, hiking, running and climbing through the mountains around Logan.
We hear more and more about self-care nowadays and it’s interesting to really reflect on what this means and its importance in our everyday lives.
There still seems to be some stigma attached to the idea of self-care and it’s common to feel that putting yourself first is selfish and that we should be looking after others. But how can we do this if we have no time for ourselves? In the words of the American drag queen and TV personality, Ru Paul, “If you can’t love yourself, how in the hell are you gonna love somebody else?”
This phrase really resonates with me and reminds me of something that a client mentioned to me recently – they were afraid to put themselves first in relationships, but have recently realised how important it is for them to “put on the oxygen mask first before I can help others”. In other words, by ensuring that we feel able to cope, that our stress levels are low and that we are feeling happy and resilient, then we are in a much better place to care for others.
Me-time is often last on the agenda and I’m here to remind you that it really should be top of your list! Self-care doesn’t have to involve a huge time commitment, nor does it have to mean shelling out the cash. It’s about making a commitment to putting yourself first, even for a short time.
It’s a good idea to get started with self-care by dividing it into bite-size chunks:
- Physical – when you’re caring for your body, you’ll think and feel better as well
- How much sleep are you getting?
- Do you have enough exercise?
- Are you managing your health?
- Are you eating well?
- Social – close connections are key to your wellbeing
- Are you getting enough time with your friends?
- Are you keeping in touch with friends and family?
- Mental – what’s going on in your mind can really influence your psychological wellbeing
- Be kind to yourself, practice self-compassion
- Make time to do things that mentally stimulate you – chat with a friend, do a crossword puzzle, read a book, learn about something new
- Emotional – it’s important to be able to process your emotions regularly
- Don’t bottle everything up inside
- Talk to a partner, friend or family member, or even a stranger
You don’t have to tackle everything all at once. Your self-care plan will need to be able to fit in with your life and your needs and we’re all different. Think about making small changes like going to bed 15 minutes earlier than usual and listening to music rather than scrolling social media; or taking a walk around the block at lunch-time; or saying no to something you don’t want to do. The more you can work self-care into your schedule, the more you will begin to enjoy your life.
Blum & Poe seeks a full-time Registrar with at least 3 years experience to begin immediately. The ideal candidate must be detail-oriented, have strong organizational and multitasking skills, and be able to work well in a dynamic, multi-location environment. This person will work closely with the Exhibitions Registrar, Director of Logistics, and the Operations team.
INVENTORY
Oversee all incoming and outgoing artworks
Manage and conduct condition reports, filing them on the server and updating the database as necessary
Oversee and coordinate artwork conservation, ensure safe care and handling of artworks outside of B&P exhibitions/Art Fairs, and inform the appropriate staff as necessary
Track artwork reserves and sales; send monthly reserve report to principals and relevant staff, reflect any changes in the database
Inform relevant staff regarding inventory updates
Conduct annual physical inventory at all storage facilities
Organize and manage the gallery’s on-site inventory and off-site storage spaces, including incoming and outgoing inventory on a daily basis
Manage and conduct database entries thoroughly
LOANS AND INSURANCE
Oversee and manage loans of artworks pertaining to offsite exhibitions
Assist in organizing and overseeing all traveling exhibitions
Manage global fine art insurance policy
Manage certificates of insurance
CONSIGNMENTS & RESALES
Oversee consignments for all offsite exhibitions and general inventory, including drawing up initial consignments, managing expirations, renewals, policies, returns, and alerting appropriate staff regarding expirations and renewals
Oversee long-term consignment agreements and renewals, as well as secondary consignments
GENERAL
Oversee offsite exhibitions as required
Travel to assist with exhibitions, installations, production, and site visits as needed
Oversee production of artworks alongside the Director of Exhibition Production
Oversee commissions, including database entry, tracking and follow-up, and keeping the relevant staff and clientele informed
Oversee and generate installation manuals alongside Exhibition Manager
Minimum 3 years experience as a Registrar at a gallery, museum or arts institution
Bachelor’s degree
Outstanding interpersonal, written, and verbal communication skills
Proficiency with Microsoft Office, Adobe Creative Suite, and Filemaker Pro
Extensive familiarity with fine art handling best practices and condition reporting/conservation
Must be a supportive team player, able to address new challenges and maintain a level head under pressure
Detail-oriented with strong organizational skills
Ability to take initiative, multitask, and work graciously in a fast-paced, deadline-driven environment
Strong sense of discretion and confidentiality
Please submit your resume, cover letter, and 3 professional references to [email protected].
We will be following up with requests for interviews in the coming weeks in order to fill this position as soon as possible.
Only serious and qualified candidates will be considered.
Salary is commensurate with experience and includes excellent benefits and paid time off.
This is a full-time position, 5 days per week. The applicant must be flexible and available outside of the designated days on an as-needed basis.
Blum & Poe is committed to creating a diverse environment and is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status.
Reusing language concepts, modular languages
The optional task of Language Workbench Challenge 2013 opens the space for more advanced language designs: it emphasizes modularity of languages and the possibility of reusing parts of the DSM solution, with a very typical scenario: one language for specifying logic and another for specifying layout.
Having now in my hands the submission of LWC 2013 implementation in MetaEdit+, I played with the combined languages of QL and QLS. QL stands for Questionnaire Language (see earlier blog entry) for defining questions and their sequential order and QLS stands for Question Layout and Style for defining the visual layout of the questions. In the metamodel, implemented by my colleague, these languages (and generators) are tightly integrated.
The combination of the languages allows creating different layout options for the same questions and their logic. Consider the examples below: the questions and question logic can be the same while the layouts differ - not only in visualization but also, for example, in how the questionnaire is split into different pages/steps. Naturally the logic can differ too, as support for the variability space is built directly into the languages.
This kind of integration usually works better than keeping the logic and layout disconnected at design time or using model-to-model transformations. With this language implementation, developers using MetaEdit+ can work in parallel: some focusing on question logic and others on layout - working seamlessly with the same questionnaire design information for both logic and layout. At any point in time either group can also generate and run the questionnaires to try them out. Integrated languages also enable better reasoning, checking and tracing among the design elements.
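One way to picture the integration described here is a single model object carrying both the logic and the layout concerns, with every generator reading the same model. The sketch below is my own illustration: the class and field names are invented, not MetaEdit+'s or the actual QL/QLS metamodel.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class Question:
    name: str
    label: str
    visible_if: Optional[str] = None  # question logic (the QL concern)

@dataclass
class Questionnaire:
    questions: List[Question] = field(default_factory=list)
    # Layout (the QLS concern): which questions appear on which page/step.
    pages: Dict[str, List[str]] = field(default_factory=dict)

    def generate(self):
        # Logic and layout generators read the same model,
        # so the two concerns cannot drift apart.
        for page, names in self.pages.items():
            yield page, [q.label for q in self.questions if q.name in names]

q = Questionnaire(
    questions=[Question("hasHouse", "Do you own a house?"),
               Question("value", "Estimated house value?", visible_if="hasHouse")],
    pages={"page1": ["hasHouse", "value"]},
)
print(dict(q.generate()))  # -> {'page1': ['Do you own a house?', 'Estimated house value?']}
```

Because both concerns live in one model, a rename of a question is immediately visible to both the logic and the layout generators, which is the consistency benefit the paragraph above describes.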
Visit LanguageWorkbenches.net to see the submissions to the third challenge. The website also shows earlier years' submissions, allowing you to compare how the tools perform and implement the tasks given. I personally have not been involved in organizing these events (I just implemented one solution), but what would make me happy in future challenges would be language design tasks dealing with:
- Language evolution (so far at LWC languages have been created from scratch)
- Model evolution when a DSL/DSM solution is refined/maintained (so far there has been no interest in maintaining models made with an earlier version of the languages, although this is what happens in practice)
- Multiple language engineers (there are often multiple language engineers defining the same language)
- Scalability: large models and multiple persons use the language, multiple persons modify the language
- Different representations: not only graphical or text, but also matrices, tables, and their mixtures
While this year looked more like a framework and runtime development challenge than a language development challenge (my colleague estimated that only 20% of the effort went to the language development part in MetaEdit+), perhaps even bigger differences among the language workbenches would be visible when implementing larger languages - integrated and obviously modular. Join LWC 2013 next week to see how all the solutions work.
Mitogen-activated protein kinase/extracellular signal-regulated kinase induced gene regulation in brain: a molecular substrate for learning and memory?
The mitogen-activated protein kinase/extracellular signal-regulated kinase (ERK) pathway is an evolutionarily conserved signaling cascade involved in a plethora of physiological responses, including cell proliferation, survival, differentiation, and, in neuronal cells, synaptic plasticity. Increasing evidence now implicates this pathway in cognitive functions, such as learning and memory formation, and also in behavioral responses to addictive drugs. Although multiple intracellular substrates can be activated by ERKs, nuclear targeting of transcription factors, and thereby control of gene expression, seems to be a major event in ERK-induced neuronal adaptation. By controlling a prime burst of gene expression, ERK signaling could be critically involved in molecular adaptations that are necessary for long-term behavioral changes. Reviewed here are data providing evidence for a role of ERKs in long-term behavioral alterations, and the authors discuss molecular mechanisms that could underlie this role.
Child, adolescent, and family health social workers practice in a variety of settings, including prenatal clinics, well-baby centers, pediatric intensive care units, school-based health centers, programs for pregnant and parenting teens, and child development centers. They also practice in settings for children with chronic illnesses, disabilities, and handicapping conditions in state and local departments of public health, and in child advocacy organizations. Depending on the setting and their position, they may provide direct services, organize parents and other constituencies, administer programs, formulate policy or advocate for improved services.
The Child, Adolescent, and Family Health Subspecialization is part of the Health Specialization.
This subspecialization is available to students in both the Clinical and Macro concentrations.
Employee Assistance Program (EAP)
Chair: Jodi Frey, PhD, LCSW-C, CEAP
Contact Information: (410) 706-3607 or [email protected]
EAP Digital Archive Site
Overview
The Employee Assistance Program (EAP) Sub-Specialization is internationally recognized as the largest graduate social work program in the world dedicated to preparing social workers for the EAP field. In recent years, there has been rapid growth in the demand for human services in the workplace. EAP social workers provide clinical and macro services for employees and employers, including, but not limited to, short-term assessment and counseling for mental health and substance use, workplace and relationship stress, worker well-being, crisis intervention, and organizational change management. Additionally, EAP social workers partner with diverse work organizations, including unions, to develop and implement policies, consult with managers, and assess organizational functioning. EAPs fill a critical role in the workplace, supporting employers’ most valuable asset: their employees. The EAP sub-specialization is offered within the Behavioral Health specialization. In addition to focusing on EAPs, the sub-specialization includes learning on the topics of workplace behavioral health, work/life, well-being and management. Faculty members at the School of Social Work are recognized experts in EAP and related fields. EAP social work graduates join a rapidly expanding field and are well qualified to create and manage EAPs in private and public settings, including global programs. Employment opportunities also exist at all corporate and government levels, as well as internationally.
This subspecialization is available to students in both the Clinical and Macro concentrations.
Additional Information about the EAP Field
-- The Employee Assistance Professionals Association (EAPA) website: www.eapassn.org and The Employee Assistance Society of North America (EASNA) website: www.easna.org.
TECHNICAL FIELD
BACKGROUND
SUMMARY
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION
The present disclosure relates generally to a method and an apparatus for obtaining a heart rate.
Recently, with advances in digital technology, various electronic devices such as a mobile communication terminal, a Personal Digital Assistant (PDA), an electronic notebook, a smart phone, a tablet Personal Computer (PC), and a wearable device are widely used. To support and expand functions, hardware and/or software of the electronic device steadily improve. For example, the electronic device can include one or more sensors and collect its state or the user's biometric information using sensor data obtained from the sensor.
For example, the electronic device can measure a user's heart rate using a heart rate monitoring sensor. The heart rate monitoring sensor can include a light emitter and a light receiver, which may be optical sensors (e.g., green/red Light Emitting Diode (LED)). When the electronic device is attached to a user's body, the light emitter of the heart rate monitor can output light and the light receiver can receive the output light reflected from part of the user's body. By digitizing and arranging a quantity of the light received at the light receiver based on time, a signal indicating a particular frequency can be generated. The heart rate monitoring sensor can measure the heart rate by scanning a frequency corresponding to heartbeats from the generated signal.
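The frequency-scanning step described above can be illustrated on a synthetic signal. In the sketch below, the sample rate, noise level, and the 1.2 Hz "heartbeat" are invented values for illustration, not taken from the disclosure; the code projects the digitized light-quantity samples onto candidate frequencies and picks the strongest one in a plausible heartbeat band.

```python
import math
import random

fs = 50.0                    # sample rate in Hz (assumed value)
n = 1000                     # 20 s of "light quantity" samples
heart_hz = 1.2               # ground-truth heartbeat frequency (72 bpm) for this synthetic signal
random.seed(0)               # deterministic noise for reproducibility
signal = [math.sin(2 * math.pi * heart_hz * i / fs) + 0.3 * random.gauss(0, 1)
          for i in range(n)]

def power_at(freq):
    # Project the signal onto a single frequency (a one-bin discrete Fourier transform).
    re = sum(s * math.cos(2 * math.pi * freq * i / fs) for i, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq * i / fs) for i, s in enumerate(signal))
    return re * re + im * im

# Scan a plausible heartbeat band (0.7-3.5 Hz, i.e. roughly 42-210 bpm) in 0.05 Hz steps.
candidates = [k * 0.05 for k in range(14, 71)]
peak_hz = max(candidates, key=power_at)
print(round(peak_hz * 60))   # heart rate in bpm -> 72
```

Motion artifacts would add competing peaks to this spectrum, which is why the disclosure goes on to discuss removing movement-related frequencies before the heart rate is read off.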
For accurate heart rate measurement, the user needs to wear the heart rate monitoring sensor on his/her chest. Also, when the heart rate monitoring sensor scans the frequency and the user moves, the frequency according to the movement may affect the frequency corresponding to the heartbeat. In this case, to measure the heart rate accurately, the frequency (e.g., noise) according to the movement needs to be removed from the signal generated by the heart rate monitoring sensor. In addition, when the heart rate monitoring sensor is not closely attached to the user's body part, the signal based on the quantity of the light received at the light receiver can be considerably unstable. As a result, an accurate heart rate may not be attained merely by removing various noises in the heart rate measurement.
According to one aspect of the present disclosure, an electronic device can include a motion sensor, a heart rate monitor sensor, and a processor functionally coupled with the motion sensor and the heart rate monitor sensor. The processor can be configured to obtain first motion sensor data for a first duration using the motion sensor, to obtain first heartbeat data for the first duration using the heart rate monitor sensor, to determine an exercise type based on the first motion sensor data, to determine a heartbeat prediction range based on at least one of the first motion sensor data, the exercise type, and the first heartbeat data, to obtain second heartbeat data for a second duration using the heart rate monitor sensor, to determine whether the second heartbeat data falls within the heartbeat prediction range, and to determine heartbeat data of the second duration based on the determination result.
According to another aspect of the present disclosure, a method for operating an electronic device which includes a motion sensor and a heart rate monitor sensor, can include obtaining first motion sensor data for a first duration using the motion sensor, and obtaining first heartbeat data for the first duration using the heart rate monitor sensor, determining an exercise type based on the first motion sensor data, determining a heartbeat prediction range based on at least one of the first motion sensor data, the exercise type, and the first heartbeat data, determining whether second heartbeat data obtained for a second duration falls within the heartbeat prediction range, and determining heartbeat data of the second duration based on the determination result.
According to yet another aspect of the present disclosure, a computer-readable recording medium can include a program for obtaining first motion sensor data for a first duration using the motion sensor, and obtaining first heartbeat data for the first duration using the heart rate monitor sensor, determining an exercise type based on the first motion sensor data, determining a heartbeat prediction range based on at least one of the first motion sensor data, the exercise type, and the first heartbeat data, determining whether second heartbeat data obtained for a second duration falls within the heartbeat prediction range, and determining heartbeat data of the second duration based on the determination result.
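The claimed flow — classify the exercise type from motion-sensor data, derive a heartbeat prediction range, then accept or correct the next reading — can be sketched as follows. The thresholds and per-exercise deltas below are invented for illustration and are not taken from the disclosure.

```python
def classify_exercise(accel_magnitudes):
    """Guess an exercise type from average motion-sensor magnitude (made-up thresholds)."""
    avg = sum(accel_magnitudes) / len(accel_magnitudes)
    if avg < 1.1:
        return "rest"
    if avg < 1.8:
        return "walking"
    return "running"

# Illustrative bounds on how far the heart rate may plausibly move per window (bpm).
MAX_DELTA = {"rest": 5, "walking": 10, "running": 20}

def heartbeat_for_second_duration(first_hr, exercise, second_hr):
    """Accept the new reading if it falls inside the prediction range, else clamp to it."""
    delta = MAX_DELTA[exercise]
    low, high = first_hr - delta, first_hr + delta
    return min(max(second_hr, low), high)

exercise = classify_exercise([1.4, 1.5, 1.6])             # -> "walking"
print(heartbeat_for_second_duration(110, exercise, 170))  # outside 100-120 -> 120
print(heartbeat_for_second_duration(110, exercise, 115))  # inside the range -> 115
```

The clamping here is just one possible policy for "determining heartbeat data based on the determination result"; an implementation could equally discard the out-of-range sample or re-measure.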
According to various embodiments, using the user's exercise information and the sensor information, the heart rate measured by the heart rate monitor sensor can be corrected and thus a more accurate heart rate can be attained.
According to various embodiments, the calories based on the heart rate can be calculated by acquiring the accurate heart rate through the heart rate correction.
According to various embodiments, by acquiring the accurate heart rate based on the user motion, various information can be provided using the heart rate.
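As one illustration of calorie computation from heart rate, the sketch below applies the published Keytel et al. (2005) regression on heart rate, body weight, age, and sex. The disclosure does not name a particular formula, so this choice of model is an assumption for illustration:

```python
def calories_per_minute(heart_rate, weight_kg, age, male=True):
    """Estimate kcal/min from heart rate using the Keytel et al. (2005)
    regression (an illustrative choice; the disclosure does not specify
    which energy-expenditure model is used)."""
    if male:
        kj_per_min = (-55.0969 + 0.6309 * heart_rate
                      + 0.1988 * weight_kg + 0.2017 * age)
    else:
        kj_per_min = (-20.4022 + 0.4472 * heart_rate
                      - 0.1263 * weight_kg + 0.0740 * age)
    return max(kj_per_min, 0.0) / 4.184  # convert kJ/min to kcal/min

# With corrected per-minute heart rates, accumulate calories over a workout:
burned = sum(calories_per_minute(hr, 75, 30, male=True)
             for hr in [120, 135, 150])
```

Because the regression is driven directly by heart rate, the correction described above propagates straight into a more accurate calorie figure.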
Other aspects and salient features of the disclosure will become apparent to those skilled in the art from the following detailed description, which, taken in conjunction with the annexed drawings, discloses example embodiments of the disclosure.
FIG. 1 is a block diagram of an electronic device in a network according to various embodiments;
FIG. 2 is a block diagram of an electronic device according to various embodiments;
FIG. 3 is a block diagram of a program module according to various embodiments;
FIG. 4 is a block diagram of an electronic device according to various embodiments;
FIG. 5 is a flowchart of an operating method of an electronic device according to various embodiments;
FIGS. 6A and 6B are diagrams of heartbeat data predicted in an electronic device according to various embodiments;
FIGS. 7A, 7B, 7C, and 7D are diagrams of heartbeat data measured variously according to an exercise type according to various embodiments;
FIG. 8 is a diagram of a heartbeat prediction range determined based on an exercise type according to various embodiments;
FIG. 9 is a flowchart of a method for determining a heartbeat prediction range in an electronic device according to various embodiments; and
FIG. 10 is a flowchart of a method for providing information using heartbeat data of an electronic device according to various embodiments.
The above and other aspects and features of certain example embodiments of the present disclosure will be more apparent from the following description taken in conjunction with the accompanying drawings.
Throughout the drawings, like reference numerals will be understood to refer to like parts, components and structures.
Hereinafter, various embodiments of the present disclosure will be described with reference to the accompanying drawings. However, it should be understood that there is no intent to limit the present disclosure to the particular forms disclosed herein; rather, the present disclosure should be construed to cover various modifications, equivalents, and/or alternatives of embodiments of the present disclosure. In describing the drawings, similar reference numerals may be used to designate similar constituent elements.

As used herein, the expression "have", "may have", "include", or "may include" refers to the existence of a corresponding feature (e.g., numeral, function, operation, or constituent element such as a component), and does not exclude one or more additional features.

In the present disclosure, the expression "A or B", "at least one of A or/and B", or "one or more of A or/and B" may include all possible combinations of the items listed. For example, the expression "A or B", "at least one of A and B", or "at least one of A or B" refers to all of (1) including at least one A, (2) including at least one B, or (3) including all of at least one A and at least one B.

The expression "a first", "a second", "the first", or "the second" used in various embodiments of the present disclosure may modify various components regardless of the order and/or the importance, but does not limit the corresponding components. For example, a first user device and a second user device indicate different user devices although both of them are user devices. For example, a first element may be termed a second element, and similarly, a second element may be termed a first element, without departing from the present disclosure.
It should be understood that when an element (e.g., a first element) is referred to as being (operatively or communicatively) "connected" or "coupled" to another element (e.g., a second element), it may be directly connected or coupled to the other element, or any other element (e.g., a third element) may be interposed between them. In contrast, it may be understood that when an element (e.g., a first element) is referred to as being "directly connected" or "directly coupled" to another element (a second element), there is no element (e.g., a third element) interposed between them.
The expression "configured to" used in the present disclosure may be exchanged with, for example, "suitable for", "having the capacity to", "designed to", "adapted to", "made to", or "capable of" according to the situation. The term "configured to" may not necessarily imply "specifically designed to" in hardware. Alternatively, in some situations, the expression "device configured to" may mean that the device, together with other devices or components, "is able to". For example, the phrase "processor adapted (or configured) to perform A, B, and C" may mean a dedicated processor (e.g., an embedded processor) only for performing the corresponding operations, or a generic-purpose processor (e.g., a Central Processing Unit (CPU) or an Application Processor (AP)) that can perform the corresponding operations by executing one or more software programs stored in a memory device.
The terms used in the present disclosure are only used to describe specific embodiments, and are not intended to limit the present disclosure. As used herein, singular forms may include plural forms as well unless the context clearly indicates otherwise. Unless defined otherwise, all terms used herein, including technical and scientific terms, have the same meaning as those commonly understood by a person skilled in the art to which the present disclosure pertains. Such terms as those defined in a generally used dictionary may be interpreted to have the meanings equal to the contextual meanings in the relevant field of art, and are not to be interpreted to have ideal or excessively formal meanings unless clearly defined in the present disclosure. In some cases, even the term defined in the present disclosure should not be interpreted to exclude embodiments of the present disclosure.
An electronic device according to various embodiments of the present disclosure may include at least one of, for example, a smart phone, a tablet Personal Computer (PC), a mobile phone, a video phone, an electronic book reader (e-book reader), a desktop PC, a laptop PC, a netbook computer, a workstation, a server, a Personal Digital Assistant (PDA), a Portable Multimedia Player (PMP), an MPEG-1 audio layer-3 (MP3) player, a mobile medical device, a camera, and a wearable device. According to various embodiments, the wearable device may include at least one of an accessory type (e.g., a watch, a ring, a bracelet, an anklet, a necklace, glasses, a contact lens, or a Head-Mounted Device (HMD)), a fabric or clothing integrated type (e.g., electronic clothing), a body-mounted type (e.g., a skin pad or a tattoo), and a bio-implantable type (e.g., an implantable circuit), and may therefore include straps, buckles, clasps, slings, locks, or any other attachment which may secure the device to a user's body. According to some embodiments, the electronic device may be a home appliance. The home appliance may include at least one of, for example, a television, a Digital Video Disk (DVD) player, an audio system, a refrigerator, an air conditioner, a vacuum cleaner, an oven, a microwave oven, a washing machine, an air cleaner, a set-top box, a home automation control panel, a security control panel, a TV box (e.g., Samsung HomeSync™, Apple TV™, or Google TV™), a game console (e.g., Xbox™ and PlayStation™), an electronic dictionary, an electronic key, a camcorder, and an electronic photo frame.
According to another embodiment, the electronic device may include at least one of various medical devices (e.g., various portable medical measuring devices (a blood glucose monitoring device, a heart rate monitoring device, a blood pressure measuring device, a body temperature measuring device, etc.), a Magnetic Resonance Angiography (MRA) machine, a Magnetic Resonance Imaging (MRI) machine, a Computed Tomography (CT) machine, and an ultrasonic machine), a navigation device, a Global Positioning System (GPS) receiver, an Event Data Recorder (EDR), a Flight Data Recorder (FDR), a vehicle infotainment device, electronic devices for a ship (e.g., a navigation device for a ship, and a gyro-compass), avionics, security devices, an automotive head unit, a robot for home or industry, an Automatic Teller Machine (ATM) of a bank, a Point Of Sales (POS) terminal of a shop, or an Internet of Things (IoT) device (e.g., a light bulb, various sensors, an electric or gas meter, a sprinkler device, a fire alarm, a thermostat, a streetlamp, a toaster, sporting goods, a hot water tank, a heater, a boiler, etc.).
According to some embodiments, the electronic device may include at least one of a part of furniture or a building/structure, an electronic board, an electronic signature receiving device, a projector, and various kinds of measuring instruments (e.g., a water meter, an electric meter, a gas meter, and a radio wave meter). The electronic device according to various embodiments of the present disclosure may be a combination of one or more of the aforementioned various devices. The electronic device according to some embodiments of the present disclosure may be a flexible device. Further, the electronic device according to an embodiment of the present disclosure is not limited to the aforementioned devices, and may include a new electronic device according to the development of technology. Hereinafter, an electronic device according to various embodiments will be described with reference to the accompanying drawings. As used herein, the term "user" may indicate a person who uses an electronic device or a device (e.g., an artificial intelligence electronic device) that uses an electronic device.
FIG. 1 illustrates a network environment including an electronic device according to various embodiments of the present disclosure.
An electronic device 101 within a network environment 100, according to various embodiments, will be described with reference to FIG. 1. The electronic device 101 may include a bus 110, a processor 120, a memory 130, an input/output interface 150, a display 160, and a communication interface 170. According to an embodiment of the present disclosure, the electronic device 101 may omit at least one of the above components or may further include other components.
The bus 110 may include, for example, a circuit which interconnects the components 110 to 170 and delivers a communication (e.g., a control message and/or data) between the components 110 to 170.
The processor 120 may include one or more of a Central Processing Unit (CPU), an Application Processor (AP), and a Communication Processor (CP). The processor 120 may carry out, for example, calculation or data processing relating to control and/or communication of at least one other component of the electronic device 101.
The memory 130 may include a volatile memory and/or a non-volatile memory. The memory 130 may store, for example, commands or data relevant to at least one other component of the electronic device 101. According to an embodiment of the present disclosure, the memory 130 may store software and/or a program 140. The program 140 may include, for example, a kernel 141, middleware 143, an Application Programming Interface (API) 145, and/or application programs (or "applications") 147. At least some of the kernel 141, the middleware 143, and the API 145 may be referred to as an Operating System (OS).
The kernel 141 may control or manage system resources (e.g., the bus 110, the processor 120, or the memory 130) used for performing an operation or function implemented in the other programs (e.g., the middleware 143, the API 145, or the application programs 147). Furthermore, the kernel 141 may provide an interface through which the middleware 143, the API 145, or the application programs 147 may access the individual components of the electronic device 101 to control or manage the system resources.
The middleware 143, for example, may serve as an intermediary for allowing the API 145 or the application programs 147 to communicate with the kernel 141 to exchange data. Also, the middleware 143 may process one or more task requests received from the application programs 147 according to priorities thereof. For example, the middleware 143 may assign priorities for using the system resources (e.g., the bus 110, the processor 120, the memory 130, or the like) of the electronic device 101, to at least one of the application programs 147. For example, the middleware 143 may perform scheduling or loading balancing on the one or more task requests by processing the one or more task requests according to the priorities assigned thereto.
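The priority-based handling of task requests described above can be sketched with a minimal priority queue. The numeric priority convention (a lower value runs first) and the queue discipline are assumptions for illustration, not the actual middleware implementation:

```python
import heapq

class TaskScheduler:
    """Toy middleware-style scheduler: each submitted task carries a
    priority, and run_next() always executes the highest-priority task
    (here, the lowest number; this convention is an assumption)."""

    def __init__(self):
        self._queue = []
        self._counter = 0  # tie-breaker keeps FIFO order within a priority

    def submit(self, priority, task):
        heapq.heappush(self._queue, (priority, self._counter, task))
        self._counter += 1

    def run_next(self):
        if not self._queue:
            return None
        _, _, task = heapq.heappop(self._queue)
        return task()

sched = TaskScheduler()
sched.submit(2, lambda: "render UI")
sched.submit(0, lambda: "handle sensor interrupt")
print(sched.run_next())  # prints: handle sensor interrupt
```

Load balancing would then amount to distributing the queued tasks across the available system resources rather than draining a single queue.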
The API 145 is an interface through which the applications 147 control functions provided from the kernel 141 or the middleware 143, and may include, for example, at least one interface or function (e.g., instruction) for file control, window control, image processing, character control, and the like.
The input/output interface 150, for example, may function as an interface that may transfer commands or data input from a user or another external device to the other element(s) of the electronic device 101. Furthermore, the input/output interface 150 may output the commands or data received from the other element(s) of the electronic device 101 to the user or another external device.
Examples of the display 160 may include a Liquid Crystal Display (LCD), a Light-Emitting Diode (LED) display, an Organic Light-Emitting Diode (OLED) display, a MicroElectroMechanical Systems (MEMS) display, and an electronic paper display. The display 160 may display, for example, various types of contents (e.g., text, images, videos, icons, or symbols) to users. The display 160 may include a touch screen, and may receive, for example, a touch, gesture, proximity, or hovering input using an electronic pen or a user's body part.
The communication interface 170 may establish communication, for example, between the electronic device 101 and an external device (e.g., a first external electronic device 102, a second external electronic device 104, or a server 106). For example, the communication interface 170 may be connected to a network 162 through wireless or wired communication, and may communicate with an external device (e.g., the second external electronic device 104 or the server 106). The wireless communication may use at least one of, for example, Long Term Evolution (LTE), LTE-Advance (LTE-A), Code Division Multiple Access (CDMA), Wideband CDMA (WCDMA), Universal Mobile Telecommunications System (UMTS), Wireless Broadband (WiBro), and Global System for Mobile Communications (GSM), as a cellular communication protocol. In addition, the wireless communication may include, for example, short-range communication 164.
The short-range communication 164 may include at least one of, for example, Wi-Fi, Bluetooth, Near Field Communication (NFC), and Global Navigation Satellite System (GNSS). GNSS may include, for example, at least one of global positioning system (GPS), global navigation satellite system (Glonass), Beidou Navigation satellite system (Beidou) or Galileo, and the European global satellite-based navigation system, based on a location, a bandwidth, or the like. Hereinafter, in the present disclosure, the "GPS" may be interchangeably used with the "GNSS". The wired communication may include, for example, at least one of a Universal Serial Bus (USB), a High Definition Multimedia Interface (HDMI), Recommended Standard 232 (RS-232), and a Plain Old Telephone Service (POTS). The network 162 may include at least one of a telecommunication network such as a computer network (e.g., a LAN or a WAN), the Internet, and a telephone network.
Each of the first and second external electronic devices 102 and 104 may be of a type identical to or different from that of the electronic device 101. According to an embodiment of the present disclosure, the server 106 may include a group of one or more servers. According to various embodiments of the present disclosure, all or some of the operations performed in the electronic device 101 may be executed in another electronic device or a plurality of electronic devices (e.g., the electronic devices 102 and 104 or the server 106). According to an embodiment of the present disclosure, when the electronic device 101 has to perform some functions or services automatically or in response to a request, the electronic device 101 may request another device (e.g., the electronic device 102 or 104 or the server 106) to execute at least some functions relating thereto instead of or in addition to autonomously performing the functions or services. Another electronic device (e.g., the electronic device 102 or 104, or the server 106) may execute the requested functions or the additional functions, and may deliver a result of the execution to the electronic device 101. The electronic device 101 may process the received result as it is or additionally, and may provide the requested functions or services. To this end, for example, cloud computing, distributed computing, or client-server computing technologies may be used.
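The function-offloading flow described above reduces to a small decision sketch. The `remote_executor` hook below stands in for the actual device-to-device or device-to-server communication (e.g., an RPC), which is not specified in the disclosure and is an assumption here:

```python
def execute(task, local_capable, remote_executor):
    """Run the task locally when possible; otherwise delegate it to
    another electronic device or a server and use the delivered result
    (a sketch of the offloading flow; remote_executor is a hypothetical
    hook for the device-to-device/server communication)."""
    if local_capable:
        return task()
    # e.g., request the electronic device 102/104 or the server 106 to
    # execute the function; the received result may be used as-is or
    # processed additionally before providing the service.
    return remote_executor(task)
```

A trivial usage: `execute(lambda: 1 + 1, False, lambda t: t())` delegates the task to the supplied executor and returns its result.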
FIG. 2 is a block diagram of an electronic device according to various embodiments of the present disclosure.
The electronic device 201 may include, for example, all or a part of the electronic device 101 shown in FIG. 1. The electronic device 201 may include one or more processors 210 (e.g., Application Processors (AP)), a communication module 220, a memory 230, a sensor module 240, an input device 250, a display 260, an interface 270, an audio module 280, a camera module 291, a power management module 295, a battery 296, an indicator 297, and a motor 298.
The processor 210 may control a plurality of hardware or software components connected to the processor 210 by driving an operating system or an application program, and perform processing of various pieces of data and calculations. The processor 210 may be embodied as, for example, a System on Chip (SoC). According to an embodiment of the present disclosure, the processor 210 may further include a Graphic Processing Unit (GPU) and/or an image signal processor. The processor 210 may include at least some (for example, a cellular module 221) of the components illustrated in FIG. 2. The processor 210 may load, into a volatile memory, commands or data received from at least one (e.g., a non-volatile memory) of the other components and may process the loaded commands or data, and may store various data in a non-volatile memory.
The communication module 220 may have a configuration equal or similar to that of the communication interface 170 of FIG. 1. The communication module 220 may include, for example, a cellular module 221, a Wi-Fi module 223, a BT module 225, a GNSS module 227 (e.g., a GPS module, a Glonass module, a Beidou module, or a Galileo module), an NFC module 228, and a Radio Frequency (RF) module 229. The cellular module 221, for example, may provide a voice call, a video call, a text message service, or an Internet service through a communication network. According to an embodiment of the present disclosure, the cellular module 221 may distinguish and authenticate the electronic device 201 in a communication network using a subscriber identification module 224 (e.g., a SIM card). According to an embodiment of the present disclosure, the cellular module 221 may perform at least some of the functions that the AP 210 may provide. According to an embodiment of the present disclosure, the cellular module 221 may include a Communication Processor (CP).
For example, each of the Wi-Fi module 223, the BT module 225, the GNSS module 227, and the NFC module 228 may include a processor for processing data transmitted/received through a corresponding module. According to an embodiment of the present disclosure, at least some (e.g., two or more) of the cellular module 221, the Wi-Fi module 223, the BT module 225, the GNSS module 227, and the NFC module 228 may be included in one Integrated Chip (IC) or IC package. The RF module 229, for example, may transmit/receive a communication signal (e.g., an RF signal). The RF module 229 may include, for example, a transceiver, a Power Amplifier Module (PAM), a frequency filter, a Low Noise Amplifier (LNA), and an antenna. According to another embodiment of the present disclosure, at least one of the cellular module 221, the Wi-Fi module 223, the BT module 225, the GNSS module 227, and the NFC module 228 may transmit/receive an RF signal through a separate RF module. The subscriber identification module 224 may include, for example, a card including a subscriber identity module and/or an embedded SIM, and may contain unique identification information (e.g., an Integrated Circuit Card Identifier (ICCID)) or subscriber information (e.g., an International Mobile Subscriber Identity (IMSI)).
The memory 230 (e.g., the memory 130) may include, for example, an embedded memory 232 or an external memory 234. The embedded memory 232 may include at least one of a volatile memory (e.g., a Dynamic Random Access Memory (DRAM), a Static RAM (SRAM), a Synchronous Dynamic RAM (SDRAM), and the like) and a non-volatile memory (e.g., a One Time Programmable Read Only Memory (OTPROM), a Programmable ROM (PROM), an Erasable and Programmable ROM (EPROM), an Electrically Erasable and Programmable ROM (EEPROM), a mask ROM, a flash ROM, a flash memory (e.g., a NAND flash memory or a NOR flash memory), a hard disc drive, a Solid State Drive (SSD), and the like). The external memory 234 may further include a flash drive, for example, a Compact Flash (CF), a Secure Digital (SD), a Micro Secure Digital (Micro-SD), a Mini Secure Digital (Mini-SD), an eXtreme Digital (xD), a MultiMediaCard (MMC), a memory stick, or the like. The external memory 234 may be functionally and/or physically connected to the electronic device 201 through various interfaces.
The sensor module 240, for example, may measure a physical quantity or detect an operation state of the electronic device 201, and may convert the measured or detected information into an electrical signal. The sensor module 240 may include, for example, at least one of a gesture sensor 240A, a gyro sensor 240B, an atmospheric pressure sensor (barometer) 240C, a magnetic sensor 240D, an acceleration sensor 240E, a grip sensor 240F, a proximity sensor 240G, a color sensor 240H (e.g., red, green, and blue (RGB) sensor), a biometric sensor (medical sensor) 240I, a temperature/humidity sensor 240J, an illuminance sensor 240K, and an Ultraviolet (UV) sensor 240M. Additionally or alternatively, the sensor module 240 may include, for example, an E-nose sensor, an electromyography (EMG) sensor, an electroencephalogram (EEG) sensor, an electrocardiogram (ECG) sensor, an Infrared (IR) sensor, an iris scan sensor, and/or a finger scan sensor. The sensor module 240 may further include a control circuit for controlling one or more sensors included therein. According to an embodiment of the present disclosure, the electronic device 201 may further include a processor configured to control the sensor module 240, as a part of the processor 210 or separately from the processor 210, and may control the sensor module 240 while the processor 210 is in a sleep state.
The input device 250 may include, for example, a touch panel 252, a (digital) pen sensor 254, a key 256, or an ultrasonic input device 258. The touch panel 252 may use, for example, at least one of a capacitive type, a resistive type, an infrared type, and an ultrasonic type. The touch panel 252 may further include a control circuit. The touch panel 252 may further include a tactile layer, and provide a tactile reaction to the user. The (digital) pen sensor 254 may include, for example, a recognition sheet which is a part of the touch panel or is separated from the touch panel. The key 256 may include, for example, a physical button, an optical key or a keypad. The ultrasonic input device 258 may detect, through a microphone (e.g., the microphone 288), ultrasonic waves generated by an input tool, and identify data corresponding to the detected ultrasonic waves.
The display 260 (e.g., the display 160) may include a panel 262, a hologram device 264, or a projector 266. The panel 262 may include a configuration identical or similar to the display 160 illustrated in FIG. 1. The panel 262 may be implemented to be, for example, flexible, transparent, or wearable. The panel 262 may be embodied as a single module with the touch panel 252. The hologram device 264 may show a three dimensional (3D) image in the air by using an interference of light. The projector 266 may project light onto a screen to display an image. The screen may be located, for example, in the interior of or on the exterior of the electronic device 201. According to an embodiment of the present disclosure, the display 260 may further include a control circuit for controlling the panel 262, the hologram device 264, or the projector 266.
The interface 270 may include, for example, a High-Definition Multimedia Interface (HDMI) 272, a Universal Serial Bus (USB) 274, an optical interface 276, or a D-subminiature (D-sub) 278. The interface 270 may be included in, for example, the communication interface 170 illustrated in FIG. 1. Additionally or alternatively, the interface 270 may include, for example, a Mobile High-definition Link (MHL) interface, a Secure Digital (SD) card/Multi-Media Card (MMC) interface, or an Infrared Data Association (IrDA) standard interface.
The audio module 280, for example, may bilaterally convert a sound and an electrical signal. At least some components of the audio module 280 may be included in, for example, the input/output interface 150 illustrated in FIG. 1. The audio module 280 may process voice information input or output through, for example, a speaker 282, a receiver 284, earphones 286, or the microphone 288. The camera module 291 is, for example, a device which may photograph a still image and a video. According to an embodiment of the present disclosure, the camera module 291 may include one or more image sensors (e.g., a front sensor or a back sensor), a lens, an Image Signal Processor (ISP), or a flash (e.g., an LED or a xenon lamp).
The power management module 295 may manage, for example, power of the electronic device 201. According to an embodiment of the present disclosure, the power management module 295 may include a Power Management Integrated Circuit (PMIC), a charger Integrated Circuit (IC), or a battery or fuel gauge. The PMIC may use a wired and/or wireless charging method. Examples of the wireless charging method may include, for example, a magnetic resonance method, a magnetic induction method, an electromagnetic wave method, and the like. Additional circuits (e.g., a coil loop, a resonance circuit, a rectifier, etc.) for wireless charging may be further included. The battery gauge may measure, for example, a residual quantity of the battery 296, and a voltage, a current, or a temperature while charging. The battery 296 may include, for example, a rechargeable battery and/or a solar battery.
The indicator 297 may display a particular state (e.g., a booting state, a message state, a charging state, or the like) of the electronic device 201 or a part (e.g., the processor 210) of the electronic device 201. The motor 298 may convert an electrical signal into a mechanical vibration, and may generate a vibration, a haptic effect, or the like. Although not illustrated, the electronic device 201 may include a processing device (e.g., a GPU) for supporting a mobile TV. The processing device for supporting a mobile TV may process, for example, media data according to a certain standard such as Digital Multimedia Broadcasting (DMB), Digital Video Broadcasting (DVB), or mediaFLO™.
Each of the above-described component elements of hardware according to the present disclosure may be configured with one or more components, and the names of the corresponding component elements may vary based on the type of electronic device. In various embodiments, the electronic device may include at least one of the above-described elements. Some of the above-described elements may be omitted from the electronic device, or the electronic device may further include additional elements. Also, some of the hardware components according to various embodiments may be combined into one entity, which may perform functions identical to those of the relevant components before the combination.
FIG. 3 is a block diagram of a program module according to various embodiments of the present disclosure.
According to an embodiment of the present disclosure, the program module 310 (e.g., the program 140) may include an Operating System (OS) for controlling resources related to the electronic device (e.g., the electronic device 101) and/or various applications (e.g., the application programs 147) executed in the operating system. The operating system may be, for example, Android™, iOS™, Windows™, Symbian™, Tizen™, Bada™, or the like. The program module 310 may include a kernel 320, middleware 330, an API 360, and/or applications 370. At least some of the program module 310 may be preloaded on an electronic device, or may be downloaded from an external electronic device (e.g., the electronic device 102 or 104, or the server 106).
The kernel 320 (e.g., the kernel 141) may include, for example, a system resource manager 321 and/or a device driver 323. The system resource manager 321 may control, allocate, or collect system resources. According to an embodiment of the present disclosure, the system resource manager 321 may include a process management unit, a memory management unit, a file system management unit, and the like. The device driver 323 may include, for example, a display driver, a camera driver, a Bluetooth driver, a shared memory driver, a USB driver, a keypad driver, a Wi-Fi driver, an audio driver, or an InterProcess Communication (IPC) driver.
For example, the middleware 330 may provide a function utilized in common by the applications 370, or may provide various functions to the applications 370 through the API 360 so as to enable the applications 370 to efficiently use the limited system resources in the electronic device. According to an embodiment of the present disclosure, the middleware 330 (e.g., the middleware 143) may include at least one of a runtime library 335, an application manager 341, a window manager 342, a multimedia manager 343, a resource manager 344, a power manager 345, a database manager 346, a package manager 347, a connectivity manager 348, a notification manager 349, a location manager 350, a graphic manager 351, and a security manager 352.
The runtime library 335 may include a library module that a compiler uses in order to add a new function through a programming language while an application 370 is being executed. The runtime library 335 may perform input/output management, memory management, the functionality for an arithmetic function, or the like.
The application manager 341 may manage, for example, a life cycle of at least one of the applications 370. The window manager 342 may manage Graphical User Interface (GUI) resources used by a screen. The multimedia manager 343 may recognize a format utilized for reproduction of various media files, and may perform encoding or decoding of a media file by using a codec suitable for the corresponding format. The resource manager 344 may manage resources of a source code, a memory, and a storage space of at least one of the applications 370.
The power manager 345 may operate together with, for example, a Basic Input/Output System (BIOS) or the like to manage a battery or power source and may provide power information or the like utilized for the operations of the electronic device. The database manager 346 may generate, search for, and/or change a database to be used by at least one of the applications 370. The package manager 347 may manage installation or an update of an application distributed in a form of a package file.
For example, the connectivity manager 348 may manage wireless connectivity such as Wi-Fi or Bluetooth. The notification manager 349 may display or notify of an event such as a message arrival, an appointment, a proximity notification, and the like in such a way that does not disturb a user. The location manager 350 may manage location information of an electronic device. The graphic manager 351 may manage a graphic effect which will be provided to a user, or a user interface related to the graphic effect. The security manager 352 may provide all security functions utilized for system security, user authentication, or the like. According to an embodiment of the present disclosure, when the electronic device (e.g., the electronic device 101) has a telephone call function, the middleware 330 may further include a telephony manager for managing a voice call function or a video call function of the electronic device.
The middleware 330 may include a middleware module that forms a combination of various functions of the above-described components. The middleware 330 may provide a module specialized for each type of OS in order to provide a differentiated function. Further, the middleware 330 may dynamically remove some of the existing components or add new components.
The API 360 (e.g., the API 145) is, for example, a set of API programming functions, and may be provided with a different configuration according to an OS. For example, in the case of Android or iOS, one API set may be provided for each platform. In the case of Tizen, two or more API sets may be provided for each platform.
The applications 370 (e.g., the application programs 147) may include, for example, one or more applications which may provide functions such as a home 371, a dialer 372, an SMS/MMS 373, an Instant Message (IM) 374, a browser 375, a camera 376, an alarm 377, contacts 378, a voice dial 379, an email 380, a calendar 381, a media player 382, an album 383, a clock 384, health care (e.g., measuring exercise quantity or blood sugar), or environment information (e.g., providing atmospheric pressure, humidity, or temperature information).
According to an embodiment of the present disclosure, the applications 370 may include an application (hereinafter, referred to as an "information exchange application" for convenience of description) that supports exchanging information between the electronic device (e.g., the electronic device 101) and an external electronic device (e.g., the electronic device 102 or 104). The information exchange application may include, for example, a notification relay application for transferring specific information to an external electronic device or a device management application for managing an external electronic device.
For example, the notification relay application may include a function of transferring, to the external electronic device (e.g., the electronic device 102 or 104), notification information generated from other applications of the electronic device 101 (e.g., an SMS/MMS application, an e-mail application, a health management application, or an environmental information application). Further, the notification relay application may receive notification information from, for example, an external electronic device and provide the received notification information to a user.
The device management application may manage (e.g., install, delete, or update), for example, at least one function of an external electronic device (e.g., the electronic device 102 or 104) communicating with the electronic device (e.g., a function of turning on/off the external electronic device itself (or some components) or a function of adjusting the brightness (or a resolution) of the display), applications operating in the external electronic device, and services provided by the external electronic device (e.g., a call service or a message service).
According to an embodiment of the present disclosure, the applications 370 may include applications (e.g., a health care application of a mobile medical appliance or the like) designated according to an external electronic device (e.g., attributes of the electronic device 102 or 104). According to an embodiment of the present disclosure, the applications 370 may include an application received from an external electronic device (e.g., the server 106, or the electronic device 102 or 104). According to an embodiment of the present disclosure, the applications 370 may include a preloaded application or a third party application that may be downloaded from a server. The names of the components of the program module 310 of the illustrated embodiment of the present disclosure may change according to the type of operating system.
According to various embodiments, at least a part of the programming module 310 may be implemented in software, firmware, hardware, or a combination of two or more thereof. At least some of the program module 310 may be implemented (e.g., executed) by, for example, the processor (e.g., the processor 210). At least some of the program module 310 may include, for example, a module, a program, a routine, a set of instructions, and/or a process for performing one or more functions.
The term "module" as used herein may, for example, mean a unit including one of hardware, software, and firmware or a combination of two or more of them. The "module" may be interchangeably used with, for example, the term "unit", "logic", "logical block", "component", or "circuit". The "module" may be a minimum unit of an integrated component element or a part thereof. The "module" may be a minimum unit for performing one or more functions or a part thereof. The "module" may be mechanically or electronically implemented. For example, the "module" according to the present disclosure may include at least one of an Application-Specific Integrated Circuit (ASIC) chip, a Field-Programmable Gate Array (FPGA), and a programmable-logic device for performing operations which are known or are to be developed hereinafter. According to various embodiments, at least some of the devices (for example, modules or functions thereof) or the method (for example, operations) according to the present disclosure may be implemented by a command stored in a computer-readable storage medium in a programming module form. The instruction, when executed by a processor (e.g., the processor 120), may cause the processor to execute the function corresponding to the instruction. The computer-readable recording medium may be, for example, the memory 130.
FIG. 4 is a block diagram of an electronic device according to various embodiments.
Referring to FIG. 4, the electronic device 400 (e.g., the electronic device 101) can include a sensor module 410, a memory 420, a display 430, a communication module 440, and a processor 450. The memory 420 can be included in the processor 450, or can be disposed outside the processor 450 and functionally coupled with the processor 450. The electronic device 400 can be worn on a user's body to detect a user's motion, like a wearable device. Alternatively, the electronic device 400 can receive sensor data from a wearable device including the sensor module 410 and predict heartbeat data (or heart rate).
The sensor module 410 can continuously or periodically sense information measured or detected according to the user's motion. The sensor module 410 can include at least one of a heart rate monitor sensor 411, a motion sensor 413, and an air pressure sensor 415. Such a sensor module 410 can be the sensor module 240 of FIG. 2. Accordingly, the sensor module 410 can further include other sensors not depicted in FIG. 4. The sensor module 410 can send the measured or detected sensor data (e.g., acceleration data, air pressure data) or heart rate data to the processor 450.
The heart rate monitor sensor 411 can measure the user's heartbeat data. For example, the heart rate monitor sensor 411 can include at least one of an optical sensor, an Electrocardiogram (ECG) sensor, and a Photoplethysmography (PPG) sensor. When the heart rate monitor sensor 411 is the optical sensor, the heart rate monitor sensor 411 can include a light emitter and a light receiver. The light emitter can include at least one of Infrared (IR), a red Light Emitting Diode (LED), a green LED, and a blue LED, and the light receiver can include a photodiode. When the electronic device 400 is attached to the user's body, the light emitter of the heart rate monitor sensor 411 can output the light, and the light receiver can detect the output light reflected by at least part of the user's body. For example, to determine the user's blood flow variance, the light can penetrate deeper than the user's skin (e.g., to a blood vessel) and then be reflected. The heart rate monitor sensor 411 can digitize light amounts detected by the light receiver, arrange them in sequence, and thus generate a signal. The heart rate monitor sensor 411 can send the generated signal to the processor 450. Notably, the generated signal can include various noises in addition to a frequency measured by the user's heartbeat.
Alternatively, the PPG sensor can utilize changes in the light absorption and reflection according to variance of a blood vessel thickness based on the heartbeat. When the heart rate monitor sensor 411 is the PPG sensor, the heart rate monitor sensor 411 can include a light emitter which emits IR, and a light receiver which detects the light emitted to and reflected from the user's body. The heart rate monitor sensor 411 can detect a PPG signal from the changes of the blood flow volume optically detected by the light receiver based on time.
The motion sensor 413 can include an acceleration sensor or a gyro sensor. For example, the acceleration sensor measures acceleration on x, y, and z axes, and can predict a force exerted on the electronic device using the measured acceleration. For example, when the acceleration sensor detects no motion, a value corresponding to gravitational acceleration is produced. When the acceleration sensor detects a motion, vibrations in a movement direction can be exhibited as variance of the force, that is, variance of the acceleration. An acceleration change pattern varies according to an exercise type of the user, and a unique pattern can emerge per exercise. The motion sensor 413 can send the measured acceleration data to the processor 450.
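The acceleration behavior described above (a constant gravitational value at rest, and variance of the acceleration under motion) can be sketched as follows. This is purely an illustrative sketch and not part of the disclosure; the function names, sample data, and the use of variance as the motion measure are assumptions.

```python
import math

def acceleration_magnitudes(samples):
    """Combine per-axis (x, y, z) readings into one magnitude per sample.
    At rest the magnitude stays near gravitational acceleration
    (about 9.8 m/s^2); motion shows up as deviation from it."""
    return [math.sqrt(x * x + y * y + z * z) for (x, y, z) in samples]

def acceleration_variance(samples):
    """Variance of the magnitude series; larger values suggest more
    vigorous motion, and the pattern over time differs per exercise."""
    mags = acceleration_magnitudes(samples)
    mean = sum(mags) / len(mags)
    return sum((m - mean) ** 2 for m in mags) / len(mags)

# A device lying still reports roughly (0, 0, 9.8) on every sample,
# so its variance is near zero; a moving device shows a larger variance.
at_rest = [(0.0, 0.0, 9.8)] * 8
moving = [(0.0, 0.0, 9.8), (2.0, 1.0, 11.0), (-1.5, 0.5, 8.0), (3.0, -2.0, 12.5)]
```

A per-exercise classifier could then compare such variance values (and their pattern over time) against stored profiles, as the text suggests.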
The air pressure sensor 415 can detect an altitude change of the electronic device 400. For example, the air pressure sensor 415 can measure whether the user is moving to a high altitude or to a low altitude. During an indoor exercise on a treadmill, an indoor cycle, an elliptical trainer, or a rowing machine, its motion is detected by the acceleration sensor but the altitude change is not detected by the air pressure sensor 415 (e.g., no variance in air pressure data). Unlike the indoor exercise, an outdoor exercise such as walking or running can change the altitude (e.g., change in the air pressure data). In this case, the acceleration sensor can detect the motion and the air pressure sensor 415 can detect the altitude change. The air pressure sensor 415 can send the measured air pressure data to the processor 450.
The processor 450 can process operations or data for control or communication of at least one component (e.g., the sensor module 410, the memory 420, the display 430, or the communication module 440) of the electronic device 400. For example, the processor 450 can predict (or estimate) a user motion state (e.g., exercise type) such as walking or running, using an acceleration pattern, and process it into pedometer information. Using the sensor data of the air pressure sensor 415 together with the motion sensor 413, the processor 450 can specify the exercise (e.g., the exercise type) of the user. The processor 450 can be the processor 120 of FIG. 1 or the processor 210 of FIG. 2. The processor 450 can include a heart rate extracting module 451, a heart rate verifying module 455, an exercise intensity measuring module 453, and a calorie calculating module 457.
The heart rate extracting module 451 can extract the frequency corresponding to the heart rate (or heartbeat data) from the signal received from the heart rate monitor sensor 411. The heart rate extracting module 451 can remove noise from the received signal using various processes. For example, the heart rate extracting module 451 can cancel the frequency according to the user motion and thus reduce interference between frequencies. Besides the user motion, various external factors can make the measured heartbeat data inaccurate. For example, when the heart rate monitor sensor 411 is not precisely attached to a user's body part and a space exists between the skin and the heart rate monitor sensor 411, a value other than the light reflected from the skin can be input to the light receiver. In this case, accurate heartbeat data may not be acquired merely by removing the frequency of the user motion. To obtain the accurate heart rate (or heartbeat data), the processor 450 can use the heart rate verifying module 455.
The exercise intensity measuring module 453 can measure a user's exercise intensity using the sensor data received from the motion sensor 413 or the air pressure sensor 415. For example, when the electronic device 400 is a wearable device, a wearing position of the wearable device on the user can vary, and accordingly motion sensor data measured by the motion sensor 413 or the air pressure sensor 415 can differ according to the device wearing position and a current user exercise type. That is, since the motion sensor data changes according to the device wearing position or the exercise type, an exercise intensity measurement model can differ. The exercise intensity measuring module 453 can measure the exercise intensity based on the motion sensor data and the exercise type. The measured exercise intensity can be used to predict and verify the heartbeat data in the heart rate verifying module 455.
The heart rate verifying module 455 can verify the heartbeat data extracted by the heart rate extracting module 451. The heart rate verifying module 455 can use the exercise intensity relating to the user motion as a leading indicator. For example, when the user performs a high-intensity exercise, the heartbeat data can rise according to a level. When the user performs a low-intensity exercise, the heartbeat data can decrease according to a level. The heart rate verifying module 455 can predict how the next heartbeat data will change based on at least one of changes of heartbeat data previously measured in real time and the exercise intensity. The heart rate verifying module 455 can compare the predicted heartbeat data with the measured heartbeat data and thus determine whether the currently measured heartbeat data is accurate or not.
The heart rate verifying module 455 according to various embodiments can determine an exercise type using first motion sensor data acquired by the motion sensor 413 for a first duration, and determine a heartbeat prediction range based on at least one of the first motion sensor data, the exercise type, and first heartbeat data obtained from the heart rate monitor sensor 411 for the first duration. The heart rate verifying module 455 can determine whether second heartbeat data obtained from the heart rate monitor sensor 411 for a second duration is included in the heartbeat prediction range, and determine heartbeat data of the second duration based on the determination result. The heart rate verifying module 455 according to various embodiments can determine at least one of a maximum value, a minimum value, and an average value of the heartbeat prediction range, as third heartbeat data.
The calorie calculating module 457 can calculate calories based on the heartbeat data verified by the heart rate verifying module 455. The calorie calculating module 457 according to various embodiments can calculate the calories using the heartbeat data according to the exercise type, or calculate the calories using the motion sensor data and the heartbeat data. Also, the calorie calculating module 457 can calculate the calories by further using at least one of location information, sensor data (e.g., acceleration data, air pressure data), and pedometer data in addition to the heartbeat data.
The processor 450 according to various embodiments can include a first processor and a second processor. The first processor can operate (e.g., in an activated operation mode) when power is applied to the electronic device 400. While the power is applied to the electronic device 400, the first processor can wake up and receive the sensor data from the sensor module 410. The first processor can be awake regardless of whether the display 430 of the electronic device 400 is on or off. The first processor can operate with lower power than the second processor. The first processor can determine user exercise information based on the sensor data. The first processor can send the determined user exercise information to the second processor.
The second processor can operate selectively, as desired. For example, when the display 430 is turned on, information is obtained, or information is scanned, the second processor can be activated (e.g., operation mode). Also, when the display 430 is turned off, the second processor can be deactivated (e.g., sleep mode). That is, the second processor can stay in the inactive state (e.g., sleep mode), and wake up and become active according to at least one of a periodic basis, a preset scanning period, and an application operation period (or application information request).
The second processor can obtain communication information through the communication module 440 and send the data obtained from the calorie calculating module 457. For example, the communication module 440 can send the data (e.g., heartbeat data, calories) obtained from the calorie calculating module 457 to another electronic device (e.g., a smart phone, a server, etc.) using at least one communication scheme (e.g., BT, WiFi, NFC, cellular, etc.). The communication module 440 can be the communication module 220 of FIG. 2. The display 430 can display various information relating to the heartbeat data. For example, the display 430 can display at least one of the heartbeat data, the motion sensor data, and the calories in detail or based on time according to a user input (e.g., touch input, button/key/wheel input). The display 430 can be the display 260 of FIG. 2.
According to various embodiments, the electronic device 400 can include a motion sensor 413, a heart rate monitor sensor 411, and a processor 450 functionally coupled with the motion sensor 413 and the heart rate monitor sensor 411. The processor 450 can be configured to obtain first motion sensor data for a first duration using the motion sensor 413, to obtain first heartbeat data for the first duration using the heart rate monitor sensor 411, to determine an exercise type based on the first motion sensor data, to determine a heartbeat prediction range based on at least one of the first motion sensor data, the exercise type, and the first heartbeat data, to obtain second heartbeat data for a second duration using the heart rate monitor sensor, to determine whether the second heartbeat data falls within the heartbeat prediction range, and to determine heartbeat data of the second duration based on the determination result.
The processor 450 can determine the second heartbeat data as the heartbeat data of the second duration when the second heartbeat data falls within the heartbeat prediction range, and determine third heartbeat data in the heartbeat prediction range as the heartbeat data of the second duration when the second heartbeat data does not fall within the heartbeat prediction range.
When the second heartbeat data does not fall within the heartbeat prediction range, the processor 450 can determine the third heartbeat data based on a maximum value and a minimum value of the heartbeat prediction range, and the second heartbeat data, and correct the second heartbeat data with the third heartbeat data.
The processor 450 can determine at least one of the maximum value, the minimum value, and an average value of the heartbeat prediction range, as the third heartbeat data.
When the second heartbeat data falls within the heartbeat prediction range, the processor 450 can calculate calories using the second motion sensor data or the second heartbeat data obtained in the second duration, and when the second heartbeat data does not fall within the heartbeat prediction range, the processor 450 can calculate calories using the second motion sensor data obtained in the second duration or the third heartbeat data within the heartbeat prediction range.
Based on the exercise type, the processor 450 can calculate the calories using the first heartbeat data, or calculate the calories using the first motion sensor data and the first heartbeat data.
The processor 450 can determine an exercise intensity or the exercise type based on the first motion sensor data, determine predicted heartbeat data based on at least one of the exercise intensity, the exercise type, and the first heartbeat data, and determine the heartbeat prediction range by considering a margin of error based on the predicted heartbeat data.
The first motion sensor data can include acceleration data, and the processor 450 can determine the exercise intensity or the exercise type according to acceleration variance based on the acceleration data or variance of air pressure data using an air pressure sensor 415.
With the variance of the air pressure data, the processor 450 can determine the predicted heartbeat data by adjusting a weight which reflects the acceleration variance on the exercise intensity.
Without the variance of the air pressure data, the processor 450 can determine the predicted heartbeat data to correspond to changes of the acceleration variance.
The processor 450 can set different margins of error according to the predicted heartbeat data.
The processor 450 can determine the heartbeat prediction range by further considering user body information.
The processor 450 can calculate first calories of the first duration using the first motion sensor data or the first heartbeat data, and display a user interface regarding at least one of the first motion sensor data, the first heartbeat data, and the first calories, through a display of the electronic device.
The processor 450 can include a first processor and a second processor, the first processor can be activated, and the second processor can be selectively activated.
The electronic device 400 can be worn on a user body.
FIG. 5 is a flowchart of an operating method of an electronic device according to various embodiments.
Referring to FIG. 5, in operation 501, the electronic device 400 (e.g., the processor 450) can obtain first motion sensor data and first heartbeat data. The first motion sensor data can be obtained (or received) from the motion sensor 413. The first heartbeat data can be obtained (or received) from the heart rate monitor sensor 411. The processor 450 can obtain or receive air pressure data from the air pressure sensor 415.
The processor 450 according to various embodiments can calculate the first motion sensor data using sensor data obtained from the motion sensor 413, and calculate the first heartbeat data using sensor data obtained from the heart rate monitor sensor 411. The processor 450 can remove noise from the obtained sensor data, and calculate the first motion sensor data and the first heartbeat data using the noise-free sensor data. For example, the processor 450 can cancel noise detected in the sensor data using a low pass filter, an average filter, and so on. Besides, the processor 450 can remove noise from the sensor data using various noise reduction filters.
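The averaging filter mentioned above can be sketched as a simple moving average. This sketch is illustrative only; the window size and function name are assumptions, and a real implementation might instead use the low pass filter or another noise reduction filter the text mentions.

```python
def moving_average(signal, window=3):
    """Smooth a sensor signal by averaging each sample with its
    neighbors; high-frequency noise is attenuated while the slower
    heartbeat/motion trend is preserved. Edges use a shorter window."""
    half = window // 2
    smoothed = []
    for i in range(len(signal)):
        lo = max(0, i - half)
        hi = min(len(signal), i + half + 1)
        smoothed.append(sum(signal[lo:hi]) / (hi - lo))
    return smoothed

# Spiky samples (e.g., 50.0 amid 10.0s) are pulled toward their neighbors.
noisy = [10.0, 50.0, 10.0, 10.0, 50.0, 10.0]
clean = moving_average(noisy)
```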
According to various embodiments, the processor 450 can determine, using data received from the sensor module 410, whether the user has initiated execution of an exercise (or some physical activity). For example, when the sensor data obtained from the sensor module 410 exceeds a predetermined threshold (e.g., a motion level, an acceleration change level, etc.), the processor 450 can determine that the user has initiated exercise. For example, when a detected motion exceeds the preset threshold of the sensor data or a repeated motion pattern is detected, the processor 450 can determine that the user has initiated exercise. Alternatively, when the user directly selects an "exercise start button" through an input/output interface (e.g., the input/output interface 150), the processor 450 can determine that the user has initiated exercise. The first motion sensor data and the first heartbeat data can be obtained for a first duration (e.g., time point). For example, the first duration can be a preset time (e.g., 5 minutes, 10 minutes, etc.) after the user exercise start is determined.
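The threshold-based exercise-start decision above can be sketched as follows. The specific threshold value, the requirement of several consecutive samples, and the function name are illustrative assumptions rather than part of the disclosure.

```python
def exercise_started(motion_levels, threshold=2.0, min_samples=3):
    """Declare an exercise start only when the motion level exceeds the
    threshold for several consecutive samples, so a single spike
    (e.g., picking the device up) does not trigger a false start."""
    consecutive = 0
    for level in motion_levels:
        consecutive = consecutive + 1 if level > threshold else 0
        if consecutive >= min_samples:
            return True
    return False
```

A production implementation would likely combine this with the repeated-motion-pattern check and the explicit "exercise start button" input mentioned in the text.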
In operation 503, the electronic device 400 (e.g., the processor 450) can determine a particular type of exercise routine or action being performed (e.g., an exercise type) based on the first motion sensor data. Generally, the exercise types can include an isometric exercise, an isotonic exercise, an isokinetic exercise, an anaerobic exercise, and an aerobic exercise. Among these exercise types, the exercise most relevant to the heartbeat or the calorie consumption is the aerobic exercise, and accordingly the aerobic exercise is now described by way of example. Notably, the present disclosure is not limited to those exercise types. The aerobic exercise can indicate, for example, walking, running, cycling, climbing, swimming, running on a treadmill, rowing, and elliptical training. That is, the aerobic exercise can indicate a regular or repeated exercise. To determine the exercise type, the motion sensor data can be utilized. The processor 450 (e.g., the exercise intensity measuring module 453) can determine an exercise intensity using the motion sensor data.
For example, for considerable acceleration variance, the processor 450 can determine a moderate exercise intensity and determine running or rowing as the exercise type. For small acceleration variance, the processor 450 can determine a low exercise intensity and determine walking or elliptical training as the exercise type.
According to various embodiments, the processor 450 can determine the exercise intensity by considering variance of the air pressure data obtained from the air pressure sensor 415 together with the motion sensor data. For example, when the altitude increases (e.g., uphill) at the same exercise intensity, the acceleration variance can decrease. When the altitude decreases (e.g., downhill) at the same exercise intensity, the acceleration variance can increase. Hence, the processor 450 can calculate the exercise intensity by adjusting the acceleration variance based on the variance of the air pressure data. For acceleration variance and air pressure variance greater than respective preset thresholds, the processor 450 can determine the moderate exercise intensity and determine the exercise type as climbing (e.g., downhill walking or running). For acceleration variance and air pressure variance smaller than the respective preset thresholds, the processor 450 can determine the high exercise intensity and determine the exercise type as climbing (e.g., uphill walking or running).
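The adjustment of acceleration variance by air pressure variance described above can be sketched as a weighted combination. The additive form and the weight value are illustrative assumptions; the disclosure only states that the acceleration variance is adjusted based on the air pressure variance, not how.

```python
def exercise_intensity(accel_variance, pressure_variance, uphill_weight=0.5):
    """Going uphill damps the measured acceleration variance for the same
    effort, so the air pressure variance is added back with a weight to
    recover the true intensity. Weight 0.5 is an arbitrary placeholder."""
    return accel_variance + uphill_weight * pressure_variance

# Same measured acceleration variance, but on a slope (non-zero air
# pressure variance) the estimated intensity comes out higher.
flat = exercise_intensity(4.0, 0.0)
slope = exercise_intensity(4.0, 2.0)
```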
According to various embodiments, the processor 450 (e.g., the calorie calculating module 457) can calculate calories using the first heartbeat data. Alternatively, the processor 450 can calculate the calories using the first motion sensor data and the first heartbeat data. Since the first motion sensor data and the first heartbeat data are detected according to a user motion, the user can consume calories in a certain amount according to the movement. The processor 450 can calculate the calories (e.g., first calories) consumed by the user motion for the first duration. The processor 450 can calculate the calories using the first heartbeat data according to the exercise type, or calculate the calories using both of the first motion sensor data and the first heartbeat data. The processor 450 can provide a user interface relating to at least one of the first motion sensor data, the air pressure data, the first heartbeat data, and the first calories.
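The disclosure does not specify a calorie formula, so as one plausible heart-rate-based estimate, the widely cited Keytel et al. regression is sketched below; treat the coefficients, the function name, and the choice of formula itself as illustrative assumptions only.

```python
def calories_per_minute(heart_rate, weight_kg, age, male=True):
    """Estimate energy expenditure (kcal/min) from heart rate using the
    Keytel et al. (2005) regression; coefficients differ by gender.
    Shown only as one possible model, not the disclosed method."""
    if male:
        kj_per_min = (-55.0969 + 0.6309 * heart_rate
                      + 0.1988 * weight_kg + 0.2017 * age)
    else:
        kj_per_min = (-20.4022 + 0.4472 * heart_rate
                      - 0.1263 * weight_kg + 0.0740 * age)
    return kj_per_min / 4.184  # convert kJ/min to kcal/min

# A 30-year-old, 70 kg user at 140 bpm burns roughly 12-13 kcal/min.
burn = calories_per_minute(heart_rate=140, weight_kg=70, age=30)
```

This also illustrates why the text folds body information (weight, age, gender) into the calorie calculation alongside the heartbeat data.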
In operation 505, the electronic device 400 (e.g., the processor 450) can determine a heartbeat prediction range, that is, a predicted range in which the user's heart rate is expected to fall while executing the exercise. The processor 450 (e.g., the heart rate verifying module 455) can calculate predicted heartbeat data using at least one of the first motion sensor data, the exercise type, and the first heartbeat data. The processor 450 can determine the heartbeat prediction range by considering a margin of error based on the predicted heartbeat data. For example, when the predicted heartbeat data is 100, the processor 450 can determine the heartbeat prediction range as 90 ∼ 110 based on a margin of error of ± 10.
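The construction of the prediction range from the predicted heartbeat data and a margin of error can be sketched as follows; a proportional ±10% margin reproduces the 100 → 90 ∼ 110 example in the text, but the margin form and the function name are illustrative assumptions.

```python
def heartbeat_prediction_range(predicted_bpm, margin_ratio=0.10):
    """Build the prediction range around the predicted heart rate.
    With margin_ratio=0.10, a prediction of 100 bpm yields 90-110,
    matching the example in the text."""
    margin = predicted_bpm * margin_ratio
    return predicted_bpm - margin, predicted_bpm + margin

low, high = heartbeat_prediction_range(100)
```

Since the text later notes that different margins of error can be set according to the predicted heartbeat data, `margin_ratio` could itself be a function of the prediction.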
According to various embodiments, the processor 450 can measure the exercise intensity based on the first motion sensor data. The exercise intensity is the leading indicator of the heartbeat data. For a "high" exercise intensity, the heartbeat data can rise to a level corresponding to the exercise intensity. For a "low" exercise intensity, the heartbeat data can fall to a level corresponding to the exercise intensity. Using the exercise intensity, the processor 450 can estimate how the heartbeat data will change. Also, based on the first motion sensor data, the processor 450 can determine the exercise type. For the high exercise intensity, the heartbeat data to be measured can change more than the current heartbeat data or maintain a high level. Alternatively, for the low exercise intensity, the heartbeat data to be measured can change less than the current heartbeat data. Depending on how long the exercise continues, the heartbeat data can change considerably or maintain a low level according to the exercise intensity. Since the heartbeat data is the heart rate of the user and the heart rate changes according to the exercise with a certain variance range, a next heart rate can be predicted based on the current heart rate. The variance of the heartbeat data can be greater or smaller than a certain threshold, according to whether the current heartbeat data reaches a certain limit.
According to various embodiments, the processor 450 can determine the heartbeat prediction range by further considering the user's body information. For example, the user's body information can include various information about the user, such as height, weight, age, gender, resting heart rate, blood pressure, body fat, and blood type. For example, the blood pressure can differ according to the age or the gender, or according to the user's current condition. The blood pressure can affect the heartbeat data. A maximum heart rate can be determined based on the age, and the age can affect the heartbeat data and the calories. Hence, the processor 450 can determine an accurate heartbeat prediction range by further using the user body information together with the measured motion sensor data and heartbeat data. To do so, the processor 450 can request the user to pre-input his/her body information. Alternatively, the processor 450 can analyze the body information based on the user's usage record, without having to pre-register the user's body information. For example, the processor 450 can extract body information registered by the user in a health application installed on the electronic device 400.
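The age-based maximum heart rate mentioned above is commonly estimated with the 220 − age rule; the disclosure only says the maximum heart rate "can be determined based on the age", so this particular rule, and using it to cap the prediction range, are illustrative assumptions.

```python
def max_heart_rate(age):
    """Common age-based estimate of maximum heart rate (220 - age);
    one possible way to realize the age dependence described in the text."""
    return 220 - age

def cap_prediction_range(low, high, age):
    """Clip a heartbeat prediction range so it never exceeds the
    user's age-based maximum heart rate."""
    ceiling = max_heart_rate(age)
    return min(low, ceiling), min(high, ceiling)
```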
In operation 507, the electronic device 400 (e.g., the processor 450) can obtain second heartbeat data. The second heartbeat data can be sensor data obtained from the heart rate monitor sensor 411 for a second duration (e.g., 09:06 ∼ 09:10) after the first duration (e.g., 09:00 ∼ 09:05). The processor 450 can calculate the second heartbeat data using a second set of sensor data obtained from the heart rate monitor sensor 411. The processor 450 can obtain a second set of motion sensor data from the motion sensor 413 and the air pressure sensor 415 for the second duration (or time point). The first duration or the second duration can include a certain duration or a specific time point. The second duration can be shorter than the first duration. That is, the first duration can include a certain duration (e.g., 5 minutes), and the second duration can include a time point (e.g., 09:06).
In operation 509, the electronic device 400 (e.g., the processor 450) can determine whether the second heartbeat data indicates a heartbeat rate falling within the heartbeat prediction range. When the heart rate monitor sensor 411 is attached to the user's chest, the heartbeat data can be measured accurately. Otherwise, the measured heartbeat data can be inaccurate, because various external factors (e.g., the user's movement, the sensor being attached or detached, etc.) can distort the measurement. The heartbeat prediction range predicts the next heartbeat data using the exercise intensity, the exercise type, and the current heartbeat data. Hence, the predicted heartbeat data can be more accurate than an abnormally measured value.
When the second heartbeat data falls within the heartbeat prediction range, the processor 450 (e.g., the heart rate verifying module 455) can perform operation 511. When the second heartbeat data does not fall within the heartbeat prediction range, the processor 450 can perform operation 513.
When the second heartbeat data falls within the heartbeat prediction range, the electronic device 400 (e.g., the processor 450) can determine the obtained second heartbeat data as the heartbeat data of the second duration in operation 511. The processor 450 can calculate calories (e.g., second calories) using the second motion sensor data or the second heartbeat data. Based on the exercise type, the processor 450 can calculate the calories using the second heartbeat data alone, or using the second motion sensor data together with the second heartbeat data. The processor 450 can calculate the calories (e.g., the second calories) consumed by the user motion during the second duration. The processor 450 can provide a user interface relating to at least one of the second heartbeat data and the second calories.
When the second heartbeat data does not fall within the heartbeat prediction range, the electronic device 400 (e.g., the processor 450) can correct it with third heartbeat data within the heartbeat prediction range in operation 513. As mentioned earlier, the predicted heartbeat data (e.g., the heartbeat data within the heartbeat prediction range) can be more accurate than the measured heartbeat data (e.g., the second heartbeat data). When the second heartbeat data obtained in the second duration does not fall within the heartbeat prediction range, the processor 450 can correct the second heartbeat data with the third heartbeat data of the heartbeat prediction range. The third heartbeat data is included in the heartbeat prediction range, and the processor 450 can determine the third heartbeat data based on a maximum value and a minimum value of the heartbeat prediction range and the second heartbeat data. For example, the processor 450 can determine at least one of the maximum value, the minimum value, and an average value of the heartbeat prediction range, as the third heartbeat data.
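The accept-or-correct logic of operations 509 through 513 can be sketched as follows. Clamping to the nearest bound is one choice of third heartbeat data; as the text notes, the maximum, minimum, or average of the range could be used instead.

```python
def correct_heartbeat(measured, range_min, range_max):
    """Return the heartbeat data to use for the second duration:
    the measurement itself when it falls within the prediction
    range, otherwise third heartbeat data taken from the range
    (nearest bound here; an illustrative choice)."""
    if range_min <= measured <= range_max:
        return measured      # operation 511: trust the sensor
    if measured < range_min:
        return range_min     # operation 513: below the range
    return range_max         # operation 513: above the range
```

With a prediction range of 110 ∼ 130, a measurement of 120 is kept, while 95 is corrected to 110 and 150 to 130.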
In operation 515, the electronic device 400 (e.g., the processor 450) can determine the corrected third heartbeat data as the heartbeat data of the second duration. The processor 450 can calculate the calories using the second motion sensor data or the third heartbeat data. Based on the exercise type, the processor 450 can calculate the calories using the third heartbeat data, or calculate the calories using second motion sensor data and the third heartbeat data. The processor 450 can calculate the calories (e.g., the second calories) consumed by the user motion during the second duration. The processor 450 can provide a user interface relating to at least one of the second heartbeat data, the third heartbeat data, and the second calories. When the measured heartbeat data is not accurate, the processor 450 can calculate the calories using the corrected heartbeat data and thus provide more accurate calories based on the user movement.
FIGS. 6A and 6B are diagrams of heartbeat data predicted in an electronic device according to various embodiments.
FIG. 6A depicts an example where a heartbeat prediction range covers second heartbeat data.
Referring to FIG. 6A, the electronic device 400 (e.g., the processor 450) can obtain first heartbeat data 611 and first motion sensor data 613 for a first duration 610. Herein, the motion sensor data 613 can be acceleration data obtained from an acceleration sensor. The first duration 610 can be a certain duration after the user begins an exercise. The processor 450 can predict second heartbeat data of a second duration 620 based on the first heartbeat data 611 and the first motion sensor data 613 of the first duration 610. For example, the processor 450 can measure a user exercise intensity using the first motion sensor data 613. The processor 450 can determine an exercise type based on the first motion sensor data 613, and determine a heartbeat prediction range 621 of the second duration 620 using at least one of the first motion sensor data 613, the determined exercise type, and the first heartbeat data 611. For example, the heartbeat prediction range 621 can be set to a value of a certain range by considering a margin of error in predicted heartbeat data 623. The processor 450 can obtain second heartbeat data 625 from the heart rate monitor sensor 411 for the second duration 620. When the second heartbeat data 625 falls within the heartbeat prediction range 621, the processor 450 can use the second heartbeat data 625 to calculate caloric consumption.
FIG. 6B depicts an example where a heartbeat prediction range does not cover the detected second heartbeat data.
Referring to FIG. 6B, as above, the electronic device 400 (e.g., the processor 450) can obtain first heartbeat data 651 and first motion sensor data 653 for a first duration 650. Herein, the motion sensor data 653 can be acceleration data obtained from an acceleration sensor. The processor 450 can predict second heartbeat data of a second duration 660 based on the first heartbeat data 651 and the first motion sensor data 653 of the first duration 650. For example, the processor 450 can determine an exercise type based on the first motion sensor data 653, and determine a heartbeat prediction range 661 of the second duration 660 using at least one of the first motion sensor data 653, the determined exercise type, and the first heartbeat data 651. For example, the heartbeat prediction range 661 can be set to a value of a certain range by considering a margin of error in predicted heartbeat data 663. The processor 450 can obtain second heartbeat data 667 from the heart rate monitor sensor 411 for the second duration 660. When the second heartbeat data 667 does not fall within the heartbeat prediction range 661, the processor 450 can correct the second heartbeat data 667 using third heartbeat data 665. The third heartbeat data 665 can be included in the heartbeat prediction range 661.
According to various embodiments, the processor 450 can determine the third heartbeat data 665, as the heartbeat data to use for the measurement, based on a maximum value and a minimum value of the heartbeat prediction range 661 and the second heartbeat data 667. For example, when the second heartbeat data 667 is close to the minimum value (e.g., when the second heartbeat data 667 is smaller than the minimum value), the processor 450 can set the third heartbeat data 665 to a value between the minimum value and the predicted heartbeat data 663. Alternatively, when the second heartbeat data 667 is close to the maximum value (e.g., when the second heartbeat data 667 is greater than the maximum value), the processor 450 can set the third heartbeat data 665 to a value between the maximum value and the predicted heartbeat data 663. The processor 450 can calculate calories using the third heartbeat data 665 in the second duration 660.
FIGS. 7A through 7D are diagrams of heartbeat data measured variously according to the exercise type, according to various embodiments.
FIG. 7A depicts a heartbeat graph 710 and an acceleration variance graph 720 of a user who performs an elliptical exercise.
Referring to FIG. 7A, the heartbeat graph 710 shows the heartbeat data (or heart rate) of the user who performs the elliptical exercise rapidly for seven minutes, takes a rest for three minutes, and then does the exercise slowly for six minutes. First heartbeat data 711 can be measured normally, and second heartbeat data 713 can be measured abnormally. The acceleration variance graph 720 shows the variance 721 of values produced by normalizing sensor data (e.g., of the acceleration sensor) obtained from the motion sensor 413 during the user's elliptical exercise. Variance changes 723 can be produced by applying a low pass filter to the variance 721.
Comparing the heartbeat graph 710 and the acceleration variance graph 720, the first heartbeat data 711, which represents the normal heartbeats, tracks the variance changes 723 quite closely. For example, when the acceleration variance level reaches '10', the value of the first heartbeat data 711 arrives at about 180. When the variance level reaches '3', the first heartbeat data 711 arrives at about 160 and then maintains a certain level. However, the second heartbeat data 713 does not correspond to the acceleration variance changes 723 at all, but rather increases when the variance level gets low.
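The acceleration norm variance and its smoothed curve, as plotted in the graphs, could be computed along these lines; the windowing, the Euclidean-norm reading of "normalizing", and the exponential filter with `alpha=0.3` are illustrative assumptions, since the document does not specify the exact filter.

```python
import statistics

def accel_norm_variance(samples):
    """Variance of the Euclidean norms of (x, y, z) acceleration
    samples over a window; one reading of 'acceleration norm
    variance' used in the graphs."""
    norms = [(x * x + y * y + z * z) ** 0.5 for x, y, z in samples]
    return statistics.pvariance(norms)

def low_pass(values, alpha=0.3):
    """Exponential smoothing as a stand-in for the unspecified
    low pass filter applied to the variance curve."""
    smoothed, prev = [], values[0]
    for v in values:
        prev = alpha * v + (1 - alpha) * prev
        smoothed.append(prev)
    return smoothed
```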
C_Elliptical ∝ (AccVar_i − AccVar_(i−1))
C_Elliptical ∝ 1 / AccVar_i    [Equation 1]
C_Elliptical: elliptical exercise coefficient
AccVar_i: average value of acceleration norm variance
The processor 450 can express the relationship between the acceleration variance and the heartbeat data with respect to the elliptical exercise as Equation 1. The processor 450 can predict the user's heartbeat data in the elliptical exercise based on Equation 1.
FIG. 7B depicts a heartbeat graph 730 and an acceleration variance graph 740 of a user who exercises on a rowing machine.
Referring to FIG. 7B, the heartbeat graph 730 shows heartbeat data (or heart rate) of the user who exercises on the rowing machine rapidly for six minutes, takes a rest for three minutes, and then exercises slowly for six minutes. First heartbeat data 731 can be measured normally, and second heartbeat data 733 can be measured abnormally. In the acceleration variance graph 740, the acceleration variance 741 is greater than a preset threshold during the hard exercise, smaller than the threshold during the light exercise, and rises together with the heartbeat data as the exercise intensity increases. Comparing the heartbeat graph 730 and the acceleration variance graph 740, when the light exercise continues, the acceleration variance changes 743 and the first heartbeat data 731 maintain a certain level. However, the abnormally measured second heartbeat data 733 changes regardless of the acceleration variance changes 743, showing no relationship between the exercise intensity and the heartbeat data. Thus, when the changes of the heartbeat data do not follow the acceleration variance changes 743, the processor 450 can determine that the second heartbeat data 733 is measured inaccurately.
C_Rowing ∝ (AccVar_i − AccVar_(i−1))
C_Rowing ∝ 1 / AccVar_i    [Equation 2]
C_Rowing: rowing exercise coefficient
AccVar_i: average value of acceleration norm variance
The processor 450 can express the relationship between the acceleration variance and the heartbeat data in relation to the rowing machine as Equation 2. The processor 450 can predict the user's heartbeat data in the rowing machine exercise based on Equation 2.
FIG. 7C depicts a heartbeat graph 750 and an acceleration variance graph 760 of a user who is walking.
Referring to FIG. 7C, the heartbeat graph 750 shows heartbeat data (or heart rate) when the user walks at a certain pace for ten minutes. First heartbeat data 751 can be measured normally, and second heartbeat data 753 can be measured abnormally. The acceleration variance graph 760 shows acceleration variance 761 of values produced by normalizing sensor data (e.g., of the acceleration sensor) obtained from the motion sensor 413 during the user's walking. Variance changes 763 indicate the changes of the acceleration variance 761. When walking or running keeps a certain pace, the acceleration variance also maintains certain changes. Also, when a certain exercise intensity is maintained, the user's heartbeat data rises until it reaches a certain level and then maintains that level. This pattern holds for both walking and running. Hence, when the heartbeat data abnormally or continuously increases while the acceleration variance 761 maintains a certain level, the processor 450 can determine that the heartbeat data is measured abnormally, like the second heartbeat data 753. Conversely, while the acceleration variance 761 does not change, the processor 450 can predict that the heartbeat data will maintain its level, in line with the variance changes 763.
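The FIG. 7C pattern suggests a simple anomaly check: heartbeat data that keeps rising while the acceleration variance stays flat is suspect. A sketch, with illustrative thresholds (`hr_rise`, `var_change`) that are assumptions rather than the device's actual values:

```python
def is_abnormal(hr_window, acc_var_window, hr_rise=5, var_change=0.5):
    """Flag heartbeat data that keeps rising while the acceleration
    variance stays roughly constant over the same window."""
    hr_trend = hr_window[-1] - hr_window[0]
    var_trend = abs(acc_var_window[-1] - acc_var_window[0])
    return hr_trend > hr_rise and var_trend < var_change
```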
C_WalkRun ∝ (AccVar_i − AccVar_(i−1))
C_WalkRun ∝ 1 / AccVar_i    [Equation 3]
C_WalkRun: exercise coefficient of flat-surface walking/running
AccVar_i: average value of acceleration norm variance
Thus, the processor 450 can express the relationship between the acceleration variance and the heartbeat data in relation to the walking as Equation 3. The processor 450 can predict the user's heartbeat data during the walking based on Equation 3.
FIG. 7D depicts a heartbeat graph 780, an acceleration variance graph 770, and an air pressure graph 790 of a user who is walking on a slope.
Referring to FIG. 7D, the heartbeat graph 780 shows heartbeat data when the user walks up stairs to a fifth floor at a certain pace, takes a rest for one minute, and then walks down to the ground floor. The heartbeat data is reliable values obtained from the heart rate monitor sensor 411 over the same period. The acceleration variance graph 770 shows the acceleration variance and its changes for values produced by normalizing sensor data (e.g., of the acceleration sensor) obtained from the motion sensor 413 while the user walks up and down the stairs. The air pressure graph 790 shows sensor data obtained from the air pressure sensor 415 while the user walks up and down the stairs. Typically, the air pressure data decreases by about 0.36 hPa when the user walks up one floor (3 m), and increases correspondingly when the user walks down the stairs.
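The up/down determination from air pressure can be sketched as follows; the 0.1 hPa decision threshold is an illustrative value, not the document's slope threshold.

```python
HPA_PER_FLOOR = 0.36  # approximate pressure drop per 3 m floor, per the text

def slope_direction(pressure_prev, pressure_curr, threshold=0.1):
    """Classify up/down/flat movement from two consecutive air
    pressure readings (hPa)."""
    delta = pressure_prev - pressure_curr
    if delta > threshold:
        return "up"    # pressure fell: the user climbed
    if delta < -threshold:
        return "down"  # pressure rose: the user descended
    return "flat"
```

Climbing one floor would thus register as "up", descending as "down", and sensor noise below the threshold as "flat".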
Unlike other general exercises, the acceleration variance in slope walking/running can contrast with the heartbeat data variance. This is because walking downhill yields greater acceleration variance than walking uphill, while the heartbeat data rises when the user climbs up (e.g., uphill). The processor 450 can therefore determine, based on the air pressure data, whether the user is walking up or down, and can define a range of the acceleration variance according to the uphill or downhill slope.
Based on the heartbeat graph 780, the acceleration variance graph 770, and the air pressure graph 790: when the user walks up the stairs, the air pressure data value declines, the acceleration variance is constant, and the heartbeat data value increases. Next, when the user temporarily stops walking before climbing down the stairs, the heartbeat data falls and the air pressure value is constant. When the user walks down the stairs, the air pressure data value increases and the acceleration variance level relatively rises. Notably, the heartbeat data value rises gradually, rather than sharply increasing in proportion to the increase of the acceleration variance.
C_Hiking ∝ (AccVar_i − AccVar_(i−1))
C_Hiking ∝ (Pressure_(i−1) − Pressure_i)
C_Hiking ∝ 1 / AccVar_i    [Equation 4]
C_Hiking: exercise coefficient of slope walking/running
AccVar_i: average value of acceleration norm variance
Pressure_i: air pressure measurement value
Thus, the processor 450 can express the relationship between the acceleration variance, the air pressure variance, and the heartbeat data in relation to the slope walking as Equation 4. The processor 450 can predict the user's heartbeat data during the slope walking based on Equation 4.
FIG. 8 is a diagram of a heartbeat prediction range determined based on an exercise type according to various embodiments.
Referring to FIG. 8, the electronic device 400 (e.g., the processor 450) can determine an exercise type 830 using sensor data measured by a sensor module 810. For example, the processor 450 can determine at least one of elliptical exercise, rowing machine, flat-surface walking or running, and slope walking or running as the exercise type 830, using an acceleration sensor 811 and an air pressure sensor 813. The processor 450 can calculate an exercise intensity using a different exercise model based on the exercise type 830. The exercise intensity is a leading indicator of the heartbeat data. For a high exercise intensity, the heartbeat data can rise to a level corresponding to the exercise intensity; for a low exercise intensity, the heartbeat data can fall to a corresponding level. Using the exercise intensity, the processor 450 can estimate the next level of the heartbeat data. The exercise intensity can be measured using a normalizing value and variance of the acceleration data. The heartbeat data also moves in proportion to the acceleration data, and its coefficient value or level can vary according to the user or the exercise type.
The exercise intensity differs according to the exercise type because the body part wearing the electronic device 400 differs and the motion varies according to the exercise. Referring back to FIGS. 7A, 7B, and 7C, when the acceleration variance increases, the heartbeat data increases; when the acceleration variance decreases, the heartbeat data decreases; and when the acceleration variance remains constant, the heartbeat data can reach a certain level and then maintain it. Referring back to FIG. 7D, the acceleration variance is in inverse proportion to the exercise intensity, but the exercise intensity can be affected by the variance of the air pressure data. Using such features, the processor 450 can better determine whether the currently measured heartbeat data is normal, and can predict the next heartbeat data based on at least one of reliable previous heartbeat data, the acceleration variance, and the air pressure variance.
For example, when the exercise type is the elliptical training, the processor 450 can calculate the exercise intensity using an elliptical model 831. When the exercise type is the rowing machine, the processor 450 can calculate the exercise intensity using a rowing machine model 833. When the exercise type is the flat-surface walking or running, the processor 450 can calculate the exercise intensity using a flat-surface walking or running model 835. When the exercise type is the slope walking or running, the processor 450 can calculate the exercise intensity using a slope walking or running model 837. For the slope walking or running model 837, the processor 450 can calculate the exercise intensity using air pressure data obtained from the air pressure sensor 813. The processor 450 can determine a heartbeat prediction range 850 based on the exercise intensity, the exercise type 830, and the current heartbeat data. That is, for the high exercise intensity, the next heartbeat data to be measured can vary more than the current heartbeat data, or maintain its level. Alternatively, for the low exercise intensity, the next heartbeat data to be measured can vary less than the current heartbeat data, or maintain its level. The variation of the heartbeat data based on the exercise intensity can increase or decrease according to how long the exercise is performed or how drastically the exercise intensity changes.
Thus, the processor 450 can determine different heartbeat prediction ranges according to the exercise intensity, the exercise type, and the current heartbeat data. For example, the processor 450 can determine a first heart rate prediction range 851 of the elliptical training using the elliptical model 831 and the current heartbeat data obtained by a heart rate monitor sensor 815. The processor 450 can determine a second heart rate prediction range 853 of the rowing machine using the rowing machine model 833 and the current heartbeat data obtained by the heart rate monitor sensor 815. The processor 450 can determine a third heart rate prediction range 855 of the flat-surface walking or running using the flat-surface walking or running model 835 and the current heartbeat data obtained by the heart rate monitor sensor 815. The processor 450 can determine a fourth heart rate prediction range 857 of the slope walking or running using the slope walking or running model 837 and the current heartbeat data obtained by the heart rate monitor sensor 815. The first heart rate prediction range 851 through the fourth heart rate prediction range 857 can have different predicted heartbeat data values.
FIG. 9 is a flowchart of a method for determining a heartbeat prediction range in an electronic device according to various embodiments.
Referring to FIG. 9, in operation 901, the electronic device 400 (e.g., the processor 450) can determine whether the acceleration variance is greater than a predetermined threshold. Motion sensor data (e.g., first motion sensor data) obtained from the sensor module 410 for a first duration can be acceleration data. For example, the acceleration data can be motion sensor data obtained from the motion sensor 413. Also, the processor 450 can obtain air pressure data from the air pressure sensor 415. The processor 450 can calculate the variance of the acceleration data in the obtained motion sensor data.
For example, acceleration variance of the flat-surface running can be greater than acceleration variance of the flat-surface walking. Alternatively, acceleration variance of the elliptical training can be smaller than variance of the rowing machine. The processor 450 can define a threshold for determining whether the acceleration variance is considerable or not. For example, the threshold can be greater than the acceleration variance of the walking and smaller than the acceleration variance of the running. Alternatively, the threshold can be greater than the acceleration variance of the elliptical training and smaller than the acceleration variance of the rowing machine. The threshold can be set by the electronic device 400 or the user.
The processor 450 can determine the "great" acceleration variance when the acceleration variance exceeds a preset threshold, and determine the "small" acceleration variance when the acceleration variance falls below the same or a different threshold. The processor 450 can perform operation 905 for the considerable acceleration variance, and perform operation 903 for the small acceleration variance.
When the acceleration variance is not considerable, the electronic device 400 (e.g., the processor 450) can determine a first exercise intensity in operation 903. The processor 450 can determine the exercise intensity of the user motion in the first duration as the first exercise intensity. According to various embodiments, the processor 450 can divide the exercise intensity into various levels, such as three, five, or ten levels. Hereafter, for ease of understanding, the exercise intensity is described as three levels, but is not limited thereto. For example, the processor 450 can divide the exercise intensity into three levels: high, moderate, and low. The first exercise intensity corresponds to the low exercise intensity, which is the lowest exercise intensity.
When the acceleration variance is detected as "great" (e.g., greater than the relevant threshold), the electronic device 400 (e.g., the processor 450) can whether the air pressure changes in operation 905. The processor 450 can determine variance of the air pressure data obtained by the air pressure data 415 in the first duration. The air pressure data can be detected when the user climbs or descends a slope such as hiking or stairs. Although the air pressure data is detected, a smooth slope may not heavily affect the exercise intensity and thus the processor 450 can determine a slope threshold by considering the effect of the slope on the exercise. For example, the slope threshold can be set by the electronic device 400 or the user.
When the air pressure data exceeds the slope threshold, the processor 450 can determine variance of the air pressure data. When the air pressure data falls below the slope threshold, the processor 450 can determine no variance of the air pressure data. With the variance of the air pressure data, the processor 450 can conduct operation 909. Without the variance of the air pressure data, the processor 450 can conduct operation 907.
Without the variance of the air pressure data, the electronic device 400 (e.g., the processor 450) can determine a third exercise intensity in operation 907. The processor 450 can determine the exercise intensity of the user motion in the first duration, as the third exercise intensity. The third exercise intensity corresponds to the high exercise intensity, which is the highest exercise intensity.
With the variance of the air pressure data, the electronic device 400 (e.g., the processor 450) can determine a second exercise intensity in operation 909. The processor 450 can determine the exercise intensity of the user motion in the first duration, as the second exercise intensity. The second exercise intensity corresponds to the moderate exercise intensity, which is the medium exercise intensity.
According to various embodiments, without the variance of the air pressure data, the processor 450 can determine the second exercise intensity; with the variance of the air pressure data, the processor 450 can determine the third exercise intensity. For example, uphill walking/running can have relatively smaller acceleration variance than flat-surface walking/running yet a higher exercise intensity, because the acceleration variance decreases while the air pressure changes considerably. Conversely, downhill walking/running can have relatively greater acceleration variance than flat-surface walking/running and a faster speed as the air pressure changes greatly, so the acceleration variance can rapidly increase. When the air pressure changes greatly, the processor 450 can lower the weight with which the acceleration variance is reflected in the exercise intensity. In this case, the exercise intensity increases only slightly even when the acceleration variance increases severely.
According to various embodiments, the processor 450 can adjust the weight with which the acceleration variance is reflected in the exercise intensity, based on the air pressure change. For example, when detecting a change in the air pressure data, the processor 450 can lower that weight. With no variance of the air pressure data (e.g., flat-surface walking or running), the processor 450 can increase that weight.
In operation 911, the electronic device 400 (e.g., the processor 450) can determine an exercise type based on the exercise intensity. For example, the exercise type corresponding to the first exercise intensity can include the walking or the elliptical exercise. The exercise type corresponding to the second exercise intensity can include the uphill walking/running or the downhill walking/running. The exercise type corresponding to the third exercise intensity can include the running or the rowing machine.
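The decision flow of operations 901 through 911 can be sketched as a small decision tree. The thresholds and the return format are illustrative; as the text states, the actual thresholds can be set by the electronic device 400 or the user.

```python
def classify_exercise(acc_var, pressure_var, acc_threshold, slope_threshold):
    """Map acceleration variance and air pressure variance to an
    exercise intensity level (1-3) and candidate exercise types,
    following the flow of operations 901-911."""
    if acc_var <= acc_threshold:                      # operation 903
        return 1, ("walking", "elliptical")
    if pressure_var > slope_threshold:                # operation 909
        return 2, ("uphill walk/run", "downhill walk/run")
    return 3, ("running", "rowing machine")           # operation 907
```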
HR_Pred = HR_Cur × (1 + C_Exer)    [Equation 5]
HR_Pred: predicted HR
HR_Cur: current HR
C_Exer: exercise coefficient
In operation 915, the electronic device 400 (e.g., the processor 450) can determine a heartbeat prediction range based on at least one of the exercise intensity, the exercise type, and a current heart rate. The current heart rate can indicate the first heartbeat data acquired in the first duration. For example, the processor 450 can set an exercise coefficient based on the exercise intensity and the exercise type, and calculate predicted heartbeat data using the exercise coefficient and the current heart rate. The processor 450 can calculate the predicted heartbeat data based on Equation 5.
The processor 450 can determine the heartbeat prediction range by taking into account a margin of error based on the predicted heartbeat data. For example, when the predicted heartbeat data is 120 based on Equation 5, the processor 450 can determine the heartbeat prediction range to be 113 ∼ 127 based on the margin of error of ± 7.
According to various embodiments, the processor 450 can set different margins of error according to the predicted heartbeat data. For example, when the predicted heartbeat data is 100, the margin of error can be set to ± 10. When the predicted heartbeat data is 120, the margin of error can be set to ± 7. When the predicted heartbeat data is 140, the margin of error can be set to ± 5. The processor 450 may set different margins of error by considering at least one of the exercise intensity, the exercise type, and the current heart rate.
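Equation 5 together with the margin-of-error examples above can be sketched as follows; the cut-off points between the margins are assumptions, since the text only gives the margins at 100, 120, and 140 bpm.

```python
def margin_for(predicted_hr):
    """Margins of error from the text's examples: +/-10 around
    100 bpm, +/-7 around 120, +/-5 around 140."""
    if predicted_hr < 110:
        return 10
    if predicted_hr < 130:
        return 7
    return 5

def prediction_range(current_hr, exercise_coeff):
    """Equation 5, HR_Pred = HR_Cur x (1 + C_Exer), widened by the
    margin of error into a (min, max) prediction range."""
    predicted = current_hr * (1 + exercise_coeff)
    margin = margin_for(predicted)
    return predicted - margin, predicted + margin
```

For a current heart rate of 100 bpm and an exercise coefficient of 0.2, the predicted value is 120 bpm and the range 113 ∼ 127.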
According to various embodiments, the processor 450 can determine the heartbeat prediction range by further considering user's body information. For example, the user's body information can include at least one of a height, a weight, an age, a gender, a resting heart rate, a maximum heart rate based on the age, a blood pressure, body fat, and blood type. The blood pressure can differ according to the age or the gender, and the blood pressure can affect the heartbeat data. The maximum heart rate can be determined based on the age. Hence, the processor 450 can determine an accurate heartbeat prediction range by further considering the user body information together with the exercise intensity, the exercise type, and the current heart rate.
According to various embodiments, the processor 450 may set different margins of error of the predicted heartbeat data by taking into account the user's body information. For example, the processor 450 can widen the margin of error (e.g., ± 15) for a high blood pressure, and narrow the margin of error (e.g., ± 5) for a low blood pressure. The processor 450 may set different margins of error by further considering at least one of the exercise intensity, the exercise type, the current heart rate, and the user body information.
FIG. 10 is a flowchart of a method for providing information using heartbeat data of an electronic device according to various embodiments.

Referring to FIG. 10, in operation 1001, the electronic device 400 (e.g., the processor 450) can obtain motion sensor data and heartbeat data. The processor 450 can obtain the motion sensor data from the motion sensor 413 for a certain duration (e.g., a second duration) or at a time point in the second duration. Also, the processor 450 can obtain air pressure data from the air pressure sensor 415. The heartbeat data can be sensor data obtained from the heart rate monitor sensor 411 in the certain duration (e.g., the second duration). That is, the motion sensor data and the heartbeat data can be acquired after an exercise type is determined using previous sensor data. Referring back to FIG. 5, the motion sensor data and the heartbeat data obtained in operation 1001 can indicate the second motion sensor data and the second heartbeat data acquired in the second duration (e.g., in operation 507).
In operation 1003, the electronic device 400 (e.g., the processor 450) can determine the exercise type based on the motion sensor data. For example, for great acceleration variance, the processor 450 can determine a moderate exercise intensity and determine the exercise type as running or the rowing machine. For small acceleration variance, the processor 450 can determine a low exercise intensity and determine the exercise type as walking or the elliptical exercise. Also, the processor 450 can determine the exercise type based on the motion sensor data and the air pressure data. For example, when the acceleration variance is great and the air pressure variance is great, the processor 450 can determine a moderate exercise intensity and determine the exercise type as climbing (e.g., downhill walking or running). When the acceleration variance is small and the air pressure variance is small, the processor 450 can determine a high exercise intensity and determine the exercise type as climbing (e.g., uphill walking or running).
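The variance-threshold classification in operation 1003 can be sketched as below. The text gives only qualitative thresholds ("great"/"small"), so the numeric threshold values are assumptions, and the intensity/type mapping follows the text literally.

```python
# Illustrative thresholds (assumed units); the source specifies only
# "great" vs. "small" variance.
ACC_THRESHOLD = 2.0       # acceleration variance threshold
PRESSURE_THRESHOLD = 0.5  # air pressure variance threshold

def classify_exercise(acc_variance, pressure_variance=None):
    """Map sensor variances to (intensity, exercise type) as described
    in operation 1003."""
    if pressure_variance is not None:
        # Air pressure data available: distinguish the climbing cases.
        if acc_variance > ACC_THRESHOLD and pressure_variance > PRESSURE_THRESHOLD:
            return ("moderate", "climbing (downhill walking or running)")
        if acc_variance <= ACC_THRESHOLD and pressure_variance <= PRESSURE_THRESHOLD:
            return ("high", "climbing (uphill walking or running)")
    # Acceleration-only cases.
    if acc_variance > ACC_THRESHOLD:
        return ("moderate", "running or rowing machine")
    return ("low", "walking or elliptical")

print(classify_exercise(3.1))       # ('moderate', 'running or rowing machine')
print(classify_exercise(1.0, 0.2))  # ('high', 'climbing (uphill walking or running)')
```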
In operation 1005, the electronic device 400 (e.g., the processor 450) can determine whether the exercise type has changed. That is, the user may be walking on level ground, an uphill incline, or a downhill incline. Even during the same walking exercise, the predicted heart rate or the consumed calories can vary according to the exercise intensity, which changes with the slope. Accordingly, as the sensed data changes, the processor 450 can determine whether the current exercise type has changed from the previous one, and alter the configuration to ensure acquisition of more accurate heartbeat data.
When the exercise type is changed, the processor 450 can perform operation 1007. When the exercise type is not changed, the processor 450 can perform operation 1009.
When the exercise type is changed, the electronic device 400 (e.g., the processor 450) can change the heartbeat prediction range in operation 1007. The heartbeat prediction range predicts the next heartbeat data based on the current exercise intensity, the current exercise type, and the current heartbeat data. However, since the next exercise can differ in type from the current one, when the exercise type changes, the processor 450 may modify the heartbeat prediction range to ensure a more accurate prediction. The processor 450 can change the predicted heartbeat data based on the changed exercise intensity and the changed exercise type.
In operation 1009, the electronic device 400 (e.g., the processor 450) can determine whether the heartbeat data falls within the heartbeat prediction range. For example, when the heartbeat prediction range is not changed, the processor 450 can determine whether the heartbeat data obtained in operation 1001 falls within the original heartbeat prediction range. Alternatively, when the heartbeat prediction range is changed, the processor 450 can determine whether the heartbeat data obtained in operation 1001 is included in the modified heartbeat prediction range.
When the heartbeat data falls within the heartbeat prediction range, the processor 450 can conduct operation 1011. When the heartbeat data does not fall within the heartbeat prediction range, the processor 450 can conduct operation 1015.
When the heartbeat data falls within the heartbeat prediction range, the electronic device 400 (e.g., the processor 450) can determine the obtained heartbeat data as data for the measurement in operation 1011.
In operation 1013, the electronic device 400 (e.g., the processor 450) can calculate calories using the motion sensor data or the heartbeat data. For example, using the heartbeat data obtained in operation 1001, the processor 450 can calculate calories consumed by a user's motion for a certain duration (e.g., a second duration). The processor 450 can calculate the calories using the heartbeat data obtained in operation 1001 based on the exercise type, or calculate the calories using the motion sensor data and the heartbeat data acquired in operation 1001. The processor 450 can provide a user interface regarding at least one of the motion sensor data, the heartbeat data, and the calories.
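The text does not specify a calorie formula for operation 1013, so the sketch below uses the common MET-based estimate (kcal per minute ≈ 0.0175 × MET × body weight in kg); the MET values per exercise type and the default MET are illustrative assumptions, not from the source.

```python
# Illustrative MET values per exercise type (assumptions).
MET_BY_TYPE = {
    "walking": 3.5,
    "running": 8.0,
    "rowing machine": 7.0,
    "climbing": 9.0,
}

def calories_burned(exercise_type, minutes, weight_kg):
    """Estimate calories for a duration using a MET-based formula:
    kcal/min = 0.0175 * MET * body weight (kg)."""
    met = MET_BY_TYPE.get(exercise_type, 4.0)  # default MET is an assumption
    return 0.0175 * met * weight_kg * minutes

# 30 minutes of running for a 70 kg user.
print(round(calories_burned("running", minutes=30, weight_kg=70), 1))  # 294.0
```

Because the estimate scales with the MET value, a misclassified exercise type directly skews the calorie figure, which is why the flow above re-checks the exercise type before computing calories.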
When the heartbeat data does not fall within the heartbeat prediction range, the electronic device 400 (e.g., the processor 450) can correct the heartbeat data to fall within the heartbeat prediction range in operation 1015. The processor 450 can rely more on the predicted heartbeat data than on the heartbeat data obtained in operation 1001. When the heartbeat data does not fall within the heartbeat prediction range, the processor 450 can replace the heartbeat data with other heartbeat data within the heartbeat prediction range. The corrected heartbeat data can be included in the heartbeat prediction range.
According to various embodiments, the processor 450 can correct the heartbeat data based on a maximum value and a minimum value of the heartbeat prediction range, and the heartbeat data (e.g., the heartbeat data acquired by the heart rate monitor sensor). Alternatively, the processor 450 can determine at least one of the maximum value, the minimum value, and an average value of the heartbeat prediction range as the predicted heartbeat data. For example, when the predicted heartbeat data is 120, the heartbeat prediction range can be 110 ∼ 130, the minimum value of the heartbeat prediction range can be 110, and the maximum value of the heartbeat prediction range can be 130. When the heartbeat data is 105, the difference value (e.g., 5) between the minimum value and the heartbeat data is smaller than the difference value (e.g., 10) between the predicted heartbeat data and the minimum value, and thus the processor 450 can correct the heartbeat data toward the minimum value. For example, the processor 450 can correct the heartbeat data to the value 115 between the predicted heartbeat data and the minimum value.
Alternatively, when the heartbeat data is 100, the difference value (e.g., 10) between the minimum value and the heartbeat data is equal to the difference value (e.g., 10) between the predicted heartbeat data and the minimum value and thus the processor 450 can correct the heartbeat data with the minimum value. For example, the processor 450 can correct the heartbeat data to the minimum value 110 of the predicted heartbeat range. Alternatively, when the heartbeat data is 135, the difference value (e.g., 5) between the maximum value and the heartbeat data is smaller than the difference value (e.g., 10) between the predicted heartbeat data and the maximum value and thus the processor 450 can correct the heartbeat data close to the maximum value. For example, the processor 450 can correct the heartbeat data to the value 125 between the predicted heartbeat data and the maximum value.
Alternatively, when the heartbeat data is 140, the difference value (e.g., 10) between the maximum value and the heartbeat data is equal to the difference value (e.g., 10) between the predicted heartbeat data and the maximum value and thus the processor 450 can correct the heartbeat data with the maximum value. For example, the processor 450 can correct the heartbeat data to the maximum value 130 of the predicted heartbeat range. Alternatively, the processor 450 may correct the heartbeat data with the predicted heartbeat data by taking into account the exercise intensity, the exercise type, and the heartbeat data. In this case, the processor 450 can correct the heartbeat data with 120.
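The correction rules illustrated by the worked examples above (predicted 120, range 110 ∼ 130) can be sketched as follows. Reading "the value between the predicted heartbeat data and the minimum/maximum" as their midpoint is an assumption, but it is consistent with every numeric example given (105 → 115, 100 → 110, 135 → 125, 140 → 130).

```python
def correct_heartbeat(measured, predicted, low, high):
    """Correct a heart-rate sample into the prediction range, following
    the worked examples in the text (predicted 120, range 110-130)."""
    if low <= measured <= high:
        return measured                      # operation 1011: use as-is
    if measured < low:
        if low - measured < predicted - low:
            return (predicted + low) / 2     # e.g. 105 -> 115
        return low                           # e.g. 100 (or lower) -> 110
    if measured - high < high - predicted:
        return (predicted + high) / 2        # e.g. 135 -> 125
    return high                              # e.g. 140 (or higher) -> 130

for hr in (105, 100, 135, 140, 118):
    print(hr, "->", correct_heartbeat(hr, predicted=120, low=110, high=130))
```

Clamping toward the midpoint for small deviations, and to the range bound for large ones, keeps the corrected sample inside the prediction range while still reflecting the direction of the raw measurement.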
In operation 1017, the electronic device 400 (e.g., the processor 450) can determine the corrected heartbeat data as data for the measurement. Next, the processor 450 can calculate the calories using the motion sensor data or the corrected heartbeat data in operation 1013.
According to various embodiments, a method for operating an electronic device which includes a motion sensor 413 and a heart rate monitor sensor 411, can include obtaining first motion sensor data for a first duration using the motion sensor 413, and obtaining first heartbeat data for the first duration using the heart rate monitor sensor 411, determining an exercise type based on the first motion sensor data, determining a heartbeat prediction range based on at least one of the first motion sensor data, the exercise type, and the first heartbeat data, determining whether second heartbeat data obtained for a second duration falls within the heartbeat prediction range, and determining heartbeat data of the second duration based on the determination result.
Determining whether the second heartbeat data falls within the heartbeat prediction range can include, when the second heartbeat data falls within the heartbeat prediction range, determining the second heartbeat data as the heartbeat data of the second duration, and, when the second heartbeat data does not fall within the heartbeat prediction range, determining third heartbeat data in the heartbeat prediction range as the heartbeat data of the second duration.
Determining the third heartbeat data can include determining the third heartbeat data as at least one of a maximum value, a minimum value, and an average value of the heartbeat prediction range.
The method can further include, when the second heartbeat data falls within the heartbeat prediction range, calculating calories using the second motion sensor data or the second heartbeat data obtained in the second duration, and, when the second heartbeat data does not fall within the heartbeat prediction range, calculating calories using the second motion sensor data obtained in the second duration or the third heartbeat data within the heartbeat prediction range.
The method can further include determining an exercise intensity or the exercise type based on the first motion sensor data, determining predicted heartbeat data based on at least one of the exercise intensity, the exercise type, and the first heartbeat data, and determining the heartbeat prediction range by considering a margin of error based on the predicted heartbeat data.
The first motion sensor data can include acceleration data, and determining the exercise intensity or the exercise type can include determining the exercise intensity or the exercise type according to acceleration variance based on the acceleration data or variance of air pressure data using an air pressure sensor 415.
According to various embodiments, a computer-readable recording medium can include a program for obtaining first motion sensor data for a first duration using the motion sensor, and obtaining first heartbeat data for the first duration using the heart rate monitor sensor, determining an exercise type based on the first motion sensor data, determining a heartbeat prediction range based on at least one of the first motion sensor data, the exercise type, and the first heartbeat data, determining whether second heartbeat data obtained for a second duration falls within the heartbeat prediction range, and determining heartbeat data of the second duration based on the determination result.
The computer-readable recording medium can include a hard disk, a floppy disk, magnetic media (e.g., a magnetic tape), optical media (e.g., a Compact Disc (CD)-Read Only Memory (ROM), a DVD), magneto-optical media (e.g., a floptical disk), and an internal memory. An instruction can include machine code made by a compiler or code executable by an interpreter. A module or a program module according to various embodiments can include one or more of the aforementioned components, omit some of them, or further include additional other components. Operations performed by a module, a program module, or other components according to various embodiments can be executed in a sequential, parallel, repetitive, or heuristic manner. At least some operations can be executed in a different order or be omitted, or other operations can be added.
The control unit or processor may include a microprocessor or any suitable type of processing circuitry, such as one or more general-purpose processors (e.g., ARM-based processors), a Digital Signal Processor (DSP), a Programmable Logic Device (PLD), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), a Graphical Processing Unit (GPU), a video card controller, etc. In addition, it would be recognized that when a general purpose computer accesses code for implementing the processing shown herein, the execution of the code transforms the general purpose computer into a special purpose computer for executing the processing shown herein. Any of the functions and steps provided in the Figures may be implemented in hardware, software or a combination of both and may be performed in whole or in part within the programmed instructions of a computer. In addition, an artisan understands and appreciates that a "processor" or "microprocessor" may be hardware in the claimed disclosure.
While the disclosure has been shown and described with reference to certain example embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the disclosure as defined by the appended claims and their equivalents. | |
Integrating 3D virtual reality simulations in reusable e-learning courses
Segura, Álvaro; Moreno, Aitor; Müsebeck, Petra; Hambach, Sybille
In: Hambach, S. (Reviewer); Martens, A.; Urban, B.; Tavangarian, D.; Fraunhofer-Institut für Graphische Datenverarbeitung -IGD-, Institutsteil Rostock: e-Learning Baltics 2009. 2nd International eLBa Science Conference Proceedings, Rostock, Germany, June 18-19, 2009. Stuttgart: Fraunhofer Verlag, 2009, pp. 81-92.
ISBN: 3-8396-0012-X; ISBN: 978-3-8396-0012-2
2nd International eLBa Science Conference, 2009, Rostock. Conference paper, English. Fraunhofer IGD.
Keywords: e-learning; virtual reality (VR); simulation
Abstract
This paper presents an example of creating and integrating Virtual Reality simulations in SCORM-compliant Web-based e-learning scenarios. The simulations, composed of several passive and interactive parts, are added to the e-learning content, and producing them in a reusable manner requires following a defined workflow. The implemented VR simulations, shown in the results, relate to the training of technicians in wind energy generator maintenance, which is part of the e-WindTech collaborative project. The main element of the VR simulations is the 3D wind mill model, which is used to produce different offline-rendered videos and various interactive virtual worlds depending on the specific requirements of the e-learning content presented to the students.
Source: http://publica.fraunhofer.de/dokumente/N-116814.html
Video Production Specialist Job Description, Key Duties and Responsibilities
This post covers all you need to know about a video production specialist job description, including the major duties, tasks, and responsibilities they are expected to perform.
It also presents the major requirements for being considered by most companies for the video production specialist role, please read on:
What Does a Video production Specialist Do?
A video production specialist develops and executes video post-production procedures, including reviewing footage, making editorial judgments, transcoding video, creating rough cuts, adjusting audio, correcting color, and performing final editing in Final Cut Pro software.
He/she utilizes computer graphics and special effects for video post-production in accordance with the creative vision of the producer or client.
The video production specialist job description also involves designing and producing motion graphics.
Video production specialist ensures that all video products meet the required technical specifications before release.
They coordinate with other departments, such as sound, marketing, and advertising to ensure that videos are produced in accordance with the overall strategic objectives of the company or client.
They operate equipment, such as HD video cameras, professional lighting kit, and related support gear.
The video production specialist also performs duties such as maintaining shooting logs; producing shooting dailies; organizing electronic files for editing; building storyboards; and transcoding footage into edit-friendly formats.
They review scripts before production, to identify format issues and offer recommendations.
The video production specialist creates daily edits during the production process that the director can use for reference while directing cast and crew.
They ensure that all studio lights are set up correctly with both color temperature and intensity calibrated appropriately before shooting begins so as not to influence footage.
They write and perform video tests to determine the effectiveness of setup, picture quality, audio quality, lighting consistency, color temperature balance, focus issues, and lens distortion, and other issues that may compromise shot footage before shooting begins.
The video production specialist’s tasks also include helping producers scout locations for shoots at various sites.
They exhibit excellent communication skills, both verbally and in writing, working with both internal team members and external clients.
They create storyboards to help visualize the pace of each scene before shooting begins; they also write treatment or script.
They maintain current understanding of appropriate technologies; recommend upgrades or new technologies that could improve the video post-production process.
Overseeing quality control of all videos produced and ensuring that a high level of professionalism is maintained throughout all projects are also duties performed by video production specialists.
Their tasks also involve editing and finalizing videos for distribution, including color correction, sound adjustment, and adding graphic elements as required.
Video production specialists ensure that all video files are named according to established standards so as not to cause problems during the editing process.
They synchronize on-screen text with audio track using professional graphics package.
They also make sure that videos are encoded in appropriate formats for target websites, showcasing the strengths of each video.
They optimize video files to ensure they are as small as possible without compromising performance or experience.
Video Production Specialist Job Description Example/Sample/Template
The following duties, tasks, and responsibilities generally make up the job description of a video production specialist:
- Creates innovative and effective video products by correctly comprehending the producer’s or client’s vision
- Works with producers, editors, graphic designers, and other personnel to produce films that comply with the specified criteria
- Provides technical support for any equipment and software used in video production
- Ensures quality control of all video products
- Edits and finalizes videos for distribution
- Maintains knowledge of current technologies and recommends appropriate ones
- Designs and produces motion graphics
- Ensures that all technical requirements are met before releasing any video product
- Coordinates with other departments such as sound, marketing, and advertising to ensure that videos are made in line with brand and design standards and marketing campaigns
- Creates storyboards to assist filmmakers conceive of the tempo of each section before filming begins; may also write script treatments or scripts
- Assists with pre-production, location scouting, casting talent, budgeting, and scheduling as required
- Operates equipment and software required for video production, such as cameras, sound systems, and lighting
- Participates in the editing, finalization, and distribution of videos
- Works on color correction, sound adjustment, and adding graphical elements as needed.
Video Production Specialist Job Description for Resume
If you are writing a new resume as someone who is currently working as a video production specialist or has worked in that position before, you can highlight such experience in the work/job/professional experience section of your resume/CV.
You can easily create this section by applying the video production specialist duties and responsibilities given in the above job description example.
Having the professional experience section in your resume can significantly boost the impact of your resume on the recruiter to grant you an invite for an interview.
This is because the work experience section shows that you have been successful or are being successful doing the job of a video production specialist, which means you will be effective on the new job, especially if the new job requires some work experience as a video production specialist.
Video Production Specialist Requirements: Knowledge, Skills, and Abilities for Career Success
If you are seeking the job of a video production specialist, here are major requirements to meet for most employers to offer you the job:
- A Bachelor’s degree in a relevant discipline
- Experience of at least three years as a video specialist or in a similar role
- Video and video editing experience is beneficial
- Innovative thinker
- The ability to identify and resolve issues, as well as being a critical thinker and problem-solver
- A good team player
- Able to manage time efficiently
- Strong communication and interpersonal skills
- Excellence in using Final Cut Pro X and Adobe After Effects to edit video
- Knowledge and a thorough understanding of motion graphics.
Video Production Specialist Salary
The average yearly salary of a video production specialist in the United States is $48,750, which comes to $25 per hour.
Starting salaries for entry-level jobs are $37,050 per year, while most experienced professionals earn up to $67,734 per year.
Conclusion
Video production specialists are creative professionals who work with directors, producers, and other video crews to film content for TV programs.
They specialize in technical aspects of the process like lighting, sound recording, camera operation and editing.
Video production specialists also have a strong understanding of the program requirements before they even step foot on set.
These people can be found working at television stations or production studios, as well as in freelance jobs where they produce commercially sponsored videos that sell products online.
This post is helpful to individuals interested in a video production specialist career, to increase their knowledge about the role and be convinced if that is what they want to do.
Recruiters/employers in the process of hiring for the video production specialist role can also use the sample video production specialist job description provided on this page in creating a detailed one for their organizations. | https://jobdescriptionandresumeexamples.com/video-production-specialist-job-description-key-duties-and-responsibilities/ |
Merrill Lynch Global Wealth Management is a leading provider of comprehensive wealth management and investment services for individuals and businesses globally. With over 13,700 Financial Advisors and $1.9 trillion in client balances as of December 31, 2013, it is among the largest businesses of its kind in the world. Within Merrill Lynch Global Wealth Management, the Private Banking and Investment Group provides tailored solutions to ultra affluent clients, offering both the intimacy of a boutique and the resources of a premier global financial services company. These clients are served by more than 150 Private Wealth Advisor teams, along with experts in areas such as investment management, concentrated stock management and intergenerational wealth transfer strategies. Merrill Lynch Global Wealth Management is part of Bank of America Corporation.
Source: Bank of America. Merrill Lynch Global Wealth Management (MLGWM) represents multiple business areas within Bank of America’s wealth and investment management division including Merrill Lynch Wealth Management (North America and International), Merrill Lynch Trust Company, and Private Banking and Investments Group. As of December 31, 2013, MLGWM entities had approximately $1.9 trillion in client balances. Client Balances consists of the following assets of clients held in their MLGWM accounts: assets under management (AUM) of MLGWM entities, client brokerage assets, assets in custody of MLGWM entities, loan balances and deposits of MLGWM clients held at Bank of America, N.A. and affiliated banks.
The purpose of the Merrill Lynch Wealth Management “Risk Allocation Framework for Goal-Driven Investing Strategies” research chair is to develop new research on risk allocation and goals-based investing. The initiative involves the pursuit of fundamental research on risk allocation and goals-based wealth management.
The aim of the research project is to deliver a mathematically rigorous approach to investing for goals such as capital preservation, retirement income, maintenance of minimum wealth levels and preferences regarding risk and liquidity.
Any investment process should start with a thorough understanding of the investor problem. Individual investors do not need investment products with alleged superior performance; they need investment solutions that can help them meet their goals subject to prevailing dollar and risk budget constraints. This paper develops a general operational framework that can be used by financial advisors to allow individual investors to optimally allocate to categories of risks they face across all life stages and wealth segments so as to achieve personally meaningful financial goals. One key feature in developing the investment framework for goals-based wealth management is the introduction of systematic rule-based multi-period portfolio construction methodologies, which is a required element given that risks and goals typically persist across multiple time frames. | https://risk.edhec.edu/merrill-lynch-wealth-management-risk-allocation-framework-goal-driven-investing-strategies |
The question that’s been burning me up (pun sort-of intended) since kidhood is: when are we finally going to get off the stuff? It continues to amaze me, what with more solar energy striking the Earth in an hour than humankind uses in a year, and with Einstein’s e=mc2 equation long since having found practical application in nuclear energy (and more destructive forms of same).
Oh, I know, the reasons are legion, from the realities of legacy energy infrastructure to facts about energy density to what I believe is a plain-old failure of the imagination. I recently got into a near-argument with a colleague from conservative Georgia who claimed that alternative-energy research is bunk, and all these newfangled vehicles are doomed to failure. After he promulgated the usual Fox News canards (“none of this will work without subsidies”, “it’s just shifting pollution to the electrical grid”, etc.), I finally blurted out, “Come on; we’re scientists. We’re not supposed to throw up our hands and say ‘impossible!'”
So to that end, when the time came for me to pick up a rental car for a day’s worth of errands and a visit to some friends down the Peninsula, I responded with an enthusiastic “yes, please!” when the agent at the car-rental shop asked, “do you want to try the new Nissan Leaf?”
I was a bit concerned about the oldest bugaboo in the book: although I’d heard electric cars are a pleasure to drive (responsive, power-efficient engines), the limiting factor for them has always been energy storage. While gasoline, for all its faults, can store an incredible amount of energy (and is easily portable, being a liquid at room temperature), battery technology has always been comparably weak. That’s why batteries typically only power small devices, why they’re constantly running down and needing to be recharged or replaced… and why, in spite of a heritage going back to the earliest days of the automobile, they never caught on the way their petroleum-distillate counterparts did. Only now, with historic rises in oil prices and talk of peak oil — coupled with incremental advances in battery efficiency — are electric cars starting to come out in greater (though still modest) numbers. So much so that my local Enterprise outlet now offers them as part of their regular fleet.
“You can definitely make it to Menlo Park and back,” said the agent as he sat me down in the car and gave me a tour of its Internet-age instrument cluster. “Just make sure to put it in ‘Eco mode.’” Apparently in this mode the vehicle uses less energy, sacrificing a bit of performance to do so.
Right away it was apparent that the boosters of electric-drivetrain vehicles were right: the car drives fantastically. Even in Eco mode it was smooth, zippy, and (of course) quiet. The range indicator ticked off the miles remaining at a slightly slower pace than actual miles driven. Since the car is all-electric, it’s able to recapture some of the energy lost in braking to charge the battery, a process known as regenerative braking. The car’s initial range was 80 miles (a bit more with Eco mode on); after half a day’s worth of errands, I still had some 70-plus miles left — more than enough for the 28 or so miles each way to get down to my friends’ party in the Peninsula that evening.
Still, I wanted to see if I could top up, and to that end went looking for one of those electric vehicle charging stations the eco-minded leadership in San Francisco has been busily installing these past few years. A few taps on my smartphone and I found one nearby — in the parking garage of a nearby Costco superstore.
Easier said than done, however: no signage existed inside the mammoth parking area, and only care of an employee did I discover the two spots, forlornly tucked away behind the exit and a tire service center. Both were available free of charge, and both had charging receptacles… but neither one of them featured the type of outlet my Nissan Leaf called for.
Still, I figured I’d be fine, and with a friend in tow I headed down south. Again, even in Eco mode the car was as assured on the highway as any comparable subcompact… but neither of us could help turning our eyes toward the range meter: the miles kept peeling off as we cruised at typical highway speeds (in these parts they nudge 70 mph). Our 30 miles of distance consumed 45 miles of charge… which means we were down to 25 miles and wouldn’t have enough to get home.
Fortunately, my suburban friends have a garage (and a long three-pronged extension cord), and with the car’s own cord and adapter, we plugged the thing in and settled in for some St. Paddy’s Day revelry. Five hours later, the charge was up to 45 miles… which based on previous consumption patterns would barely be enough to get back to San Francisco. As it happens, SFO airport (and another Enterprise outlet) was on the way, so I figured if we were running low on juice stopping there to swap vehicles was always an option.
This time, however, I really took it easy (I am, admittedly, a bit of a leadfoot): we turned off the climate control, and drove a grandpa-style 55 mph the whole way home. This time the mileage corresponded more closely to the range meter, and we got back with over a half-dozen miles to spare. This was more than enough to head back to the rental place the next morning, where I experienced the best part of the whole adventure: no need to fill up!
So what’s the verdict? I think this is a stellar step in the right direction, but a combination of improved battery range and recharging infrastructure needs to be in place to make this truly viable. This has been the chicken-and-egg issue with electric vehicles all along: the expense of getting everything in place is staggering. Sure, this was true back in the early 20th century with gasoline vehicles as well, but the difference then was that no motor vehicle infrastructure existed, so any improvements in fueling and roadways were welcomed (and adoption took a long time, too: from the invention of the motorcar in the 1880s to widespread popularity in the 1920s spans nearly half a century). But now we already have an infrastructure in place, a gasoline-based system, and turfing out that “legacy” investment is proving a tough nut to crack. And until there is a reliable electric infrastructure in place, people are understandably gun-shy about plunking down their ducats… and so continues the vicious cycle.
There are a number of ways out of this: for one thing, a “hot-swappable” battery would be great. At least one company is working on this. A global, universal standard would be nice as well — I shouldn’t have had an issue finding charging stations compatible with my model of vehicle. And improved battery life (and an accurately-rendered one at that) is critical: until cars can get 300-plus miles to a charge, I doubt many of us will be interested (one manufacturer is pretty close to that benchmark already). Sure, having a “city car” is nice, but for most of us, the ability to go anywhere, anytime, is what we pay to have an automobile for. As with all things environmental, it’ll only be when green technologies offer comparable features to their non-green counterparts that they’ll really become popular.
Ultimately, then, I think this is going to take a coordinated, concerted effort on the part of governments, corporations, and the public — a notoriously difficult combination to get in sync. But similar such efforts have been successful in the past, from winning World Wars to putting men on the moon. In spite of initial hurdles, I’m excited to see what comes next — and will definitely be first in line when this technology is more mature. | http://www.davidjedeikin.com/category/energy/ |
Introduction to the theory and application of ethnographic and qualitative methods in educational settings with special emphasis on applications for educational linguistics, educational anthropology, and research related to language arts instruction. Surveys the basic rationale for qualitative/ethnographic inquiry and basic concepts and methods for applications in teacher-as-researcher approaches and for action research. Same course as LING 595. Letter grade only (A-F).
This course was completed in Spring 2007.
Artifacts
Article Critique
The assignment was to take an article and critique the qualitative research study and its results. The goal of this assignment was twofold. First, I needed to read the article and critique its qualitative research methods and results. Having just learned the meaning and methods of a qualitative research study, this was challenging. Second, the assignment taught the class how to look at scholarly literature and evaluate it for potential use in a project. This assignment was preparation for the next and more challenging assignment, the research proposal.
The article critique aligns with the learning objectives for introducing qualitative research methods to students.
Research Proposal
This assignment was to develop a qualitative research proposal complete with a literature review, study parameters, and results. The assignment was meant to mimic a master's research thesis or project in that it had many of the major sections one would expect to find in such a paper; however, it relied solely on qualitative research methods.
This project was my first introduction to doing such a project. I found the academic research and writing difficult at first, but once I narrowed down my topic and found a few reliable references, it became a bit easier.
This assignment aligns with the learning objectives for qualitative inquiry, basic concepts and methods for applications in teacher-as-researcher approaches, and for action research.
Reflection
The reason I chose to take qualitative research methods instead of quantitative research methods was how it applied to my work environment in higher education. As a college webmaster, I found that I frequently observed faculty and staff in my college and how they interacted with technology. This led to many discoveries about technology use and how I could improve it at the college.
The artifacts I selected reflect my journey through the class. The article critique was one of the first literature-review-style pieces where I was doing more than just looking for information that backed my writing. In this piece, I read and reflected on what I thought the author got out of the study and determined the validity of the study based on the information provided in the article. This is a critical skill for a student who may be preparing for a master's thesis or project. The second artifact is my original draft for a possible research proposal. This project took up the majority of the semester and went through five drafts, peer review, and finally a presentation. What this project taught me was that while I knew a lot about technology, I was just learning how to back that knowledge up with an academic literature review and study.
Overall, I thought the class was very informative. Having already taken statistics, I felt this class was the better choice for me, as I had very little experience with formal qualitative research methods. | https://www.brendaspot.com/education/etec-portfolio/courses/ed-p-595-qualitative-research-methods/
What leads to a successful creative collaboration? Be it music, movies, or multimedia… collaborative online communities are springing up around all sorts of shared artistic interests. Even more exciting, these communities offer new opportunities to study creativity and collaboration through their social dynamics.
We examine collaboration in February Album Writing Month (FAWM), an online music community. The annual goal for each member is to write 14 new songs in the 28 days of February. In previous work we found that FAWM newcomers who collab in their first year are more likely to (1) write more songs, (2) reach the 14-song goal, (3) give feedback to others, and (4) donate money to support the site. Given the individual and group benefits, we sought to better understand what factors lead to successful collabs.
By combining traditional member surveys with a computational analysis over four years of archival log data, we were able to extend several existing social theories about collaboration in nuanced and interesting ways. A few of our main findings are:
- Collabs form out of shared interests but different backgrounds. Theory predicts that people work with others who share their interests. But we found that, for example, a heavy-metal songwriter is less likely to collab with another metalhead than, say, a jazz pianist (who enjoys head-banging on occasion).
- Collabs are associated with small status differences. Existing theory also predicts that people tend to work with others of the same social status. In our study, members teamed up with folks of slightly different status more often than those of identical status. (There are several explanations, ranging from newcomer socialization to hero-worship.)
- A balanced effort is most enjoyable for both participants. The “social loafing” literature suggests that people are disappointed by collabs when their partner is a slacker. However, we found that the slackers themselves were disappointed, too.
To top it all off, the novel path-based regression model we use is significantly better than other standard techniques for predicting new collabs that will form (see the graphs below). This has exciting implications for recommender systems and other socio-technological tools to help improve collaboration in online creative communities.
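For readers curious what a "standard technique" for predicting new collaborations looks like, here is a minimal common-neighbors link predictor of the kind such models are typically compared against. This is not the paper's path-based model, and the tiny collaboration graph below is invented purely for illustration:

```python
# Common-neighbors baseline for link prediction: rank non-collaborating
# pairs by how many past collaborators they share.
from itertools import combinations

# Toy collaboration graph (each tuple is a past collab between two members):
collabs = {("ann", "ben"), ("ben", "cat"), ("ann", "dee"), ("cat", "dee")}

def neighbors(node):
    # Everyone this member has collaborated with so far.
    return {b if a == node else a for a, b in collabs if node in (a, b)}

def common_neighbor_score(u, v):
    # More shared past collaborators -> higher predicted chance of a new collab.
    return len(neighbors(u) & neighbors(v))

members = {n for pair in collabs for n in pair}
candidates = [p for p in combinations(sorted(members), 2)
              if p not in collabs and tuple(reversed(p)) not in collabs]
ranked = sorted(candidates, key=lambda p: -common_neighbor_score(*p))
print(ranked)
```

A path-based model generalizes this idea by scoring longer chains of intermediaries rather than only direct shared neighbors, which is one reason it can predict pairs a common-neighbors count would miss.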
For more, please see our full paper Let’s Get Together: The Formation and Success of Online Creative Collaborations. | https://blog.humancomputation.com/?p=6135 |
Questions tagged [c++]
C++ is a compiled general-purpose programming language that adds object-oriented features and other enhancements to the C language. It is popular for both embedded (including robotics) and PC software development.
62 questions
- How do I get competent in using c++ for my projects? [closed] (0 votes, 2 answers, 44 views): I am a PhD student, working on Sensor Fusion and estimation problems. I would like as I finish my PhD to have acquired sound knowledge in what seems to be the industry norm for working with real ...
- Subscriber to array type overwriting data from different publishers! (1 vote, 1 answer, 92 views): I have been witnessing some bizarre behaviour with my very simple node. I have a custom message that contains uint8 type (it's 8 elements long). I have a ...
- How to properly initialize every new pose in a Visual SLAM algorithm (namely DSO)? (2 votes, 1 answer, 211 views): My question is a bit specific, because it is linked to a certain algorithm. Therefore I didn't find any other solutions on how to go about this problem. If you could refer me to research papers, ...
- How to get unix/posix time stamp in header of ROS msg? (3 votes, 2 answers, 62 views): I am using two sensors each connected to different machine (each machine separately runs Ubuntu 16.04 and ROS Kinetic). When I echo topic on these two machines, I ...
- How to implement path planning algorithm considering orientation? (2 votes, 3 answers, 177 views): I am developing GUI c++ program to test path planning algorithms: A*, Dijkstra, ....etc in occupancy grid map. I found many open source codes, but I need to modify them to design a GUI for testing ...
- Make robot drive as far from obstacles as possible (2 votes, 1 answer, 48 views): I am trying to make robot drive around the room, avoiding obstacles with pathfinding. The problem is I get few waypoints from pathfinding algorithm for robot to drive to (green dots), but it drive a ...
- Controller algorithm implementation in ROS/Gazebo (1 vote, 1 answer, 69 views): I am doing some robotic simulations in ROS/Gazebo and wondering what is the best way (programming-wise since I don't have a CS background) to implement a robot's motion controller. Here are the ...
- Some Kalman filter implementation queries (0 votes, 1 answer, 58 views): Just to clear some doubts: Qn 1. Does kalman filter require constant time step? From my own study, it does not seem necessary to have a constant time step. You just need to take into account time ...
- OpenGL C++: Inverse kinematics using Jacobian Transpose doesnt work (0 votes, 0 answers, 69 views): I am using glm library and OpenGl for this. This is might not be directly related to Robotics but I guess Inverse kinematics is used there I have 3 joints Here is my code ...
- /odom topics not matching ERROR (0 votes, 2 answers, 365 views): I have an existing ROS workspace which was built in C++. I am trying to implement a node built by The Construct in python which reads distance travelled from the ...
- Command line boolean parameters to ros node (3 votes, 1 answer, 95 views): I'm working with ROS melodic and Gazebo 9.9.0 on an Ubuntu 18.04.2 LTS. I want to get two boolean parameters from command line. To do it, I have this code: ...
- How Can A Total Beginner Become A Skilled Roboticist? (4 votes, 2 answers, 127 views): Say the only computer skill you have is programming in C/C++...what all things would you have to learn in order to be an adept full stack roboticist, one who can single-handedly build an autonomous/AI ...
- Any way to program a robot remotely? (1 vote, 0 answers, 44 views): So Udacity, Kuka and KIT are/were offering a Kuka challenge. Basically, the participants got to write code and run it remotely on a Kuka robot based in the KIT labs in Germany and submit their code ...
- Adjusting the PWM frequency and duty cycle to achieve the desired angular velocity in differential drive robots (4 votes, 1 answer, 184 views): I am practicing C++ and intro to robotics using a simple differential drive robot and a raspberry pi. Also following along the GA Tech course on control of mobile robotics. The implementation I am ...
- How to setup CLion for ROS? (14 votes, 5 answers, 8k views): How can I setup the C++ IDE CLion to display documentation and auto completion correctly when working with ROS?
- Real world dynamic problem suggestion for PID [closed] (0 votes, 1 answer, 40 views): I need you guys to suggest me a problem that requires to change PID coefficients in different times. Lets say every 1 minutes the environment should change and readjust the PID’s parameters. Note that ...
- Detecting and counting the number of junctions (1 vote, 0 answers, 110 views): So I am making a robot which is to follow a line. The field has both +(junctions) as well(by the intersection of two lights). I am yet to figure out what exactly is the best way to track the number of ...
- How do I control a servo using a beaglebone black running ubuntu (1 vote, 1 answer, 1k views): I have a BeagleBoneBlack and would like to use it to control a servo for my robot. I'm mostly programming in ros and as such am looking preferably for a c++ solution. Is there an easy way of ...
- Multiple View Triangulation method used by COLMAP (2 votes, 1 answer, 299 views): I'm looking at how COLMAP does multi-view triangulation. I can't work out what this function is doing. I can't find any formulas which look similar. The input "proj_matricies" come from pose data, the ...
- ROS: Sensor data via USB (2 votes, 2 answers, 263 views): Im working on a project and i want to use a sensor. I can plug in the sensor via usb. I do not need any driver for the sensor. My task is to have an access to the sensor data. I have followed the ...
- using motor controllers with Raspberry instead of Arduno. Is it just me or is everyone ok with bad sofftware support? (0 votes, 2 answers, 136 views): I bought a Roboclaw controller. Since then, have looked around for other. The software support seems to be the same (to me:sad) state. They all seem to provide Arduino code examples, however, then I ...
- Are both of C++ and Python necessary in ROS (3 votes, 1 answer, 429 views): I'm a newbie on ROS and I'm trying to figure out how ROS works so I'm installing ROS from source. I've found that most of ROS packages contains two kinds of codes: C++ and Python. For example, here ...
- How to become proficient in software development for an aspiring roboticist? [closed] (1 vote, 1 answer, 79 views): I am a masters student specializing in Robotics. I have a bachelors degree in Mechanical engineering and hence a bit sloppy with programming languages. MATLAB is one thing I am proficient in. But ...
- Robot positioning problem (5 votes, 1 answer, 178 views): The problem i am facing is to try and calculate the x and y position of a robot with dead reckoning. Reading from the encoders and getting proper rotations of the wheels of my robot works. The robot ...
- How to make a line following algortihm for an A.R Drone 2.0? [closed] (1 vote, 0 answers, 610 views): I am trying to develop a line following algorithm where a drone will detect a bounding box and follow what is inside the bounding box. I am filtering all the colors to only see the color white. Once ...
- Is there any C++ library I could use to program a robotic manipulator involving forward and inverse kinematics? (0 votes, 3 answers, 2k views): I came across robotics library (RL), but quite unclear about its real purpose. Is it a FK/IK solver library or simply an graphical simulator?. RL has poor documentation, so its not clear how to use it....
- Computationally efficient way to represent joint C space for a multi-robot RRT (3 votes, 2 answers, 122 views): I am working on writing code for a coordinated multi-robot rapidly exploring random tree (RRT) based planner, which would naturally involve a lot of sampling, nearest neighbor searching and 'radius' ...
- how to move my robot to the assigned coordinates (1 vote, 1 answer, 1k views): I am working on an ground surveillance robot using an Arduino mega for programming, am using components like the HMC5883L compass, Adafruit GPS for assigning of coordinates (latitude and longitude) ...
- Kalman filter prediction questions [closed] (3 votes, 2 answers, 246 views): I have a dataset where measurements were taken at 1 Hz, and I am trying to use a Kalman filter to add predicted samples in between the measurements, so that my output is at 10 Hz. I have it ...
- Create a simple C++ client Application to control KUKA's Robot-arm LBR iiwa via FRI (4 votes, 1 answer, 7k views): Until now I have been programming the robot using Java on KUKA's IDE "KUKA Sunrise.Workbench", what I want to do is control the robot arm via my C++.Net application (I would use a camera or Kinect to ...
- VFH+ (Vector Field Histogram+): Is it possible to choose a candidate sector without a set goal point? (0 votes, 1 answer, 307 views): Good day I am currently implementing the VFH algorithm. Is it possible to configure the algorithm such that a reactionary motion is generated at the presence of an obstacle? I have been able to ...
- Programming Inverse Kinematics in C++ (4 votes, 5 answers, 6k views): I want to write my own kinematics library for my project in C++. I do understand that there are a handful of libraries like RL (Robotics Library) and ROS with inverse kinematics solvers. But for my ...
- Robot Graphical Representation in Real Time (1 vote, 1 answer, 77 views): I'm working with a robot intended to be placed in a tele-echography environment. To control the robot I'm using a 6D space mouse that control each degree of freedom of the robot. However, since the ...
- Can i use a predictive kalman filter to 'increase' my sample rate? (1 vote, 2 answers, 343 views): I have a slam algorithm that outputs at around 30Hz, an implementation of ORBSLAM2. https://github.com/raulmur/ORB_SLAM2 I am reading this into a renderer that expects 60+ Hz. Because my sample ...
- Scaling monocular SLAM with another source? (1 vote, 0 answers, 192 views): I have implemented a stable monocular slam tracking system, based on ORBSLAM2. I am trying to find a way to add real-world distance/scale to this. At the same time, I am running a (less stable) ...
- Save Depth Map Video (0 votes, 1 answer, 233 views): I have recently purchased an Orbbec Astra camera, which uses the same technology and produces the same style depth map as a Microsoft Kinect. What would be the correct file format to save the depth ...
- Combine individually working cartesian coordinates (0 votes, 1 answer, 301 views): I am trying to control a Dobot arm. The arm moves with angles whereas I need to work with cartesian coordinates. From inverse kinematics equations and polar coordinates I have implemented x,y and z ...
- CompressedImage to an Image in a node (0 votes, 1 answer, 930 views): Update Hey I have the following subscriber on Nvidia TX1 board running on an agricultural robot. we have the following issue with subscribing to Sensor_msgs::Compressed: ...
- Stereo Vision Using Compute Module: Pi camera synchronization (2 votes, 1 answer, 1k views): Good day, I am currently working on an obstacle avoiding UAV using stereo vision to obtain depth maps. I noticed that the quadcopter would sometimes not steer to the correct direction. I am using ...
- TCP Communication with PCDuino (0 votes, 2 answers, 91 views): I'm working on a robot that is controlled by an xbox controller connected to a windows computer and commands are sent to a pcduino through a tcp connection. I have it working by sending a string of 1'...
- Change Message Interval ArduPilot (1 vote, 1 answer, 520 views): I am using Mavlink protocol (in c++) to communicate with the ArduPilotMega, I am able to read messages such as ATTITUDE for example. I am currently getting only 2Hz (message rate) and I would like to ...
- How to split tasks between interrupts and the main loop on a bare metal controller? (3 votes, 1 answer, 265 views): I'm working on a robotics project where I have 3 services running. I have my sensor DAQ, my logic ISR (motor controller at 20kHz) and my EtherCAT slave controller. DAQ and EtherCAT run in the idle ...
- How do I compute the inverse kinematics given a desired transformation matrix? (2 votes, 1 answer, 127 views): I am at the moment trying to implement an inverse kinematics function which function is to take a desired transformation matrix, and the current transformation matrix, and compute the Q states that is ...
- C++ and Create 2 (-1 votes, 1 answer, 534 views): I am trying to use C++ to talk to the Create 2 robot. Does anyone have basic code to write/read from the Create 2 using C++ or C? I am having trouble with converting Create 2 commands (like ...
- Implementation of inverse kinematics solution in c++ (4 votes, 1 answer, 953 views): I am having some issue with implementing a least square solution of the inverse kinematics problem. The q configuration I get are rather large, or makes no sense, so I was hoping someone here could ...
- Implementing an analytic version of an inverse kinematic (2 votes, 2 answers, 341 views): People have recommended me implement an analytic version of inverse Jacobian solver, such that I won't be forced only the least square solution, but would have an local area of solution near to the ...
- Quadcopter program execution time optimization using Raspberry Pi by increasing i2c baudrate (0 votes, 1 answer, 405 views): Is it possible to speed up execution time of a c++ program in raspberry pi solely by increasing the i2c baudrate and increasing the sampling frequency of the sensors? I have the issue of sudden ...
- inverse kinematics osciliations.. (3 votes, 1 answer, 129 views): I am the moment having some issues with an Jacobian going towards a singularity (i think)as some of its values becomes close to zero, and my robot oscillates, and therefore thought that some form of ...
- Does C have advantages over C++ in robotics? [closed] (-1 votes, 2 answers, 765 views): I want to build robots, and right now I aim to work with Arduino boards I know that they are compatible with c and c++, so was wondering which language is better for robotics in general? I know how ...
- Is there a simpler way than ROS for 5 DOF Dynamixel arm control (2 votes, 2 answers, 738 views): I will have a 5 or 6 DOF arm build with Dynamixel or HerculeX smart servos. I need to move the gripper along Cartesian trajectory, which I will calculate in my C++ application. I looked at ROS, but ... | https://robotics.stackexchange.com/questions/tagged/c%2B%2B?sort=active
Miranda Scolari, Sabine Kastner; Mechanisms of attentional control in fronto-parietal cortex across spatial positions. Journal of Vision 2013;13(9):288. doi: https://doi.org/10.1167/13.9.288.
© ARVO (1962-2015); The Authors (2016-present)
An abundant and varied set of studies has established that attention can be directed to a particular region of space, such that visual input at an attended location is preferentially processed over input at unattended locations (space-based attention or SBA), and this processing bias is manifested in visual cortex. More recently, it has been explored how SBA modulation of sensory signals is controlled via higher-order cortical networks. Based on early patient observations that have since been refined by contemporary imaging studies, topographic subunits within fronto-parietal cortex have been implicated as a source of control. This control network is best described as a gradient of attention across space, wherein the two hemispheres operate in concert by generating attentional weights in favor of the contralateral visual field (interhemispheric competition account). When a relevant item appears in one visual field, the weighting sum is biased in the corresponding direction. We hypothesized that the magnitude of the spatial bias should dictate not only the attended hemifield, but also the eccentricity at which attention is focused. Using high-resolution fMRI, we estimated interhemispheric competition (via a contralateral bias index) for each topographic subunit of fronto-parietal cortex to determine how the control network signals fluctuations in stimulus position. Subjects either attended to a single flickering grating appearing at one of four eccentricities from fixation (2°, 5°, 8°, or 12°), or to an RSVP letter stream at fixation. As expected, SBA effects were observed in all visual (V3v-V7), parietal (IPS0-5, SPL1) and frontal (FEF, PreCC) areas. The contralateral bias showed no systematic patterns across eccentricities in visual or frontal cortex. However, the bias tended to increase with eccentricity in parietal regions. 
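A contralateral bias index of this kind is often computed as a normalized difference between responses to contralateral and ipsilateral stimuli. The abstract does not spell out the exact formula, so the (contra - ipsi) / (contra + ipsi) form and the sample response values below are assumptions for illustration only:

```python
# Sketch of a contralateral bias index (assumed normalized-difference form).

def contralateral_bias(contra, ipsi):
    # +1 = purely contralateral response, 0 = no bias, -1 = purely ipsilateral
    return (contra - ipsi) / (contra + ipsi)

# Hypothetical mean responses for one parietal subunit at two eccentricities:
near = contralateral_bias(1.2, 0.8)   # modest bias for a near-fixation stimulus
far = contralateral_bias(1.5, 0.5)    # stronger bias for a peripheral stimulus
print(near, far)
```

The reported parietal pattern corresponds to this index growing with eccentricity, as in the hypothetical near-versus-far comparison above.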
This is consistent with previous hypotheses that frontal cortex is primarily involved in general, goal-directed components of attention, whereas parietal cortex is primarily involved in stimulus-driven components. | https://jov.arvojournals.org/article.aspx?articleid=2142395&resultClick=1 |
The Kaiser Family Foundation (KFF) Survey of Non-Group Health Insurance Enrollees is the third in a series of surveys examining the views and experiences of people who purchase their own health insurance, including those whose coverage was purchased through a state or federal Health Insurance Marketplace and those who bought coverage outside the Marketplaces. The survey was designed and analyzed by researchers at KFF. Social Science Research Solutions (SSRS) collaborated with KFF researchers on sample design and weighting, and supervised the fieldwork. KFF paid for all costs associated with the survey.
Because the study targeted a low-incidence population, the sample was designed to increase efficiency in reaching this group, and consisted of three parts: (1) respondents reached through random digit dialing (RDD) landline and cell phone (N=142); (2) respondents reached by re-contacting those who indicated in a previous RDD survey that they either purchased their own insurance or were uninsured (N=234); (3) respondents reached as part of the SSRS Omnibus survey (N=410), a weekly, nationally representative RDD landline and cell phone survey. All RDD landline and cell phone samples were generated by Marketing Systems Group.
A multi-stage weighting process was applied to ensure an accurate representation of the national population of non-group enrollees ages 18-64. The first stage of weighting involved corrections for sample design, including accounting for the likelihood of non-response for the re-contact sample, number of eligible household members for those reached via landline, and a correction to account for the fact that respondents with both a landline and cell phone have a higher probability of selection. In the second weighting stage, demographic adjustments were applied to account for systematic non-response along known population parameters. No reliable administrative data were available for creating demographic weighting parameters for this group, since the most recent Census figures could not account for the changing demographics of non-group insurance enrollees brought about by the ACA. Therefore, demographic benchmarks were derived by compiling a sample of all respondents ages 18-64 interviewed on the SSRS Omnibus survey during the field period (N=7,601) and weighting this sample to match the national 18-64 year-old population based on the 2015 U.S. Census Current Population Survey March Supplement parameters for age, gender, education, race/ethnicity, region, population density, marital status, and phone use. This sample was then filtered to include respondents qualifying for the current survey, and the weighted demographics of this group were used as post-stratification weighting parameters for the standard RDD and omnibus samples (including gender, age, education, race/ethnicity, marital status, income, and population density). A final adjustment was made to the full sample to control for previous insurance status (estimated based on the combined RDD and omnibus samples), to address the possibility that the criteria used in selecting the prescreened sample could affect the estimates for previous insurance status.
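The post-stratification idea in the second weighting stage can be illustrated with a one-dimensional sketch: scale each respondent's weight so the weighted sample reproduces a benchmark distribution. The age groups and figures below are invented, and the actual survey adjusts many demographic dimensions jointly rather than a single variable:

```python
# One-dimensional post-stratification: weight = target share / sample share.

sample_counts = {"18-34": 50, "35-49": 80, "50-64": 120}   # respondents per group
benchmark = {"18-34": 0.38, "35-49": 0.32, "50-64": 0.30}  # target population shares

n = sum(sample_counts.values())
weights = {g: benchmark[g] * n / sample_counts[g] for g in sample_counts}

# The weighted sample now reproduces the benchmark shares exactly:
for g, w in weights.items():
    weighted_share = w * sample_counts[g] / n
    print(g, round(w, 3), weighted_share)
```

Underrepresented groups (here the youngest) get weights above 1 and overrepresented groups get weights below 1, which is why telephone surveys routinely make their largest adjustments on age.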
Weighting adjustments had a minor impact on the overall demographic distribution of the sample, with the biggest adjustments being made based on age (this is common in all telephone surveys, as younger respondents are the most difficult to reach and convince to participate). Weighted and unweighted demographics of the final sample are shown in the table below.
All statistical tests of significance account for the effect of weighting. The margin of sampling error (MOSE) including the design effect is plus or minus 4 percentage points for results based on the total sample. Unweighted Ns and MOSE for key subgroups are shown in the table below. For other subgroups the margin of sampling error may be higher. | https://www.kff.org/report-section/survey-of-non-group-health-insurance-enrollees-wave-3-methodology/ |
Artist Susanna Bauer uses natural objects that often go overlooked and breathes new life into them through the art of crocheting. She transforms dry, brittle leaves into delicate sculptures that are awe-inspiring in their detail and craftsmanship.
Bauer hasn't picked the easiest material to work with. If you've ever stepped on a brown leaf, you probably know how easily it breaks beneath your feet, making a crunching noise and splintering into several pieces. Her sculptures, however, look as though they're completed with relative ease. The artist crochets tunnels, cubes, and cones, in addition to some decorative stitching. And, it's all done without cracking the form.
Bauer's work is a balance of fragility and strength, and this duality also signifies the tenderness and tension found in human connections. She explains, “The transient yet enduring beauty of nature that can be found in the smallest detail, [is] vulnerability and resilience that could be transferred to nature as a whole or the stories of individual beings.”
Bauer currently has an exhibition at the Lemon Street Gallery in Cornwall, England, running through June 27th. | https://mymodernmet.com/susanna-bauer-crochet-leaf-sculptures/
Updated: October 28, 2021 9:40:32 am
At the heart of its significant order on the Pegasus snoop allegations, lie three key imperatives that the Supreme Court has underlined: the right to privacy of citizens; freedom of the press including the right of journalists to ensure protection of their sources; and the limits of national security as an alibi by the Government to block disclosure of facts related to citizen’s rights.
The three-judge bench, headed by Chief Justice of India N V Ramana, in ordering a probe by a committee headed by former Supreme Court judge R V Raveendran, flagged that its intervention is to “uphold the constitutional aspirations and rule of law” without being “consumed in (the) political rhetoric.”
On the government’s refusal to file a detailed response to the allegations made by the petitioners, the court cited the 2011 landmark ruling on black money Ram Jethmalani v. Union of India to say that the Government “should not take an adversarial position when the fundamental rights of citizens are at threat”.
“This free flow of information from the Petitioners and the State, in a writ proceeding before the Court, is an important step towards Governmental transparency and openness, which are celebrated values under our Constitution,” the court said.
The apex court also refused to accept the blanket argument of national security made by Solicitor General of India Tushar Mehta when he refused to file a detailed affidavit or answer whether the Centre had procured the spyware at all.
Indeed, the court moved the needle on holding the government accountable when it refused to accept the sweeping use of national security to deny information to the court. In fact, it said that from now on, the Government will have to plead its case.
“Of course, the Respondent Union of India may decline to provide information when constitutional considerations exist, such as those pertaining to the security of the State, or when there is a specific immunity under a specific statute. However, it is incumbent on the State to not only specifically plead such constitutional concern or statutory immunity but they must also prove and justify the same in Court on affidavit,” the Court said.
That’s not all. The court also rejected the government’s plea to set up its own probe. “Such a course of action would violate the settled judicial principle against bias, i.e., that ‘justice must not only be done, but also be seen to be done’,” it said.
Citing the right to privacy, the court said that “privacy is not the singular concern of journalists or social activists.”
“In a democratic country governed by the rule of law, indiscriminate spying on individuals cannot be allowed except with sufficient statutory safeguards, by following the procedure established by law under the Constitution.”
The court has set six terms of reference for the Justice Raveendran Committee that range from confirming the use of Pegasus spyware on citizens and the details of those affected, to whether the government or any other party procured the spyware to use on citizens, and the laws that could have allowed such use. Significantly, these are the same questions the government refused to answer before the court.
The court has also asked the Raveendran committee to make recommendations on a legal and policy framework to protect citizens against surveillance and enhance cyber security of the country.
Significantly, the court also emphasised freedom of press and the right of journalists to protect sources as a compelling reason to initiate the probe.
“Such chilling effect (alleged surveillance) on the freedom of speech is an assault on the vital public watchdog role of the press, which may undermine the ability of the press to provide accurate and reliable information,” the court said.
“An important and necessary corollary of such a right is to ensure the protection of sources of information. Protection of journalistic sources is one of the basic conditions for the freedom of the press. Without such protection, sources may be deterred from assisting the press in informing the public on matters of public interest,” it added.
CARDIFF MONTESSORI HIGH APPROACH
Our world has changed, and is still rapidly changing and progressing, yet our educational models have struggled to keep pace. We believe a better approach to education in the 21st century is needed: one that can truly prepare students for further studies and life beyond school. Montessori education is recognised as one of the most advanced educational approaches in the world.
Many of us have become accustomed to what we think secondary education "should" look like based on our own experience. Most people can relate to the numerous pitfalls of the traditional secondary model but have not been able to conceive of an alternative for their children. Here at Cardiff Montessori High we provide an approach that enables children to gain not just academic excellence but wider skills.
We believe students need to achieve the qualifications required to access further education. However, these qualifications should not limit what they need to study and learn. Schools should not be like a factory line. Not every child needs to leave with the same "knowledge". There is a basic level of information students need to know for their future examinations but students must also be free to explore and delve deep into areas of interest. This allows them to share their interests and expertise with one another and form a learning community. Each bringing new information and their own unique skills to the group and collaborative projects.
As a group students must learn to draw upon each others' strengths and skills to work as a collective, taking responsibility for certain areas and delegating to one another appropriately. Every member of the community must have the opportunity to shine and provide a valuable contribution. These are the skills they will need in later work. Perhaps more importantly, these are the skills they will need to navigate healthy relationships and develop a strong sense of belonging. Everyone matters and everyone is valuable.
One of the questions we are often asked is how a group of 40 students can allow for the same social skill development as being in a large school of several hundred. Our students will get to know each of their peers. In a smaller group, they must learn to work with and get along with everyone. This approach is key to social skill development. Our staff are also trained to be present when needed and fade away when not. Across the school the focus is on intrinsic rather than extrinsic motivation. We achieve excellent behaviour by ensuring students recognise that such behaviour is both to their own benefit and to the benefit of their community.
All teenagers will face challenges and conflicts. By having a smaller community, this ensures these do not go unnoticed and that our staff can intervene to guide students to resolve these themselves as they arise.
"The child's development follows a path of successive stages of independence, and our knowledge of this must guide us in our behaviour towards him. We have to help the child to act, will and think for himself."
Maria Montessori
THE ADOLESCENT
Montessori recognized that there are four stages (often referred to as planes) of development. The first plane (0-6 years) is one of self-construction, where the child is saying to us "help me to do it myself". The second plane (6-12 years) is the one in which moral reasoning is developing and the child is asking us to "help me to think for myself". The third-plane child (12-18 years) is now undergoing social construction. Much like the first-plane child, they want to do things for themselves, but this time what they seek can be articulated as "help us to do it for ourselves". Unlike the first-plane child, they now want to do things with their peers and to achieve their aims as a collective.
In the 6-12 years they have learnt, through the Montessori elementary program, how society works; now they need to experience it. Secondary is where they take the twelve years of construction and experience they have had into the next phase, leaving childhood behind as the birth of the adult begins.
Montessori recognizes that this is a time of fragility, as great changes in the physical body as well as mental and cognitive functions are taking place. They need independence from the family unit, while still requiring support from an adult guide who understands and loves teenagers. They need experiential learning, to be able to design projects and run enterprises instead of rigid lessons. They must learn how to work as part of a group and discover what their unique capabilities are and how they can contribute to their community and the world around them.
It is that deep understanding of child development that underpins the methodology in all classes throughout the school. Many have said that Montessori did not develop one method of education, but four. The way we teach and the way we interact with the students is different in each of the four planes of development in keeping with the developmental needs of the child.
“Education should not limit itself to seeking new methods for a mostly arid transmission of knowledge:
1. Field of the Invention
The present invention relates to an apparatus for performing communication with another vehicle having similar positional data based on broadcast positional data. The present invention also relates to an avoidance operation when it is recognized that there is a possibility of a collision with another vehicle.
2. Description of the Related Art
Systems for collecting a variety of information using a vehicle-mounted communication apparatus, systems for collecting destination information of each vehicle to be utilized in traffic control, and a variety of other systems have all been proposed.
Inter-vehicle communication has been proposed where a moving or stopping vehicle will notify another vehicle of its actions or of information obtained in communication between the vehicles.
With such inter-vehicle communication, unnecessary information is often transmitted and received along with useful data. For example, even if information about the future traveling or stopping of a vehicle that has already passed by is received, it usually has no meaning. Therefore, in inter-vehicle communication, there is strong demand for effectively selecting useful data from all received data.
Furthermore, Japanese patent laid-open publication No. Hei 7-333317 discloses an apparatus that transmits/receives position information between movable bodies and raises an alarm when the two movable bodies approach within a predetermined distance.
Moreover, there has been proposed a system for averting vehicle collision by performing communication between vehicles (inter-vehicle communication) and measuring the distance between the vehicles. The "SS boomerang system" is one such system. In the "SS boomerang system," an electromagnetic wave is broadcast, and a response is returned by vehicles which receive that signal. The response time is measured to calculate the distance between the vehicles to allow the possibility of collision to be reduced.
However, with inter-vehicle communication for measuring the distance between vehicles based on the response time of the transmitted electric wave, accurate motion information of another vehicle other than the distance between the vehicles is difficult to obtain.
In addition, it cannot be ignored that the avoidance operations carried out by two vehicles at risk of a collision are not necessarily appropriate when both act at once.
It is an object of the present invention to provide an inter-vehicle communication apparatus capable of receiving only necessary information in a vehicle by including positional data in a communication protocol.
It is another object of the present invention to provide a vehicular traveling control apparatus for executing control to avert a collision with another vehicle by obtaining accurate motion information of the other vehicle through inter-vehicle communication.
According to one aspect of the present invention, a communication pattern is determined based on the position of the user's vehicle. Transmission data is therefore received only by a vehicle that listens in accordance with the communication pattern at that position. Thus, a signal required for reception can be automatically selected.
For example, the present position and the projected positions two seconds later, four seconds later, . . . , n seconds later are represented in the form of time data and positional data, and the communication pattern is determined based on these data. As such a communication pattern, the PN series for the spread spectrum or the frequency hopping pattern may be adopted. For example, when the PN series is determined based on the time and position and then transmitted, the inverse spread is performed only in a vehicle using the same PN series to receive signals. In other words, on the receiving side, only signals that coincide with the future time and position of the user's vehicle are received.
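One way such a position-derived communication pattern could be realized is sketched below. The cell size, slot length, and hash-based seed derivation are illustrative assumptions, not the patent's actual scheme:

```python
import hashlib

def pn_seed(t_seconds, x_m, y_m, cell_m=50, slot_s=2):
    """Derive a spreading-code seed from a coarse time slot and position
    cell. Vehicles that project themselves into the same cell during the
    same slot derive the same seed and can despread each other's signals;
    everything else remains uncorrelated noise to the receiver."""
    slot = int(t_seconds // slot_s)
    cell = (int(x_m // cell_m), int(y_m // cell_m))
    digest = hashlib.sha256(f"{slot}:{cell}".encode()).digest()
    return int.from_bytes(digest[:4], "big")  # 32-bit seed for a PN generator

# Two vehicles expecting to occupy the same 50 m cell in the same 2 s slot
# agree on the seed without any prior coordination:
assert pn_seed(100.0, 1234.0, 567.0) == pn_seed(101.5, 1240.0, 560.0)
```

The key property is that no pairing handshake is needed: the shared future position itself acts as the rendezvous key.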
Further, it may be preferable to determine a search range for the communication pattern based on a range related to the traveling of the user's vehicle, whereby this vehicle can communicate with another vehicle in that range.
When it is determined that there is a possibility of collision, it may be preferable to select a communication pattern for emergency. This enables identification of the emergency communication from any other communication.
Furthermore, it may be preferable to narrow the search range when another vehicle approaches. This can narrow the search range to select only a specific emergency communication to be performed.
Moreover, according to another aspect of the present invention, existence probability data can be calculated based on the positional data of the user's vehicle and position error data. The accuracy of this existence probability data can be greatly increased by utilizing the position error data.
Use of the existence probability data can allow precise motion information of another vehicle to be obtained, assisting in carrying out accurate avoidance control.
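The role of the position-error data can be illustrated with a small sketch, assuming Gaussian position errors and a 1-D simplification along the line joining the vehicles (the patent does not specify this model):

```python
import math

def collision_probability(d_m, sigma_a, sigma_b, r_m=2.0):
    """Rough probability that two vehicles come within r_m metres of each
    other, treating each reported position as Gaussian with the stated
    standard error (a 1-D simplification along the line joining them)."""
    sigma = math.sqrt(sigma_a**2 + sigma_b**2)              # combined position error
    phi = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))  # standard normal CDF
    return phi((r_m - d_m) / sigma) - phi((-r_m - d_m) / sigma)

# A 10 m nominal gap is safe with 1 m position errors but not with 8 m ones:
print(collision_probability(10, 1, 1))   # effectively zero
print(collision_probability(10, 8, 8))   # a non-negligible risk
```

The same nominal separation can thus imply very different collision risks depending on the reported error, which is why exchanging error data alongside position data improves avoidance decisions.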
Additionally, according to yet another aspect of the present invention, relative position data of the user's vehicle and another vehicle obtained from the inter-vehicle communication is used to generate the existence probability data of the user's vehicle in order to execute avoidance control.
In addition, according to a further aspect of the present invention, the user's vehicle and another vehicle do not perform a uniform avoidance operation even when there is a possibility of a collision. Rather, the operation for averting collision is executed based on the priority of the user's vehicle and of the other vehicle. This can prevent affecting the travel of the other vehicle or the traffic flow while still effectively avoiding collision.
Furthermore, when the user's vehicle travels on a privileged road, it is preferable to suppress the avoidance operation of that vehicle and give priority of the avoidance operation to the other vehicle. This enables avoidance of a collision without adversely affecting any other vehicle running on the privileged road.
Moreover, with regard to priority, the difficulty of the avoidance operation and the influence on other traffic also depend on the speeds at which the vehicles are travelling. For example, when the speed of the user's vehicle is lower than that of another vehicle, it is relatively easy for the user's vehicle to execute an avoidance operation. Accordingly, determining the priority based on vehicle speed can effectively avoid collision.
In addition, avoiding collision with a first vehicle is pointless if this action increases the possibility of a collision with a second vehicle. Taking into account the possibility of collision with a second vehicle when the user's vehicle is performing an avoidance operation, it is preferable to carry out the avoidance operation only when there is no possibility of collision with an additional vehicle. This can avoid collisions and help maintain smooth traffic flow.
Additionally, in inter-vehicle communication, transmitting and receiving information for determining which vehicle should move to avert a collision can assist in effectively avoiding collisions.
Further, even if the user's vehicle takes priority, it is preferable to determine that the avoidance operation should not be carried out upon receiving data representative of execution of the avoidance operation from another vehicle. For example, the user's vehicle executes the avoidance operation in principle, even when it takes priority, unless data indicating that another vehicle is performing an avoidance operation is received from that vehicle.
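The priority logic described in the preceding paragraphs might look like the following sketch. The rule ordering and field names are illustrative assumptions, not the claimed implementation:

```python
def should_yield(own, other):
    """Decide whether the user's vehicle performs the avoidance manoeuvre.
    Illustrative rules only: a vehicle already announcing its manoeuvre is
    left alone, a privileged road confers priority, and otherwise the
    slower vehicle (whose manoeuvre is easier) yields."""
    if other.get("avoiding"):                 # other vehicle already acting
        return False
    if own["privileged"] != other["privileged"]:
        return not own["privileged"]          # non-privileged vehicle yields
    return own["speed_kmh"] <= other["speed_kmh"]   # slower vehicle yields

# A slow vehicle approaching a privileged road yields to the faster one on it:
print(should_yield({"privileged": False, "speed_kmh": 30},
                   {"privileged": True, "speed_kmh": 60}))  # → True
```

Checking the other vehicle's broadcast "avoiding" flag first mirrors the text's point that two simultaneous, uncoordinated manoeuvres can be worse than one.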
Today's public transport is not easy to use for people with mental-health problems or disabilities. Journey planning today is powered by online timetables calculating the optimal way to use public transport in terms of time and cost. Usually a graph representing the public transport network is set up, and algorithms taken from graph theory or operations research are used to find optimal routes. This is not suitable for travellers who, for health reasons, face constraints in using certain vehicles, vehicle types or particular stations. On the other hand, since most of these people are unable to drive a car on their own, making public transport available and, moreover, easily usable enables them to improve their mobility and their quality of life in general. This paper presents an approach, developed within the mobile project funded by the German Federal Ministry of Economy and Energy, that supports this group of users while travelling by public transport and takes personal constraints into account in route planning. This provides personalized advice both while planning a journey and during travel. It is implemented by generating a second graph that represents the public transport network not in dimensions of time and cost but in the preferences and dislikes of a given traveller. The second graph is overlaid on the standard graph to obtain a personalized graph in which a suitable route respecting the user's constraints can be found.
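The overlay idea can be sketched with a tiny Dijkstra search over a combined cost. The edge list, penalty weighting, and function names here are illustrative assumptions, not the project's actual data model:

```python
import heapq

def personalized_route(edges, penalties, start, goal, weight=5.0):
    """Dijkstra over a combined cost: base travel time plus a per-edge
    penalty from a second 'preference' graph (e.g. a disliked station or
    vehicle type). A large enough penalty effectively forbids the edge."""
    graph = {}
    for u, v, t in edges:
        graph.setdefault(u, []).append((v, t + weight * penalties.get((u, v), 0)))
    dist, prev = {start: 0}, {}
    heap = [(0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, c in graph.get(u, []):
            if d + c < dist.get(v, float("inf")):
                dist[v], prev[v] = d + c, u
                heapq.heappush(heap, (d + c, v))
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    return [start] + path[::-1]

# The fastest path A-B-D uses a leg the traveller avoids; the overlay
# penalty on (A, B) reroutes via C:
edges = [("A", "B", 4), ("B", "D", 4), ("A", "C", 5), ("C", "D", 5)]
print(personalized_route(edges, {("A", "B"): 1}, "A", "D"))  # → ['A', 'C', 'D']
```

Setting a penalty high enough effectively removes the edge for that traveller, while the base network stays untouched for everyone else.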
The Theater Security Cooperation (TSC) efforts of the U.S. Indo-Pacific Command (USINDOPACOM) regional country teams, partner nations and allies have tied the Indo-Pacific region together for decades. TSC serves as a kind of military diplomacy that can create positive effects in regional political relations to counter detrimental outside influences and to enhance stability and security.
Through TSC projects, USINDOPACOM subordinate commands exercise military readiness, disaster relief and humanitarian assistance, bringing the governments of the region together in activities that provide mutual benefits. TSC efforts also support USINDOPACOM Commander Adm. Philip S. Davidson’s vision for a free and open Indo-Pacific, in which “all nations should enjoy unfettered access to the seas and airways upon which our nations and economies depend.”
Bilateral Agreements
The U.S. Coast Guard’s (USCG’s) flagship maritime security cooperation program is the Shiprider program, and it is growing within USINDOPACOM every fiscal year. Through TSC, the USCG regularly exercises 11 bilateral fisheries law enforcement agreements with countries throughout the Pacific islands region. These agreements enable USCG and U.S. Navy (USN) vessels and USCG law enforcement personnel to work with host nations to protect critical regional resources. Like TSC projects orchestrated by the U.S. Defense Department, USCG Shiprider projects promote host-nation sovereignty by enabling Pacific islands partners to enforce their laws and regulations, while protecting resources.
USINDOPACOM’s TSC program strives to achieve a balance in the region through activities and exercises that develop and leverage the diverse professional capabilities of the region’s militaries, from the sophisticated such as Australia, Japan, the Republic of Korea and Singapore; to those in a transitional stage such as Fiji, the Philippines and Vietnam; to countries whose capabilities remain underdeveloped because of resource constraints, such as the Federated States of Micronesia, Kiribati and the Marshall Islands. U.S. relationships with these partners are crafted to factor differing capacities and to help improve varying competencies.
USINDOPACOM’s TSC programs remain the cornerstone of the United States’ sustained joint engagement with Pacific islands partners. They are focused on building operational and institutional capacity and developing partners’ capabilities. They also provide a framework within which regional partners engage in interagency activities. These activities complement and reinforce other U.S. government agency programs, such as those of the State and Interior departments.
Protecting Trade
Securing maritime highways for international commerce has always been important to the U.S. to ensure the Pacific links of the global supply chain. The Pacific islands region covers a vast portion of the Indo-Pacific, and its nations enjoy the shared importance of the economic value of their territorial seas. This economy remains dynamic, with some countries being rich in natural resources and successful in managing these assets and other countries lacking the capacity to succeed alone. The security problems these nations face also differ. In this highly active region, growing powers such as China and India are openly discussing trade and security goals with the realization that to ensure the former, the latter is required. How nations approach these engagements in the region also differ. Some favor agreements based on resource and territorial control, while others settle for access and future partnership.
In the Pacific islands region, the People’s Republic of China (PRC) has embarked on two programs: Building a maritime force of navy, coast guard and commercial fishing vessels that is larger than most other nations and inserting them into the PRC’s One Belt, One Road plan, and then creating an offshoot commercial artery, linking China through Southeast Asia with Pacific islands.
PRC initiatives appear to target perceived geostrategic advantages the U.S. has enjoyed from the combination of its bilateral agreements, the location of three U.S. territories and the Compact of Free Association with the Federated States of Micronesia, the Marshall Islands and Palau, a relationship that facilitates a U.S. strategic presence in the region.
PRC activities strive to alter the geostrategic balance by undermining U.S. relationships with its allies and partners, such as Australia, New Zealand and Fiji, by providing targeted and predatory development assistance to Pacific island partners.
The U.S. is working with regional partners such as Australia and New Zealand to stabilize the region and working with smaller partner nations such as Fiji and Palau to bolster resiliency. The best way to achieve these goals is to move beyond states and promote people-to-people interactions and cultural mingling of societies through intergovernmental security engagements and TSC.
In the maritime arena, the USCG is the agency partner of choice for many entities in the Pacific islands. The USN is a superior force, charged with the projection of U.S. sovereignty and freedom of navigation throughout the Pacific. These U.S. forces also perform maritime enforcement and environmental protection. Many U.S. partners will never have or need such a force.
Security Cooperation
The focus of USCG TSC projects is to build maritime safety and security by increasing maritime awareness, response capabilities, prevention methods and governance infrastructure. Through TSC, USCG and its interagency partners conduct engagement activities with Pacific islands partners and governmental/nongovernmental organizations to enhance partner nation self-sustaining capability to maintain maritime security within their inland waterways, territorial seas and exclusive economic zones.
USCG provides sustained engagement using mobile training teams, interagency and international trainers, working from the 14th Coast Guard District, the Coast Guard’s Directorate of International Affairs and Foreign Policy, the USN Pacific Fleet (PACFLT) and Nevada National Guard partners. As previously stated, these USCG capacity-building activities complement Department of State programs and are planned with the U.S. embassy country teams and partner nations. The goal is the development of professional officials who are disciplined, capable and responsible toward civilian authorities and committed to the well-being of their nation and its citizens.
Relationships are key to countering aggression and coercion in the region. TSC programs such as the USCG Shiprider program build enduring relationships while working alongside our partners as they conduct independent operations to maintain their sovereignty. Since 2010, these bilateral maritime law enforcement (MLE) shiprider agreements have provided U.S. vessel and aircraft platforms, as well as MLE expertise to assist Pacific islands officials with exercising their enforcement authority.
These agreements help close regional MLE shortfalls; improve cooperation, coordination and interoperability; and build MLE capacity to more effectively combat illegal, unreported and unregulated fishing, and other illegal activity. The agreements also allow partner nation law enforcement officials to embark on USCG and USN vessels and aircraft and allow these same platforms to assist host nation law enforcement officials with maritime surveillance and boardings.
Generally, USCG vessels, aircraft and MLE teams execute shiprider agreements; however, USN and host-nation vessels and aircraft participate as well, such as PACFLT’s support of joint shiprider operations through the Oceania Maritime Security Initiative.
The USCG Shiprider program continues to be an innovative and collaborative way to effectively influence the region. With each adoption of a new bilateral shiprider agreement with a Pacific partner, the USCG helps strengthen regional stability.
Deeper Commitment
A greater re-engagement with Pacific islands partners is imperative for the U.S. and its allies to counter the growing PRC presence in the Pacific islands region. The USCG Shiprider program is perfectly aligned to help meet this need in coordination with other USINDOPACOM TSC projects.
What may seem like 11 individual bilateral agreements between the U.S. and various Pacific island nations is actually the foundation of a regional partnership; an investment in shared environmental and maritime resources; a transparent agreement between nations with a shared interest in maritime safety and security; and a commitment to fair and reciprocal trade, throughout the central and south Pacific.
The U.S. stands firmly with its partners to ensure a free and open Indo-Pacific available to all nations. As the growing interest in the Shiprider program shows, the USCG is in a unique position to continue facilitating the enhancement of stability and security of U.S. partners in direct support of the USINDOPACOM mission.
U.S. Coast Guard Shiprider Agreements
U.S. Agency for International Development
The U.S. Coast Guard regularly exercises 16 bilateral fisheries law enforcement shiprider agreements with countries in the Eastern Pacific and in West Africa. In November 2018, Fiji became the latest nation to sign a shiprider agreement, which allows partnering nations’ defense and law enforcement officers to embark on U.S. Coast Guard and U.S. Navy vessels to observe, protect, board and search vessels suspected of violating laws or regulations within their exclusive economic zones or on the high seas.
Shiprider agreements help close global maritime law enforcement gaps; improve cooperation, coordination and interoperability; and build maritime law enforcement capacity to more effectively combat illegal, unreported and unregulated (IUU) fishing and other illegal activity. The agreements are meant to complement and reinforce arrangements in place with partners such as Australia, New Zealand and France.
Fiji’s law enforcement officers, for example, can now work on U.S. Coast Guard and Navy vessels as “shipriders.” Missions include interdicting suspicious vessels potentially involved in illicit activities, such as illegal fishing and smuggling, including the trafficking of illegal drugs. In the past six years, U.S. Coast Guard and Navy vessels have helped host nations board 103 vessels, identifying 33 violations, according to a 2018 U.S. Coast Guard report.
Bilateral maritime law enforcement shiprider agreements promote host nation sovereignty by helping the host nation enforce their laws and regulations. The adoption of shiprider agreements between other countries and in other regions could help strengthen global maritime law enforcement efforts.
The U.S. has signed a counter-high seas driftnet bilateral shiprider agreement with China, five bilateral shiprider agreements with West African countries — including Cape Verde, Gambia, Ghana, Sierra Leone and Senegal — and 11 permanent bilateral shiprider agreements with Pacific island countries, including Cook Islands, Fiji, Kiribati, Marshall Islands, Micronesia, Palau, Nauru, Samoa, Tonga, Tuvalu and Vanuatu.
The collaborative, ongoing, international fisheries enforcement shiprider operations have been conducted by the U.S. Coast Guard over the past 23 years with China and nine years with the West African and Pacific island countries.
Shiprider agreements are an innovative and collaborative way to more effectively police the world’s oceans. Countries interested in learning more about shiprider agreements are welcome to contact the U.S. Coast Guard or their local U.S. Embassy.
During the average working day I sometimes find myself in situations where there seems to be some kind of mismatch in communication. This can range from the very trivial, leading nowhere and mattering little, all the way to the very critical, leading to a heated escalation of emotions. The core problem is not the disagreement but rather the misunderstanding about what one is actually discussing. Sometimes certain aspects of the problem are imagined, tones of voice implied, and words and sentences incorrectly translated.
The first thing I try to do is understand what the other person does not understand and why there is this obstacle. The best way to approach this is to be open and honest. The deadliest killer of relationships is to assume that the other person understands you, or to keep on pretending that you understand what the other person is saying even though you sense that you have missed some vital point.
We are bizarre creatures with a tendency to fight a win-or-lose battle even when it would very likely be to both sides' best advantage to seek out a win-win result instead. This win-win result is, ironically enough, easier to reach than having to fight it out. There are also far fewer cuts and bruises.
There was this guy named Carl Rogers who developed a theory called the "actualizing tendency." This theory focuses on a single guiding force where every life-form develops its potentials to the fullest extent possible. This same person also taught: "that which is most personal is most general."
What this means to me is that the more truly authentic you are, and the more genuine and open your expressions and gestures, the safer people feel and the more naturally they can express themselves near you, especially in matters related to inner thoughts and personal experiences, even if it means exposing self-doubt. This so-called "actualizing tendency" extends outwards from your soul and even encompasses those near you, feeding the other person's spirit. Genuine creativity springs forth, stimulating efficient communication and eventually even producing new insights (ref. Stephen R. Covey, paraphrased by me).
So when conflict threatens to arise, it is more than likely due to poor communication, which in turn has its roots in the inability to understand why and what the other person does not understand.
Sincerely try to understand the other side from your heart, balancing emotions with rational thought. This is a difficult yet noble path to follow. Put aside exaggerated emotions just enough so that they guide rather than drown you. Temper the overly rational ways of thinking with the stuff of emotions.
Open up and be genuine.
A recent comment said telerobotic surgery might be used in space; a follow-up comment said that multiple-minute light delays would complicate it.
But I wonder whether just the fact of being in orbit, and the attendant communication complications, would make it logistically unfeasible.
As mentioned in comments to the OP, "feasible" may mean different things in different contexts. But technical feasibility is probably not the only important issue: economic feasibility is also important. It is not, yet, economically feasible.
There are several terrestrial applications for truly remote, near or completely unsupported telerobotic surgery. For instance, the Antarctic use case pointed to in the linked question, or in remote locations worldwide. If and to the degree the tech exists to enable that, it has not been deployed notwithstanding enormous human and financial costs of failing to have it in place. Note that the radio communication delays and complications terrestrially may be similar to the issues communicating with ISS.
During surgery, even if everything goes as planned, all sorts of things happen at unpredictable times. For instance, after initial incision and retraction, there may be blood in relatively unpredictable places or at unpredictable times that obscure views and potentially injure the patient. The surgeon needs to respond to those quickly. If there is significant latency in the transmission, the surgeon cannot do so. One way to deal with this is to have a surgeon on board ISS to assist the ground surgeon using the telerobot. But that is inefficient - easier to just have the onboard surgeon do the cutting with a consulting surgeon in her ear, following along by video. No need for the robot.
If the question is limited to technical feasibility, it seems to me that the first technical issue would be maintaining the communication link throughout orbit. This would require consistent and predictable connection between the two endpoints, which is complicated. Once a consistent connection can be maintained, the remaining issues are not unique to ISS: relatively high latency, possibility of blackout, unpredictability, and lack of human backup intervention in the event of an emergency all confront terrestrial applications - they are not special to ISS. So it seems to me the unique hurdle is maintaining communication in orbit. Put differently, if we could do it on earth (we can't right now, or we would be), then the only impediment to doing it on ISS is maintaining reliable communication throughout orbit.
Maintaining reliable communication throughout orbit is certainly hypothetically possible. For instance, with enough geostationary relay satellites in place, we should be able to maintain line-of-sight with ISS throughout its orbit, and communication could be maintained. This is in the works - for instance, see this relevant question concerning end-to-end optical communication between ISS and a ground station. The significant variable latency is not unique either - the telerobotic system must have a way to deal with unexpected events in the case when communication drops, and a way to recognize and prepare for latency. This probably means that the system needs to be backed by a powerful AI that can do things like recognize and clamp off unexpected bleeders.
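For a sense of scale, here is a rough back-of-envelope sketch of the best-case light-travel time through a single geostationary relay. The geometry is simplified to straight-line legs; real relay networks add switching and processing delays on top of this.

```python
# Best-case one-way light-travel time for ground -> GEO relay -> ISS.
# Straight-line legs only; real systems add processing and routing delay.

C_KM_S = 299_792.458     # speed of light in vacuum, km/s
GEO_ALT_KM = 35_786      # geostationary relay altitude, km
ISS_ALT_KM = 420         # approximate ISS altitude, km

def delay_ms(path_km):
    """One-way light-travel time in milliseconds for a path in km."""
    return path_km / C_KM_S * 1_000

uplink = delay_ms(GEO_ALT_KM)                 # ground station -> relay
downlink = delay_ms(GEO_ALT_KM - ISS_ALT_KM)  # relay -> ISS (best case)
one_way = uplink + downlink
round_trip_ms = 2 * one_way                   # command out, video back

print(f"one-way: {one_way:.0f} ms, round trip: {round_trip_ms:.0f} ms")
# -> one-way: 237 ms, round trip: 475 ms
```

Even in this idealized geometry the surgeon would see roughly half a second between moving a control and seeing the result, which is exactly the latency regime the answers here worry about.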
Bottom line - if by "telerobotic surgery" you mean long-distance waldos, high and variable latency communication makes their use impossible or dangerous. If the telerobots have advanced with a significant AI, and if the communication problem has been addressed, then it may be feasible. At present, that is the stuff of (hopefully not far off) sci-fi.
It is feasible. Network latency is a challenge, but it is one that is not unique to space, although it does depend on the length of the operation, because having to reroute instructions across the planet to keep in contact with a moving vehicle would be a bit more challenging.
There is one major counter argument to its feasibility though. Normally when such robots are used there are medical personnel on standby in that location who can jump in in the event something has gone wrong. For instance, the robot could break down, power could be lost, the network could go down. This is still very much a problem and even more so in space where you only have your crew and a whole list of technical things that can go wrong.
That being said, I would see a robotic surgeon as a backup/skillset enhancement to the mission rather than a replacement for medically trained crewmembers. Say a crewmember gets impaled through the brain: I doubt they would have a neurosurgeon on board, and he definitely wouldn't be stable enough for re-entry, but they could just dial one up to try to stabilize him. Or say the medical personnel are injured and need stabilization.
Bottom line is, a capability that can increase mission success without adding much in the way of resource cost/allocation is always great to have.
On Earth, the alternatives to telesurgery are simply superior. If you can afford to build a robotic facility in the middle of nowhere, why not instead build a helicopter landing pad, and whisk the patient away to a proper trauma center?
Don't assume that just because you can find articles about telesurgery, that it is established practice. People have been promising it for a long time. The military is often portrayed as a prime customer, but despite decades of attempts at telesurgery, they still evacuate the wounded to proper hospitals.
The argument for telesurgery is that you can have an experienced surgeon doing it.
But no surgeon has that experience doing it, much less with a latency of minutes.
To get that training, the surgeon would need to practice on patients with an inferior and probably dangerous technology. It is unethical to do that.
Practice on animals will get you only so far. Remember, the entire justification of telesurgery is that you are using an "expert" that is superior to what the crew can do.
Okay, so your surgeon has practiced the procedure for X, but situation Y develops instead.
The existing model of dealing with problems in space works well. Spaceflight requires the expertise of aerospace engineers, chemists, computer programmers, medical doctors, physicists, etc. Yet we don't fill our crews with one aerospace engineer, one chemist, etc. Instead, we send up people who are good at solving urgent problems and following orders, and then we keep experts in Mission Control (or on call as needed).
Arguably, you will get just as good results by having a doctor on crew, who can do the procedure themselves after consulting with mission control. In case the doctor is compromised or needs assistance, the remaining crew should get 2-3 months training in basic space medicine, including assisting real human surgeries for 1-2 months. That would give them enough training to follow instructions given from mission control, as well as deal with urgent situations.
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority to U.S. Provisional Patent Application No. 63/191,481, filed May 21, 2022, titled “Systems And Methods For Scene-Adaptive Image Quality In Surgical Video,” the entirety of which is hereby incorporated by reference.
FIELD
The present application generally relates to image processing and more particularly relates to systems and methods for scene-adaptive image quality in surgical video.
BACKGROUND
Endoscopes are routinely used in surgical procedures to illuminate anatomy and to capture video during the procedure. However, the quality of the video can vary significantly due to illumination in the scene, which can be affected by anatomical features or instruments in the scene. In addition, depending on the anatomical features in the scene, different image settings, e.g., brightness, contrast, etc., may be desirable to provide a better view to the surgeon. To enable the surgeon to adjust these settings, the system may provide controls to make such adjustments, e.g., on-screen controls or physical dials or knobs.
SUMMARY
Various examples are described for systems and methods for scene-adaptive image quality in surgical video. One example method for scene-adaptive image quality in surgical video includes receiving a first video frame from an endoscope, the first video frame generated from a first raw image captured by an image sensor of the endoscope and processed by an image signal processing (“ISP”) pipeline having a plurality of ISP parameters; recognizing, using a trained machine learning (“ML”) model, a first scene type or a first scene feature type based on the first video frame; determining a first set of ISP parameters based on the first scene type or the first scene feature type; applying the first set of ISP parameters to the ISP pipeline; and receiving a second video frame from the endoscope, the second video frame generated from a second raw image captured by the image sensor and processed by the ISP pipeline using the first set of ISP parameters.
One example system includes a non-transitory computer-readable medium; and one or more processors communicatively coupled to the non-transitory computer-readable medium and configured to execute processor-executable instructions stored in the non-transitory computer-readable medium to receive a first video frame from an endoscope, the first video frame generated from a first raw image captured by an image sensor of the endoscope and processed by an image signal processing (“ISP”) pipeline having a plurality of ISP parameters; recognize, using a trained machine learning (“ML”) model, a first scene type or a first scene feature type based on the first video frame; determine a first set of ISP parameters based on the first scene type or the first scene feature type; apply the first set of ISP parameters to the ISP pipeline; and receive a second video frame from the endoscope, the second video frame generated from a second raw image captured by the image sensor and processed by the ISP pipeline using the first set of ISP parameters.
One example non-transitory computer-readable medium includes processor-executable instructions configured to cause one or more processors to receive a first video frame from an endoscope, the first video frame generated from a first raw image captured by an image sensor of the endoscope and processed by an image signal processing (“ISP”) pipeline having a plurality of ISP parameters; recognize, using a trained machine learning (“ML”) model, a first scene type or a first scene feature type based on the first video frame; determine a first set of ISP parameters based on the first scene type or the first scene feature type; apply the first set of ISP parameters to the ISP pipeline; and receive a second video frame from the endoscope, the second video frame generated from a second raw image captured by the image sensor and processed by the ISP pipeline using the first set of ISP parameters.
These illustrative examples are mentioned not to limit or define the scope of this disclosure, but rather to provide examples to aid understanding thereof. Illustrative examples are discussed in the Detailed Description, which provides further description. Advantages offered by various examples may be further understood by examining this specification.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings, which are incorporated into and constitute a part of this specification, illustrate one or more certain examples and, together with the description of the example, serve to explain the principles and implementations of the certain examples.
FIG. 1 shows an example system for scene-adaptive image quality in surgical video;
FIG. 2 shows an image signal processing pipeline to convert raw image sensor information to a color image;
FIGS. 3A-3D show example images from surgical procedures illustrating different image quality issues;
FIGS. 4-6 show an example system for scene-adaptive image quality in surgical video;
FIG. 7 shows an example method for scene-adaptive image quality in surgical video; and
FIG. 8 shows an example computing device suitable for use with systems and methods for scene-adaptive image quality in surgical video according to this disclosure.
DETAILED DESCRIPTION
Examples are described herein in the context of systems and methods for scene-adaptive image quality in surgical video. Those of ordinary skill in the art will realize that the following description is illustrative only and is not intended to be in any way limiting. Reference will now be made in detail to implementations of examples as illustrated in the accompanying drawings. The same reference indicators will be used throughout the drawings and the following description to refer to the same or like items.
In the interest of clarity, not all of the routine features of the examples described herein are shown and described. It will, of course, be appreciated that in the development of any such actual implementation, numerous implementation-specific decisions must be made in order to achieve the developer's specific goals, such as compliance with application- and business-related constraints, and that these specific goals will vary from one implementation to another and from one developer to another.
During a minimally invasive surgery (“MIS”), e.g., one performed using a robotic surgical system, a surgeon may employ an endoscope to capture video within the surgical site to allow them to view the surgical site, to guide tool movement, and to detect potential complications. The surgeon sits at a surgeon console and manipulates hand or foot controls to guide the movement of robotic tools within the surgical site, while watching video captured by the endoscope. However, as the surgeon performs the surgery, lighting within the scene may change, such as due to movement to a new location within the patient's body, introduction of a reflective tool into the scene, darkening due to bleeding, reflection from nearby tissue, etc.
To adjust to these changes, the robotic surgical system employs functionality to dynamically adjust image settings in the endoscope to improve or maintain image quality despite changing conditions. As the surgical procedure proceeds and the endoscope captures video, the video is provided to one or more trained machine learning (“ML”) models that recognize the current scene type or features in the scene. For example, the ML model has been trained to recognize common surgical scenes, such as scenes corresponding to commonly performed surgical procedures (e.g., gastrojejunostomy, appendectomy, cholecystectomy, etc.), commonly occurring anatomy (e.g., liver, gall bladder, abdominal wall, small or large intestine, etc.), surgical tools, events (e.g., bleeding, smoke, etc.), or other features. As the ML model(s) receive frames of video, they analyze some of the frames and output the identified scene types or features, which are used to adjust image signal processing (“ISP”) settings in the camera.
To make these adjustments, the system adjusts parameters used by an image signal processing (“ISP”) pipeline employed by the endoscope to convert raw pixel data output by an image sensor into a color image. This example system has access to sets of ISP parameters corresponding to each type of scene or scene feature that the ML model(s) have been trained to recognize. When a particular type of scene is recognized, the corresponding set of ISP parameters may be obtained and provided to the camera to overwrite previous ISP parameters, thereby processing incoming raw image sensor data to provide a higher quality image of the scene. Further, if both a scene type and a scene feature (or multiple scene features) are recognized, the robotic surgical system may combine ISP parameters from multiple different sets to generate a single set of hybrid ISP parameters, which are then applied to the endoscope's camera. As the surgical procedure continues, these ISP parameters may change as the scene changes, e.g., as the endoscope moves through the patient's body, and surgical tools enter and exit the frame, etc.
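One way such a hybrid parameter set might be formed is sketched below as a confidence-weighted average of numeric parameters. The parameter names and the blending rule are illustrative assumptions, not taken from the disclosed method.

```python
# Hypothetical sketch of merging per-scene ISP parameter sets into a
# single hybrid set. Assumes all sets share the same numeric parameters.

def blend_isp_params(param_sets, weights):
    """Confidence-weighted average of numeric ISP parameters.

    param_sets: list of dicts mapping parameter name -> numeric value
    weights:    matching list of non-negative confidence weights
    """
    total = sum(weights)
    hybrid = {}
    for params, w in zip(param_sets, weights):
        for name, value in params.items():
            hybrid[name] = hybrid.get(name, 0.0) + value * (w / total)
    return hybrid

# Example: a recognized scene type plus a recognized scene feature,
# weighted by (hypothetical) recognition confidences.
liver_scene = {"gamma": 2.2, "contrast": 1.1, "denoise": 0.3}
smoke_event = {"gamma": 2.0, "contrast": 1.4, "denoise": 0.6}
hybrid = blend_isp_params([liver_scene, smoke_event], weights=[0.75, 0.25])
print(hybrid)  # {'gamma': 2.15, 'contrast': 1.175, 'denoise': 0.375}
```

A weighted average is only one plausible rule; some parameters (e.g., lookup tables or mode flags) would instead need a winner-take-all selection.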
Thus, the system is able to dynamically adjust the image quality provided to the surgeon in real-time to maintain high video quality throughout the procedure, despite changing conditions. This can provide significantly improved performance over existing endoscopes which instead employ a static set of ISP parameters that are applied to raw image sensor data, irrespective of the scene. Further, by employing ML models to enable scene or scene feature recognition, ISP parameters may be specially tailored for specific scenarios and applied as needed when those scenarios are encountered in real-time. Such functionality may enable a surgeon to more effectively and efficiently perform a surgical procedure by presenting a clearer view of the surgical site.
This illustrative example is given to introduce the reader to the general subject matter discussed herein and the disclosure is not limited to this example. The following sections describe various additional non-limiting examples and examples of systems and methods for scene-adaptive image quality in surgical video.
Referring now to FIG. 1, FIG. 1 shows an example system 100 for scene-adaptive image quality in surgical video. The system includes a robotic surgical system that includes a surgical robot 110 and a user station 130, both of which are connected to a controller 120.
The surgical robot 110 is any suitable robotic system that can be used to perform surgical procedures on a patient, e.g., patient 104, to provide simulations of surgical procedures, or to provide training functionality to allow a surgeon to learn how to control a surgical robot 110, e.g., using exercises to train particular movements or general dexterity, precision, etc. It should be appreciated that discussions throughout this detailed description related to surgical procedures are equally applicable to simulated procedures or training exercises using a surgical robot 110.
A surgical robot 110 may have one or more articulating arms connected to a base. The arms may be manipulated by a controller 120 via inputs received from the user station 130, which may include one or more user interface devices, such as joysticks, knobs, handles, or other rotatable or translatable devices to effect movement of one or more of the articulating arms, as well as one or more display devices to display information to the surgeon during surgery, e.g., video from an endoscope, information from patient medical records, previously obtained images (e.g., X-rays, MRI images, etc.). The articulating arms may be equipped with one or more surgical instruments to perform aspects of a surgical procedure. Different surgical robots 110 may be configured for particular types of surgeries, such as cardiovascular surgeries, gastrointestinal surgeries, gynecological surgeries, transplant surgeries, neurosurgeries, musculoskeletal surgeries, etc., while some may have multiple different uses. As a result, different types of surgical robots, including those without articulating arms, such as for endoscopy procedures, may be employed according to different examples.
The controller in this example includes a computing device in communication with the surgical robot 110 and is able to control access and use of the robot. For example, the controller 120 may require that a user authenticate herself before allowing access to or control of the surgical robot 110. As mentioned above, the controller 120 may include, or have connected to it, e.g., via user station 130, one or more user input devices capable of providing input to the controller, such as a keyboard, mouse, or touchscreen, capable of controlling the surgical robot 110, such as one or more joysticks, knobs, handles, dials, pedals, etc.
During a surgical procedure, one or more tools may be connected to the surgical robot 110 that may then be inserted into the patient's body to perform different aspects of the surgical procedure. To enable the surgeon to perform the surgery, an endoscope may be connected to the surgical robot 110 and inserted into the patient 104. Video captured by the endoscope may be communicated to the controller, which then presents it to the surgeon at the user station 130. Based on the video, the surgeon can manipulate the surgical robot to cut tissue, ablate, cauterize, etc.
To help ensure the surgeon is provided with a high-quality view of the surgical scene, the controller 120 executes one or more trained ML models to analyze the incoming video from the endoscope, selects ISP parameters based on the ML models' output, and updates an “ISP pipeline,” described in greater detail below, for the endoscope's camera with the updated ISP parameters. In this example, the ISP pipeline is implemented on the controller 120, such as using software executed by the controller's processor or by using a special-purpose processor, such as a graphics processing unit (“GPU”), field programmable gate array (“FPGA”), etc. In some such examples, raw video frames are received from the endoscope camera and processed by the controller. However, in some examples, the ISP pipeline may be implemented in the endoscope itself. The new ISP parameters are then used to process the raw image sensor information for subsequently captured video.
To provide video, the endoscope employs an image sensor that includes an array of light-sensitive elements, e.g., photodiodes, that outputs a grayscale RAW-format image, such as in a grid Bayer pattern. This RAW-format image is then passed through a series of image and signal processing steps to ultimately generate a color red-green-blue (“RGB”) image that is displayed to the surgeon and recorded as a video frame, if the surgical video is recorded. The image and signal processing steps may be referred to as the ISP pipeline and include a wide variety of processing stages to both correct for defects in the image sensor (e.g., bad pixels), to normalize the sensor output, to adjust pixel values, to remove noise, as well as ultimately convert from grayscale to RGB. An example of such an ISP pipeline is illustrated in FIG. 2.
The example ISP pipeline in FIG. 2 illustrates a typical ISP pipeline for a camera. The various blocks represent specific image or signal processing steps that are performed in sequence from the initial RAW sensor data output. Some or all of these blocks may employ ISP parameters that may be modified over time based on the output of the ML model(s), as discussed above with respect to FIG. 1. Thus, as the ML model(s) analyze video output by the endoscope and recognize scene types or scene features, the controller 120 may determine new ISP parameters based on the recognized scene types or scene features and provide the new ISP parameters to the endoscope, which updates its ISP pipeline to employ the new ISP parameters. Subsequent video frames will then be processed through the ISP pipeline using the updated ISP parameters, changing the appearance of the captured video frames and providing higher quality video to the surgeon at the user station 130.
Using a static ISP pipeline, video captured during a surgery may have any number of image quality issues. FIGS. 3A-3D illustrate some of the issues with image quality that can arise. For example, FIG. 3A illustrates an illuminated scene that is overly dark due to a particular scene feature being present, i.e., blood in the scene, in this example. In contrast, FIG. 3B illustrates a scene in which the patient's tissue is underexposed due to the presence of a different scene feature, i.e., a highly reflective surgical tool in the scene. FIG. 3C illustrates an example where anatomical features in the scene near to the camera are well illuminated, but anatomy distant from the camera is overly dark due to the near field illumination affecting the ISP pipeline processing. Finally, FIG. 3D illustrates an image in which the scene provides excellent contrast; however, important anatomical features, i.e., blood vessels in this example, have poor quality contrast, resulting from the contrast settings for the remainder of the image. Thus, each of these figures illustrates scenarios in which changing ISP parameters for a camera to adjust to the scene could significantly improve image quality.
Referring now to FIG. 4, FIG. 4 shows a more detailed view of an example system 400 for scene-adaptive image quality in surgical video. This example system 400 includes a robotic surgical device 414 configured to operate on a patient 430, and a central controller 412 to control the robotic surgical device 414. The system 400 also includes a surgeon console 404 connected to the central controller 412 and the robotic surgical device 414. The surgeon console 404 is operated by a surgeon 403 to control and monitor the surgeries performed using the robotic surgical device 414. In addition to these components, the system 400 might include additional stations (not shown in FIG. 4) that can be used by other personnel in the operating room, for example, to view surgery information, video, etc., sent from the robotic surgical device 414. In this example, the robotic surgical device 414, the central controller 412, the surgeon console 404 and other stations are connected directly to each other, though in some examples they may be connected using a network, such as a local-area network (“LAN”), a wide-area network (“WAN”), or any other networking topology known in the art that connects the various stations in the system 400.
The robotic surgical device 414 can be any suitable robotic system utilized to perform surgical procedures on a patient. For example, the robotic surgical device 414 may have one or more robotic arms connected to a base. The robotic arms may be manipulated by a tool controller 416, which may include one or more user interface devices, such as joysticks, knobs, handles, or other rotatable or translatable devices to effect movement of one or more of the robotic arms. The robotic arms may be equipped with one or more surgical tools 426 to perform aspects of a surgical procedure, and different surgical tools may be exchanged during the course of the surgical procedure. For example, the robotic arms may be equipped with surgical tools 426A-C. Each of the surgical tools can be controlled by the surgeon 403 through the surgeon console 404 and the tool controller 416.
In addition, the robotic surgical device 414 is equipped with one or more cameras 428, such as an endoscope camera, configured to provide a view of the operating site to guide the surgeon 403 during the surgery. In some examples, the camera 428 can be attached to one of the robotic arms of the robotic surgical device 414 controlled by the tool controller 416 as shown in FIG. 4. In other examples, the camera 428 can be attached to a mechanical structure of the robotic surgical device 414 that is separate from the robotic arms, such as a dedicated arm for carrying the camera 428.
Different robotic surgical devices 414 may be configured for particular types of surgeries, such as cardiovascular surgeries, gastrointestinal surgeries, gynecological surgeries, transplant surgeries, neurosurgeries, musculoskeletal surgeries, etc., while some may have multiple different uses. As a result, different types of surgical robots, including those without robotic arms, such as for endoscopy procedures, may be employed according to different examples. It should be understood that while only one robotic surgical device 414 is depicted, any suitable number of robotic surgical devices may be employed within a system 400.
In some examples, robotic surgical devices (or a respective controller) may be configured to record data during a surgical procedure. For example, images and videos of the surgical procedures performed by the robotic surgical device 414 can also be recorded and stored for further use.
In the example shown in FIG. 4, surgical video 440 of a robotic surgical procedure captured by the camera 428 is also transmitted to the surgeon console 404 and displayed on a video monitor 408 in real time so that the surgeon 403 can view the procedure while the surgical tools 426 are being used to operate on the patient 430. In this example, the surgeon 403 uses the surgeon console 404 to control the surgical tools 426 and the camera 428, and uses controls 406 on the surgeon console 404 to maneuver the surgical tools 426 and camera 428 by sending corresponding control signals 410 to the tool controller 416.
As shown in FIG. 4, the controller 412 includes an ISP pipeline 454 and video analysis software 452 to process the raw surgical video 432 captured during the surgical procedure and to determine and provide new ISP parameters to be applied to the camera 428. As will be discussed in more detail below, the video analysis software 452 analyzes frames of the received video (e.g., surgical video 440) to recognize scene types or features captured in the video frames.
Raw surgical video 432 captured by the camera 428 is first processed by the ISP pipeline 454, which is implemented using a GPU in this example. The GPU applies the currently selected set of ISP parameters to the fully-configurable ISP pipeline and generates surgical video 440, which is then provided to the video analysis software 452 and to the surgeon console 404.
In this example, the video analysis software 452 employs one or more ML models that have been trained to recognize scene types or scene features, which may be anatomic (e.g., organs, adhesions, blood vessels, etc.), tools (e.g., scalpels, forceps, clamps, trocars, etc.), events (e.g., bleeding, smoke, etc.), or other features. Video frames are presented to the trained ML technique(s), which then recognize the scene types or features within the frames and output the identified scene types or features.
In some examples, an ML model may output probabilities related to recognized scene types or features. For example, an ML model may have been trained to recognize twenty different scene types. After being presented with a video frame as input, the ML model may output a group (or tuple) of twenty probability values, each of which corresponds to one of the trained scene types, with the cumulative total of all the outputted probabilities being 1 (or 100%). In an example using an ML model trained to recognize three scene types, a tuple output may resemble the following: (0.15, 0.67, 0.18), corresponding to scene types 1-3, respectively, and with probability values ranging from 0 to 1, representing probabilities from 0-100%. Similarly, an ML model trained to recognize different scene features may also output one or more probabilities associated with the trained scene features. Alternatively, the ML model(s) may output only probabilities associated with scene types or features above a predetermined threshold, or the top N probabilities (e.g., the top 3 probabilities). Further, in some examples, the ML model(s) may only output the most likely scene type or feature.
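Selecting a scene type from such a probability tuple might be implemented as in the following sketch, where the 0.5 confidence threshold and the scene-type labels are illustrative assumptions rather than values from the disclosure.

```python
# Sketch: interpreting an ML model's scene-type probability tuple.
# The threshold and the label names are illustrative assumptions.

SCENE_TYPES = ("cholecystectomy", "appendectomy", "gastrojejunostomy")

def pick_scene(probabilities, threshold=0.5):
    """Return the most likely scene type, or None if below threshold."""
    best = max(range(len(probabilities)), key=probabilities.__getitem__)
    if probabilities[best] < threshold:
        return None  # no confident recognition; keep current ISP params
    return SCENE_TYPES[best]

print(pick_scene((0.15, 0.67, 0.18)))  # appendectomy
print(pick_scene((0.40, 0.35, 0.25)))  # None
```

Returning `None` below the threshold models the behavior of leaving the current ISP parameters in place when no scene is confidently recognized.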
While this example employs one ML model to recognize scene types and one ML model to recognize scene features, it should be appreciated that multiple ML models may be used to recognize scene types or scene features. For example, one ML model may be trained on a particular class of scene types, while a different ML model may be trained on a different class of scene types. Similarly, one ML model may be trained on a particular class of scene features, while a different ML model may be trained on a different class of scene features. The central controller 412 may provide received video frames to each of the trained ML models to determine scene types or features.
During operation, the central controller 412 may receive video frames in real-time at a predetermined frame rate from the camera, e.g., 30 or 60 frames per second. Depending on the frame rate or the configuration of the central controller 412, it may process all received frames or only a subset of them. For example, the central controller 412 may provide 1 frame per second to the video analysis software 452 to determine scene types or features. Such a configuration may enable reasonable response time without requiring substantial processing resources. Some examples may employ different strategies to process video frames, such as processing as many frames as possible, e.g., by processing one frame and, when processing of that frame is completed, processing the next available frame, irrespective of how many intervening frames have been captured.
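The fixed-rate subsampling strategy described above can be sketched as a simple frame-index filter; the rates and function name are illustrative assumptions, not part of the disclosure:

```python
# Illustrative frame-sampling filter: forward roughly analysis_fps frames
# per second out of a capture_fps stream to the analysis stage.

def every_nth_frame(frame_index, capture_fps=30, analysis_fps=1):
    """True for frames that should be forwarded for scene analysis."""
    step = capture_fps // analysis_fps
    return frame_index % step == 0

# With 30 fps capture and 1 fps analysis, frames 0, 30, 60, ... are analyzed.
selected = [i for i in range(90) if every_nth_frame(i)]
print(selected)  # [0, 30, 60]
```

The alternative "as many frames as possible" strategy would instead keep a busy flag and, whenever analysis of one frame finishes, grab the newest available frame regardless of how many were skipped in between.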
After the video analysis software 452 has determined a scene type or a scene feature corresponding to a video frame, it determines one or more ISP parameters corresponding to the scene type or scene feature. In this example, the video analysis software 452 has access to a data store that stores sets of ISP parameters for each scene type the ML model is trained to recognize, as well as sets of ISP parameters for each scene feature the ML model is trained to recognize.
For example, if the ML model recognizes a bleeding event (a type of scene feature) at a particular location in a frame (e.g., as shown in FIG. 3A), it may obtain a set of ISP parameters with a modified auto-exposure setting to increase luminance in the auto-exposure (“AE”) portion of the ISP pipeline. And while this may over-expose the other portions of the scene, the set of ISP parameters also includes ISP parameters to modify the local tone mapping module to non-linearly map the intensities of anatomy at locations other than where the bleeding is identified to prevent overexposure. In a typical ISP pipeline, a bleeding event that only darkens a portion of the image may not be fully compensated by AE, since auto-exposure operates on image-level statistics rather than portions of the image. Thus, the system 400 is able to compensate for darkened portions of the scene without over-saturating the entire scene.
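As a toy, pixel-level emulation of the effect described above (not the actual ISP-parameter mechanism; the gain and tone-curve exponent are invented values), the bleeding region receives the full exposure boost while pixels elsewhere are compressed to avoid overexposure:

```python
# Toy emulation of "boost exposure for the bleed, tone-map everything else".
# luma is a value in [0, 1]; gain and the 1.5 exponent are illustrative.

def compensate_pixel(luma, in_bleed_region, gain=1.5):
    boosted = min(luma * gain, 1.0)     # AE-style luminance boost, clipped
    if in_bleed_region:
        return boosted                  # keep the full boost where it bled dark
    return boosted ** 1.5               # compressive curve prevents overexposure

print(compensate_pixel(0.2, True))    # dark bleed pixel, fully boosted
print(compensate_pixel(0.6, False))   # surrounding anatomy, boost then compressed
```

A real pipeline would achieve this through the AE and local tone mapping parameter sets rather than per-pixel arithmetic, but the direction of the two adjustments is the same.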
Alternatively, if the ML model recognizes a reflective instrument (another type of scene feature), or multiple reflective instruments, at a particular location in a frame (e.g., as shown in FIG. 3B), it may obtain a set of ISP parameters with a modified auto-exposure setting to key only on the anatomy in the scene instead of the reflective instrument(s). At the same time, local tone mapping is adjusted to drastically dampen the strong reflections on the instruments to prevent eye fatigue and distractions. By modifying the parameters for these portions of the ISP pipeline, the system ensures that the anatomy of interest is well-illuminated and more accurately color represented, while simultaneously dampening the reflections from the instruments, which may be distracting or contribute to eye fatigue.
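The idea of keying auto-exposure on anatomy only can be sketched as computing the exposure statistic over pixels outside a hypothetical instrument mask; the mean-based metering and the target level are assumptions for illustration:

```python
# Sketch of anatomy-only AE metering: specular instrument pixels are excluded
# from the statistic, so reflections cannot drag the exposure decision.

def exposure_gain(luma_values, instrument_mask, target=0.45):
    """Gain that brings the mean anatomy luminance to the target level."""
    anatomy = [v for v, masked in zip(luma_values, instrument_mask) if not masked]
    mean = sum(anatomy) / len(anatomy)
    return target / max(mean, 1e-6)

# Two dim anatomy pixels plus one specular instrument pixel (excluded):
print(exposure_gain([0.2, 0.2, 1.0], [False, False, True]))  # 2.25
```

Without the mask, the bright reflection would raise the mean and pull the gain down, under-exposing the anatomy of interest.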
More generally, scenes may include features with widely varying dynamic ranges, or with very little variation in dynamic ranges. For example, if both a reflective tool and a bleeding event occur in the same frame, it can present dramatically different brightness levels in different parts of the same frame. Conversely, some scenes may have much lower dynamic ranges (e.g., the bowels shown in FIG. 3D). In some examples, the ML model may determine scene types based on detected dynamic ranges, e.g., having a high dynamic range (above a first threshold) or a low dynamic range (below a second threshold). Thus, the video analysis software 452 is able to adjust the ISP pipeline to improve contrast in a low dynamic range scene, while preventing over-darkening in high dynamic range scenes, such as discussed above with respect to reflective tools.
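A minimal sketch of such a dynamic-range classification, with placeholder values standing in for the first and second thresholds mentioned above:

```python
# Illustrative dynamic-range classifier over per-pixel luminance values in
# [0, 1]; the min/max spread statistic and thresholds are assumptions.

def classify_dynamic_range(luma_values, high_thresh=0.7, low_thresh=0.3):
    spread = max(luma_values) - min(luma_values)
    if spread > high_thresh:
        return "high"    # e.g., a reflective tool plus a bleed in one frame
    if spread < low_thresh:
        return "low"     # e.g., a uniformly lit view of the bowels
    return "normal"

print(classify_dynamic_range([0.02, 0.5, 0.98]))  # high
print(classify_dynamic_range([0.45, 0.5, 0.55]))  # low
```

Each classification would then select its own ISP parameter set: contrast enhancement for "low", highlight protection for "high".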
Thus, after determining a scene type or scene feature, the video analysis software 452 accesses the corresponding set of ISP parameters, and if only one scene type or feature is identified, uses the accessed set of ISP parameters without modification.
In some examples, however, the video analysis software 452 may determine multiple different scene types or features. As discussed above, in some examples, a tuple of probabilities may be received for a video frame. The video analysis software 452 may then select some or all of the corresponding scene types based on their respective probabilities, e.g., those whose probability satisfies a predetermined threshold. To determine a set of ISP parameters, the video analysis software 452 may access the sets of ISP parameters corresponding to each scene or feature. In some examples, it may only access the sets of ISP parameters corresponding to scenes or features with a sufficiently high probability.
After accessing the sets of ISP parameters, the video analysis software 452 combines them to generate a single set of ISP parameters. In this example, the video analysis software employs interpolation between ISP parameters in each of the accessed sets. For example, for ISP parameter 1, the values for that parameter from each accessed set of ISP parameters are weighted according to the corresponding probability of the scene type or feature. Each weighted parameter contributes to a parameter value according to its respective weight. In some examples, parameter values may not be interpolated linearly. For example, some parameter values may affect image quality in a way that does not change linearly with respect to human vision. The non-linearity may be characterized, e.g., as a curve, and used as the basis for interpolating ISP parameter values according to the non-linearity, e.g., along the curve.
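The probability-weighted interpolation described above can be sketched as follows; the parameter names ("gain", "gamma") and values are invented for illustration:

```python
# Sketch of probability-weighted blending of candidate ISP parameter sets.

def blend_parameter_sets(param_sets, probabilities):
    """Weighted average of each named parameter across the candidate sets."""
    total = sum(probabilities)
    blended = {}
    for name in param_sets[0]:
        blended[name] = sum(w * ps[name]
                            for ps, w in zip(param_sets, probabilities)) / total
    return blended

bleeding_params = {"gain": 2.0, "gamma": 1.4}   # illustrative values
normal_params = {"gain": 1.0, "gamma": 1.0}
print(blend_parameter_sets([bleeding_params, normal_params], [0.75, 0.25]))
# gain = (0.75 * 2.0 + 0.25 * 1.0) / 1.0 = 1.75
```

Because each scene type contributes in proportion to its probability, an uncertain classification produces a gentle, intermediate adjustment rather than an abrupt switch.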
After combining the sets of ISP parameters to generate one set of ISP parameters, the video analysis software 452 applies the ISP parameters to the ISP pipeline 454, which replaces the then-current ISP parameters employed by the ISP pipeline. Subsequently, video frames captured by the camera 428 are processed using the new ISP parameters and provided to the video analysis software 452 for analysis. By updating the ISP parameters based on the recognized scene types or features, video quality at the surgeon console 404 may be improved. Further, because the video analysis is performed repeatedly, the ISP parameters may be updated throughout the course of a surgical procedure in real-time (e.g., within a frame or a few frames of a scene type or feature change) or near-real-time (e.g., within a second or two of a scene type or feature change), enabling the system 400 to provide high quality video to the surgeon 403.
While in this example the ISP pipeline 454 is implemented in software at the central controller 412 and executed by a GPU, in other examples the ISP pipeline 454 may be implemented in hardware, e.g., in a system-on-a-chip (“SOC”), or as a combination of hardware and software. Further, in some examples the ISP pipeline may be implemented in the camera 428 itself. In some such examples, the ISP parameters may be communicated to the camera 428 to update its ISP pipeline 434. Further, some examples may divide the ISP pipeline between the camera 428 and the central controller 412, which may result in some ISP parameters being sent to the camera 428, while others are sent to the portion of the ISP pipeline 454 at the central controller.
It should be appreciated that although FIG. 4 illustrates the presented technique of scene-adaptive image quality in surgical video in the context of a system 400, it can be implemented in other types of systems and settings. For example, this technique can be implemented in a computing device separate from a system 400, such as by receiving a video feed from the endoscope via a networked connection.
Referring now to FIG. 5, FIG. 5 shows an example system 500 for scene-adaptive image quality in surgical video. In this example, the system 500 includes a computing device 520 that is communicatively coupled to an endoscope 510 and a user station 530. In this example, the system 500 is employed in a manual surgical procedure, without the use of a surgical robotic system, such as the example discussed above with respect to FIG. 4. Thus, during the surgery, the surgeon may manipulate the endoscope 510 to provide a suitable view for a particular task. Video frames captured by the endoscope are communicated to the computing device 520, which executes an ISP pipeline, e.g., ISP pipeline 454, and video analysis software, e.g., video analysis software 452. In addition, video frames are communicated to the user station 530, which may be a separate computing device or may be a display device in some examples. The computing device 520 can update the ISP parameters in the ISP pipeline 454 based on analysis performed by the video analysis software to adjust the ISP pipeline.
Referring now to FIG. 6, FIG. 6 shows an example system 600 for scene-adaptive image quality in a surgical video. The example system 600 includes both scene recognition software 610 and scene feature recognition software 620. The scene recognition software 610 in this example includes a trained scene recognition ML model 612, though in some examples, the scene recognition software 610 may include multiple trained scene recognition models. Similarly, the scene feature recognition software 620 includes a trained scene feature recognition ML model 622, though in some examples, the scene feature recognition software 620 may include multiple trained scene feature recognition models.
The example system 600 shown in FIG. 6 is embedded within video analysis software, such as video analysis software 452; however, in some examples, the system 600 may be a discrete software module that provides its output to video analysis software. In one such example, the video analysis software may receive the output and determine a set of ISP parameters based on the received scene type(s) or scene feature(s) output by the system 600.
During operation, the scene recognition software 610 receives surgical video frames 602 and provides them to the trained scene recognition model 612, which processes the video frame 602 to generate one or more recognized scene types 614. Similarly, the scene feature recognition software 620 receives surgical video frames 602 and provides them to the trained scene feature recognition model 622, which processes the video frame 602 to generate one or more recognized scene feature types 624. In this example, the models 612, 622 each output a tuple having probabilities corresponding to each scene type or scene feature type the respective model 612, 622 is trained to recognize. The scene recognition software 610 obtains the tuples and outputs them for use by video analysis software, e.g., video analysis software 452, to determine a set of ISP parameters, such as discussed above with respect to FIG. 4. And while the ML models 612, 622 in this example output tuples, in some examples, ML models may only output the most likely recognized scene or scene feature type, e.g., the scene type or scene feature type with the highest probability. Still other variations are contemplated by this disclosure.
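Structurally, the two-model arrangement can be sketched as below; the callables are stand-ins for the trained models, and all names and outputs are illustrative assumptions rather than the disclosure's API:

```python
# Minimal structural sketch: one model yields scene-type probabilities,
# a second yields scene-feature probabilities, for the same frame.

def analyze_frame(frame, scene_model, feature_model):
    """Return (scene_type_probs, scene_feature_probs) for one video frame."""
    return scene_model(frame), feature_model(frame)

# Toy stand-in models with fixed tuple outputs:
scene_model = lambda frame: (0.1, 0.8, 0.1)   # three trained scene types
feature_model = lambda frame: (0.7, 0.3)      # two trained feature types

scene_probs, feature_probs = analyze_frame(None, scene_model, feature_model)
print(scene_probs, feature_probs)
```

Downstream video analysis software would consume both tuples when selecting and blending ISP parameter sets.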
While the example system 600 shown in FIG. 6 includes two models, one each for scene recognition and scene feature recognition, in some examples, multiple models of either kind may be used. Further, in some examples, one ML model may be trained to recognize both scene types and scene feature types. In one such example, scene recognition software 610 and scene feature recognition software 620 may be a single software module accessing a single trained ML model to obtain tuples representing probabilities of scene types or scene feature types.
Referring now to FIG. 7, FIG. 7 shows an example method 700 for scene-adaptive image quality in surgical video. This example method 700 will be described with respect to the systems shown in FIGS. 4 and 6; however, any suitable system according to this disclosure may be employed.
At block 710, the video analysis software 452, executed by the controller 412, receives a first video frame from an endoscope, the first video frame generated from a first raw image captured by an image sensor of the endoscope and processed by an image signal processing (“ISP”) pipeline, e.g., ISP pipeline 454, having a plurality of ISP parameters. As discussed above, an endoscope may employ an image sensor that outputs raw pixel data that is processed by an ISP pipeline to generate an RGB image that is transmitted as a frame of video. Further, as discussed above, the ISP pipeline may be executed at the controller, on the endoscope, divided between the two, or at any other suitable computing device. In this example, video frames are received both by the controller 412 and the surgeon console 404. In some examples, the video frames may be sent to any suitable computing device, whether as a part of a robotic surgery system or as a standalone computing device.
At block 720, the video analysis software 452 recognizes a first scene type or scene feature type using a trained ML model, e.g., one of trained ML models 612, 622. As discussed above with respect to FIG. 6, a video frame 602 from an endoscope may be provided to scene recognition software 610 or scene feature recognition software 620 to recognize a scene type or scene feature type. Further, as discussed above, the output of the ML model(s) may be a single identified scene type or scene feature type, or it may be a tuple representing probabilities that the video frame depicts a particular scene type or scene feature type. Thus, the video analysis software 452 may recognize the scene type or scene feature type by selecting a single scene type or scene feature type, by outputting probabilities that the video frame depicts different scene types or scene feature types, or in any other suitable format.
In some examples, the video analysis software 452 may recognize both a scene type and a scene feature type. For example, while some example systems may only include trained ML models to recognize scene types or scene feature types, without an ML model capable of recognizing the other, some examples may include one or more models to recognize scene types and scene feature types, e.g., as depicted in FIG. 6 or as described above.
At block 730, the video analysis software 452 determines a set of ISP parameters based on the scene type or the scene feature type. In some examples, the video analysis software 452 may only determine a scene type or a scene feature type, and further the ML model may only identify a single scene type or scene feature type. In one such example, the video analysis software 452 accesses a data store and identifies and retrieves a set of ISP parameters corresponding to the identified scene type or scene feature type.
However, in some examples, the video analysis software 452 may identify a scene type or scene feature type, but may obtain probabilities associated with multiple different scene or feature types, such as in examples where the ML model(s) output tuples of probabilities corresponding to scene types or features. After obtaining the multiple probabilities, the video analysis software 452 may discard scene types or scene feature types with associated probabilities that do not satisfy a pre-determined threshold (e.g., 25%). Alternatively, the video analysis software 452 may determine a set of ISP parameters based on all probabilities output by the ML model 612, 622.
If probabilities for multiple scene types or scene feature types are used, the video analysis software 452 may obtain sets of ISP parameters corresponding to the scene types that were not discarded based on probabilities. The video analysis software 452 may then combine corresponding parameter values from the various sets of ISP parameters to determine a single set of ISP parameters. In this example, the video analysis software 452 weights each parameter value in each set of ISP parameters based on the probability for the corresponding scene type or scene feature type. Corresponding weighted values for each ISP parameter in the sets of ISP parameters may then be summed and divided by the sum of the weights of the ISP parameter sets to obtain an interpolated parameter value for each ISP parameter. Alternatively, rather than employing a weighted interpolated value, the video analysis software 452 may access curves or other non-linear characterizations associated with one or more ISP parameters and interpolate ISP parameter values from the different sets of ISP parameters along such curves or non-linear characterizations, based on the respective probabilities.
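The curve-based alternative can be sketched by blending in the transformed domain and mapping back; the square-root curve here is a stand-in for whatever non-linear characterization a real pipeline would associate with a given parameter:

```python
import math

# Sketch of interpolating along a non-linear characterization curve:
# map candidate values forward through the curve, take the probability-
# weighted average in that domain, then map back. The sqrt/square pair
# is an assumed, invertible stand-in characterization.

def blend_along_curve(values, weights,
                      forward=math.sqrt, inverse=lambda y: y * y):
    total = sum(weights)
    avg = sum(w * forward(v) for v, w in zip(values, weights)) / total
    return inverse(avg)

# Linear blending of 1.0 and 4.0 at equal weights would give 2.5;
# blending along the sqrt curve gives ((1 + 2) / 2)^2 = 2.25.
print(blend_along_curve([1.0, 4.0], [0.5, 0.5]))
```

Blending in the characterized domain keeps the perceived effect of the parameter, rather than its raw value, proportional to the scene probabilities.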
In some examples that recognize both scene types and scene feature types, the video analysis software 452 may determine a set of ISP parameters for the scene type and another set of ISP parameters for the scene feature type, as discussed above. These two sets of ISP parameters may then be combined, such as according to a predetermined ratio or by interpolating between the two sets as discussed above, e.g., by weighted interpolation, linearly or non-linearly. If multiple sets of ISP parameters for multiple scene types or scene features are determined, they also may be combined generally as discussed above. Thus, at block 730, the video analysis software 452 can combine ISP parameter sets in any suitable way to obtain a single set of ISP parameters.
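Combining a scene-type set with a scene-feature set at a predetermined ratio might look like the following sketch, where the 60/40 split and the parameter name are illustrative assumptions:

```python
# Sketch of blending a scene-type parameter set with a scene-feature
# parameter set at a fixed, predetermined ratio.

def combine_by_ratio(scene_params, feature_params, scene_weight=0.6):
    feature_weight = 1.0 - scene_weight
    return {name: scene_weight * scene_params[name]
                  + feature_weight * feature_params[name]
            for name in scene_params}

combined = combine_by_ratio({"gain": 1.0}, {"gain": 2.0})
print(combined)  # gain = 0.6 * 1.0 + 0.4 * 2.0 = 1.4
```

The same helper generalizes to probability-derived weights if the ratio is set from the models' confidences instead of a constant.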
At block 740, the video analysis software 452 applies the set of ISP parameters to the ISP pipeline. In this example, the controller 412 applies the set of ISP parameters to the ISP pipeline 454 executed at the central controller 412; however, in some examples, it may transmit the set of ISP parameters to the endoscope 428 along with an indication to update the then-current ISP parameters used by the endoscope. Further, in examples with a distributed ISP pipeline, the set of ISP parameters may be sent to the respective portions of the ISP pipeline according to which parameters are applied at which portion.
At block 750, the controller 412 receives another video frame from the endoscope 428 and applies the new set of ISP parameters using the ISP pipeline. The controller 412 provides the new video frame to the video analysis software 452 to restart the method 700 at block 710.
Examples according to this disclosure may repeatedly execute the functionality in blocks 710-750 to update ISP parameters over time during a surgical procedure. The rate at which example methods may be repeated may vary from every video frame to only a sampled set of video frames.
While the example discussed above was within the context of a robotic surgical system 400, it should be appreciated that use of such a robotic surgical system is not required. Instead, a traditional minimally-invasive surgery may be performed manually and employ an endoscope, such as by using the example system 500 shown in FIG. 5.
Referring now to FIG. 8, FIG. 8 shows an example computing device 800 suitable for use in example systems or methods for scene-adaptive image quality in surgical video according to this disclosure. The example computing device 800 includes a processor 810 which is in communication with the memory 820 and other components of the computing device 800 using one or more communications buses 802. The processor 810 is configured to execute processor-executable instructions stored in the memory 820 to perform one or more methods for scene-adaptive image quality in surgical video according to different examples, such as part or all of the example method 700 described above with respect to FIG. 7. In this example, the memory 820 includes a video analysis system 860, such as the example video analysis software 452 or system 500 shown in FIG. 4 or FIG. 5. In addition, the computing device 800 also includes one or more user input devices 850, such as a keyboard, mouse, touchscreen, microphone, etc., to accept user input; however, in some examples, the computing device 800 may lack such user input devices, such as remote servers or cloud servers. The computing device 800 also includes a display 840 to provide visual output to a user. However, it should be appreciated that user input devices or displays may be optional in some examples.
The computing device 800 also includes a communications interface 830. In some examples, the communications interface 830 may enable communications using one or more networks, including a local area network (“LAN”); wide area network (“WAN”), such as the Internet; metropolitan area network (“MAN”); point-to-point or peer-to-peer connection; etc. Communication with other devices may be accomplished using any suitable networking protocol. For example, one suitable networking protocol may include the Internet Protocol (“IP”), Transmission Control Protocol (“TCP”), User Datagram Protocol (“UDP”), or combinations thereof, such as TCP/IP or UDP/IP.
While some examples of methods and systems herein are described in terms of software executing on various machines, the methods and systems may also be implemented as specifically-configured hardware, such as a field-programmable gate array (FPGA) or a graphics processing unit (GPU) configured specifically to execute the various methods according to this disclosure. For example, examples can be implemented in digital electronic circuitry, or in computer hardware, firmware, software, or in a combination thereof. In one example, a device may include a processor or processors. The processor comprises a computer-readable medium, such as a random access memory (RAM) coupled to the processor. The processor executes computer-executable program instructions stored in memory, such as executing one or more computer programs. Such processors may comprise a microprocessor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), field programmable gate arrays (FPGAs), and state machines. Such processors may further comprise programmable electronic devices such as PLCs, programmable interrupt controllers (PICs), programmable logic devices (PLDs), programmable read-only memories (PROMs), electronically programmable read-only memories (EPROMs or EEPROMs), or other similar devices.
Such processors may comprise, or may be in communication with, media, for example one or more non-transitory computer-readable media, that may store processor-executable instructions that, when executed by the processor, can cause the processor to perform methods according to this disclosure as carried out, or assisted, by a processor. Examples of non-transitory computer-readable medium may include, but are not limited to, an electronic, optical, magnetic, or other storage device capable of providing a processor, such as the processor in a web server, with processor-executable instructions. Other examples of non-transitory computer-readable media include, but are not limited to, a floppy disk, CD-ROM, magnetic disk, memory chip, ROM, RAM, ASIC, configured processor, all optical media, all magnetic tape or other magnetic media, or any other medium from which a computer processor can read. The processor, and the processing, described may be in one or more structures, and may be dispersed through one or more structures. The processor may comprise code to carry out methods (or parts of methods) according to this disclosure.
The foregoing description of some examples has been presented only for the purpose of illustration and description and is not intended to be exhaustive or to limit the disclosure to the precise forms disclosed. Numerous modifications and adaptations thereof will be apparent to those skilled in the art without departing from the spirit and scope of the disclosure.
Reference herein to an example or implementation means that a particular feature, structure, operation, or other characteristic described in connection with the example may be included in at least one implementation of the disclosure. The disclosure is not restricted to the particular examples or implementations described as such. The appearance of the phrases “in one example,” “in an example,” “in one implementation,” or “in an implementation,” or variations of the same in various places in the specification does not necessarily refer to the same example or implementation. Any particular feature, structure, operation, or other characteristic described in this specification in relation to one example or implementation may be combined with other features, structures, operations, or other characteristics described in respect of any other example or implementation.
Use herein of the word “or” is intended to cover inclusive and exclusive OR conditions. In other words, A or B or C includes any or all of the following alternative combinations as appropriate for a particular usage: A alone; B alone; C alone; A and B only; A and C only; B and C only; and A and B and C.
Insights from the Asia Future Lab for Innovation and Policy
How to design future-proof solutions for the Asian textile-fashion sector?
On October 4th 2022, SEED and GO4SDGs organised the Asia Future Lab for Innovation and Policy. The event was organised as part of the Global Future Lab series, bringing together around 30 regional SME stakeholders, such as policy makers, intermediaries and SMEs, to co-create and develop future-proof solutions. Designed to connect and leverage the expertise of these key stakeholders, the outcomes of the Future Labs will inform a Sustainable SME Action Agenda.
The participatory lab focused on the Asian textile-fashion sector. Offering employment to more than 300 million people worldwide, the textile-fashion sector remains crucial to realising the ambitions set out in the Sustainable Development Goals. At the same time, the textile-fashion sector is itself a major contributor to global environmental problems. Historically, Asian countries have dominated the global textile industry as major exporters. This has been accompanied by often severe environmental challenges, such as water pollution. Therefore, the Asia Future Lab aimed to offer a platform for key SME stakeholders to discuss how SMEs are leading the way to transform the regional textile-fashion sector, what challenges they still face and what potential there is for further action. In so doing, the discussion was guided by the Five Key Action Areas: Innovation, Non-Financial Support, Finance, Policy and Market.
To read more about the Asian textile-fashion sector and the Five Action Areas click here.
The event featured some exemplary cases of SMEs, intermediaries and policy makers, who shared their success stories and challenges throughout the event.
The panel brought together a diverse group of SME stakeholders, including representatives from government, intermediaries and green textile-fashion SMEs. Part of the panel were Ukhnaa Sarangoo from Agronomes et Vétérinaires sans frontières (AVSF) (Mongolia), Prajakta Verma from the Ministry of Textiles (India), Hu Kehua of the Office of Social Responsibility of the National Textile and Apparel Council (China), Amorpol Huvanandana of Moreloop (Thailand) and Zinaida Fadeeva of the SWITCH-ASIA SCP Facility. The panel was kicked off by Ukhnaa Sarangoo, who pointed out some of the key challenges faced in the textile-fashion sector. For instance, how the expectation to receive sustainable products for the same prices as unsustainable ones often clashes with the reality of high production costs. She highlighted the need for more readiness to invest in sustainable supply chains and to improve the green loan system. To introduce the government perspective, Prajakta Verma spoke about the importance of regulations and complementing benchmarks to support advancements toward the green economy. For the next few years, she highlighted three main issues: improving and innovating on textile waste, advocating to replace the current trend of fast fashion with responsible fashion trends and adopting sustainable technologies along the entire textile value chain. Adding to this, Hu Kehua elaborated on the government’s role to steer the economy toward more sustainability. He emphasised the importance of government regulation and high-level goals, such as national carbon neutrality pledges, that function as a point of orientation for all actors in the economy. Introducing the perspective of green textile-fashion SMEs, Amorpol Huvanandana explained how – at Moreloop – they embarked on the journey to build a sustainable textile-fashion enterprise. Utilising the online market space, combined with a green business model, they were able to build a successful SME. 
However, he elaborated on one of the key challenges for green textile-fashion SMEs, namely explaining how the enterprise is more sustainable than many mainstream alternatives. He emphasised that we must introduce sustainable practices into the mainstream, making them more known to consumers and businesses. The panel was concluded with the input of Zinaida Fadeeva, speaking from an intermediary’s perspective. She highlighted the need to go beyond individual regulations, standards and laws and aim for a comprehensive policy package, thereby addressing the variety of challenges and combining regulations with incentives and benefits to green enterprises.
Building on these initial insights from the panel discussion, the Deep-Dive Working Groups continued to expand on the identified solutions. The working groups were kicked off with a first input of the SME, intermediary and government champions, presenting their ambitions and challenges to future-proof solutions in the textile-fashion sector. Hannes MacNulty and Youngran Hur provided a first overview of UNEP’s ambitions for future proof solutions in the textile-fashion sector to open the discussion. To complement these insights, entrepreneurs Amita Deshpande (Recharkha, India) and Vimlendu Jha (Green the Map, India) introduced their business model and made the case for green textile-fashion SMEs. Ding Shuang (China National Institute of Standardisation) and Rogier van Mansvelt (GGGI, Cambodia) further expanded on these ideas from an intermediary and research perspective. The Working Groups advanced the discussion along the Five Action Areas Innovation, Non-Financial Support, Finance, Policy and Market. Along these action areas, they identified three critical next steps to designing future-proof solutions: First, there is a need for better communication and outreach. Green SMEs have an advantage today, namely that they already meet more ambitious targets for sustainability than their competitors. However, it often remains difficult for them to communicate this advantage towards their customers and share their lessons with partners. Secondly, support programmes in each of the Five Action Areas must be better tailored to the specific geographic and cultural contexts, targeting the specific enterprise sector and business model. Finally, coordination of all of these activities presents a major challenge. To successfully design future-proof solutions for 2030, activities must be well coordinated across the textile-fashion sector, involving all key stakeholders.
These outcomes of the Asia Future Lab, together with the other events in the Global Future Lab Series, will inform the Sustainable SME Action Agenda and help frame pathways for future actions to be ready for the challenges of 2030.
abstraction in motion
His observations of natural phenomena, their rhythms and movements, are often the starting point for the work of Michael Bom.
He often draws his motifs from natural elements such as water, sun, wind, and moon. An important motif for him is the sea. Here nature appears in its simplest, essential, unchanging form. The sea is a constantly moving landscape which is never twice the same, but still remains the same.
Through abstraction he brings across the essence of these observations.
Michael works across different mediums in which he explores his interest in reflection, movement and structure. In his print work and paintings on canvas, flat and static surfaces, he uses dots, rhythm and the juxtaposition of layers to achieve abstract translations of movement. With his kinetic 'analogue' machines he is able to express his observations through a constantly changing movement between interacting surfaces of dots and filters.
Exhibition Winter Salon 2020 at Arti et Amicitiae
Winter Salon shows recent work by more than 150 artist members of Arti et Amicitiae in Amsterdam.
My optical kinetic work “Coloured Circle” will be on show in this exhibition. This work is one of my first pieces in which the viewer has to move around to experience movement and change of colour.
10 December – 28 March, Tuesday to Sunday, 12:00–18:00. Entrance: €3. Please reserve a time slot via this link. Address: Arti et Amicitiae, 112 Rokin, 1012 LB Amsterdam
Groot Rotterdams Atelier Weekend – 26 & 27 September 2020
In September there was a chance to visit my studio during the Groot Rotterdams Atelier Weekend, 26 & 27 September 2020, 11:00–17:00. Address: Ackersdijkstraat 20 (studio nr 18), Rotterdam
The Big Art Calendar 2020
On 5 September I was pleased to be the featured artist of the Grotekunstkalender 2020.
Publieke Werken Rotterdam
This recent work of mine hung in the public space of Rotterdam as part of the "Publieke Werken" exhibition from 9 July till 9 August.
Since so many events are not taking place in the city due to Covid-19, the billboard spaces are empty.
Public Works is an outdoor exhibition throughout the city of Rotterdam with work by more than 300 visual artists.
An edition of 9 extra-large poster prints (119 x 84 cm) is for sale for 35 euros each. Contact me if you would like to order one.
His kinetic optical installation 'Moving Images II' was nominated for the 5th International André Evard Art Award by the Kunsthalle Messmer in Germany and was part of the exhibition at the Messmer museum until February 2019.
History And Physical Exam
To diagnose all forms of hepatitis, your doctor will first take your history to determine any risk factors you may have.
During a physical examination, your doctor may press down gently on your abdomen to see if there's pain or tenderness. Your doctor may also check for any swelling of the liver and any yellow discoloration in your eyes or skin.
What Are The Symptoms And Signs Of Viral Hepatitis
The period of time between exposure to hepatitis and the onset of the illness is called the incubation period. The incubation period varies depending on the specific hepatitis virus: hepatitis A virus has an incubation period of about 15 to 45 days; hepatitis B virus, from 45 to 160 days; and hepatitis C virus, from about 2 weeks to 6 months.
Many patients infected with HAV, HBV, and HCV have few or no symptoms of illness. For those who do develop symptoms of viral hepatitis, the most common are flu-like symptoms including:
How Is It Spread
Hepatitis A is spread when a person ingests fecal matter, even in microscopic amounts, from contact with objects, food, or drinks contaminated by feces or stool from an infected person.
Hepatitis B is primarily spread when blood, semen, or certain other body fluids, even in microscopic amounts, from a person infected with the hepatitis B virus enter the body of someone who is not infected. The hepatitis B virus can also be transmitted through:
- Birth to an infected mother
- Sex with an infected person
- Sharing equipment that has been contaminated with blood from an infected person, such as needles, syringes, and even medical equipment, such as glucose monitors
- Sharing personal items such as toothbrushes or razors
- Poor infection control in health care facilities, which has resulted in outbreaks
Hepatitis C is spread when blood from a person infected with the hepatitis C virus, even in microscopic amounts, enters the body of someone who is not infected. The hepatitis C virus can also be transmitted through:
- Sharing equipment that has been contaminated with blood from an infected person, such as needles and syringes
- Receiving a blood transfusion or organ transplant before 1992
- Poor infection control in health care facilities, which has resulted in outbreaks
- Birth to an infected mother
How Long Before I Have Symptoms
Many people have mild symptoms or no symptoms, which is why hepatitis is sometimes called a "silent" disease.
Hepatitis A. The symptoms usually show up 2 to 6 weeks after the virus enters your body. They usually last for less than 2 months, though sometimes you can be sick for as long as 6 months.
Some warning signs that you may have hepatitis A are:
Hepatitis B. The symptoms are the same as hepatitis A, and you usually get them 3 months after you’re infected. They could show up, though, anywhere from 6 weeks to 6 months later.
Sometimes the symptoms are mild and last just a few weeks. For some people, the hep B virus stays in the body and leads to long-term liver problems.
Hepatitis C. The early symptoms are the same as hepatitis A and B, and they usually happen 6 to 7 weeks after the virus gets in your body. But you could notice them anywhere from 2 weeks to 6 months later.
For about 25% of people who get hep C, the virus goes away on its own without treatment. In other cases, it sticks around for years. When that happens, your liver might get damaged.
Remember, it’s possible to spread all the types of hepatitis even if you don’t show any signs of being sick.
How Is Hepatitis A Diagnosed
Some people have only a few symptoms and no signs of jaundice. Without visible signs of jaundice, it's hard to diagnose any form of hepatitis through a physical examination. When symptoms are minimal, hepatitis A can remain undiagnosed.
After you discuss your symptoms with your doctor, they may order a blood test to check for the presence of a viral or bacterial infection. A blood test will reveal the presence of the hepatitis A virus.
Complications due to a lack of diagnosis are rare.
Who Should Be Vaccinated
Children
- All children aged 12–23 months
- All children and adolescents 2–18 years of age who have not previously received hepatitis A vaccine
People at increased risk for hepatitis A
- International travelers
- Men who have sex with men
- People who use or inject drugs
- People with occupational risk for exposure
- People who anticipate close personal contact with an international adoptee
- People experiencing homelessness
People at increased risk for severe disease from hepatitis A infection
- People with chronic liver disease, including hepatitis B and hepatitis C
- People with HIV
Other people recommended for vaccination
- Pregnant women at risk for hepatitis A or at risk for a severe outcome from hepatitis A infection
Any person who requests vaccination
There is no vaccine available for hepatitis C.
How Do You Get Hepatitis C
Just like hepatitis B, you can get this type by sharing needles or having contact with infected blood. You can also catch it by having sex with somebody who’s infected, but that’s less common.
If you had a blood transfusion before new screening rules were put in place in 1992, you are at risk for hepatitis C. If not, the blood used in transfusions today is safe. It gets checked beforehand to make sure it's free of the viruses that cause hepatitis B and C.
It’s rare, but if you’re pregnant and have the disease, it’s possible to pass it to your newborn.
There are some myths out there about how you get hepatitis C, so let's set the record straight. It's not spread by food and water, and you can't spread it through everyday casual contact.

Symptoms of hepatitis C may include:

- Joint pain
See your doctor as soon as possible if you have any of these symptoms.
Sometimes, people have no symptoms. To be sure you have hepatitis, youâll need to get tested.
How Serious Is It
- People can be sick for a few weeks to a few months
- Most recover with no lasting liver damage
- Although very rare, death can occur
- 15%–25% of chronically infected people develop chronic liver disease, including cirrhosis, liver failure, or liver cancer
- More than 50% of people who get infected with the hepatitis C virus develop a chronic infection
- 5%–25% of people with chronic hepatitis C develop cirrhosis over 10–20 years
How Is Hepatitis Diagnosed
To diagnose hepatitis, your health care provider:
- Will ask about your symptoms and medical history
- Will do a physical exam
- Will likely do blood tests, including tests for viral hepatitis
- Might do imaging tests, such as an ultrasound, CT scan, or MRI
- May need to do a liver biopsy to get a clear diagnosis and check for liver damage
Viral hepatitis is an infection that causes liver inflammation and damage. Inflammation is swelling that occurs when tissues of the body become injured or infected. Inflammation can damage organs. Researchers have discovered several different viruses that cause hepatitis, including hepatitis A, B, C, D, and E.
What Is Hepatitis E
Hepatitis E, also called enteric hepatitis, is similar to hepatitis A, and more prevalent in Asia and Africa. It is also transmitted through the fecal-oral route. It is generally not fatal, though it is more serious in women during pregnancy and can cause fetal complications. Most patients with hepatitis E recover completely.
What Are The Symptoms Of Hepatitis
Some people with hepatitis do not have symptoms and do not know they are infected. If you do have symptoms, they may include:
- Joint pain
- Jaundice, yellowing of your skin and eyes
If you have an acute infection, your symptoms can start anywhere from 2 weeks to 6 months after you got infected. If you have a chronic infection, you may not have symptoms until many years later.
How Do You Get Hepatitis A
The main way you get hepatitis A is when you eat or drink something that has the hep A virus in it. A lot of times this happens in a restaurant. If an infected worker there doesn’t wash their hands well after using the bathroom, and then touches food, they could pass the disease to you.
Food or drinks you buy at the supermarket can sometimes cause the disease, too. The ones most likely to get contaminated are:
- Shellfish
- Ice and water
You could catch or spread it if you’re taking care of a baby and you don’t wash your hands after changing their diaper. This can happen, for example, at a day care center.
Another way you can get hep A is when you have sex with someone who has it.
What Are The Treatments For Hepatitis B
If you think you may have been exposed to hepatitis B, it's important to talk with a healthcare professional as soon as possible.
A doctor or other healthcare professional may administer the first dose of the hepatitis B vaccine and a shot of hepatitis B immunoglobulin. This is a combination of antibodies that provide short-term protection against the virus.
Though both can be given up to a week after exposure, they're most effective at preventing infection if administered within 48 hours.
If you receive a diagnosis of acute hepatitis B, a doctor may refer you to a specialist. They may advise you to get regular blood tests to ensure you don't develop chronic hepatitis.
Many people with acute hepatitis B don't experience serious symptoms. But if you do, it can help to:
- get plenty of rest
- take over-the-counter pain medication, like naproxen, when needed
Other lifestyle changes may also be needed to manage your infection, such as:
- eating a nutritious, balanced diet
- avoiding substances that can harm your liver, such as:
- alcohol
- certain herbal supplements or medications, including acetaminophen
If blood tests show you still have an active infection after 6 months, your doctor may recommend further treatment, including medications to help control the virus and prevent liver damage.
What Is Autoimmune Hepatitis
The liver is a large organ that sits up under your ribs on the right side of your belly. It helps filter waste from your body, makes bile to help digest food, and stores sugar that your body uses for energy. Autoimmune hepatitis occurs when your body's infection-fighting system attacks your liver cells. This causes swelling, inflammation and liver damage.
It is a long-term or chronic inflammatory liver disease.
Autoimmune hepatitis:
- May occur at any age
- Affects women more than men
- Is often linked to other diseases where the body attacks itself
How Is Hepatitis Treated In A Child
Treatment will depend on your child's symptoms, age, and general health. It will also depend on how severe the condition is.
Your child's treatment will depend on what's causing his or her hepatitis. The goal of treatment is to stop damage to your child's liver. It's also to help ease symptoms. Your child's treatment may include:
- Medicines. These can control itching, treat the virus, or control an autoimmune disease.
- Supportive care. This includes eating a healthy diet and getting enough rest.
- Reducing risk. Not using alcohol or illegal drugs.
- Blood testing. This can tell if the disease is progressing.
- Hospital stay. This is done in severe cases.
- Liver transplant. This is done for end-stage liver failure.
- Helping to prevent the spread of viral hepatitis. Having good personal health habits, such as handwashing.
Don’t Miss: Genotype 4 Hepatitis C Treatment
What Causes Alcoholic Hepatitis
When alcohol gets processed in the liver, it produces highly toxic chemicals. These chemicals can injure the liver cells. This injury can lead to inflammation and, eventually, alcoholic hepatitis.
Although heavy alcohol use can lead to alcoholic hepatitis, experts aren't entirely sure why the condition develops in some people but not in others.
Alcoholic hepatitis develops in a minority of people who heavily use alcohol, no more than 35 percent, according to the American Liver Foundation. It can also develop in people who use alcohol only moderately.
Because alcoholic hepatitis doesn't occur in all people who heavily use alcohol, other factors may influence the development of this condition.
Risk factors include:
- having genetic factors that affect how the body processes alcohol
- living with liver infections or other liver disorders, such as hepatitis B, hepatitis C, and hemochromatosis

To diagnose alcoholic hepatitis, your doctor may order imaging tests such as:

- abdominal CT scan
- ultrasound of the liver
Your doctor may order a liver biopsy to confirm a diagnosis of alcoholic hepatitis. A liver biopsy requires your doctor to remove a tissue sample from the liver. It's an invasive procedure with certain inherent risks, but biopsy results can show the severity and type of liver condition.
What Causes Hepatitis In A Child
Hepatitis in children can be caused by many things. Your child can get hepatitis by being exposed to a virus that causes it. These viruses can include:
- Hepatitis viruses. There are 5 main types of the hepatitis virus: A, B, C, D, and E.
- Cytomegalovirus. This virus is a part of the herpes virus family.
- Epstein-Barr virus. The virus causes mononucleosis.
- Herpes simplex virus. Herpes can affect the face, the skin above the waist, or the genitals.
- Varicella zoster virus. A complication of this virus is hepatitis. But this happens very rarely in children.
- Enteroviruses. This is a group of viruses often seen in children. They include coxsackie viruses and echoviruses.
- Rubella. This is a mild disease that causes a rash.
- Adenovirus. This is a group of viruses that causes colds, tonsillitis, and ear infections in children. They can also cause diarrhea.
- Parvovirus. This virus causes fifth disease. Symptoms include a slapped-cheek rash on the face.
Conditions can also cause hepatitis in children. These can include autoimmune liver disease. In this disease, your child's immune system makes antibodies that attack the liver. This causes inflammation that leads to hepatitis.
Don’t Miss: What Type Of Doctor Treats Hepatitis C
How Is Autoimmune Hepatitis Treated
Treatment works best when autoimmune hepatitis is found early. The goal of treatment is to control the disease and to reduce or get rid of any symptoms.
To do this, medicines are used to help slow down or suppress your overactive immune system. They also stop your body from attacking your liver.
Once you have started treatment, it can take 6 months to a few years for the disease to go into remission. Some people can stop taking medicine, but often the disease comes back. You may need treatment now and then for the rest of your life. Some people need to remain on treatment if they have relapsed many times or if their disease is severe.
In some cases autoimmune hepatitis may go away without taking any medicines. But for most people, autoimmune hepatitis is a chronic disease.
It can lead to scarring of the liver . The liver can become so badly damaged that it no longer works. This is called liver failure.
If you have liver failure, a liver transplant may be needed.
Be sure to ask your healthcare provider about recommended vaccines. These include vaccines for viruses that can cause liver disease.
What Is Alcoholic Hepatitis
Alcoholic hepatitis is an inflammatory condition of the liver caused by heavy alcohol consumption over an extended period of time. Ongoing alcohol use and binge drinking can both aggravate this condition.
If you develop this condition, it's important that you consider stopping alcohol use gradually. Continued drinking can lead to additional health conditions, such as cirrhosis, excessive bleeding, or even liver failure.
How Long Does It Last
Hepatitis A can last from a few weeks to several months.
Hepatitis B can range from a mild illness, lasting a few weeks, to a serious, life-long condition. More than 90% of unimmunized infants who get infected develop a chronic infection, but 6%–10% of older children and adults who get infected develop chronic hepatitis B.
Hepatitis C can range from a mild illness, lasting a few weeks, to a serious, life-long infection. Most people who get infected with the hepatitis C virus develop chronic hepatitis C.
Hepatitis A And E Symptoms
Hepatitis A and hepatitis E present with similar symptoms. The diseases may develop without any signs or symptoms, or symptoms may be nonspecific. If you experience any of the symptoms below for more than two weeks, make an appointment with a gastroenterologist.
There are three phases of hepatitis A and E, and symptoms may differ depending on the stage. Early in the disease, called the prodromal phase, symptoms may include:
- Fever
What Is The Treatment For Viral Hepatitis
Treatment of acute viral hepatitis differs from that of chronic viral hepatitis. Treatment of acute viral hepatitis involves resting, relieving symptoms, and maintaining an adequate intake of fluids. Treatment of chronic viral hepatitis involves medications to eradicate the virus and taking measures to prevent further liver damage.
Acute hepatitis
In patients with acute viral hepatitis, the initial treatment consists of relieving the symptoms of nausea, vomiting, and abdominal pain. Careful attention should be given to medications or compounds, which can have adverse effects in patients with abnormal liver function. Only those medications that are considered necessary should be administered since the impaired liver is not able to eliminate drugs normally, and drugs may accumulate in the blood and reach toxic levels. Moreover, sedatives and "tranquilizers" are avoided because they may accentuate the effects of liver failure on the brain and cause lethargy and coma. The patient must abstain from drinking alcohol since alcohol is toxic to the liver. It occasionally is necessary to provide intravenous fluids to prevent dehydration caused by vomiting. Patients with severe nausea and/or vomiting may need to be hospitalized for treatment and intravenous fluids.
Chronic hepatitis
Medications for chronic hepatitis C infection include:
- oral daclatasvir
Medications for chronic hepatitis B infection include:
Mobile access to learner support services and digital content has been transformed as a result of the availability of relatively low-cost mobile (m) devices and sophisticated m-applications. However, the limited input functionalities of m-devices mean that accessing information, content and services is often an unfriendly user experience, ultimately affecting the uptake of m-technologies within the institution. To increase the impact of m-deployments, the Waikato Institute of Technology is actively exploring the potential of Quick Response (QR) Codes to enable ready access for stakeholders. During this exploration a framework, A.C.E., has been used firstly to guide the introduction of QR learning activities undertaken within the institution and secondly to provide a diagrammatic overview to increase institutional awareness of these technologies. The A.C.E. framework is underpinned by three As, Cs and Es. It is aligned with the identifiable stages of a project life cycle, the foundation pillars of flexible learning and indicators of how e-deployments can be monitored. This interactive presentation will encourage participants to critically review how the A.C.E. framework for the introduction of QR codes within the institution has been constructed.
A short-term memory (STM) paradigm has been used to examine the influence of frequency separation versus frequency ratio on the processing of pure-tone dyads presented outside of a musical (tonal) context. The physical interaction produces a sensation termed beating when the frequency separation between a dyad's two tones is less than a single critical bandwidth. Models of sensory consonance/dissonance (C/D) predicted that all pure-tone dyads with frequency differences greater than a critical bandwidth should be considered to be consonant. The representation of musical C/D typically reflects an integration of the sensory properties of a complex-tone signal, the musical context, and the listener's exposure to intervals. Nonmusicians displayed more accurate memory for large-integer compared with small-integer ratio dyads.
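The critical-bandwidth boundary described in the abstract can be sketched numerically. The snippet below is purely illustrative and not taken from the study: it uses the Glasberg and Moore equivalent rectangular bandwidth (ERB) formula as one common approximation of the critical bandwidth (the study itself may rely on a different estimate, such as the Bark scale) to test whether a pure-tone dyad falls within a single auditory filter, the regime in which beating is predicted.

```python
def erb(frequency_hz: float) -> float:
    """Equivalent rectangular bandwidth (Hz) of the auditory filter
    centred at frequency_hz, per the Glasberg & Moore approximation."""
    return 24.7 * (4.37 * frequency_hz / 1000.0 + 1.0)

def within_critical_band(f1: float, f2: float) -> bool:
    """True if the dyad's frequency separation is smaller than the
    critical bandwidth at the dyad's centre frequency, i.e. the
    regime in which beating/roughness is predicted."""
    centre = (f1 + f2) / 2.0
    return abs(f1 - f2) < erb(centre)

# A narrow dyad (440 Hz vs 460 Hz) sits inside one critical band,
# while a 3:2-ratio dyad (440 Hz vs 660 Hz) does not; sensory C/D
# models would therefore call the latter consonant.
print(within_critical_band(440.0, 460.0))  # True
print(within_critical_band(440.0, 660.0))  # False
```

Note how the 3:2 (small-integer ratio) dyad lies far outside a single critical band, which is why sensory models classify such dyads as consonant irrespective of their integer ratio.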
Keywords
Bandwidth; Mathematical models; Storage allocation (computer); Musical context; Short-term memory (STM); Small-integer ratio dyads
Purpose: The aim of this study was to assess whether the removal of blood donation "barriers" facilitates blood donation intentions, using a sample of African migrants, and to identify the implications for social marketing. African migrants are currently under-represented as blood donors in Australia. Some members of the African community have unique donation needs that can only be served by this community.

Design/methodology/approach: Interviews were conducted with 425 people from the African community in Victoria and South Australia. Factor analysis was performed on the barriers and the removal of barriers. Item groupings for both constructs differed, suggesting that barriers and their removal are not necessarily opposite constructs.

Findings: The cultural society factor was negatively associated with blood donation intention (i.e. a barrier), whereas engagement and overcoming fear were positively associated with blood donation intention (i.e. facilitators). Cultural issues and lack of understanding were not seen to impede blood donation. Additionally, the removal of cultural barriers did not facilitate increases in blood donation intentions. Thus, the removal of barriers may not be sufficient on its own to encourage donation.

Research limitations/implications: This study only examines whether the removal of barriers facilitates blood donation with one group of migrants, and relationships may vary across other migrant and non-migrant groups.

Practical implications: Policymakers often use social marketing interventions to overcome barriers as a way of facilitating blood donation. This research suggests that removing barriers is indeed important because these barriers impede people considering becoming blood donors. However, the findings also suggest that the removal of barriers is insufficient on its own to motivate blood donations (i.e. the removal of barriers is a hygiene factor). If this is the case, social marketing campaigns need to be multifaceted, removing barriers as well as leveraging facilitators simultaneously.

Social implications: This work identified that the impact of barriers and their removal may facilitate effective social marketing campaigns in differing ways, in the context of blood donation.

Originality/value: How barriers and their removal impact social marketing activities (i.e. blood donation behaviour) has generally not been explored in research.
When a close family member passes away, you might assume that you will be named as a beneficiary in their will. It can be surprising to learn that this was not the case, and it can be distressing if you suspect that undue influence was used in the creation of the will in question. But what does this really mean, and do you have grounds to contest the will?
Manipulation or Coercion
Undue influence in will drafting is essentially the manipulation or coercion of the individual creating the will. The resulting will explicitly favours a certain individual or individuals, to the detriment of other friends and family members who might reasonably have assumed that they would be named in the will.
Proving Undue Influence
This undue influence can be difficult to prove, and it might be necessary to engage a specialist will lawyer to formally dispute the matter. There are a number of factors which can make you suspect that undue influence was used when the will was created:
The Pros and Cons
You must weigh up the pros and cons of disputing a will, and this goes beyond the cost and legal complexities of proceeding. Attempting to change a will might put you at odds with other beneficiaries, who might well be members of your immediate family. And yet you might also feel you have an obligation to contest the will if you legitimately believe that undue influence was used, as the resulting will was not in line with your loved one's wishes.
It can be upsetting to think that a family member has been manipulated into parting with their assets, and your decision to dispute the will might stem from your desire to right this wrong.
If you want to learn more about will drafting, feel free to contact a will lawyer for more information. | http://whcrimestoppers.com/2020/07/29/the-problem-of-undue-influence-and-will-drafting/ |
A young dragoon tasked with guarding the sealed dragon in Dragon's Village. He has mastered the art of dragon sealing, a skill which only a chosen few can inherit. Knowing that his dragoon ancestor made the ultimate sacrifice in order to seal the dragon, Duke is ready to follow in those footsteps should the time ever come.
In-game description
Duke is a supporting character in Final Fantasy Brave Exvius. He comes from a long line of dragoons who watch over Dragon's Village, in the continent of Georl, and keep an eye over a sealed evil dragon. He is also a leader of the Resistance against the Aldore empire. He is assisted in his duties by his childhood friends Olif, Mystea and Charie.
Profile
Appearance
Personality
Having developed quite a stern personality due to his dragoon training, Duke is the quiet type, but he cares deeply about his friends. He is reliable and earnest—his comrades place a great deal of trust in him—, but also has a self-sacrificial and stubborn side: Duke feels a deep admiration for his ancestor, who sacrificed his life in order to seal the dragon, and is ready to follow in those footsteps should the time ever come, even though his friends' supportive skills are meant to prevent that from happening.
Story
A descendant of the legendary dragoon Ryunan, Duke was born to the ancient clan known as the dragon sealers that resides in Dragon's Village. As such, Duke was entrusted with the mission to protect the seal of a dragon that once terrorized Paladia, and trained from childhood to become a dragoon. In his duty, he had the company of peers who likewise work to keep the dragon sealed: Olif, Mystea and Charie. Olif, the oldest, is the unofficial guardian of the group, while Mystea, almost like an older sister to Duke, is one of the few people who knows Duke's secret, and thus watches over him affectionately. Charie, for her part, is like a younger sister to Duke. When they were little, Charie once showed Duke a weird dance to make him loosen up, which Mystea found amusing.
Duke trained himself day in and day out until he mastered the art of dragon sealing, a skill which only a chosen few can inherit. When Duke gained prominence among his clan as a Dragoon, he retired his beloved Drakeshorn Spear and took up the legendary Virtue Drake, a heavy spear passed on through his clan's generations that had long been in storage.
Eventually everyone in the village joined the Resistance, which opposes the tyrannical rule of the Aldore Emperor. Duke himself serves as a leader of the Resistance, but finds it difficult to fulfill the duties of his post whilst attempting to accomplish his dragoon duties.
In "Where Destinies Intersect" Duke is hiding atop a tree when Lexa and Merald, members of Morze's Soirée, hide in the bushes below while trying to avoid Galas of The Orders and his soldiers. Duke jumps to Lexa and Merald's side and offers to help them get away. Far away from the soldiers, the girls thank Duke, but he simply asks whether they saw Charie, who is missing. He reflects that searching blindly won't get him anywhere and decides to wait for Charie at the village. He also helps Lexa and Merald save a grandmother and her grandson from soldiers. When the stewardesses ask him about Dragon's Village, Duke leads them to his hometown. Eventually, Charie turns up at the village with Elbis and Nichol.
Duke briefly meets Lasswell's party when they are led to the village by a prisoner they saved from execution at the hands of Galas. Duke is guarding the entrance and orders them not to move, but the man they saved swears on his honor that they aren't enemy spies. Duke lets them enter the village, and apologizes to the villager for not being able to go rescue him.
In "Fate of the Dragoon", Charie confides in Duke that she senses the seal may be weakening. When Duke checks, the dragon's seal is functioning properly, and wonders if it is due to spending time away to help with Resistance activities. As they move farther in, a dragon attacks them and the two end up separated. Trusting her resilience, Duke returns to the village to meet her, Olif and Mystea, all the while suspecting that the seal has been broken. At the village, he meets with Olif and Mystea; they ask about Charie's whereabouts and Duke explains that they lost sight of each other. The two then accompany Duke up the mountain where they are to re-seal the dragon.
Anxious to do his duty, Duke rushes ahead of Mystea and Olif, and is waiting for them next to a waterfall when he meets a young dragoon, who demands to know why Duke is by himself. When Duke assures him that his friends are just a little behind, the dragoon boy tells Duke to go further only if his friends are with him; otherwise he will be in trouble. He then leaves. Duke, demanding to know his identity, follows him and goes ahead once again. When Mystea and Olif finally catch up with Duke, Mystea rebukes him for his lack of self-preservation. Olif later voices that they understand Duke's determination, but that they also want to protect him from such a dire fate. They finally meet Charie and, after fending off a monster's attack, Duke apologizes and asks his friends to lend him their strength.
The young dragoon reappears, whom Charie calls by his name: Ryunan. Duke reacts to the name, as it is that of his legendary ancestor. Ryunan thus admits to being merely a strong "memory", guessing that Duke is the offspring of his younger brother. Duke is honored to be in his presence, but Ryunan reveals that the legends embellished his death and asks Duke not to make the mistake that he did. Before parting, he asks that they retrieve his spear Northern Lights and return it to the village. Ultimately Duke is able to seal the dragon and find the spear, which they later return to their ancestors' graves.
Sometime later, the quartet undertake a cleanup operation at the Vesta Ruins, where a den of monsters has gone berserk, likely an effect of the dragon's resurrection. As the dragon's influence should have waned due to the seal, they suspect that someone unknown is interfering with its power and decide to investigate some more. After defeating the fairy responsible, they return to the village.
Gameplay
Duke appears as a 5-7★ summonable unit. He is a Dragoon closely associated with the Ice element, and is geared towards jumping. When using his 'timed jumps', he can drop from the sky when the player presses a button, and cap chains for increased damage (he is thus a reliable 'finisher', especially against Dragon-type monsters). His weapon of choice is the spear, but he can also equip swords and greatswords. As for armor selection, he can equip light shields, hats, helms, light armor, heavy armor and accessories.
He appeared as a guest unit in the story event Fate of the Dragoon.
What's needed with a claim that a will is invalid?
Author: Rachel Leech
You may be considering a claim against the validity of a will, but not know what you need to establish to be successful. Here at Birkett Long we have a team of specialists with experience in every way a will can be challenged.
The validity of a will can be disputed on a number of different grounds, which include:
- The deceased lacked the mental capacity to make the will, known as a lack of testamentary capacity
- The deceased was unduly influenced into making the will
- The will is a forgery
- The will does not comply with the necessary formalities
- The deceased did not know and approve the contents of the will
It is not uncommon for a will to be challenged on a number of different grounds.
What do I need to prove to successfully challenge the validity of a will?
Each ground has a different legal test which needs to be fulfilled before the will is found to be invalid.
Lack of mental capacity (testamentary capacity)
With a claim that the deceased lacked the mental capacity to make a will (also known as a lack of testamentary capacity), you need to establish that the deceased did not fulfil the Banks v Goodfellow test. This is the case which established that, for someone to have sufficient mental capacity to make a will, they must:
- Know they are making a will, and what the will does
- Know the approximate size of their estate at the time they make the will
- Appreciate who might expect to inherit from them, but they do not have to include them in the will
- Not be suffering from a disorder of the mind, or insane delusions, which affects the disposition of the deceased’s estate.
Undue Influence
Establishing that a will is invalid due to undue influence is notoriously difficult, largely because you need to establish that the deceased was influenced, and that this influence overpowered the deceased’s ability to do what they wanted.
The five things which need to be established are:
1. There was an opportunity to exercise influence
2. That there was an actual exercise of influence
3. The actual exercise of the influence was in relation to the will
4. The influence was undue i.e. went beyond mere persuasion
5. That the will was brought about by these means
Undue influence does not have to be physical. It can be verbal abuse, or verbal pressure on someone who is old or weak. Influence may also have a ‘drip, drip’ effect, where someone makes a number of comments over a long period.
Mere persuasion must not be mistaken for undue influence. It is perfectly acceptable for someone to suggest to a person making a will that they should be included, or even to try to persuade them. What matters is whether the influence overcomes the person’s ability to exercise their own free will.
One of the difficulties in establishing undue influence is proof. The person alleging undue influence must prove it, which can be difficult as the coercion will often take place behind closed doors. The court can find undue influence even when there is no direct witness evidence, by drawing an inference of undue influence from other proven facts. However, it is not an easy task.
Fraudulent calumny
Another ground, similar to but distinct from undue influence, is fraudulent calumny. This is when A poisons the deceased’s mind against B, who otherwise would have been a beneficiary. A must have poisoned the deceased’s mind against B by making dishonest allegations about B’s character, which A either knew to be untrue or did not care whether they were true.
Fraud or forgery
A will is invalid if it can be established that it is a forgery. This often comes down to expert evidence examining the signature on the will. However, these challenges can be very difficult, and cases have been unsuccessful even when two experts agreed that the signature was a forgery.
Formalities
Section 9 of the Wills Act 1837 sets out the formalities of a will.
For a will to be valid, it must:
- Be in writing
- Be signed by the deceased, or by someone else at the deceased’s direction
- Be signed (or the signature acknowledged) by the deceased in the presence of two witnesses, who must also sign the will
Please note that there is a temporary amendment allowing wills to be witnessed via video link during the coronavirus pandemic; please see my colleague’s blog on this topic.
Knowledge and approval
A will is only valid if the deceased knew and approved its contents when they signed it. The will must truly represent the deceased’s wishes. Therefore, if the deceased did not know the provisions of the will and/or did not approve them, then the will is invalid.
This often goes hand in hand with lack of capacity, as someone who lacks capacity cannot understand the contents of the will and therefore cannot approve them.
How do I prove the will is invalid?
The evidence needed to establish a will is invalid can depend on the ground upon which it is challenged. The most common pieces of evidence can be the will file (often obtained by making a Larke v Nugus request), the deceased’s medical records and witness statements from people who knew the deceased or were involved in the preparation of the will.
It is also common to obtain an expert report on the deceased’s mental capacity or on whether the will is a forgery, if those are the grounds on which the will is being challenged.
What happens if a will is invalid?
If you are successful in challenging the validity of a will, then the estate will be distributed in accordance with the previous will (and it is possible to challenge more than one will). If there is no previous will, the estate will be distributed in accordance with the rules of intestacy. Therefore, before considering challenging a will, you need to establish whether you would be better off under the previous will or under the rules of intestacy.
If you believe a loved one’s will may be invalid, please contact our team of experts at Birkett Long who will be happy to discuss your case.
MORE than a year ago, British newspaper Mail on Sunday published an article alleging that Shehbaz Sharif had stolen and laundered the UK government aid money while he was chief minister of Punjab. Now, it will have to prove its assertions with substantial evidence in court — or risk losing the case.
While defamation lawsuits are hardly an uncommon occurrence in the UK, particularly for this publisher, in Pakistan the case is being seen as a decisive one that will determine the guilt or innocence of Shehbaz Sharif, who is currently in custody and facing a NAB corruption reference.
The story was used to politically damage Mr Sharif, as senior PTI officials said it confirmed their beliefs about his alleged corruption. “British newspapers do not publish anything until they have triple checked. Unlike in other places they fear being sued. The Sharifs won’t sue daily mail because they know they will lose and the penalties would be in millions of pounds,” Shafqat Mahmood had tweeted when it came out in 2019.
Contrary to this assertion, Mr Sharif did file a defamation claim against the “grotesque allegation” in January 2020, seeking a retraction, damages and an apology.
A year after the claim was filed, Justice Matthew Nicklin this week heard arguments from both sides at a preliminary hearing to determine the meaning of the words in the article. This ‘meaning hearing’ is a relatively new phenomenon in English courts, used to save time and costs for both parties prior to the trial. At this stage, the judge determines how the defamatory words would be understood by an “ordinary reasonable reader”.
Outcome will either exonerate Shehbaz or give the govt more political ammunition
In this particular case, the judge ruled that the article meant that Mr Sharif is guilty of some very specific crimes. The publication now has the uphill task of proving these crimes to be substantially true.
Although the meaning hearing is by no means a conclusive decision on the defamation claim itself, it is a critical step in the case as it lays the framework for the defence that can be used by the publication. To sue the Mail on Sunday successfully for defamation, Mr Sharif’s lawyers will need to prove that first the article is identifiably about him; second, that the article means he is guilty of stealing tens of millions from DFID and laundering it to the UK; third, that the article was published by Associated Papers Limited and lastly that its publication caused or is likely to harm the reputation of Mr Sharif – none of which will be difficult to prove.
The Mail on Sunday, however, can use the defence of truth to defend its publication of the defamatory statement. According to the current law and as established in the case of Chase v News Group Newspapers Ltd, “the defendant does not have to prove that every word he or she published was true but has to establish the “essential” or “substantial” truth of the sting of the libel”.
Unfortunately for the publication, Justice Nicklin held that the allegations made about Mr Sharif are clear, and there is insufficient evidence to lead an ordinary reader to think otherwise about his guilt.
The publication now has to prove that it is substantially true that, as it alleged, Mr Sharif was party to and the principal beneficiary of money laundering to the extent of tens of millions of pounds which represented his proceeds of embezzlements while he was the chief minister — and that the public money included funds from a DFID grant payment.
British law also says that it is no defence to an action for defamation for the defendant to prove that he or she was only repeating what someone else had said — known as the “repetition rule”.
This makes the publication’s challenge more complicated, as it will have to provide evidence to substantiate its claims.
According to accountability adviser Shahzad Akbar, proving this will not be a difficult task. “Everything can be substantiated. The standard of proof in this case is higher than a civil case and lower than criminal,” he said to Dawn.
Mr Akbar added that, in the reference filed against Mr Sharif, there is ample evidence in the form of “TTs, cheques, on the record confessions”.
But interestingly, Justice Nicklin said that even if Mr Sharif is convicted in Pakistan, that conviction in itself does not amount to substantial evidence that can be used in a successful defence of truth.
This means that, if the case goes to trial in the UK, regardless of the outcome in Pakistan and what officials here charge Mr Sharif with, the UK trial court will form its own conclusions about how compelling the evidence is as regards the corruption specified in the article.
The coming days promise to be interesting, as the outcome will either exonerate Mr Sharif and hurt the government’s accountability narrative, or give them more political ammunition.
The Newark Evening News was an evening newspaper published in downtown Newark. It was founded on September 1, 1883, and ran until December 8, 1968, and from its inception it was one of the most comprehensive news sources for New Jersey. The News carried articles by many writers, including Charles Bowers, an editorial cartoonist and state historian, as well as pieces by famous authors like Joseph McCarthy, Robert Frost, and Charles Dickens.
The Newark Evening News was a major newspaper in the history of New Jersey journalism. It maintained a large network of bureaus, including Montclair, Elizabeth, Metuchen, Plainfield, Kearny, Trenton, and Washington, DC. Its archive remains a trusted news source for New Jersey, and its digital edition is completely free; donate to the Newark Public Library if you are able to.
The Newark Evening News, printed on the same presses as the Star-Ledger, is one of the most reliable sources of historical news from New Jersey. Its newspaper archive contains over 98 years of coverage, and it served as the state's official newspaper of record. The Newark Star-Ledger has since replaced the Newark Evening News, which in its day was a formidable competitor.
In the past, newspapers were the primary source of information about the community. They served people from all walks of life and often contained information about readers' ancestors. You can now access decades' worth of Newark issues from the comfort of your home. You can easily locate a name or an event in Newark newspapers by using the newspaper database; when searching online, you may also use keywords and specify a range of years for the event.
Princeton Packet
In its first hundred years, The Princeton Packet came to be considered one of the best New Jersey news sources, and the name is logical: the newspaper is published in Princeton, New Jersey. It is still operating, published by Packet Pub. Co., and has been reporting on events in the state and the surrounding areas since February 11, 1916. In fact, The Princeton Packet is the oldest continuously operating newspaper in the state.
Cape May Star and Wave
The Cape May Star and Wave provides information on Cape May and its surrounding communities, and was first published on May 17, 1919. Its archived publication dates range from May 17, 1919 to July 8, 1954, including the "1854 to 1954: A century of service" issue. The Star and Wave can also be purchased in Spanish in Middlesex and Monmouth counties.
New Jersey Business
You've found a reliable source for New Jersey business news. NewsNow strives to be the most accurate aggregator of New Jersey business news: by aggregating finance headlines from the best online news sources, it delivers stories constantly, within ten minutes of publication. NewsNow reviews each story to determine its relevance. If you are having trouble, please contact us.
New Jersey news organizations have struggled for years to survive due to declining advertising revenue. Many of these news organizations have had to shut their doors, cut journalists and consolidate their operations. Some are seeking new funding sources, including franchising, membership programs and foundation grants. Local news remains available in New Jersey, whether you prefer the print or online version, and the state's media institutions are responding by putting their reputations on the line.
NJTV
As of November 2015, the station has relocated to a permanent home in downtown Newark, New Jersey. The Agnes Varis Studio, funded through a donation from the Agnes Varis Charitable Trust, is a ten-thousand-square-foot facility that is open to guests and on-air talent from across the state. The studio was designed to foster a "diaspora" of former NJTV employees; it encourages the sharing of ideas and keeps everyone up to date on the happenings in New Jersey.
NJTV is one of the most trusted news sources in the state. A Pew study of how news sources affect local communities found that residents of New Jersey rely most on local television stations. This isn't surprising, as local television stations have the greatest coverage and the highest number of viewers in each community. So, which is the most reliable news source for New Jersey residents? NJTV combines national and local sources to report on stories that impact New Jersey.
Our policies are designed to help you get the most from your library experience.
Rose-Hulman expects its students, faculty, staff and visitors to be responsible adults and to behave at all times with honor and integrity.
Visitors from neighboring colleges and the general public are permitted to use the Logan Library during public library hours. However, priority for library services, resources and study space is given to members of the Rose-Hulman community.
Circulation Policy: Currently, only those with Rose-Hulman ID cards and members of schools with whom Rose-Hulman has reciprocal agreements (e.g. Indiana State University and Saint Mary-of-the-Woods College) can check out library materials directly from the Logan Library. Library cards are not available to members of the general public. However, members of the general public may have their local public library submit an interlibrary loan request to borrow library materials from Rose-Hulman. Members of the general public may use books and several electronic resources within the library.
Food and Drink Policy: Beverages and snacks are permitted. Please be polite and clean up after yourself.
Smoking Policy: All Rose-Hulman buildings are smoke-free and vape-free (no e-cigarettes).
Unattended Items Policy: Personal items and other items left unattended in the Library for more than 15 minutes are subject to removal by library staff or Public Safety. The library is not responsible for the loss or damage of personal belongings.

Unattended Child Policy: Public Safety will be notified if a child is left unattended in the library.
Elevator Policy: The library elevator may be used by Rose-Hulman faculty, staff, library student workers and vendors. Students or visitors who qualify under the provisions of the Americans with Disabilities Act (ADA) of 1990, provisions of Section 504 of the Rehabilitation Act of 1973, have temporary medical conditions or have obvious accessibility issues, may also use the library elevator. Individuals who do not fall into one of these categories are required to use the main entrance of the library and/or the stairs to access the various floors and offices of the John A. Logan Library. If you have questions or to discuss additional exceptions to this policy, please contact the Library Director, library staff member or email the library at [email protected].
Other: Suggestions for improving the library and its collections are always welcomed. Please drop a note to the library at [email protected].
Click "log-in" at the top right of the screen.
Enter the BARCODE on the back of your student or faculty ID and your pin number. If you do not have a PIN number, you will need to create one. To create a PIN just enter your barcode and leave the PIN field blank and click LOGIN. You will then be prompted to create a PIN number.
Place a check mark by the books you would like to renew and click "renew."
Note: You can only renew this way one time. After that, you will need to bring the item to the library.
Items are subject to recall.
Reference items are to be used in the library unless special permission is given by the librarian. Reserve materials are to be used in the library unless otherwise instructed by the professor. | https://www.rose-hulman.edu/academics/learning-and-research-facilities/logan-library/general-information/policies.html |
For most people, imagining what Archangel Michael actually looks like can help them visualize him while saying the Archangel Michael Prayer. While we have no scripture confirming facial features such as hair colour or skin colour, what we do know is that Archangel Michael is one powerful angel.
Archangel Michael is a fierce warrior who is prophesied to lead the angels to victory over Satan in the end times (Revelation 12:7-12), so we know that Michael's appearance is that of a warrior and fighter, and it is fair to say he has a strong appearance.
Archangel Michael is Often Depicted With A Sword
As leader of the heavenly armies, Archangel Michael carries a powerful sword that emits a wonderful blue light; the sword as a whole is a spiritual symbol of power and luminous justice against evil forces.
In Roman times, during early Christianity, the leaders of armies wore glorious armour and carried powerful swords; as Archangel Michael is the leader of the heavenly armies, he is often depicted as a mighty Roman.
So we can see that Archangel Michael looks like a fierce warrior with a great blue sword, wearing the armour of God and bearing large angelic wings.
Archangel Michael’s appearance is that of a fierce, bold yet gentle character who protects against evil and stands up for righteousness. You will know Archangel Michael when you see him: he will be blowing his trumpet, announcing his return, and fighting on the front line against evil.
Santorini, one of the Cycladic islands, is a historically active volcano and part of the South Aegean (or Hellenic) volcanic arc in the Aegean Sea, located about 120 km north of Crete.
Santorini, officially called Thira, actually consists of a group of islands:
– The main island Thera (75,8 km2, ca. 7000 inhabitants)
– Therasia (9,3 km2, ca. 250 inhabitants)
– Aspronisi (0,1 km2, uninhabited)
– Palea Kameni (0,5 km2, 1 inhabitant)
– Nea Kameni (3,4 km2, uninhabited)
Apart from a small non-volcanic basement exposed in the south-eastern part of Thera, these islands are composed of volcanic rocks from hundreds of eruptions during the last 2 million years, some of them large caldera-forming events.
Palea and Nea Kameni formed during several lava eruptions in historic times within the caldera created by the collapse of the magma chamber after the Minoan eruption. Nea Kameni is still active, with its last eruption in 1950.
On January 23, the Information Technology & Innovation Foundation (ITIF) published a report entitled Wake Up, America: China is Overtaking the United States in Innovation Output, which applies innovation and industrial performance metrics for comparing relative innovation outputs from foreign technological rivals China and the United States. The report, produced by ITIF’s Hamilton Center on Industrial Strategy, is the latest indicator that China is close to surpassing the United States in terms of innovation output per capita and calls upon U.S. policymakers to develop a national economic and technology policy to restore U.S. dominance in innovation.
Worldwide IP filings increased by 3.6% in 2021, according to a report published November 21 by the World Intellectual Property Organization (WIPO). The increase came during a turbulent time for the world economy, at the height of the COVID-19 pandemic and amid a global economic downturn. The biggest increase in patent filings was in Asia, where 67.6% of worldwide patent applications were filed. The United States saw a 1.2% decrease in patent filings and a 1% increase in trademark filings. Trademark applications grew at a much faster rate than patent applications, with a 5.5% increase in trademark filing activity. Industrial design filing activity also rose by 9.2%, with the largest uptick again in Asia. China saw high rates of growth and is a global leader in sheer numbers across all indicators.
The China National Intellectual Property Administration (CNIPA) released a draft of new measures that would downgrade the ratings of Chinese patent agencies that approve abnormal or fraudulent patents. CNIPA released the draft on October 8, which expands on a trial started in January 2022 in four provinces. The draft sets out to “crack down on illegal and untrustworthy acts” carried out by Chinese patent agencies and promote a healthier development of Chinese intellectual property.
During the final day of IPWatchdog LIVE in Dallas, Texas on Tuesday, a panel of attorneys discussed issues surrounding “dangerous fakes,” which are counterfeit goods that pose health risks to consumers. The panelists began with a brief overview of how U.S. Customs and Border Protection (CBP) identifies and seizes infringing goods. The panel also outlined the role that U.S. Consumer Product Safety Commission (CPSC) plays in working to identify dangerous fakes in conjunction with CBP.
The U.S. Court of Appeals for the Third Circuit on Monday said in a precedential decision that Jiangsu Tie Mao Glass Co. Ltd. (TMG) should have shown up sooner in a trade secrets misappropriation lawsuit brought against it by PPG Industries if it wanted to have a chance at winning. But by failing to enter the litigation until after PPG asked the district court to enter default judgment and award damages for unjust enrichment, “its protestations were and are too little and much too late,” said the appellate court.
As the concept of a unified “metaverse” is gaining traction, savvy brand owners are shifting their focus to securing rights in this emerging sector. In pursuit of intellectual property (IP) rights, individuals and corporations are turning to metaverse trademark filings to provide protection for goods and services in the virtual world. As of the summer of 2022, the China National Intellectual Property Administration (CNIPA) has received more than 16,000 applications that either contain the word “METAVERSE” (in English or its Chinese translation: “YUAN YUZHOU,” or both) or that include descriptions of goods and services in the virtual world, or both. These applications were filed by individuals as well as companies (big and small, both foreign and domestic). The rejection rate for traditional trademark applications in China is typically high, around 60-70%, at least in the first instance. However, the rejection rate for these new metaverse applications is even higher, hovering around 80%.
On August 9, President Joe Biden signed into law the Creating Helpful Incentives to Produce Semiconductors (CHIPS) and Science Act, enacting a major legislative package that will provide $280 billion in federal funding to encourage the domestic production of semiconductor products in the United States as well as fund research and development projects in advanced technological fields like quantum computing and artificial intelligence. Although the 1,000+ page bill establishes massive investments into several areas of developing technologies, it focuses very little on the intellectual property rights that are critical for protecting the new technologies that would be developed through federally funded projects.
An “NNN” agreement is short for Non-Disclosure/Non-Use/Non-Circumvention agreement, which means the information cannot be shared with anyone, it cannot be used in any way, and “behind-the-back” or design around tactics are forbidden. In recent years, signing NNN agreements has become widely adopted and is now the standard initial step in dealings with Chinese companies, particularly original equipment manufacturers (OEMs). An NNN Agreement is much more than just a Non-Disclosure Agreement (NDA). An NDA focuses narrowly on preventing secret information from being revealed to a third party or to the public, which is not sufficient for OEMs in China. In contrast, an NNN agreement not only contains confidentiality provisions, but also prevents misuse of confidential information.
The idea of patented inventions brings to mind machines fully realized – flying contraptions and engines with gears and pistons operating in coherent symphony. When it comes to artificial intelligence (AI), there are no contraptions, no gears, no pistons, and in a lot of cases, no machines. AI inventors sound much more like philosophers theorizing about machines, rather than mechanics describing a machine. They use phrases like “predictive model” and “complexity module” that evoke little to no imagery or association with practical life whatsoever. The AI inventor’s ways are antithetical to the principles of patent writing, where inventions are described in terms of what does what, why, how, and how often.
“The nine most terrifying words in the English language are: I’m from the Government, and I’m here to help,” said President Ronald Reagan during a press conference on August 12, 1986. This is one of President Reagan’s most often quoted quips, and for a reason. The Government can certainly help people in times of need, but it can also be a scary bureaucracy, particularly when it shows up unannounced and uninvited. Fast forward 31 years and the 12 most terrifying words in the English language for any business should be: “I’m from China, and my company would like to partner with yours.”
Following a week of round-the-clock deliberations, the World Trade Organization (WTO) this morning announced a deal on waiver of IP rights for COVID-19 vaccine technologies under the Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS). The final text has made almost no one happy and largely mirrors the draft text going into negotiations, with a few key changes. With respect to open questions in the draft text, the final agreement indicates that all developing country WTO Members will be considered eligible to take advantage of the waiver, but that those with “existing capacity to manufacture COVID-19 vaccines are encouraged to make a binding commitment not to avail themselves of this Decision.” This language is primarily targeted at China, which has publicly stated that it would not use the waiver provision but had objected to language based on percentage of global vaccine exports that would have categorically excluded it. The draft text had encouraged members with vaccine export capabilities to opt out rather than to make a binding commitment.
The World Trade Organization’s (WTO’s) 12th Ministerial Conference is set to take place this week, June 12-15, at WTO headquarters in Geneva, Switzerland. As part of the four-day meeting, discussions around the latest text of the proposal to waive intellectual property (IP) rights under the Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS) for COVID-19 vaccine technology will take place around the clock, and it is expected that some agreement will be reached. TRIPS Council Chair, Ambassador Lansana Gberie of Sierra Leone, said on June 7 that “delegations have entered into real negotiation mode in the last 24 hours,” and that he is “feeling cautiously optimistic now that we will get this text ready for adoption by ministers in time for the coming weekend.”
Infringing patented inventions feels like stealing, from the innovator’s perspective, much like a smash and grab at a jewelry store. Politicians refuse to fix the gutted patent system so it can protect U.S. startups and small inventors. The American Dream is slipping away as wealth consolidates into the hands of just a few tech giants, with whatever is left going to China. Case in point: ParkerVision v. Qualcomm, which illustrates just how anti-patent some courts have become. At issue in this case is the importance of ParkerVision’s seminal semiconductor chip technology, which helped transform cellphones into smartphones. ParkerVision invested tens of millions in R&D, but the courts have allowed the technology to be taken from it and transferred to a multinational corporation free of charge.
With their creative minds, marketing and advertising folks never disappoint in coming up with brilliant ways to distinguish their goods and services from the competition – for example, Tiffany’s robin’s egg blue and Hermes’ orange. This type of marketing genius allows one to immediately recognize a brand without even seeing the word “Hermes” or knowing how to pronounce it. On the flip side, these ideas are prime targets for copycats. After all, by simply changing the jewelry box color to the exact pantone shade of Tiffany’s turquoise blue, a seller could immediately quadruple his/her revenue by profiting from consumer confusion without having to increase the inventory quality or spend a dime on marketing. The question then is: is it possible to protect a color (or color combination) in all jurisdictions by registering it as a trademark?
In Part II of this series, we reviewed three of the most popular jurisdictions for global patenting and their foreign filing restrictions for cross-border inventions. Our final article in this series will discuss how an applicant’s failure to comply with foreign filing restrictions may result in various penalties, ranging from invalidation of the patent to criminal consequences accompanied by fines, and even imprisonment. Before strategically planning the global patent filing of a new invention, the first step for a practitioner is to gather the relevant information. Because the foreign filing requirements vary greatly among countries, information beyond simply name and address of each inventor (or applicant if different from the inventor) needs to be collected. Given that some countries’ foreign filing license requirements are based on residency as well as nationality, it is necessary to know the citizenship and/or residency status of each inventor.
Born in a village in Karnataka, Dr. M.B. Chetti went on to become one of the most prominent figures in the world of agriculture. Today, he is the Vice Chancellor of University of Agricultural Sciences, Dharwad. Formerly, he was the Assistant Director General, Education Division of Indian Council of Agricultural Research.
Fuelled by his passion for academia, Dr. Chetti moved to the United States after earning his degree in M.Sc. in 1981. From 1985 to 1987, he remained a post-doctoral fellow at the University of California, USA. Again, between the years 2004 and 2005, he received a Certificate of Training from McGill University, Canada.
Dr. Chetti has served at several positions since the inception of his long and remarkable career. He has been an associate professor, professor, university head, registrar, university librarian, and has worked in a number of other capacities at some of the most respected Indian institutions. Choosing academia over money, Dr. Chetti has been a guide and inspiration to many students.
Krishi Jagran’s Shruti Joshi Nigam spoke with Dr. M.B. Chetti for a special session on canopy management in perennial horticulture crops.
Krishi Jagran: How is canopy management better than traditional farming methods and what are its advantages?
Dr. M.B. Chetti: Canopy management is for the horticulture sector and in that too, particularly for perennial crops. Perennial crops, unlike annual crops, are those which do not require to be replanted every year. They grow back again after every harvest. Examples include mango, guava, peach, plum, pear, apple, and more. Canopy management is of great importance for such perennial crops.
Dr. M.B. Chetti: I would like to give the simple example of mango trees. In today’s time, farmers have become more informed. However, earlier there wasn’t enough research about horticulture. Back when I was born, there used to be, say, 500 mango trees in one acre of land. The quantity of fruit these 500 trees produced was very low. In horticulture, we use the term “source and sink.” A source is a location in a plant where water and other nutrients are taken up or synthesized (leaves). A sink is a location where these resources are used (flowers and, eventually, fruits). There is a relation between the source and the sink: the amount of resources supplied by the source must match the amount utilized by the sink. If we talk about the farmers of Kashmir and Himachal Pradesh, for instance, we see that they have received training and have learnt to maintain the correct ratio in their trees, which has led them to produce bigger yields.
Mango is cultivated on a large scale in Dharwad, Karnataka. To have a very good harvest, it is important to prune the trees and remove those branches which bear fruit only every alternate year or bear no fruit at all. It is also important to remove extra leaves in order to maximize the utilization of sunlight and prevent the growth of insects. Here, I would like to cite an example. Many years back, our university had developed a chikoo hybrid which was also cultivated by farmers in the area. They earned a lot of profit from these trees, but about a decade ago, some of the farmers approached us and said that the trees were no longer bearing fruit. We formed a committee of experts and went to inspect the trees. We found them so overgrown that they had started shedding flowers, as the sunlight was not reaching their branches. We advised the farmers to cut some of the branches from the top.
Therefore, it is very important to trim the trees which are in dormant state. It is also to be noted that this must be done at a particular season: when there is no rainfall, usually in October. Or, in case of north-east rainfall, the trees could be pruned in March or April. It is imperative that they be pruned when there is no rainfall. For trees that are very old, there is a technique which involves removing all the branches and only keeping the trunk. However, this must be learnt from experts. In fact, every tree has a specific length to which it must be pruned. Furthermore, there are some chemicals which are applied to the trees after the pruning is done. They also must be watered once after they are pruned.
What would happen if canopy management is not practiced?
Dr. Chetti: It is very simple: the trees would not give a good yield. It is the aim of every farmer to have a bigger yield and maximize profits. Moreover, the tree would require the application of more nutrients and water if canopy management is not done.
Do the farmers need to train themselves for canopy management?
Dr. Chetti: Indeed. Canopy management does not mean that any or every branch has to be removed. The farmers need to cut selected branches, and with a saw. This is to ensure that the cut which is being made is sharp. In case the tree is not cut properly, it gets wounded and becomes more susceptible to insects and diseases. The farmers would understand this only by seeing how it is done. Therefore, demonstration is very necessary.
Can the farmers get trained at Krishi Vigyan Kendra (KVK)?
Dr. Chetti: Indeed, they can get training at each and every KVK. There are 721 KVKs in the country at the moment. KVKs essentially have horticulturalists because horticulture is a very important occupation in today’s times. KVKs are doing a very commendable job by giving training to farmers about every aspect of agriculture, from coconut cultivation to product making.
Is the method of canopy management different for every tree?
Dr. Chetti: Yes. What I am describing here are the basic principles of canopy management. In fact, canopy management is done by looking at the canopy. However, there is one general rule for canopy management: sunlight should reach every leaf on the tree.
There are two kinds of canopy: open canopy and closed canopy. In open canopy, the leaves are not uniformly distributed. In closed canopy, the leaves are so evenly distributed that light falls on the leaves, and not directly on the ground. The latter type is considered best because all the leaves receive sunlight.
According to the agriculture department, around 320 million tonnes of horticulture crops are expected to be produced in the country in 2020-21. What are your comments?
Dr. Chetti: 2021 is the international year of fruits and vegetables. So we have to commit ourselves to learn more about fruits and vegetables.
India has become an important exporter of spices during the Covid-19 pandemic. How can the canopy management of spice-producing trees be done?
Dr. Chetti: There are spice trees which are larger. Examples are clove and cinnamon trees. Canopy management must be done for these trees as well.
What are the mistakes farmers should avoid while practicing canopy management?
Dr. Chetti: Proper spacing must be maintained between trees. The parameters for this are fixed and differ from tree to tree. Furthermore, as I have already explained, pruning must be done during the right season. The methods of pruning are also decided by experts and must be followed for best results.
Do climatic and environmental conditions of different regions also impact these standards?
Dr. Chetti: The basic principles of canopy management remain the same in different regions. The most determining factor everywhere is the size of the plant.
Organic farming has gained popularity recently. How is it better than traditional farming methods?
Dr. Chetti: Basically, the farming which was done in the olden times was organic farming. Chemicals were introduced for increasing yields much later. However, while using chemicals, we forgot soil health. In my personal opinion, the satisfaction and quality farmers can get from organic farming cannot be matched by inorganic farming.
It is to be noted that farmers would not see the benefits for the first two or three years. This is the time it takes for the soil quality to be replenished. Once it is replenished, the plants would start warding off insects and become more resistant. Organic farming combined with modern research and techniques can prove to be very beneficial for the farmers.
Note: This interview has been edited and condensed for clarity purposes.
If you were asked to multiply 24 x 36, how would you do it? No calculator!
Most of us (parents) learned what we now call the traditional algorithm: stacking the two numbers and multiplying digit by digit with carrying.
One technique that teachers have been demonstrating in the Common Core era is the box method for multiplication. At first, it seemed like too many steps but once I saw high schoolers using the method to expand polynomials, it all clicked! Fourth grade students in our county are taught the box method for multiplying two 2-digit numbers. By the time students reach Algebra 2 in high school, they are doing problems like my last example below… but the technique is the same!
First, I will demonstrate our example 24 x 36 using the box method and then I will use the same method to multiply two linear binomials. To begin, separate your numbers by place value. The number 24 is made of two 10s (20) and four units (4). The number 36 is made of three 10s (30) and six units (6).
Then multiply the numbers in the grid.
Finally, add all the numbers in the grid and that’s the answer!
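The grid procedure above can be sketched in code. Here is a minimal Python sketch (the function names are illustrative, not from any curriculum): it splits each factor by place value, multiplies every pair of parts to fill the grid, then adds all the cells.

```python
def place_value_parts(n):
    """Split a number into its place-value parts, e.g. 24 -> [20, 4]."""
    digits = str(n)
    return [int(d) * 10 ** (len(digits) - i - 1)
            for i, d in enumerate(digits) if d != "0"]

def box_multiply(a, b):
    """Multiply with the box method: build the grid of partial
    products, then add every cell of the grid."""
    grid = [[part_a * part_b for part_b in place_value_parts(b)]
            for part_a in place_value_parts(a)]
    return grid, sum(sum(row) for row in grid)

grid, total = box_multiply(24, 36)
# grid -> [[600, 120], [120, 24]]; total -> 864, matching 24 x 36
```

Each inner list is one row of the box, so the four cells for 24 x 36 are exactly the partial products a fourth grader would write in the grid.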
We can extend this lesson by thinking of polynomial multiplication. Below is an example that many of us learned using the FOIL method (first, outside, inside, last) but the box method works here too.
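The same grid works for the binomial case. A small Python sketch, under the same caveat that the names are illustrative, restricted to two linear binomials (ax + b)(cx + d):

```python
def binomial_box(a, b, c, d):
    """Expand (a*x + b)(c*x + d) with the box method and return the
    coefficients (x^2 term, x term, constant term)."""
    grid = [[a * c, a * d],   # row for the a*x term
            [b * c, b * d]]   # row for the b term
    # the like (x) terms sit on the grid's diagonal and are combined
    return grid[0][0], grid[0][1] + grid[1][0], grid[1][1]

coeffs = binomial_box(1, 2, 1, 3)
# (x + 2)(x + 3) -> x^2 + 5x + 6, so coeffs -> (1, 5, 6)
```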
Want to really go bananas with the box method? Check out this monster problem made much, much easier with this technique! For a little bonus math, notice that the like terms are diagonal from each other!
DJ'ing is the art of masterfully mixing music. DJs entertain and bring life to any event. DJ'ing is an essential element of Hip-Hop culture. DJ'ing has evolved from two turntables and a crate of vinyl records to MP3 players and software.
Here at Beat Street AZ, our students will be introduced to DJ'ing through various software programs. Students will also be introduced to different genres of music, allowing them to appreciate the diversity of music and its culture.
Our students will then learn the fundamentals of music production through the DJ course. Students will make their own beats using tablets and editing software provided by Beat Street AZ. During this course, students will learn how music can be used as an avenue of escape and self-expression.
The role of public policy in overall economic development has undergone a metamorphic transformation over the centuries. In the ‘laissez-faire’ economy, the government, as a public policy authority, undertook limited functions, such as defence (to protect sovereignty), internal security (to maintain law and order) and public utilities (to provide public utility services) to the extent possible within the budget constraint. It was believed that there was no conflict between individual interest and social interest; hence, individuals left to themselves could pursue professions that maximised the wellbeing of individuals as well as society. In reality, individual interest and social interest may come into conflict with each other. Nevertheless, classical economists continued to believe that the market mechanism works perfectly well with a limited role for the Government, and that it should therefore pursue annual balancing of the budget.
Adam Smith’s ‘invisible hands’ and J. B. Say’s market-clearing hypothesis in particular and classical economists’ views on the limited role of the Government, in general, came under severe criticism during the 1930s as the world economy went through a prolonged period of depression. Intellectual debate, spearheaded by John Maynard Keynes, abandoned the market-clearing hypothesis. The government assumed a bigger role in reviving the economy through the fiscal deficit. The dominance of fiscal deficit as a public policy tool to stimulate the economy persisted for more than three decades. Instead of ‘pump-priming’ the economy through capital expenditure, the government more often misused the tool to meet its revenue expenditure. This together with a sharp increase in crude oil prices in the early 1970s pushed the global economy towards worldwide inflation in the mid-1970s. Till such time, the central banks around the world were vested with the supplementary role of managing currency to facilitate transactions of goods and services.
Unfettered fiscal deficits and its financing by central banks contributed significantly to high inflation worldwide in the 1970s. Inflation was diagnosed as ‘always and everywhere a monetary phenomenon’ by monetarists, led by Milton Friedman. They argued that inflation arising out of supply shocks may be temporary unless supported by accommodative monetary policy. Since the mid-1970s, the central banks emerged as a major public policy authority, vis-a-vis the government, to control inflation. For about a decade, monetary targeting was pursued by many central banks to achieve their objective of low inflation. Since the 1980s, many central banks adopted inflation targeting as a formal framework of monetary policy. Fiscal discipline was necessary as a precondition to achieve price stability. Many economists argued that ‘fiscal profligacy’ creates serious macro-economic imbalances in the economy. Inefficiency in the use of resources by the government was also argued. Nevertheless, fiscal dominance continued despite serious reservations put forward by economists about the adverse outcome of the sustained fiscal deficit.
Table 1: Objectives of Public Policy
There was a need for coordination between monetary policy and fiscal policy so that an optimum solution could be found in achieving public policy objectives. As can be seen from Table 1, there is a good deal of overlap as regards the objectives of monetary policy and fiscal policy. If one major wing of public policy fails to achieve its objectives, it expects the other wing to do something for it to achieve the same. Can one wing of public policy be a good substitute for the other to achieve the common objective? If not, what kind of coordination mechanism is put in place between public policy authorities so that the public policy objectives are achieved seamlessly?
Market forces, left to themselves may produce a real business cycle which may not be smooth. Public policy has a role to play to reduce the amplitude of boom and bust so that a high rate of growth is achieved over a medium-term without high volatility. One would, therefore, expect the public policy to be countercyclical. So far as fiscal policy is concerned, revenue collections are generally buoyant in the boom while social sector expenditures are contained. This could be attributed, inter alia, to a low level of unemployment benefit required during the boom which helps prune government expenditures. Hence, the government is naturally in a better position to pursue a surplus budget during the boom. The reverse is true in case of recession. Revenues are less while social sector expenditures are expected to rise which warrants deficit budget. Overall, a responsible government can achieve a cyclical balancing of the budget without much difficulty, given the nature of revenue buoyancy and expenditure pattern over a real business cycle. Violation of this golden rule may pose serious macroeconomic problems for an economy.
During the great moderation before the recent global financial crisis, both developed and developing countries pursued fiscal deficit leading to a phenomenal rise in debt-GDP ratio. As a result, when the global economy was pushed to a recession, the government could not pursue an accommodative fiscal policy. Either there was a sovereign debt crisis (Europe) or unsustainable debt-GDP ratio forced governments to pursue fiscal consolidation during a period of recession. When it was very much expected for the government to stimulate the economy, they were helpless due to imprudent policy pursued during the great moderation. Now the issue arises as to whether monetary authority can fill the gap and try to achieve the fiscal objective by pursuing an ultra-accommodative monetary policy.
Real GDP growth is a function of real saving, real investment, productivity, and technology. While productivity and technology generally do not change in the short run, growth is likely to be hampered if real saving and real investment are not sustained. In the long run, real saving and real investment primarily depend on real income and profitability, respectively. In the short run, the real interest rate can possibly, to some extent, promote both real saving and real investment. In fact, real saving is more influenced by real income than by the real interest rate. In other words, the real interest rate is at best a weak determinant of real saving. In developing countries, inflation rates are often higher than the deposit rate. Hence, households do not get a real return from financial savings and prefer to save in physical assets such as gold and real estate. Similarly, the real interest rate is a weak determinant of gross domestic capital formation. As interest cost as a proportion of total costs is generally low (3-4 per cent in India), it becomes difficult to stimulate the economy through interest rate policy unless wage cost, input cost, and other expenses are held in check.
In the post-crisis period, developed countries could pursue a low-interest rate policy (zero lower bound) to stimulate their economies. Since it was not sufficient, quantitative easing was pursued in several ways so that the economy could be quickly revived from the great recession. It was generally observed that quantitative easing could not revive their economies. In other words, monetary policy cannot be a good substitute for fiscal policy. At least developed countries could pursue both low-interest-rate policy and quantitative easing as the inflation rate remained benign. The developing countries were not in such an advantageous position as inflation expectations continued to remain elevated. At least, developed countries experimented for a considerable period of time with ultra-accommodative monetary policy as a substitute for fiscal policy. This option was not available for developing countries due to different growth-inflation dynamics in such countries.
Another aspect of public policy is to maintain external sector balance which includes low external current account deficit as a proportion to GDP, low inflation differential between the major trading partners, and stable exchange rate. Both monetary policy and fiscal policy have a definite role to play in this regard. While fiscal consolidation reduces twin deficit, the prudent export-import policy of the government improves the trade balance. Moreover, if a country has not achieved capital account convertibility, capital flows can be influenced by both monetary policy and fiscal policy. Maintaining a low rate of inflation by the central bank shall provide a congenial atmosphere to sustain export competitiveness. In India, while the Foreign Exchange Management Act (FEMA) is enacted by the government, it is administered by the RBI. Capital account convertibility option lies with government although RBI can play a role in advising the government in this regard.
We are living in an interconnected world. There is a large cross-border movement of capital. The exchange rates ought to be volatile in a globalised world. Except for a few countries of the world which have adopted a fixed exchange rate regime, others pursue some form of the flexible exchange rate. The magnitude of volatility in the exchange rate shall be low if the medium-term fundamentals of the economy are strong. Frictional factors, causing volatility in the exchange rate can be handled with limited interventions in the foreign exchange markets.
Inflation-growth dynamics in India are complex. In the post-crisis period, growth collapsed around the world together with the softening of inflation. In emerging market economies like India, growth slowed down while inflation remained at an elevated level. The so-called ‘Phillips Curve’ type relationship was not observed in the case of India. Inflation control received priority over stimulating growth, as the options before the monetary authority were limited.
Empirical research in India shows that a low rate of inflation up to 6 per cent ‘greases the wheel of commerce’ while the same above 6 per cent ‘puts sand on the wheel of commerce’. In fact, inflation beyond a threshold harms growth by adversely affecting financial savings as a proportion to GDP, creating uncertainty in the economy and more so, through the loss of competitiveness due to appreciation of real effective exchange rate. To ensure a reasonably high rate of growth on a medium-term basis, inflation needs to be kept at a low level.
A historic agreement was signed between RBI and the Government on February 20, 2015, that empowers the RBI to pursue flexible inflation targeting to ensure price stability on a medium-term basis. The new framework of monetary policy has several innovative features. First, there is a clear mandate for the RBI to achieve price stability, which would override other objectives if the inflation rate deviates from 4 ± 2 per cent from 2016-17. Second, the headline inflation measured in terms of new CPI shall be the nominal anchor in terms of which price stability shall be defined. Third, the RBI shall be accountable if inflation deviates beyond the limit of 4 ± 2 per cent. The RBI has to explain the circumstances under which the inflation has deviated from the target, the action plan required to bring back inflation to the target level, and the timeline required to achieve this. The RBI has been reasonably successful in reducing the CPI inflation from 9.5 per cent in 2013-14 to 4.8 per cent in 2019-20.
Our understanding of public policy debate is still evolving. First of all, monetary policy and fiscal policy coordination is a must, notwithstanding areas of activities specified under statutory arrangements. Secondly, the burden of deviation from the prudent fiscal policy falls on monetary policy and vice versa. Thirdly, the government is in a better position to stimulate growth by following countercyclical fiscal policy while the central bank is in a better position to achieve a low rate of inflation. Fourthly, achieving fiscal policy objectives with monetary policy and vice versa has not been very successful around the world as instruments in the hands of two public policy authorities are different. Fifthly, the external sector balance of the economy is the joint responsibility of monetary policy and fiscal policy.
*The author is former Principal Adviser and Head of RBI’s Monetary Policy Department. Views are personal. | https://theeducationpaper.com/public-policy-and-economic-development-dr-barendra-kumar-bhoi-former-head-of-monetary-policy-department-at-reserve-bank-of-india/ |
OceanGate Foundation is a not-for-profit educational outreach organization with a mission to create STEM-engaged students by using manned submersibles to explore the ocean and inspire curiosity, passion and lifelong learning.
OceanGate, Inc and OceanGate Foundation will collaborate to visit a number of National Marine Sanctuaries, share what we find, and do our best to Open The Oceans.
Through our work with the OceanGate Foundation we have helped develop outreach programs for local schools, museums, and community agencies to promote ocean issues, conservation, and educational initiatives with a focus on science, technology, engineering, and math (STEM). We are committed to giving back to the communities where we work and live by sharing time, talents and resources. We passionately promote ocean awareness with efforts to cultivate something significant and lasting — a stronger commitment to ocean health and longevity, stronger communities, and a direct link to the next generation of ocean stakeholders: our youth.
Topic 4.6 - Using Matrices to Solve Systems of Equations
In this matrix equation worksheet, students write a matrix equation for each of 3 systems. They then solve 6 matrix equations.
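As an illustration of what such worksheets practice, a system like x + 2y = 5, 3x + 4y = 6 can be written as the matrix equation AX = B and solved by inverting A. A minimal pure-Python sketch for the 2x2 case (the function name is illustrative):

```python
def solve_2x2(a, b, c, d, e, f):
    """Solve [[a, b], [c, d]] @ [x, y] = [e, f] by applying the
    inverse of the coefficient matrix to the constant vector."""
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular; no unique solution")
    # inverse of [[a, b], [c, d]] is (1/det) * [[d, -b], [-c, a]]
    x = (d * e - b * f) / det
    y = (a * f - c * e) / det
    return x, y

x, y = solve_2x2(1, 2, 3, 4, 5, 6)
# x -> -4.0, y -> 4.5; substituting back gives 5 and 6 as required
```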
See similar resources:
Solving Systems of Equations by Elimination
Eliminate the obvious. A series of slides presents algebra classes with step-by-step procedures for solving systems of equations using the elimination method. It provides an introduction to the elimination method for solving linear...
8th - 10th Math CCSS: Adaptable
Solving Systems of Equations by Substitution
There is no substitute for checking the solution. A short introduction to the substitution method of solving systems of equations is an ideal resource for algebra classes. The slides present steps for the substitution method and apply...
8th - 10th Math CCSS: Adaptable
Graphing Systems of Equations
With a graph and two linear equations, Sal explains how to graph systems of equations. He uses a table to pick points, completes the equations, and plots the lines on the graph. This video would be appropriate as a refresher or for more...
7 mins 9th - 11th Math
Solving Systems of Equations with Cramer's Rule
Show how matrices can be helpful for solving systems of equations. The video illustrates an example of solving a two-variable system utilizing Cramer's Rule. The instructor shows how to find the determinant of each matrix and then uses...
6 mins 9th - 12th Math CCSS: Adaptable
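The determinant-ratio procedure described above can be sketched directly; this is an illustrative Python implementation for a 2x2 system, not the resource's own code:

```python
def cramer_2x2(a, b, c, d, e, f):
    """Cramer's rule for a*x + b*y = e and c*x + d*y = f: each
    unknown is a ratio of determinants."""
    det = a * d - b * c       # determinant of the coefficient matrix
    if det == 0:
        raise ValueError("no unique solution")
    det_x = e * d - b * f     # x-column replaced by the constants
    det_y = a * f - e * c     # y-column replaced by the constants
    return det_x / det, det_y / det

x, y = cramer_2x2(2, 1, 1, -1, 4, 1)
# solves 2x + y = 4 and x - y = 1
```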
Solving Systems Using Matrices
Explore the concept of systems of equations in two variables and use matrices to solve them. Young mathematicians enter coefficients and constants into a matrix and then solve using row reduction. Instructions on how to use the shortcut...
9th - 11th Math CCSS: Designed
Applying Systems of Equations - Finding Break-Even Points
Explore the concept of solving systems of equations with this project by finding the break-even points using linear equations. Learners need to interpret their graphs in terms of their real-world meaning and make recommendations based on...
or an MBA with a specialization in finance not only brings them a plethora of better career opportunities but also strengthens their management and networking skills.
MBA in Finance: A Brief Overview
Finance is one of the core domains of business management which primarily focuses on different types of investments and risk dynamics. During your MBA Finance program, you will learn the following:
Applying capital budgeting
Making decisions
Evaluating the relative strengths of assets
Evaluating assets and the techniques associated
Payback period
Internal rate of return
Discounted cash flow models, and much more
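As a rough illustration of two of the techniques named above, the payback period and a discounted cash flow (NPV) check, here is a hedged Python sketch with made-up cash flows:

```python
def payback_period(cash_flows):
    """Index of the first year in which cumulative cash flow turns
    non-negative (cash_flows[0] is the negative initial outlay)."""
    cumulative = 0
    for year, cf in enumerate(cash_flows):
        cumulative += cf
        if cumulative >= 0:
            return year
    return None  # the outlay is never recovered

def npv(rate, cash_flows):
    """Discounted cash flow: sum of each year's flow divided by
    (1 + rate) raised to that year."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

flows = [-1000, 400, 400, 400]   # illustrative project, not real data
# payback_period(flows) -> 3; npv(0.10, flows) is slightly negative,
# so the project fails a 10% hurdle rate despite paying back in year 3
```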
Top business schools and universities in India design and develop curriculums that call for proactive participation from students in comprehensive study sessions. Through all the courses that focus on corporate finance, students get an opportunity to learn the art of evaluating complex investments, apart from setting and executing serious financial policies. Students pursuing corporate finance cover the following:
Financial analysis tools
Policy choices like dividends
Financing through debt or equity
Market volatility
Mergers
Acquisitions
Leveraged buyouts
Hostile takeovers, and
Initial public offerings (IPOs)
Here are some of the top job profiles after an MBA or PGDM in Finance:
Corporate Finance - Financial Modeling and Valuation
Financial analyst is one of the top job profiles for those who earn a degree in the field of finance. In this role, you will need to comprehensively understand all accounting standards and policies, apart from handling the following:
Deep analysis of the financial statements of different organizations
Preparation of financial models
Evaluating performances via ratio analysis
Assessing the feasibility of the model created in sync with the industry standards
Choosing a suitable valuation model
Setting assumptions for the valuation, and
Preparing the valuation report
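As a small illustration of evaluating performance via ratio analysis, the sketch below computes three common ratios from hypothetical statement figures (all names and numbers are invented, not drawn from any real company):

```python
# Hypothetical financial-statement figures, in arbitrary currency units
statements = {
    "revenue": 5000, "net_income": 450,
    "current_assets": 1200, "current_liabilities": 800,
    "total_debt": 1500, "shareholder_equity": 2500,
}

def ratios(s):
    """Three common ratios: profitability, liquidity, and leverage."""
    return {
        "net_margin": s["net_income"] / s["revenue"],                     # profitability
        "current_ratio": s["current_assets"] / s["current_liabilities"],  # liquidity
        "debt_to_equity": s["total_debt"] / s["shareholder_equity"],      # leverage
    }

for name, value in ratios(statements).items():
    print(f"{name}: {value:.2f}")
```

An analyst would compute such ratios across several periods and against industry benchmarks; the point here is only the mechanics of the calculation.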
The field calls for the following skill sets:
Sound knowledge of accounting
Strong Excel skills
Ability to make a forecast
Problem-solving ability
Knowledge of linking different financial statements
Ability to distill large volumes of data
Wealth and Portfolio Management
There is no dearth of people across the world who have an extremely attractive net worth but don't know how to do proper financial planning and risk management. Wealth management mainly focuses on the following:
Investment portfolios
Retirement planning funds
Tax planning
Financial planning, and so on
Wealth manager is one of the most reputed and lucrative job profiles for a PGDM holder in the field of finance. As a wealth manager, you will be responsible for the following:
Comprehensively understanding your clients’ financial objectives
Long and short-term plans
Suggesting strategies for the better management of the investment portfolios
Matching financial goals with the investments
Conducting in-depth research for executing asset allocation
Managing portfolios for longer durations
The field calls for the following skill sets:
High efficiency in data interpretation
A penchant for research and analysis
In-depth understanding of financial markets
Great understanding of economics
Sound knowledge of portfolio theory
Staying customer-focused
Investment Banking - helping companies in acquiring funds for businesses
With the economies of developing countries growing rapidly and industrialization expanding by leaps and bounds, investment banking has emerged as one of the top-rated job profiles for postgraduates in the field of finance. In the role of an investment banker, you will help companies across sectors acquire funds to expand their business operations. Investment banking is considered a high-earning job profile, and a large number of young people are drawn to it in place of typical MBA jobs. As an investment banker, you will be involved in the following:
Managing assets
Doing financial leverages
Tracking the market
Offering advice about releasing FPO or IPO
The field calls for the following skill sets:
Sound analytical and numerical skills
Outstanding teamwork and team leadership skills
Self-confidence and the ability to make tough decisions
Competency to work under extreme pressure
Ability to cope well in stressful situations
Excellent communication and interpersonal skills
Dedication, enthusiasm, and commitment
Excellent project and time management skills
Ability to travel extensively
Merchant Banking
Merchant banking is a unique combination of banking and consulting services. This domain involves providing large organizations with financial advice. Individuals working in the merchant banking domain offer the following fee-based consulting services:
Consultation services for mergers and acquisitions
Issuing letters of credit
Trade consulting, and
Syndicating financing across projects
Raising funds for clients
Raising funds for clients includes fundraising from the national and global markets, and issuing shares and securities to initiate new projects.
Promotional activities
Promotional activities form a core part of the merchant banking domain. They include promoting a particular business, especially in its initial phase; a merchant banker plays a crucial role in this activity, right from the initial idea through to government approval.
A large number of young postgraduates opt for merchant banking as their career stream where they often need to work with a firm offering the same services to a variety of clients.
Concluding Remarks
Finance is at the core of every business, be it large or small. Although an MBA offers numerous specializations, finance stands out for its earning potential. Working in any finance-related domain calls for experience and expertise of the highest order. Those who want to take their careers to higher levels need to gain the required experience with a reputed firm before starting to work independently. Before that, pursuing a postgraduate degree or diploma at one of the best finance colleges or universities is a great way to achieve the required educational qualifications.
Europe’s rich and varied media industry is at a crossroads, fighting harder than ever for an audience amid threats from political extremism, fake news, digital giants and innovative start-ups.
While the proliferation of alternative news on digital platforms has led to a fact-based renaissance for some traditional media, the extension of populism across the region is posing a new challenge, with recent research showing that a growing proportion of citizens distrust mainstream news reporting on key political and economic issues.
Meanwhile, business models are changing rapidly, with print and television forced to compete ever more intensively with social media and other digital sources, both established platforms and new start-ups, with significant implications for advertisers and audiences.
The "FT Future of News Europe" series continue this year with a new focus on "Trust, Disruption and Growth in a Shifting Political Landscape", taking place on 29 November, in Brussels, at the Steigenberger Wiltcher’s Hotel.
ETNO Director General, Lise Fuhr, will be joining the panel titled "Going the distance - remaining competitive in a cut-throat industry", aimed at answering the following questions and more:
- What tools are disruptors and the disrupted using to ensure they can outrun - or at least keep up with - the competition?
- How can news providers harness both emerging and existing tech, like AI bots, podcasting and analytics, to retain and grow their audiences?
- Which new roles in data and innovation are key to staying ahead?
- What can media learn from other industries that have been upended by the digital revolution?
The other speakers on the panel are:
- Olivier De Raeymaeker, CEO, Le Soir
- Nishant Lalwani, Director of Investments, Independent Media, Luminate, Omidyar Group
- Francis Morel, former CEO, Le Figaro and Groupe Les Echos-Le Parisien
Building on the success of the inaugural event in New York, FT Future of News Europe will gather top media executives, editors, academics and business leaders from across the region to discuss the opportunities for growth in an increasingly disruptive technological, political and social landscape.
Venn Diagrams
Graphic organizer showing outcomes of an experiment
Probability Using a Venn Diagram and Conditional Probability
This lesson covers how to use Venn diagrams to solve probability problems.
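As a minimal illustration of the kind of problem such a lesson covers, the sketch below uses counts from a hypothetical two-circle Venn diagram to find the region sizes, P(A or B) by inclusion-exclusion, and the conditional probability P(A | B); the class-enrollment numbers are invented:

```python
from fractions import Fraction

# Venn-diagram counts: 30 students, 18 take Algebra (A), 14 take Biology (B), 8 take both
total, n_a, n_b, n_both = 30, 18, 14, 8

a_only = n_a - n_both                          # students only in the A circle
b_only = n_b - n_both                          # students only in the B circle
neither = total - (a_only + b_only + n_both)   # students outside both circles

p_union = Fraction(n_a + n_b - n_both, total)  # inclusion-exclusion: P(A or B)
p_a_given_b = Fraction(n_both, n_b)            # conditional: P(A | B) = P(A and B) / P(B)

print(a_only, b_only, neither)   # -> 10 6 6
print(p_union)                   # -> 4/5
print(p_a_given_b)               # -> 4/7
```

Using `Fraction` keeps the probabilities exact, which matches how such answers are usually reported in class.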
Authors: Lori Jordan, Kate Dirga
The data presented in the global Sharing Economy market report highlights budding opportunities that help users make strategic moves and grow their businesses. The report assesses the impact of numerous factors that might obstruct or propel the Sharing Economy market at the global as well as the local level. It also offers a summary of the key players dominating the market, covering aspects such as their financials, business strategies, and most recent developments.
Key points of the global Sharing Economy market
• Theoretical analysis of the global Sharing Economy market stimulators, products, and other vital facets
• Recent, historical, and future trends in terms of revenue and market dynamics are reported
• Pin-point analysis of competitive market dynamics and the investment structure
• Future market trends, latest innovations, and various business strategies are reported
• Market dynamics include growth influencers, opportunities, threats, challenges, and other crucial facets
The global Sharing Economy market research report offers users an all-inclusive package of market analysis that includes the current market size, expansion rate, and value chain analysis. The global Sharing Economy market is segmented on a regional basis into Europe, North America, Latin America, Asia Pacific, and the Middle East & Africa. To offer a comprehensive view and competitive outlook of the global Sharing Economy market, our review team employs numerous methodological procedures, for instance, Porter's five forces analysis.
This research report includes the analysis of various Sharing Economy market segments {Shared Transportation, Shared Space, Sharing Financial, Sharing Food, Shared Health Care, Shared Knowledge Education, Shared Task Service, Shared Items, Others}; {Traffic, Electronic, Accommodation, Food and Beverage, Tourism, Education, Others}. The bifurcation of the global market is based on its present and prospective trends. The regional bifurcation covers the present market scenario in each region along with future projections for the global Sharing Economy market. The report offers an overview of expected market conditions due to changes in technological, topographical, and economic elements.
Questions answered in the report include
1. What is the expected market size by the end of the forecast period?
2. What are the major factors initiating the global Sharing Economy market growth?
3. What are the latest developments and trending market strategies that are influencing the growth of the Sharing Economy market?
4. What are the key outcomes of the Sharing Economy market developments?
5. Who are the key players in the market?
6. What are the opportunities and challenges faced by the key players?
---
abstract: |
With recent [*Chandra*]{} observations, at least 75 percent of the X-ray background in the $2-10$ keV energy range is now resolved into discrete sources. Here we present deep optical, near-infrared, submillimeter, and 20 cm (radio) images, as well as high-quality optical spectra, of a complete sample of 20 sources selected to lie above a $2-10$ keV flux of $3.8\times 10^{-15}$ erg cm$^{-2}$ s$^{-1}$ in a deep [*Chandra*]{} observation of the Hawaii Deep Survey Field SSA13. The 13 galaxies with $I<23.5$ have redshifts in the range 0.1 to 2.6. Two are quasars, five show AGN signatures, and six are $z<1.5$ luminous early galaxies whose spectra show no obvious optical AGN signatures. The seven spectroscopically unidentified $I>23.5$ sources have colors that are consistent with evolved early galaxies at $z=1.5-3$.
Only one hard X-ray source is significantly detected in an ultradeep submillimeter map; from the submillimeter to radio flux ratio we estimate a millimetric redshift in the range $1.2-2.4$. None of the remaining 19 hard X-ray sources is individually detected in the submillimeter. These results probably reflect the fact that the 850 $\mu$m flux limits obtainable with SCUBA are quite close to the expected fluxes from obscured AGN. The ensemble of hard X-ray sources contribute about 10% of the extragalactic background light at submillimeter wavelengths.
From the submillimeter and radio data we obtain bolometric far-infrared luminosities. The hard X-ray sources have an average ratio of bolometric far-infrared to $2-10$ keV luminosity of about 60, similar to that of local obscured AGN. The same ratio for a sample of submillimeter selected sources is in excess of $1100$; this suggests that their far-infrared light is primarily produced by star formation.
Our data show that luminous hard X-ray sources are common in bulge-dominated optically luminous galaxies with about 10% of the population showing activity at any given time. We use our measured bolometric corrections with the $2-10$ keV extragalactic background light to infer the growth of supermassive black holes. Even with a high radiative efficiency of accretion ($\epsilon=0.1$), the black hole mass density required to account for the observed light is comparable to the local black hole mass density.
author:
- 'A.J. Barger, L.L. Cowie, R.F. Mushotzky, E.A. Richards'
title: 'The Nature of the Hard X-ray Background Sources: Optical, Near-infrared, Submillimeter, and Radio Properties'
---
Introduction {#secintro}
============
After more than 35 years of intensive work, the origin of the hard X-ray background (XRB) is still not fully understood. The XRB photon intensity, $P(E)$, with units photons cm$^{-2}$ s$^{-1}$ keV$^{-1}$ sr$^{-1}$, can be approximated by a power-law, $P(E)=AE^{-\Gamma}$, where $E$ is the photon energy in keV. The [*HEAO1*]{} A-2 experiment (Marshall et al.1980) found that the XRB spectrum from $3-15$ keV is well described by a photon index $\Gamma\simeq 1.4$, and this result has been confirmed and extended to lower energies by recent analyses of [*ASCA*]{} (Chen, Fabian, & Gendreau 1997; Gendreau et al. 1995; Miyaji et al. 1998; Ishisaki et al. 1998) and [*BeppoSAX*]{} (Vecchi et al. 1999) data. At soft ($0.5-2$ keV) X-ray energies, 70 to 80 percent of the XRB is resolved into discrete sources by the [*ROSAT*]{} satellite (Hasinger et al. 1998). Most of these sources are optically identified as unobscured active galactic nuclei (AGN) with spectra that are too steep to account for the flat XRB spectrum (Schmidt et al. 1998). Thus, an additional population of either absorbed or flat spectrum sources is needed to make up the background at higher energies.
XRB synthesis models, constructed within the framework of AGN unification schemes, were developed to account for the spectral intensity of the XRB and to explain the X-ray source counts in the hard and soft energy bands (e.g., Setti & Woltjer 1989; Madau, Ghisellini, & Fabian 1994; Matt & Fabian 1994; Comastri et al. 1995; Zdziarski et al. 1995; Gilli, Risaliti, & Salvati 1999; Wilman & Fabian 1999; Miyaji, Hasinger, & Schmidt 2000). In the unified scheme, the orientation of a molecular torus surrounding the nucleus determines the classification of a source. The models invoke, along with a population of unobscured type-1 AGN whose emission from the nucleus we see directly, a substantial population of intrinsically obscured AGN whose hydrogen column densities of $N_H\sim 10^{21}-10^{25}$ cm$^{-2}$ around the nucleus block our line-of-sight.
A significant consequence of the obscured AGN models is that large quantities of dust are necessary to cause the obscuration. The heating of the surrounding gas and dust by the nuclear emission from the AGN and the subsequent re-radiation of this energy into the rest-frame far-infrared (FIR) suggests that the obscured AGN should also contribute to the source counts and backgrounds in the FIR and submillimeter wavelength regimes (Almaini, Lawrence, & Boyle 1999; Gunn & Shanks 2000).
Due to instrumental limitations, the resolution of the XRB into discrete sources at hard energies had to wait for the arcsecond imaging quality and high-energy sensitivity of [*Chandra*]{}. Deep [*Chandra*]{} imaging surveys are now detecting sources in the $2-10$ keV range that account for 60 to 80 percent of the hard XRB (Mushotzky et al. 2000, hereafter MCBA; Giacconi et al. 2001; Garmire et al. 2001), depending on the XRB normalization. The mean X-ray spectrum of these sources is in good agreement with that of the XRB below 10 keV. Furthermore, because of the excellent $<1''$ X-ray positional accuracy of [*Chandra*]{}, counterparts to the X-ray sources in other wavebands can be securely identified (MCBA; Brandt et al. 2000; Giacconi et al. 2001). In this paper we determine spectroscopic redshifts and optical, near-infrared (NIR), submillimeter, and radio properties of a complete hard X-ray sample drawn from the MCBA deep [*Chandra*]{} observations of the Hawaii Deep Survey Field SSA13. We take $H_o=65\ h_{65}$ km s$^{-1}$ Mpc$^{-1}$ and use a $\Omega_{\rm M}={1\over 3}$, $\Omega_\Lambda={2\over 3}$ cosmology throughout.
Sample and Observations {#secdata}
=======================
The present study is based on a 100.9 ks X-ray map of the SSA13 field that was observed with the ACIS-S instrument on the [*Chandra*]{} satellite in December 1999 and presented in MCBA. The position RA(2000)$=13^h\ 12^m\ 21.40^s$, Dec(2000)$=42^{\circ}\ 41^{'}\ 20.96^{''}$ was placed at the aim point for the ACIS-S array (chip S3). Two energy-dependent images of the back-illuminated S3 chip and the front-illuminated S2 chip were generated in the hard ($2-10$ keV) and soft ($0.5-2$ keV) bands. MCBA chose the hard band energy range to be $2-10$ keV to facilitate comparisons with [*ASCA*]{} data. In the present paper we likewise use the $2-10$ keV range. Other recent [*Chandra*]{} studies of the XRB have used either the $2-8$ keV range (Brandt et al. 2000; Hornschemeier et al. 2000) or the $2-7$ keV range (Fabian et al. 2000; Giacconi et al. 2001) to minimize the backgrounds.
We provide here a detailed description of the data reduction techniques that were employed by K. Arnaud in December 1999 to analyze the SSA13 [*Chandra*]{} image for the MCBA paper. Improved X-ray data analysis techniques will be presented in Arnaud et al. (2001).
The X-ray images were prepared with xselect and associated ftools at GSFC. ACIS grades 0, 2, 3, 4, and 6 were used, and columns at the boundaries of the readout nodes, where event select does not work properly, were rejected. For the S3 chip the light curve was examined and times with high backgrounds were rejected, giving a total exposure of 95.9 ks; for the S2 chip the full time of 100.9 ks was used. The images were examined in chip coordinates to identify and remove bad columns and pixels. For S3 the spectrum of the entire chip was extracted and the Si fluorescence line was used to determine the gain. For the front-illuminated S2 chip radiation damage caused a systematic change in gain and spectral resolution across the chip. An observation from the [*Chandra*]{} Archive of the in-flight calibration sources was analyzed to determine an approximate correction for the spatial gain dependence. The PHA values were divided by (1-CHIPY\*0.0002) to correct the gross gain variation. The remaining systematic changes on the fluxes are small compared to statistical uncertainties. The calibration sources were also used to determine the conversion from adjusted PHA to energy.
Once the images in the $2-10$ keV and $0.5-2$ keV energy bands were generated for the chips, a simple cell detection algorithm was used to find the sources. The field area was stepped through in $2''$ steps, and the counts within a $5''$ diameter aperture (chosen to maximize the enclosed source counts throughout the image without becoming substantially background dominated) were measured, together with the background in a $5''-7.5''$ radius annulus around each position. The average background was 4.7 counts in a $5''$ cell in the $2-10$ keV S3 image and 1.5 counts in the $2-10$ keV S2 image. The distribution of counts is Poisson. A cut of 17 counts in the $2-10$ keV S3 image and 10 counts in the S2 image ensures that there is less than a 20% probability of a single spurious source detection in the entire sample. The background subtracted counts were searched for all positions within $4.5'$ of the optical axis that satisfied the appropriate count criteria. The source counts were next corrected for the enclosed energy fraction within the $5''$ aperture; the correction is small for this choice of off-axis radius and was determined from a second-order polynomial fit to the ratio of the $5''$ to $10''$ diameter aperture counts as measured from the brighter sources in the hard and soft band images. The final positions were obtained with a centroiding algorithm that determined the center of light of the X-ray sources.
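For illustration, the sliding-cell procedure described above can be sketched as follows; this is a simplified toy reimplementation working in pixel space, not the code actually used for the analysis:

```python
import numpy as np

def sliding_cell_detect(image, pixscale=0.4905, step_arcsec=2.0,
                        cell_diam=5.0, ann_in=5.0, ann_out=7.5,
                        min_net_counts=17):
    """Toy sliding-cell source detection.

    Steps a 5"-diameter cell across a counts image on a ~2" grid,
    estimates the local background from a 5"-7.5" radius annulus
    (scaled to the cell area), and flags cells whose background-
    subtracted counts exceed the threshold (17 counts corresponds to
    the 2-10 keV S3 cut quoted in the text). Parameter defaults follow
    the text; the implementation itself is only an illustrative sketch.
    """
    ny, nx = image.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    r_cell = cell_diam / 2.0 / pixscale
    r_in, r_out = ann_in / pixscale, ann_out / pixscale
    step = max(1, int(round(step_arcsec / pixscale)))
    margin = int(np.ceil(r_out))
    hits = []
    for cy in range(margin, ny - margin, step):
        for cx in range(margin, nx - margin, step):
            r2 = (yy - cy) ** 2 + (xx - cx) ** 2
            in_cell = r2 <= r_cell ** 2
            in_ann = (r2 >= r_in ** 2) & (r2 <= r_out ** 2)
            # mean background per pixel in the annulus, scaled to the cell area
            bkg = image[in_ann].sum() * in_cell.sum() / in_ann.sum()
            net = image[in_cell].sum() - bkg
            if net >= min_net_counts:
                hits.append((cy, cx, net))
    return hits
```

A real pipeline would additionally merge adjacent detections into single sources and centroid them, as the text describes.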
The initial source positions were determined from the [*Chandra*]{} aspect information, but with a plate scale of $0.4905''$ per pixel. The absolute pointing was then refined using the 10 sources with $5\sigma$ radio counterparts (see § \[secradio\]). The offsets are $3.2''$ W and $0.3''$ N with no roll angle correction. With the offsets applied, the dispersion between the radio and X-ray positions for the 10 sources is $0.4''$. The present positions should be more accurate than those given in MCBA. Figure \[fig1\] shows the [*Chandra*]{} hard band image of SSA13 with the source positions identified by the small circles. The large circle illustrates the $4.5'$ radius region used in this paper.
The conversion from counts to flux depends on the shape of the source spectrum. For a power-law spectrum with a photon index $\Gamma$, the counts \[photons s$^{-1}$\] in an energy band $E_1$ to $E_2$ are given by $N_{E_1-E_2}=\kappa \int_{E_1}^{E_2}\ A(E)\ E^{-\Gamma} dE$, where $A(E)$ is the effective detector area at energy $E$. Once the normalization of the spectrum, $\kappa$, is determined from the observed counts, the flux in the $2-10$ keV band is $f_{HX}=\int_{2}^{10}\ (F(E)\ E^{-1})\ dE$ where $F=\kappa\ E^{2-\Gamma}$. The energy index is $\alpha=\Gamma-1$.
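The counts-to-flux conversion defined above can be sketched numerically as follows, under the simplifying assumption of a constant effective area $A(E)$ over the band (the actual analysis integrates the calibrated, energy-dependent area; the area value and example count rate below are purely illustrative):

```python
import math

KEV_TO_ERG = 1.602e-9  # 1 keV in erg

def powerlaw_flux_2_10(counts_per_s, gamma, area_cm2=600.0, e1=2.0, e2=10.0):
    """Counts-to-flux conversion for a power-law photon spectrum kappa * E**-gamma.

    Assumes, for illustration only, a constant effective area A(E) = area_cm2
    over the detection band [e1, e2] in keV. Returns the 2-10 keV energy flux
    in erg cm^-2 s^-1.
    """
    def integral(p, a, b):
        # closed-form integral of E**p dE from a to b (log case at p = -1)
        if abs(p + 1) < 1e-12:
            return math.log(b / a)
        return (b ** (p + 1) - a ** (p + 1)) / (p + 1)

    # counts/s = kappa * A * int(E**-gamma dE); solve for the normalization kappa
    kappa = counts_per_s / (area_cm2 * integral(-gamma, e1, e2))
    # energy flux f = int(F(E) E**-1 dE) with F = kappa * E**(2-gamma),
    # i.e. kappa * int(E**(1-gamma) dE), converted from keV to erg
    return kappa * integral(1.0 - gamma, 2.0, 10.0) * KEV_TO_ERG

# e.g. a hypothetical source with 20 net counts in a 95.9 ks exposure, Gamma = 1.2
print(powerlaw_flux_2_10(20 / 95900.0, 1.2))
```

Note that a harder assumed spectrum (smaller gamma) yields a larger flux for the same count rate, which is the direction of the bias discussed in the text.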
Often $\Gamma=2$ (typical of unabsorbed soft band sources) is assumed; however, in the absence of a correction for the actual opacity, such a procedure underestimates the average conversion of counts to hard X-ray flux. An alternative approach is to determine the value of $\Gamma$ for each source from the ratio of soft to hard band counts, $N_{0.5-2}/N_{2-10}$. We adopt this approach here; however, because there is substantial uncertainty in the individual source $\Gamma$ values, we use the counts-weighted mean photon indices of 1.2 (hard) and 1.4 (soft) to determine the hard and soft band fluxes. Our procedure has the advantage of giving the correct conversion for the ensemble of sources and therefore a correct comparison with the hard XRB, but it will result in errors in individual source determinations. The typical error is not large; for example, a source with an actual $\Gamma=2$ spectrum would have its $2-10$ keV flux overestimated by a factor of 1.35. A subsequent paper (Arnaud et al. 2001) will give the derived fluxes using the best fit spectral parameters to the individual sources.
For the S3 chip, the final flux calibrations were made using an array of effective areas versus energy at 12 positions. For the S2 chip a single conversion factor of $2.6\times 10^{-11}$ erg cm$^{-2}$ count$^{-1}$ was used in the $2-10$ keV band and $4.5\times 10^{-12}$ erg cm$^{-2}$ count$^{-1}$ in the $0.5-2$ keV band.
To check our conversion of counts to flux, we extracted the public [*ASCA*]{} and [*Chandra*]{} observations of G21.5, a compact crab-like supernova remnant with a well-determined simple spectrum. The vast difference in angular resolution between [*ASCA*]{} and [*Chandra*]{} means that we can only make direct source flux comparisons for small ($\theta<2'$) or point sources. We can also only make direct comparisons of data from instruments with different spectral resolutions for a simple spectrum source since there the transformation from source counts to flux is less model-dependent than it would be for a complex spectrum source. We derived the column density and power-law index for G21.5 from the [*Chandra*]{} S3 data and found that the values agreed well with the [*ASCA*]{} values. The measured [*Chandra*]{} flux was within 10% of the [*ASCA*]{} flux. At present there are no other time-stable compact or point sources with simple spectra in the public database that can be used for such a comparison.
We selected all sources that were more than $15''$ from the chip edges and that have $2-10$ keV fluxes greater than $3.8\times 10^{-15}$ erg cm$^{-2}$ s$^{-1}$, corresponding to the above counts cut-offs and the appropriate calibrations. This detection threshold is slightly higher than the value quoted in MCBA ($3.2\times 10^{-15}$ erg cm$^{-2}$ s$^{-1}$) in order to obtain a complete flux-limited sample that is uniform over our $4.5'$ radius area on the S2 and S3 chips. Table 1 details the 20 sources in the resulting 57 arcmin$^2$ area, ordered by decreasing hard X-ray flux. The first six columns in Table 1 include the source identifications (the MCBA identifications are given in parentheses), RA(2000), Dec(2000), $2-10$ keV flux, $0.5-2$ keV flux, and the value of $\Gamma$ required to match the ratio of the soft to hard X-ray fluxes in the absence of opacity. The remaining entries in Table 1 are discussed in subsequent sections.
While the low total X-ray source counts of most of our sources precludes a detailed analysis on an object-by-object basis, all of our sources are consistent with the [*Chandra*]{} PSF. They also all have rather hard X-ray spectra that are consistent with power-law or $kT>2$ keV thermal spectra (Arnaud et al. 2001). Thus, the sources are not likely to be emission from groups of galaxies. The size constraints on the sources are not sufficiently restrictive to discriminate between X-ray binaries, hot gaseous atmospheres, or AGN as the source of the X-ray emission. We note that an improved comparison between the expected and observed images shows that object CXO J131159.3+123928 (source 1 in the present sample), which was thought to be extended by MCBA, is in fact consistent with being a point source.
Optical and Near-infrared Imaging {#secimaging}
---------------------------------
We used the Low-Resolution Imaging Spectrometer (LRIS; Oke et al. 1995) on the Keck 10 m telescope in March 1997, February 1998 and 1999, and February and March 2000 to obtain $B$-band and $I$-band images that cover the [*Chandra*]{} SSA13 field.
Wide-field and deep $HK'$ observations were obtained over a number of runs using the University of Hawaii Quick Infrared Camera (QUIRC; Hodapp et al. 1996) on the 2.2 m University of Hawaii (UH) telescope and the 3.6 m Canada-France-Hawaii Telescope. The $HK'$ ($1.9\ \pm 0.4$ $\mu$m) filter is described in Wainscoat & Cowie (2001). The astrometry for the $HK'$ image was established by fitting a linear solution to the 75 VLA sources (see § \[secradio\]) with bright counterparts in the NIR image. The solution has a plate scale of $0.1891''$ per pixel and a rotation of $-0.27$ deg from a standard N-E orientation. The dispersion of the radio and NIR positions is $0.43''$. Using a large overlap sample of sources in the optical and NIR images, a third-order polynomial fit to the distortion in each LRIS $I$ and $B$ image was determined. Using the 90 VLA sources with bright $I$ counterparts in the corrected image, we find a dispersion of $0.67''$ between the radio and optical positions.
The excellent $<1''$ X-ray positional accuracy permits the secure identification of the optical counterparts to the X-ray sources. Figure \[fig2\] shows thumbnail $B$-band images of all 20 hard X-ray sources listed in Table 1. (Thumbnail $I$-band images can be found in MCBA.) In selecting the optical counterparts, we considered only $I\le 24.5$ sources within a $1.5''$ radius of the nominal X-ray position. The optical separations are given in column 10 of Table 1. Sixteen of the sources have one such counterpart, and none has more than one. None of the offsets exceed $1''$, and the dispersion of the offsets is $0.5''$. Monte Carlo simulations with a randomized sample show that the average number of spurious identifications with an optical counterpart within $1''$ is 0.5. At 95 percent confidence, less than 2 of the 16 identifications are spurious. For the 16 sources the magnitudes were measured at the optical center; for the remaining sources, the magnitudes were measured at the nominal X-ray position. For most of the sources the magnitudes were measured in $3''$ diameter apertures and corrected to approximate total magnitudes using an average offset (Cowie et al. 1994); henceforth, we refer to these as corrected $3''$ diameter magnitudes. However, for the bright extended sources (7, 18, 19) we used $20''$ diameter aperture magnitudes and applied no correction; these magnitudes may be as much as a magnitude brighter than the corrected $3''$ diameter magnitudes given in MCBA. The $B$, $I$, and $HK'$ magnitudes are given in columns 7, 8, and 9 of Table 1. The $1\sigma$ limits are approximately $B=27.6$ and $I=25.9$. The $1\sigma$ limits for the $HK'$ magnitudes are not uniform over the field and are given individually in parentheses after the $HK'$ magnitudes in Table 1.
Keck Spectroscopy {#seckeck}
-----------------
We obtained high quality optical spectra for 19 of the 20 hard X-ray sources using LRIS slit-masks on the Keck 10 m in March and April 2000. Source 14 was not observed because of mask design constraints. For the sources with $I\le 24.5$ counterparts, we positioned the slit at the optical center. For the remaining sources, we positioned the slit at the X-ray centroid position. We used $1.4''$ wide slits and the 300 lines mm$^{-1}$ grating blazed at 5000Å, which gives a wavelength resolution of $\sim 16$Å and a wavelength coverage of $\sim 5000$Å. The wavelength range for each object depends on the exact location of the slit in the mask but is generally between $\sim5000$ and 10000Å. The observations were 1.5 hr per slit mask, broken into three sets of 0.5 hr exposures. Fainter objects were observed a number of times; the longest exposure was 6 hrs. Conditions were photometric with seeing $\sim 0.6''-0.7''$ FWHM. The objects were stepped along the slit by $2''$ in each direction, and the sky backgrounds were removed using the median of the images to avoid the difficult and time-consuming problems of flat-fielding LRIS data. Details of the spectroscopic reduction procedures can be found in Cowie et al. (1996).
We successfully obtained redshift identifications for all 13 sources brighter than $I=23.5$ mag; the spectra are shown in Fig. \[fig3\], and the redshifts are given in column 11 of Table 1. We classify the spectra into three general categories: (i) quasars (broad-line sources), (ii) AGN (narrow and weak-line sources), and (iii) optically ‘normal’ galaxies (no AGN signatures in the optical). Henceforth, we denote these categories by [*q*]{}, [*a*]{}, and [*n*]{}, respectively. We also denote spectroscopically unidentified sources by [*u*]{} and our one source with a millimetric redshift (see § \[secdetection\]) by [*m*]{}.
Two sources (sources 3 and 6) are the quasars previously known to be in the field (Windhorst et al. 1995; Campos et al. 1999). Their spectra are very similar, and both coincidentally lie at $z=2.565$. These quasars are radio quiet.
Five sources (1, 2, 5, 11, and 15) show emission line characteristics that may be indicative of AGN activity. Sources 1 and 5 show \[OII\], \[NeIII\], and weak \[NeV\] emission, along with Ca H and K and G-band absorption. Sources 11 and 15 show narrow Ly$\alpha$ and CIV emission. Source 2 shows a P-Cygni profile in MgII and broad absorption in FeII.
Six sources (4, 7, 10, 12, 18, and 19) show no indication of an active nucleus in their optical spectra. We call these ‘normal’ galaxies since they have absorption and emission line properties which are common in optically selected field samples. Source 4 shows H$\alpha$ and weak \[OII\] and \[OIII\] emission and H$\beta$ absorption. Source 7 shows H$\alpha$, \[OII\], and \[OIII\] emission and H$\beta$ absorption. Source 10 shows narrow \[OII\] emission and MgII absorption. Source 12 has weak \[OII\] and H$\beta$ emission and strong \[OIII\] absorption. There may be hints of \[NeIII\] and \[NeV\]. Source 18 is rather unusual in that it has no H$\alpha$ while \[NII\] and \[SII\] are in emission. A high \[NII\]/H$\alpha$ ratio has been used to classify objects as AGN (Keel et al. 1985), but the absence of H$\alpha$ in source 18 is difficult to understand: in Veilleux & Osterbrock (1987) the highest ratio of \[NII\] to H$\alpha$ is 3:1, and no photoionization models (Ferland & Netzer 1983) have ratios larger than 2:1. Source 19 has H$\alpha$ emission but otherwise only absorption features.
Optically ‘normal’ X-ray luminous galaxies were thought to be relatively rare, unusual objects, perhaps explained by beaming (Elvis et al. 1981; Moran et al. 1996; Tananbaum et al. 1997). They are hard to find by association since small X-ray error boxes are required to be certain of their identification (e.g., discussion in Schmidt et al. 1998). The very large surface density of such sources in our sample, $\sim400$ deg$^{-2}$, indicates that they are common. In fact, in our sample they are much more common than quasars. There are two plausible explanations for the lack of observed optical AGN characteristics: (i) absorption due to dust and gas or (ii) an actual lack of ultraviolet/optical emission, as is the case in many low luminosity objects (Ho et al. 1999). The line of sight column densities inferred from the X-ray spectra in § \[secoptdepths\] are sufficiently large to obscure the optical AGN signatures, but the extent will depend on the geometry.
Submillimeter Observations {#secsmm}
--------------------------
The submillimeter observations were made with the SCUBA instrument (Holland et al. 1999) on the James Clerk Maxwell Telescope. SCUBA jiggle map observations were taken in mostly excellent observing conditions during runs in February 1999 (7 observing shifts), February 2000 (0.5 shift), and May-June 2000 (3.5 shifts). The maps were dithered to prevent any regions of the sky from repeatedly falling on bad bolometers. The chop throw was fixed at a position angle of 90 deg so that the negative beams would appear $45''$ on either side east-west of the positive beam. Regular “skydips” (Lightfoot et al. 1998) were obtained to measure the zenith atmospheric opacities at 450 and 850 $\mu$m, and the 225 GHz sky opacity was monitored at all times to check for sky stability. The median 850 $\mu$m optical depth for all nights together was 0.185. Pointing checks were performed every hour during the observations on the blazars 1308+326 or cit6. The data were calibrated using $30''$ diameter aperture measurements of the positive beam in beam maps of the primary calibration source Mars and the secondary calibration sources OH231.8, IRC+10216, and 16293-2422.
The data were reduced in a standard and consistent way using the dedicated SCUBA User Reduction Facility (SURF; Jenness & Lightfoot 1998). Due to the variation in the density of bolometer samples across the maps, there is a rapid increase in the noise levels at the very edges. The low exposure edges were clipped from our images.
The SURF reduction routines arbitrarily normalize all the data maps in a reduction sequence to the central pixel of the first map; thus, the noise levels in a combined image are determined relative to the quality of the central pixel in the first map. In order to determine the absolute noise levels of our maps, we first eliminated the $\gtrsim 3\sigma$ real sources in each field by subtracting an appropriately normalized version of the beam profile. We then iteratively adjusted the noise normalization until the dispersion of the signal-to-noise ratio measured at random positions became $\sim 1$. Our noise estimate includes both fainter sources and correlated noise.
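The iterative renormalization described above can be sketched as follows. This is a schematic reconstruction, not the SURF code, and the synthetic map stands in for a source-subtracted SCUBA image:

```python
import numpy as np

def renormalize_noise(signal, rel_noise, n_random=20000, tol=0.02, seed=1):
    """Rescale a relative noise map until the dispersion of signal/noise
    measured at random positions is ~1 (real sources assumed removed)."""
    rng = np.random.default_rng(seed)
    scale = 1.0
    for _ in range(100):
        iy = rng.integers(0, signal.shape[0], n_random)
        ix = rng.integers(0, signal.shape[1], n_random)
        snr_disp = np.std(signal[iy, ix] / (scale * rel_noise[iy, ix]))
        if abs(snr_disp - 1.0) < tol:
            break
        scale *= snr_disp  # dispersion > 1 means the noise was underestimated
    return scale

# Synthetic check: map with true per-pixel sigma of 3, relative noise map of 1
sky = np.random.default_rng(0).normal(0.0, 3.0, (256, 256))
scale = renormalize_noise(sky, np.ones_like(sky))
print(round(scale, 1))  # recovers ~3.0
```

The multiplicative update converges in a couple of iterations for a pure-noise map; with real data the $\gtrsim 3\sigma$ sources must be subtracted first, as the text notes.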
We centered on the positions of the hard X-ray sources and measured the submillimeter fluxes using beam-weighted extraction routines that include both the positive and negative portions of the chopped images, thereby increasing the effective exposure times. The 850 $\mu$m submillimeter fluxes and $1\sigma$ uncertainties are summarized in column 12 of Table 1. For most of the objects the $1\sigma$ level is in the $1-2.5$ mJy range; however, for the two sources in the region where there is an ultradeep SCUBA image (Barger et al. 1998) the $1\sigma$ detection threshold is $0.6-0.7$ mJy.
Radio Observations {#secradio}
------------------
A very deep 1.4 GHz VLA radio map of the SSA13 region was obtained by Richards et al. (2001) using a 100 hr exposure in the A-array configuration. The primary image covers a $40'$ diameter region with an effective resolution of $1.6''$ and a $5\sigma$ limit of 25 micro-Jansky ($\mu$Jy). The radio fluxes were measured in $2.4''$ boxes centered on the X-ray positions; these fluxes are given in column 13 of Table 1. Of the 20 hard X-ray sources, 16 are detected in the radio above a $3\sigma$ threshold of 15 $\mu$Jy, including 10 of the 13 sources with spectroscopic redshifts. The radio-X-ray offsets are given in column 14 for the 10 sources with $5\sigma$ radio detections within $1.5''$ of the X-ray source. The absolute radio positions are known to $0.1-0.2''$ [*rms*]{}. The dispersion between the radio and X-ray positions is $0.4''$, and the maximum separation is $0.7''$. The radio to optical ratios for the hard X-ray sources are consistent with the sources being radio quiet. The radio properties will be discussed in more detail in Richards et al. (2001).
X-ray Properties of the Hard X-ray Sample {#secxray}
=========================================
Optical Depths {#secoptdepths}
--------------
With the exception of two ‘normal’ galaxies with $\Gamma<-1.5$, the photon indices given in Table 1 range from $-0.4$ to $1.82$. The two quasars have $\Gamma=1.75$ and $\Gamma=1.80$, consistent with most unabsorbed soft band sources, whereas most of the remaining sources have photon indices that suggest substantial line-of-sight optical depths.
If we generate counts-weighted mean photon indices for each population separately, we find 0.8 for the ‘normal’ galaxies, 0.9 for the AGN, 1.8 for the two quasars, and 1.2 for the spectroscopically unidentified sources. The ‘normal’ galaxies are presumably those where the AGN are the most highly obscured. Since the effective column density, $N_{eff}$, for observed-frame absorption in a source is related to the true hydrogen column density, $N_H$, by $N_{eff}\sim N_H/(1+z)^{2.6}$, flux corrections for absorption effects become less important with increasing redshift. The spectroscopically unidentified sources are likely higher redshift analogs of the ‘normal’ galaxies (see § \[seccolors\]) but have softer spectra because of this redshift dependence of the absorption.
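The redshift scaling of the effective column can be checked with a one-line function; this is just the relation quoted above evaluated numerically:

```python
def n_eff(n_h, z):
    """Observed-frame effective column density for a true column n_h (cm^-2)
    at redshift z, using the N_eff ~ N_H / (1+z)**2.6 relation in the text."""
    return n_h / (1.0 + z) ** 2.6

# A true column of 1e23 cm^-2 at z=2 absorbs like a local ~5.7e21 cm^-2 column
print(f"{n_eff(1e23, 2.0):.1e}")
```

This factor of $\sim 17$ reduction at $z=2$ is why the high-redshift analogs of the ‘normal’ galaxies show softer observed spectra.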
In Fig. \[fig4\] we plot the logarithm of the ratio of the soft to hard X-ray fluxes versus redshift. Throughout the paper we use the notation of filled diamonds for quasars, filled triangles for galaxies with AGN signatures, filled squares for galaxies with apparently ‘normal’ spectra, and open squares for spectroscopically unidentified sources. We overlay on the data fixed $N_H$ curves which we generated assuming an intrinsic $\Gamma=2$ power-law spectrum and photoelectric cross sections computed for solar abundances by Morrison & McCammon (1983). Over the energy range $0.5-7$ keV, which determines the correction for these column densities and redshifts, the cross-section, $\sigma(E)$, can be well approximated by a single power-law
$$\sigma(E)=2.4\times 10^{-22}\ E^{-2.6}\ {\rm cm}^{2}$$
with $E$ in keV. For all but one of the spectroscopically identified galaxies the X-ray flux ratios can be described by a rather narrow range of neutral hydrogen column densities from $N_H=2\times 10^{22}$ cm$^{-2}$ to $3\times 10^{23}$ cm$^{-2}$, although more sophisticated models with scattering could permit higher opacities. The $N_H$ values and the true power-law indices are best determined directly from the X-ray spectra, as shall be discussed in a subsequent paper (Arnaud et al. 2001).
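The fixed-$N_H$ curves of Fig. \[fig4\] can be reproduced schematically from these ingredients: an intrinsic $\Gamma=2$ power law attenuated by $e^{-N_H\sigma(E(1+z))}$ and integrated over each observed band. A sketch under those assumptions (the band-integration details here are illustrative, not the exact procedure of the paper):

```python
import numpy as np

SIGMA0, BETA = 2.4e-22, 2.6   # sigma(E) = SIGMA0 * E**(-BETA) cm^2, E in keV

def band_flux(n_h, z, e_lo, e_hi, gamma=2.0, n=4000):
    """Observed energy flux in [e_lo, e_hi] keV for an intrinsic power law
    of photon index gamma absorbed by a rest-frame column n_h (cm^-2)."""
    e = np.linspace(e_lo, e_hi, n)
    tau = n_h * SIGMA0 * (e * (1.0 + z)) ** (-BETA)  # opacity at rest-frame E(1+z)
    spec = e ** (1.0 - gamma) * np.exp(-tau)         # energy flux density
    return float(np.sum(0.5 * (spec[:-1] + spec[1:]) * np.diff(e)))

def soft_to_hard(n_h, z):
    return band_flux(n_h, z, 0.5, 2.0) / band_flux(n_h, z, 2.0, 10.0)

print(round(soft_to_hard(0.0, 0.0), 2))  # unabsorbed Gamma=2: ln4/ln5 ~ 0.86
print(soft_to_hard(1e23, 0.5) < soft_to_hard(1e23, 2.0))  # absorption weakens with z
```

Sweeping $z$ at fixed $N_H$ traces out curves of the kind overlaid on the data.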
Could the absence of sources with column densities above $N_H=3\times 10^{23}$ cm$^{-2}$ be a selection effect? This is possible at low redshifts since the ratio of the absorbed to the actual $2-10$ keV flux drops rapidly for $N_{eff}>10^{23}$ cm$^{-2}$; however, at high redshifts this $N_{eff}$ corresponds to larger values of $N_H$ than are observed (see Fig. \[fig4\]). Even if we place all of the spectroscopically unidentified sources at $z\gg 1$, at most 3 of the 14 objects with $z>1$ have column densities above $N_H=3\times 10^{23}$ cm$^{-2}$. The simplest interpretation of Fig. \[fig4\] is that we are seeing most of the obscured AGN in the present sample. However, Compton-thick sources might be missed completely from the $2-10$ keV sample, and these sources may be needed to explain the 30 keV peak in the XRB. Unfortunately, this issue cannot be decided until either more precise information is obtained on the energy distribution of the individual sources, which, in principle, could deviate from power-law behavior, or until we can analyze the hardness of even fainter $2-10$ keV sources.
Redshift Distribution
---------------------
The redshift distribution of the spectroscopically identified sample is shown in Fig. \[fig5\], where we plot the surface density of the sources in $\Delta z=0.5$ redshift bins (filled squares) with $1\sigma$ uncertainties. Below $z=3$ the data are consistent with a constant surface density. If we also include the unidentified sources spread uniformly through the $z=1.5$ to $z=3$ redshift interval (open squares), then the constancy with redshift becomes even more evident. The surface density is $\sim 200$ sources per square degree per bin or $\sim 400$ sources per square degree per unit redshift.
The present data sample is too small to justify a detailed examination of the luminosity function. However, the redshift distribution of the hard selected sample is very similar to that of previous soft selected samples with similar limiting sensitivities. We illustrate this in Fig. \[fig5\] by overplotting the redshift distribution of the Lockman Hole [*ROSAT*]{} sample of Schmidt et al. (1998) (filled diamonds). The limiting flux of this sample is $5.5\times 10^{-15}$ erg cm$^{-2}$ s$^{-1}$ ($0.5-2$ keV) which, for a source with $\Gamma=2$, would correspond to a limiting flux of $4.7\times 10^{-15}$ erg cm$^{-2}$ s$^{-1}$ ($2-10$ keV), similar to our limiting flux of $3.8\times 10^{-15}$ erg cm$^{-2}$ s$^{-1}$ ($2-10$ keV). The absolute surface density of the soft sample is far below that of the hard sample, indicating that most of the hard X-ray sources are substantially obscured; however, the redshift distributions are rather similar. The luminosity function evolution of soft X-ray samples has been extensively analyzed and shows a rapid rise between $z=0$ and $z=1.5$, followed by relative constancy at higher redshifts (Miyaji et al. 2000). While a proper analysis of the evolution of the $2-10$ keV luminosity function must await larger samples and a better understanding of the optical depths and K-corrections, the first impression is that the behavior will be similar to that inferred from the soft samples but with a much higher normalization of the luminosity function (about 7 times higher).
Luminosities {#seclum}
------------
The intrinsic flux, $F_{int}$, is related to the observed flux, $F_{obs}$, by
$$F_{int}=F_{obs}\ (1+z)^{2-\Gamma}$$
For an unabsorbed spectrum with $\Gamma=2$ the K-correction vanishes. An unabsorbed spectrum may be appropriate at the higher energies where opacity effects are not important. However, we believe that we can obtain a slightly improved estimate for the $2-10$ keV luminosities by allowing for the average effects of the opacity as follows. In calculating our hard X-ray luminosities, we normalized the flux at 4 keV for the $\Gamma=2$ spectrum to the flux at 4 keV calculated over the $2-10$ keV energy range for a spectrum with counts-weighted mean photon index $\Gamma=1.2$ (see § \[secdata\]). Then
$$L_{HX} = 4\pi\ d_L^2\ (0.85)\ f_{HX}$$
where $f_{HX}$ is the $2-10$ keV flux of Table 1 computed with the same $\Gamma=1.2$ assumption. Our hard X-ray luminosities are given in column 3 of Table \[tab2\]. If we had instead used $\Gamma=2$, the computed $2-10$ keV fluxes would be lower by a factor of 1.35 and our X-ray luminosities would be lower by a factor of 1.15. The X-ray luminosities of Fabian et al. (2000) and Hornschemeier et al. (2000) were based on $\Gamma=2$; in later comparisons with their luminosities, we ignore this difference since it is small compared to other uncertainties.
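For concreteness, the luminosity calculation can be sketched as follows. The paper's exact cosmology is not restated in this section, so the flat $H_0=65$, $\Omega_m=0.3$ cosmology below is an assumption (the $h_{65}$ scaling used later suggests $H_0=65$):

```python
import math
import numpy as np

def lum_dist_cm(z, h0=65.0, om=0.3):
    """Luminosity distance in cm for an assumed flat cosmology
    (trapezoidal integration of the comoving distance)."""
    c = 2.998e5                                    # km/s
    zz = np.linspace(0.0, z, 2000)
    ez = np.sqrt(om * (1.0 + zz) ** 3 + (1.0 - om))
    dz = zz[1] - zz[0]
    d_c = c / h0 * np.sum(0.5 * (1.0 / ez[:-1] + 1.0 / ez[1:])) * dz  # Mpc
    return (1.0 + z) * d_c * 3.086e24              # Mpc -> cm

def l_hx(f_hx, z):
    """2-10 keV luminosity (erg/s) with the 0.85 opacity-averaged
    normalization used in the text; f_hx in erg/cm^2/s (Gamma = 1.2)."""
    return 4.0 * math.pi * lum_dist_cm(z) ** 2 * 0.85 * f_hx

# At the survey limit of 3.8e-15 erg/cm^2/s, a z=1 source has L_HX ~ 2e43 erg/s
print(f"{l_hx(3.8e-15, 1.0):.1e}")
```

This reproduces the general scale of the Table \[tab2\] luminosities, though the tabulated values depend on the paper's adopted cosmology.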
Figure \[fig6\] shows our hard X-ray luminosities versus redshift. Here the solid curve represents the detection limit. The open pentagons denote the ultraluminous starburst galaxy Arp 220, the ultraluminous, highly obscured AGN NGC 6240, and the radio quiet quasar PG 1543+489, in ascending $L_{HX}$ order (see § \[secsmmradio\]) for comparison. The hard X-ray luminosities of the spectroscopically identified sources range from just over $10^{41}$ erg s$^{-1}$ to $\sim 10^{45}$ erg s$^{-1}$. Sources at low redshift ($z<1$) do not have the high X-ray luminosities of the sources at high redshift. To determine whether there are also low luminosity sources at high redshift will require [*Chandra*]{} observations that probe to much deeper flux levels.
Even though the ‘normal’ galaxies, which typically fall near the detection threshold, are systematically less luminous than the AGN and quasars, they are still extremely X-ray luminous; only the two lowest luminosity sources even overlap the local galaxy populations (Fabbiano 1989). Furthermore, the ‘normal’ galaxies are generally more luminous than the hard X-ray sources in the nuclei of nearby giant elliptical galaxies in the Virgo and Fornax clusters that were chosen for study on the basis of their black hole properties; these have $1-10$ keV luminosities of $2\times 10^{40}$ erg s$^{-1}$ to $2\times 10^{42}$ erg s$^{-1}$ (Allen, Di Matteo, & Fabian 2000).
[ccrrrrr]{}
0 & 2.000 & 1000 & 480 & 4500 & $<8800$ & 9400
1 & 1.048 & 150 & 41 & 3600 & $<19000$ & 4300
2 & 1.320 & 230 & 2400 & $<1200$ & $<17000$ & $<4600$
3 & 2.565 & 930 & $2.8\times 10^5$ & 28000 & $<51000$ & $3.1\times 10^5$
4 & 0.212 & 1.9 & 4.6 & $<18$ & $<1800$ & $<31$
5 & 0.696 & 28 & 140 & 800 & $<7900$ & 1000
6 & 2.565 & 570 & 49000 & 11000 & $<41000$ & $62000$
7 & 0.241 & 1.3 & 8.0 & 130 & $<2100$ & 140
8 & 1.800 & 120 & 320 & 6500 & 5200 & 7400
9 & 2.000 & 130 & 270 & 4700 & $<14000$ & 5500
10 & 1.427 & 59 & 520 & 1800 & $<12000$ & 2500
11 & 2.415 & 210 & 4400 & 8200 & $<16000$ & 14000
12 & 0.585 & 6.5 & 160 & 220 & $<5100$ & 400
13 & 2.000 & 130 & 94 & $<3200$ & $<14000$ & $<3800$
14 & 2.000 & 120 & 140 & 6600 & $<12000$ & 7300
15 & 2.625 & 220 & 2200 & $<5900$ & $<10000$ & $<9000$
16 & 2.000 & 110 & 110 & 3600 & $<5600$ & 4200
17 & 2.000 & 110 & 650 & 16000 & $<9600$ & 18000
18 & 0.110 & 0.12 & 23 & 27 & $<680$ & 51
19 & 0.180 & 0.34 & 14 & 77 & $<1300$ & 93
\[tab2\]
Contribution to the Hard XRB {#secxrb}
----------------------------
The integrated $2-10$ keV light of the 20 hard X-ray sources in the sample corresponds to an extragalactic background light (EBL) of $1.34\times10^{-11}$ erg cm$^{-2}$ s$^{-1}$ deg$^{-2}$, or between 58 and 84 percent of the $2-10$ keV XRB, depending on whether we assume more recent higher estimates of $2.3\times 10^{-11}$ erg cm$^{-2}$ s$^{-1}$ deg$^{-2}$ (e.g., Vecchi et al. 1999) or the [*HEAO1*]{} value of $1.6\times 10^{-11}$ erg cm$^{-2}$ s$^{-1}$ deg$^{-2}$ (Marshall et al. 1980). Throughout the rest of the paper we will conservatively adopt the Vecchi et al. (1999) value.
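The quoted percentage range is simply the integrated light divided by the two background normalizations:

```python
ebl = 1.34e-11                     # integrated 2-10 keV EBL, erg cm^-2 s^-1 deg^-2
heao1, vecchi = 1.6e-11, 2.3e-11   # XRB normalizations quoted in the text
print(round(ebl / heao1, 2), round(ebl / vecchi, 2))  # 0.84 0.58
```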
If we include 6 of the 7 spectroscopically unidentified sources in the $z<2$ population (source 13 is the only source not seen in the $B$-band and hence is the only very high redshift ($z\gg 4$) candidate), then 56% of the light arises from $z<2$ sources. If we instead restrict to the spectroscopic sample, then 34% of the light arises from the $z<2$ population. The four sources with spectroscopic redshifts $z>2$ contribute 13% to the light, and source 13 contributes just 1%.
Optical and Near-infrared Properties of the Hard X-ray Sample {#secopt}
=============================================================
Magnitudes
----------
At the time of this writing, all but one (source 13) of the hard X-ray sources were detected above the $2\sigma$ level in the NIR image, all but two (13, 16) in the $I$-band image, and all but one (13) in the $B$-band image. Source 13 has since been detected in the NIR using the CISCO infrared camera on the Subaru 8 m telescope (Cowie et al. 2001).
Figure \[fig7\] shows the redshift versus $I$ magnitude distribution of our spectroscopically identified $I<23.5$ hard X-ray sample (large symbols) and, for comparison, an optically selected $I<24$ field galaxy sample (small symbols). Hard X-ray sources with $5\sigma$ radio detections are indicated by surrounding open boxes. The tracks are from Coleman, Wu, & Weedman (1980) for an early-type galaxy (solid), an early spiral galaxy (dashed), and an irregular galaxy (dotted) with absolute magnitudes $M_I=-22.5$ in the assumed cosmology. The ‘normal’ galaxies and AGN follow the upper envelope of the star forming field galaxy population. Thus, the hard X-ray sources predominantly lie in the most optically luminous galaxies.
Colors {#seccolors}
------
Figures \[fig8\]a, b show $I-HK'$ versus redshift and $I-HK'$ versus $HK'$ for 19 of the 20 hard X-ray sources (source 13 is excluded). The overlays are Coleman, Wu, & Weedman (1980) tracks for an early-type galaxy (solid curve) and an early spiral galaxy (dashed curve) with $M_{HK'}=-25.0$. The colors of the spectroscopically identified $z<2$ galaxies are in the range of the galaxy tracks; this is also consistent with their morphological appearance (see Fig. \[fig2\]). Of the sources that were too optically faint for spectroscopic identification, the NIR magnitudes and colors for four (sources 0, 9, 14, 16) suggest that they are early-type galaxies in the $z>1.5$ redshift range (Fig. \[fig8\]a). Crawford et al. (2000) also found a number of sources of this type in a sample of [*Chandra*]{} hard X-ray sources. We were able to use radio and submillimeter detections for one of the optically faint sources (source 8) to estimate a millimetric redshift with central value $z=1.8$ (see § \[secdetection\]). We used radio detections and submillimeter $1\sigma$ limits on sources 0, 9, 14, 16, and 17 to estimate millimetric redshift upper limits of $z=$2.5, 3.0, 2.4, 2.2, and 1.4, respectively. Sources 8 and 17 are somewhat bluer than the curves in Figs. \[fig8\], but their AGN might be contributing substantially to the rest-frame optical light.
Bolometric Ultraviolet/Optical Luminosities
-------------------------------------------
We estimate the AGN luminosities in the ultraviolet (UV)/optical by adopting a shape appropriate to the radio quiet AGN (e.g., Zheng et al. 1997). At frequencies below the rest-frame Lyman limit we take the spectrum to be a $-0.8$ power-law; this steepens to a $-1.7$ power-law at higher frequencies. Normalizing to the observed flux at the wavelength corresponding to the rest wavelength 2500 Å, $f_{2500(1+z)}$, the UV/optical luminosity is then
$$L_{OPT} = 4\pi\ d_L^2\ (9.4\times 10^{15})\ f_{2500(1+z)}\ (1+z)^{-1}$$
where $d_L$ is the luminosity distance in cm and $f_{2500(1+z)}$ is in units erg cm$^{-2}$ s$^{-1}$ Hz$^{-1}$. We have estimated $f_{2500(1+z)}$ for the hard X-ray sample by interpolating from the observed fluxes corresponding to the magnitudes given in Table 1 and, where available, the $U'$ magnitudes at 3400 Å from MCBA. For the more extended objects we have used the corrected $3''$ diameter magnitudes from MCBA rather than the $20''$ diameter magnitudes of Table 1 to obtain better limits on the AGN fluxes. Many of the sources may still have substantial galaxy light contamination, so $L_{OPT}$ is strictly an upper limit on the AGN contribution. The inferred bolometric UV/optical luminosities are given in column 4 of Table \[tab2\] and are plotted versus redshift in Fig. \[fig9\].
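As a sketch, the conversion from observed flux density to $L_{OPT}$ is a one-liner once the luminosity distance is in hand (passed in here rather than recomputed; the example flux and distance values are purely illustrative):

```python
import math

def l_opt(f_2500_obs, z, d_l_cm):
    """Bolometric UV/optical AGN luminosity (erg/s) from the observed flux
    density at the redshifted rest-frame 2500 A wavelength, f_2500_obs in
    erg/cm^2/s/Hz, using the 9.4e15 bolometric factor of the text."""
    return 4.0 * math.pi * d_l_cm ** 2 * 9.4e15 * f_2500_obs / (1.0 + z)

# Illustrative only: a 1 microJy source (1e-29 erg/cm^2/s/Hz) at z=1,
# with an assumed d_L of 2.2e28 cm
print(f"{l_opt(1.0e-29, 1.0, 2.2e28):.1e}")
```

Because galaxy light may dominate the measured flux, the resulting $L_{OPT}$ values are upper limits on the AGN contribution, as noted above.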
The four sources with spectroscopic redshifts beyond $z=2$ are by far the most optically luminous sources in the sample. These are the two quasars (sources 3 and 6) and the two AGN (sources 11 and 15) with narrow Ly$\alpha$ and CIV lines. The AGN are likely dominating the light in these sources at the observed optical and NIR wavelengths, as the observed colors are quite blue (see Fig. \[fig8\]a).
An Optically Selected Sample: Hard X-ray Properties and Contribution to the Hard XRB {#secoptsample}
====================================================================================
The foregoing section presented the optical nature of the hard X-ray sources. We now invert the approach and ask what are the hard X-ray properties of an optically selected sample. In particular, we would like to know what fraction of optical sources are significant contributors to the hard XRB and whether these are drawn from a particular subsample of the optical population.
To address this issue, we use the complete subsample of 1151 $I<24$ galaxies and stars within a $4'$ radius of the optical axis of the [*Chandra*]{} pointing. The smaller radius was chosen to minimize aperture corrections to the X-ray counts so that the X-ray limits would be more uniform. For each of these sources, we extracted the X-ray counts in a $2''$ radius aperture centered on the nominal optical position and converted them to $2-10$ keV fluxes using the procedure outlined in § \[secdata\]. The X-ray fluxes determined in this way are plotted versus $I$ magnitude in Fig. \[fig10\]a.
Above a hard X-ray flux limit of $2.5\times 10^{-15}$ erg cm$^{-2}$ s$^{-1}$, we recover the twelve $I<24$ sources of Table 1 that are within the $4'$ radius. Monte Carlo simulations using an equal number of sources at random positions show that with this extraction aperture and flux limit, there will be, on average, 1.4 spurious cross-identifications of optical sources with X-ray sources ($<3$ at the 95 percent confidence limit). Allowing for this contamination, we find that a fraction $0.009^{+0.004}_{-0.003}$ of the optical sources are X-ray sources, where the uncertainties correspond to the 68 percent confidence ranges. At a lower hard X-ray flux limit of $1.0\times 10^{-15}$ erg cm$^{-2}$ s$^{-1}$, we find 32 sources and expect an average contamination of 10.3 ($<16$ at the 95 percent confidence limit). Here, a fraction $0.019\pm 0.005$ of the optical sources are X-ray sources.
Excluding the incompletely covered edges of the S3 and S2 chips, the observed area is 47 arcmin$^2$, and the contribution to the $2-10$ keV background of the 12 sources above the $2.5\times 10^{-15}$ erg cm$^{-2}$ s$^{-1}$ hard X-ray flux limit is $9.5\times 10^{-12}$ erg cm$^{-2}$ s$^{-1}$ deg$^{-2}$ (41% of the hard XRB).
If we assume that half of the sources in the flux range $1.0\times 10^{-15}$ erg cm$^{-2}$ s$^{-1}$ to $2.5\times 10^{-15}$ erg cm$^{-2}$ s$^{-1}$ are real, we estimate a further contribution of only $1.1\times 10^{-12}$ erg cm$^{-2}$ s$^{-1}$ deg$^{-2}$; thus, increasing the X-ray sensitivity adds little to the hard XRB contribution. Indeed, summing over all the optically selected sources above $1.0\times 10^{-15}$ erg cm$^{-2}$ s$^{-1}$ yields a total of $(10.4 \pm 0.8)\times 10^{-12}$ erg cm$^{-2}$ s$^{-1}$ deg$^{-2}$, where the uncertainties are the 68 percent confidence ranges based on randomized samples.
We conclude that most of the hard XRB from the optically selected sources is dominated by the small number of sources with hard X-ray fluxes above $2.5\times 10^{-15}$ erg cm$^{-2}$ s$^{-1}$, as is also the case for the directly selected hard X-ray sample.
Another way to estimate the fraction of optically selected sources that are X-ray sources is to use a spectroscopic data sample. Of the 554 $I<23$ objects in the $4'$ radius region, 172 have measured redshifts or are spectroscopically identified stars. The measured $2-10$ keV fluxes of these galaxies are shown versus their absolute $I$-band magnitudes, $M_I$, in Fig. \[fig10\]b. Here K-corrections have been computed using the Coleman, Wu & Weedman (1980) spectral energy distribution (SED) for an early spiral galaxy.
Of the 25 galaxies with $-22.5<M_I<-24$, five have hard X-ray fluxes above $1.0\times 10^{-15}$ erg cm$^{-2}$ s$^{-1}$. By contrast, none of the 29 galaxies with $M_I$ fainter than $-20$ is an X-ray source at this level. If we correct for the incompleteness of the spectroscopically identified optical sample, we find that the fraction of optically luminous galaxies that are X-ray sources is $0.07^{+0.05}_{-0.03}$. Thus, a very substantial fraction of the optically luminous galaxies are undergoing X-ray activity at any given time.
Submillimeter and Radio Properties of the Hard X-ray Sample {#secsmmradio}
===========================================================
The new population of highly obscured, exceptionally luminous sources discovered by SCUBA (Smail, Ivison, & Blain 1997; Hughes et al. 1998; Barger et al. 1998; Barger, Cowie, & Sanders 1999; Eales et al. 1999, 2000) appears to consist of distant analogs of the local ultraluminous infrared galaxies (ULIGs; Sanders & Mirabel 1996). There is an ongoing debate on whether local ULIGs are dominantly powered by star formation or by heavily dust enshrouded AGN, and the same applies to the distant SCUBA sources (see, e.g., Trentham 2000). If the hard X-ray sources are highly absorbed, then the rest-frame soft X-ray through NIR radiation will be reprocessed by dust and gas, and the energy will appear in the FIR. At high redshifts ($z\gg 1$) the FIR radiation is shifted to the submillimeter.
Barger et al. (1999) carried out a spectroscopic survey of a complete sample of submillimeter sources detected in a survey of massive lensing clusters. Only three of the 17 sources could be reliably identified spectroscopically; of these, two showed AGN signatures. Thus, the possibility that most SCUBA sources contain AGN remains open. Several authors (Almaini, Lawrence, & Boyle 1999; Gunn & Shanks 2000) have modelled the X-ray and submillimeter backgrounds; they predict an AGN contribution to the SCUBA surveys at the level of $10-20$ percent.
The results of recent searches for submillimeter counterparts to [*Chandra*]{} X-ray sources have been mixed. In a study of two massive lensing clusters, A2390 and A1835, Fabian et al. (2000) identified three significant $2-7$ keV sources, but these were not significantly detected in the submillimeter (Smail et al. 1998). Likewise, Hornschemeier et al. (2000) did not see either of their $2-8$ keV sources in the ultradeep Hubble Deep Field SCUBA map of Hughes et al. (1998). In contrast, both of the $2-10$ keV sources detected by Bautz et al. (2000) in the A370 lensed field are submillimeter sources (Smail, Ivison, & Blain 1997). These two sources were previously identified spectroscopically as AGN (Ivison et al. 1998; Barger et al. 1999). The above mixed results probably reflect the fact that the 850 $\mu$m flux limits obtainable with SCUBA are quite close to the expected fluxes from the obscured AGN, as we address below.
The present study has the advantage that wide-area submillimeter (Barger, Cowie, & Sanders 1999 and the present paper) and extremely deep 20 cm data (Richards et al. 2001) exist over the entire X-ray field of 57 arcmin$^2$. An ultradeep 50 hr submillimeter map (Barger et al. 1998) also exists for one region of the field that contains two hard X-ray sources.
Submillimeter Detection of a Hard X-ray Source {#secdetection}
----------------------------------------------
With one exception, the hard X-ray sources are not detected in the submillimeter at the $3\sigma$ level. The exception is source 8, an optically faint, highly absorbed ($\Gamma=-0.06$) X-ray source in the ultradeep submillimeter map.
The bolometric FIR flux is related to the rest-frame 20 cm flux through the well-established FIR-radio correlation (Condon 1992) of local starburst galaxies and radio quiet AGN. The SEDs of submillimeter sources are reasonably well approximated by the thermal black-body spectrum of the ULIG Arp 220 (Carilli & Yun 2000; Barger, Cowie, & Richards 2000), which is powered by star formation (Downes & Solomon 1998). We can therefore use the submillimeter to radio flux ratio to infer an approximate redshift of 1.8 (with estimated redshift range $1.2-2.4$) for source 8 (see Eqs. 2 and 4 of Barger, Cowie, & Richards 2000). The millimetric redshift estimation technique is expected to hold for sources dominated by star formation or for radio quiet AGN.
Spectral Energy Distributions of the Hard X-ray Sources {#secsed}
-------------------------------------------------------
The two quasars are the softest sources in the hard X-ray sample. In Fig. \[fig11\]a we compare their average rest-frame SED (filled squares) to the rest-frame SED of the $z=0.4$ radio quiet quasar PG 1543+489 (solid curve), normalized to approximately match the soft and hard X-ray data of the averaged quasars.
Most of the sources in the sample show much harder X-ray spectra than the two quasars and are consistent with being obscured AGN. In Fig. \[fig11\]b we compare the rest-frame SED of source 8 (filled squares), based on the millimetric redshift of 1.8, with the SED of the local heavily dust obscured AGN NGC 6240 (solid curve), normalized to approximately match the hard and soft X-ray data of source 8.
Bolometric FIR Luminosities Inferred from Radio Fluxes
------------------------------------------------------
The sources with spectroscopic identifications are mostly at modest redshifts where radio observations provide a more sensitive route for obtaining total FIR luminosities than do submillimeter observations (Barger, Cowie, & Richards 2000). If the sources are well described by the FIR-radio correlation (which holds for both star forming and radio quiet AGN) given in Sanders & Mirabel (1996) with $q=2.35$, then $f_{FIR}=8.4\times 10^{14}\ f_{20}$, where $f_{20}$ is the flux at 20 cm. We calculate the FIR luminosity by relating it to the observed 20 cm flux, assuming a synchrotron spectrum with an index of $-0.8$; then
$$L_{FIR}=4\ \pi\ d_L^2\ (8.4\times 10^{14})\ f_{20}\ (1+z)^{-0.2}$$
where $d_L$ is the luminosity distance in cm and $f_{20}$ has units erg cm$^{-2}$ s$^{-1}$ Hz$^{-1}$.
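For readers who want to reproduce these numbers, here is a minimal sketch of the calculation (our own illustration, not the authors' code; we assume an Einstein-de Sitter cosmology with $H_0=65$ km s$^{-1}$ Mpc$^{-1}$ for the luminosity distance, consistent with the paper's $h_{65}$ scaling):

```python
import math

MPC_CM = 3.086e24   # cm per Mpc
C_KMS = 2.998e5     # speed of light, km/s
H0 = 65.0           # km/s/Mpc (assumed; matches the paper's h_65 scaling)

def lum_dist_cm(z):
    """Luminosity distance in cm for an assumed Einstein-de Sitter universe,
    d_L = (2c/H0) * (1 + z - sqrt(1 + z))."""
    d_mpc = (2.0 * C_KMS / H0) * (1.0 + z - math.sqrt(1.0 + z))
    return d_mpc * MPC_CM

def l_fir(f20_cgs, z):
    """Bolometric FIR luminosity (erg/s) from the observed 20 cm flux density
    f20 (erg/cm^2/s/Hz), via the FIR-radio correlation with q = 2.35 and an
    assumed synchrotron index of -0.8, as in the equation above."""
    d_l = lum_dist_cm(z)
    return 4.0 * math.pi * d_l**2 * 8.4e14 * f20_cgs * (1.0 + z)**-0.2

# Example: a 25 microJy radio source (1 microJy = 1e-29 erg/cm^2/s/Hz) at z = 1
print(f"{l_fir(25e-29, 1.0):.2e} erg/s")
```

For these illustrative inputs the result is several times $10^{44}$ erg s$^{-1}$, i.e. a LIG-scale FIR luminosity, consistent with the classifications discussed below.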
The bolometric FIR luminosities obtained by this method are given in column 5 of Table \[tab2\] and are plotted in Fig. \[fig12\] as open circles. Four sources are not detected above the 15 $\mu$Jy ($3\sigma$) limit in the 20 cm data and their luminosities are given as upper limits based on the $3\sigma$ radio limit. For our sources without spectroscopic or millimetric redshifts, we nominally assume $z=2.0$. It is interesting to note that three-quarters of the hard X-ray sources with radio detections in our sample would be classified as LIGs ($L_{FIR}>10^{11}\ L_\odot$; Sanders & Mirabel 1996) from their FIR luminosities, and half would be classified as ULIGs ($L_{FIR}>10^{12}\ L_\odot$).
Bolometric FIR Luminosities Inferred from Submillimeter Fluxes
--------------------------------------------------------------
By assuming an ULIG SED, we can use the 850 $\mu$m data to estimate the FIR luminosity of source 8 and to place upper limits on the FIR luminosities of the remaining sources. For our non-quasar X-ray sources with redshifts, we estimate the FIR luminosities by scaling the NGC 6240 luminosity by the ratio of the 850 $\mu$m fluxes (after placing NGC 6240 at the redshift of the source). For NGC 6240 we use the $T=42$ K cold component determined by Klaas et al. (1997), which has $L_{FIR}=2.0\times 10^{45}\ h_{65}$ erg s$^{-1}$. For our X-ray sources with no submillimeter detections, we use the $3\sigma$ upper limits on the submillimeter fluxes in calculating the $L_{FIR}$ values. For our sources without spectroscopic or millimetric redshifts, we nominally assume $z=2.0$. Since above $z\sim 1$ the luminosity distance dependence and the K-correction approximately cancel, there is a simple translation of flux to bolometric luminosity independent of redshift; thus, our procedure should give accurate upper limits for the unidentified sources as long as they are not at low redshift ($z<1$).
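The approximate cancellation of distance dimming and the negative K-correction can be illustrated with a simple grey-body sketch (an illustration of ours, not from the paper: we assume $T=42$ K as for the NGC 6240 cold component, an emissivity index $\beta=1.5$, and an Einstein-de Sitter cosmology):

```python
import math

H_OVER_K = 4.799e-11        # h/k in s*K, so h*nu/(k*T) = H_OVER_K * nu / T
NU_850 = 2.998e14 / 850.0   # observed frequency of 850 um in Hz (~3.5e11)

def lum_dist(z):
    """Relative luminosity distance (Einstein-de Sitter, assumed cosmology).
    Overall units cancel in the flux ratio below."""
    return 1.0 + z - math.sqrt(1.0 + z)

def greybody_flux(z, temp=42.0, beta=1.5):
    """Observed 850 um flux (arbitrary units) of a fixed-luminosity grey-body:
    S_nu ~ (1+z)/d_L^2 * nu_rest^(3+beta) / (exp(h nu_rest / kT) - 1)."""
    nu_rest = NU_850 * (1.0 + z)
    x = H_OVER_K * nu_rest / temp
    emission = nu_rest**(3.0 + beta) / math.expm1(x)
    return (1.0 + z) / lum_dist(z)**2 * emission

ratio = greybody_flux(4.0) / greybody_flux(1.0)
print(f"S(z=4)/S(z=1) = {ratio:.2f}")   # close to unity: flux nearly flat in z
```

The steep rise of the rest-frame grey-body spectrum roughly offsets the growing luminosity distance above $z\sim 1$, which is why a fixed submillimeter flux limit translates into a nearly redshift-independent luminosity limit.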
For our two quasar X-ray sources, we estimate the FIR luminosities as above using the SED and FIR luminosity of PG 1543+489. For PG 1543+489 we calculate $L_{FIR}=10.5\times 10^{45}\ h_{65}$ erg s$^{-1}$ over the range $40-500\ \mu$m using the $T=46$ K grey-body model given in Polletta et al. (2000).
The bolometric FIR luminosities or upper limits obtained from the submillimeter fluxes in this way are given in column 6 of Table \[tab2\] and are plotted in Fig. \[fig12\] as filled squares. The radio inferred FIR luminosities or upper limits are consistent with the submillimeter inferred FIR luminosities or upper limits. It is clear from Fig. \[fig12\] that more sensitive submillimeter observations would be required to detect even the higher redshift objects.
Bolometric Luminosity Ratios {#secbololumratio}
----------------------------
In Fig. \[fig13\] we plot the UV/optical to FIR luminosity ratios versus the logarithm of the soft to hard X-ray flux ratios. If the X-ray sources are hardened by line-of-sight opacities and the same material extinguishes the UV/optical light, then as the opacities decrease, we would expect both the UV/optical to FIR luminosity ratios and the soft to hard X-ray flux ratios to increase. For the spectroscopically identified sources there is a broad overall trend in this sense but with large scatter.
In Fig. \[fig14\] we plot the ratios $L_{FIR}/L_{HX}$ (open squares) and $L_{OPT}/L_{HX}$ (filled diamonds) versus redshift. Only the two quasars and source 2 at $z=1.320$, with its curious optical absorption line spectrum, are dominated by the UV/optical light rather than by the FIR light. The two quasars are the softest sources in the sample and have X-ray photon indices (1.75 and 1.80) consistent with no absorption. Source 2 has a harder photon index (0.9) than the quasars; thus, the dominance of the UV/optical light here is perhaps less expected and may be due to substantial contributions of stellar light. For the remaining sources the FIR light dominates.
Bolometric Correction {#secbolo}
---------------------
If we assume that the FIR light in the hard X-ray sources is reprocessed AGN light, uncontaminated by star formation in the galaxies, then we may use the present data to compute the bolometric correction from the hard X-ray luminosity to the bolometric light of the AGN. Excluding the two quasars, which have an average $(L_{FIR}+L_{OPT})/L_{HX}=200$, the remaining sources have an average $L_{OPT}/L_{HX}=19$. If the sources with $3\sigma$ upper limits on $L_{FIR}$ are included at this level, then the average $L_{FIR}/L_{HX}=59$, where we have used the radio inferred FIR luminosities, and the average $(L_{FIR}+L_{OPT})/L_{HX}=78$. If the sources with upper limits are instead assigned zero luminosity ratios, then the average $L_{FIR}/L_{HX}=56$ and the average $(L_{FIR}+L_{OPT})/L_{HX}=74$.
The ratios are weighted to higher values by the two lowest redshift sources. If we instead weight the averages by the hard X-ray fluxes, then the $L_{FIR}/L_{HX}$ ratio drops to 33, the $(L_{FIR}+L_{OPT})/L_{HX}$ ratio to 42, and the $L_{OPT}/L_{HX}$ ratio to 9. Because galaxy contamination of the optical light is a problem, the actual bolometric correction probably lies somewhere between the values of $L_{FIR}/L_{HX}$ and $(L_{FIR}+L_{OPT})/L_{HX}$. As a conservative minimum, we adopt the $L_{FIR}/L_{HX}=33$ ratio for the bolometric correction in subsequent discussions while recognizing that this ratio could be larger by a factor of two or more. The average $L_{OPT}/L_{HX}$ ratio for the hard X-ray sources is much smaller than the $B$-band to $0.5-4$ keV luminosity ratios seen in local early-type galaxies, whether we consider the soft components thought to arise from thermal emission in the gaseous halos (luminosity ratios in the range $150-40000$) or the hard components which may be produced by X-ray binaries (luminosity ratios in the range $1200-8000$) (Matsumoto et al. 1997). The much higher X-ray luminosities of the present sources relative to their stellar luminosities provide strong evidence that they are powered by AGN.
It is also interesting to consider the total X-ray light. For an assumed photon index of $\Gamma=2$, the total X-ray light is only weakly sensitive to the adopted energy range. For an energy range from 0.1 to 100 keV, the ratio of the total X-ray luminosity to the $2-10$ keV luminosity is only a factor $\ln(1000)/\ln(5)=4.3$. The total X-ray luminosities of the sources, $L_{X}$, are still substantially smaller than the bolometric luminosities at other wavelengths, but an appreciable fraction of the AGN light (typically $1-20$%) is emerging in the X-rays. $L_{BOL}=L_{FIR}+L_{OPT}+L_{X}$ values are given in the last column of Table \[tab2\]; here again the radio determinations are used as the $L_{FIR}$ inputs.
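A one-line check of this factor: for $\Gamma=2$ the energy flux per logarithmic frequency interval is constant, so band-integrated luminosity ratios reduce to ratios of logarithmic bandwidths.

```python
import math

def band_luminosity_ratio(e_lo, e_hi, ref_lo=2.0, ref_hi=10.0):
    """For photon index Gamma = 2 (energy flux f_E ~ E^-1), the flux
    integrated over [e_lo, e_hi] keV relative to the reference band is
    just the ratio of logarithmic bandwidths."""
    return math.log(e_hi / e_lo) / math.log(ref_hi / ref_lo)

print(band_luminosity_ratio(0.1, 100.0))   # ln(1000)/ln(5) ~ 4.29
```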
FIR and Hard X-ray Luminosity Comparisons
-----------------------------------------
By considering the ratio of the FIR luminosity, $L_{FIR}$, to the hard X-ray luminosity, $L_{HX}$, we can eliminate any redshift or cosmology dependence and make relative comparisons of distant and local systems. Figure \[fig15\]a shows the $L_{FIR}/L_{HX}$ ratio versus $L_{HX}$ for our data sample. Following Fabian et al. (2000), we also show on the figure the data values for the local ultraluminous infrared galaxies Arp 220 and NGC 6240 (Klaas et al. 1997; Iwasawa 1999), the $z=0.4$ radio quiet quasar PG 1543+489 (Polletta et al. 2000; Vaughan et al. 1999), and the radio loud quasar 3C 273 (Kim & Sanders 1998; Türler et al. 1999). NGC 6240 hosts a powerful AGN and is highly absorbed with an inferred column density of $N_H\sim 2\times 10^{24}$ cm$^{-2}$ (Vignati et al. 1999). On the figure we have connected with a straight line the values of $L_{HX}$ calculated from the observed flux (Iwasawa 1999) and from the inferred intrinsic flux (Vignati et al. 1999) to illustrate the large effects that absorption can make for highly obscured AGN at low redshift.
Using the submillimeter inferred FIR luminosity of source 8 and the radio inferred FIR luminosities of the other hard X-ray sources, we find that the values of $L_{FIR}/L_{HX}$ for all the sources are comparable to that of NGC 6240. Also shown in Fig. \[fig15\]a (open diamonds) are the luminous $z>4$ radio quiet quasars observed in the submillimeter by McMahon et al. (1999) for which there are also X-ray detections (Kaspi et al. 2000; we converted the $f_\nu$ at 2 keV values given in their Table 2 to $2-10$ keV fluxes assuming $\Gamma=2$). These quasars are in the region of 3C 273 in Fig. \[fig15\]a. The two gravitationally-lensed sources in the field of A370 (open triangles) detected in hard X-rays by Bautz et al. (2000) and in the submillimeter by Smail, Ivison, & Blain (1997) have higher $L_{FIR}/L_{HX}$ ratios than the typical X-ray source. However, none of the hard X-ray selected sources remotely approaches the high $L_{FIR}/L_{HX}=3.4\times 10^4$ value of Arp 220.
Figure \[fig15\]b shows $L_{FIR}/L_{HX}$ versus $L_{HX}$ for the above submillimeter sources and for submillimeter sources in the literature that have millimetric redshifts (Barger, Cowie, & Richards 2000 and § \[secsmmsample\]) and X-ray detections or limits (Hornschemeier et al. 2000 and § \[secsmmsample\]). If most of the submillimeter sources are star-formers like Arp 220, then much deeper hard X-ray observations would be required to detect them.
A Submillimeter Selected Sample: Hard X-ray Properties and Contribution to the Hard XRB {#secsmmsample}
=======================================================================================
The foregoing section presented the submillimeter nature of the hard X-ray sources. We now invert the approach and ask what are the hard X-ray properties of a submillimeter selected sample. We may address this by looking at the ensemble properties of submillimeter selected sources in the SSA13 field. There are twelve $3\sigma$ source detections at 850 $\mu$m in the S2 and S3 chips within a $4.5'$ radius of the optical axis. The fluxes range from just over 2.3 mJy to 11.5 mJy. The positional accuracies for the submillimeter sources are relatively poor because of the large beam size. Six of the twelve submillimeter sources have 20 cm counterparts within $5''$, and the dispersion of the offsets is $3.4''$ (see also Barger, Cowie, & Richards 2000). In order to allow for the positional uncertainty, we determined the $2-10$ keV fluxes for each of the submillimeter sources in $10''$ diameter apertures. The results are shown in Fig. \[fig16\]. The only submillimeter source that is also a strong hard X-ray emitter is source 8 from Table 1. The submillimeter source with the second strongest hard X-ray flux in Fig. \[fig16\] is not in the hard X-ray sample but is a known soft X-ray emitter (source 28 in Table 1 of MCBA).
Because of the large apertures used, there is a significant probability of random overlap with a non-coincident X-ray source. In order to quantify this, we have again run Monte Carlo simulations using an equal number of sources at random positions. The $1\sigma$ dispersion is $0.5 \times 10^{-16}$ erg cm$^{-2}$ s$^{-1}$ mJy$^{-1}$. Only 5% of the simulations have values in excess of $1.2\times 10^{-16}$ erg cm$^{-2}$ s$^{-1}$ mJy$^{-1}$. In fact, the two sources that are contributing all of the X-ray signal have 20 cm radio counterparts that are closely coincident with the X-ray sources. Thus, both of these are real identifications rather than chance contamination.
The ratio of the total hard X-ray flux in the sample to the total submillimeter flux is $1.4\pm 0.5\times 10^{-16}$ erg cm$^{-2}$ s$^{-1}$ mJy$^{-1}$. We may use this ratio to estimate the fraction of the hard XRB that arises from submillimeter sources in this flux range. The EBL of the $2-10$ mJy source population is $9.3\times 10^3$ mJy deg$^{-2}$ (Barger, Cowie, & Sanders 1999), which would contribute $1.3\times 10^{-12}$ erg cm$^{-2}$ s$^{-1}$ deg$^{-2}$ in the $2-10$ keV band or 6% of the hard XRB. This contribution would rise to 19% to 27% if we normalized to the total submillimeter background of $3.1\times 10^4$ mJy deg$^{-2}$ of Puget et al. (1996) or $4.4\times 10^4$ mJy deg$^{-2}$ of Fixsen et al. (1998).
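The 6% figure can be reproduced from the numbers quoted here together with the Vecchi et al. (1999) XRB normalization used later in the paper (a sketch of ours):

```python
import math

DEG2_PER_SR = (180.0 / math.pi) ** 2   # ~3282.8 square degrees per steradian

ratio_hx_to_smm = 1.4e-16   # total hard X-ray / submm flux, erg/cm^2/s per mJy
ebl_2_10_mjy = 9.3e3        # 850 um EBL of the 2-10 mJy population, mJy/deg^2
i_hx_sr = 7.6e-8            # 2-10 keV XRB (Vecchi et al. 1999), erg/cm^2/s/sr

hx_from_smm = ratio_hx_to_smm * ebl_2_10_mjy   # erg/cm^2/s/deg^2
i_hx_deg2 = i_hx_sr / DEG2_PER_SR              # XRB per square degree

print(f"contribution = {hx_from_smm:.1e} erg/cm^2/s/deg^2")    # ~1.3e-12
print(f"fraction of hard XRB = {hx_from_smm / i_hx_deg2:.0%}")  # ~6%
```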
However, nearly all of the X-ray signal from the submillimeter sample is coming from source 8. If this single source is removed, the total hard X-ray to total submillimeter flux ratio drops to $0.6\pm 0.5\times 10^{-16}$ erg cm$^{-2}$ s$^{-1}$ mJy$^{-1}$. If we place the submillimeter sources at $z=2$ (consistent with the $z=1-3$ spectroscopic and millimetric redshifts from, e.g., Barger et al. 1999 and Barger, Cowie, & Richards 2000), then the $1\sigma$ lower limit on the ratio of $L_{FIR}$ to $L_{HX}$ would be approximately 1100. This lower limit is above the obscured AGN values and approaching that of Arp 220. It therefore appears that most of the submillimeter sources, at least above 2 mJy, are star formers with a small admixture of obscured AGN. Similar conclusions have recently been reached by Fabian et al. (2000), Hornschemeier et al. (2000), and Severgnini et al. (2000).
A Radio Selected Sample: Hard X-ray Properties and Contribution to the Hard XRB
===============================================================================
There are 107 20 cm sources above the $5\sigma$ radio limit of $25\ \mu$Jy that lie on the S2 and S3 chips within a $4.5'$ radius of the optical axis. We have measured their $2-10$ keV fluxes using $2.5''$ diameter apertures. Above a hard X-ray flux of $3\times10^{-15}$ erg cm$^{-2}$ s$^{-1}$, we recover only the 10 hard X-ray sources with $5\sigma$ radio counterparts in Table 1. Above $1 \times10^{-15}$ erg cm$^{-2}$ s$^{-1}$, 16 sources are detected. The contribution to the hard XRB from the entire radio selected sample is $5.9\times 10^{-12}$ erg cm$^{-2}$ s$^{-1}$ deg$^{-2}$ (26%), nearly all of which comes from the hard X-ray sources in Table 1. The small fraction ($10-15$%) of radio sources that are hard X-ray sources, and therefore AGN, is consistent with expectations that the great bulk of the 20 cm sources at these faint fluxes are due to synchrotron emission arising from supernova remnants in the interstellar medium of the galaxies rather than to nuclear activity. The radio morphologies, which will provide more insight into the nature of the hard X-ray sources, will be discussed in Richards et al. (2001).
Hard X-ray Contribution to the EBL {#secsmmebl}
==================================
The summed contributions of our [*Chandra*]{} hard X-ray sources to the EBL are shown versus wavelength in Fig. \[fig17\] as the filled diamonds. These are compared with measurements of the EBL (solid curves) and with the integrated light from direct counts (large squares); the latter now lie close to the EBL at all wavelengths.
For galaxies at $z\gg 1$ the X-ray power reprocessed by dust is expected to appear at submillimeter wavelengths. We computed the hard X-ray contribution to the submillimeter EBL by considering the average submillimeter properties of the whole X-ray sample and of selected subsamples. Error-weighted sums of the submillimeter sources are given in Table \[tab3\], together with the correspondingly weighted total hard X-ray fluxes for the sample and the ratios of the two. The total hard X-ray sample has a significant ($>3\sigma$) submillimeter flux of 20 mJy. As expected, most of this arises in the $z>1.5$ sources and in the unidentified sources. The total 850 $\mu$m flux of the sample corresponds to an EBL of $4\times 10^{3}$ mJy deg$^{-2}$, or from 9 to 13% of the 850 $\mu$m EBL, depending on whether the normalization of Fixsen et al. (1998) or Puget et al. (1996) is adopted. These values are similar to the obscured AGN model predictions of Almaini et al. (1999) and Gunn & Shanks (2000). However, it appears that the great bulk of the 850 $\mu$m EBL must arise in sources that are too faint at $2-10$ keV to appear in the present sample.
We computed the hard X-ray contribution to the optical/NIR EBL by summing the fluxes corresponding to the corrected $3''$ diameter aperture magnitudes at these wavelengths. This light is dominated by the brighter galaxies, many of them with ‘normal’ galaxy spectra, so there is almost certainly strong contamination of the estimate of the AGN contribution by the light of the host galaxies. The optical/NIR EBL should therefore be considered strictly as an upper bound.
In summary, the hard X-ray sample contributes about 10% of the light at both UV/optical and submillimeter wavelengths. Both wavelength regimes have similar total bolometric light densities. However, because there is galactic light contamination in the optical/NIR, it is likely that most of the AGN light from the hard X-ray sources is emerging in the FIR.
| Sample | $\Sigma$ 850 $\mu$m flux (mJy) | $\Sigma$ 2$-$10 keV flux | Ratio |
|---|---|---|---|
| All | $20\pm 6$ | 1800 | $0.9^{1.3}_{0.7}$ |
| $z<1.5$ | $1\pm 6$ | 840 | $6.0^{\infty}_{1.2}$ |
| $z>1.5$ (non-QSOs) | $7\pm 3$ | 340 | $0.5^{0.8}_{0.3}$ |
| QSOs | $5\pm 2$ | 290 | $0.6^{1.0}_{0.4}$ |
| Unidentified | $8\pm 3$ | 600 | $0.8^{1.2}_{0.5}$ |

\[tab3\]
Black Hole Mass Accretion
=========================
For nearby normal galaxies, there is an approximate empirical relation between the black hole mass, M$_{bh}$, and the absolute magnitude of the bulge component of the host galaxy, $M$(bulge) (e.g., Magorrian et al. 1998; Wandel 1999; Ferrarese & Merritt 2000; Gebhardt et al. 2000). Assuming that the relation also holds at high redshift, we can estimate the black hole masses of our ‘normal’ galaxies, subject to the uncertainties in translating our observed $B$-band magnitudes into $M_B$(bulge), which add scatter to the empirical relation (Kormendy & Ho 2000). The $M_{B}$ values for our hard X-ray sample are plotted versus redshift in Fig. \[fig18\]. The inferred ${\rm M}_{bh}$ values in our ‘normal’ galaxies are in the approximate range $5\times 10^{7}$ M$_\odot$ to $10^{9}$ M$_\odot$.
As discussed in § \[secoptsample\], luminous hard X-ray sources are common in bulge-dominated optically luminous galaxies with about 10% of the population showing activity at any given time. This preferential activity of luminous sources with massive bulges presumably reflects the relation between the central massive black hole and the luminosity of the bulge. If the fraction of galaxies showing such behavior reflects the fraction of time that each galaxy spends accreting onto its massive black hole, then we require each such galaxy to be active for somewhere between 1 and 2 Gyr. This is considerably longer than the theoretically estimated accretion time of 0.01 Gyr for black hole fuelling by mergers (Kauffmann & Haehnelt 2000) and therefore may suggest that the activity is being powered by smaller mergers or by internal flows within the galaxies.
We may address the issue of the total accreted mass required to account for the luminosity in each galaxy by conservatively assuming that the hard X-ray flux-weighted average of $L_{FIR}/L_{HX}=33$ derived in § \[secbolo\] is the ratio of the AGN’s bolometric luminosity to its $2-10$ keV luminosity; hereafter, we denote this bolometric correction by $A$. The mass inflow rate is $\epsilon\Delta M_{bh} c^2/\Delta t=L_{BOL}=A L_{HX}$, where $\epsilon$ is the efficiency for re-radiation of the accretion energy. The total mass flow over the accretion time $\Delta t\sim 1.5$ Gyr is
$$\Delta M_{bh}=10^7\ {\rm M}_\odot \Biggl({L_{HX} \over {10^{42}\
{\rm erg\ s^{-1}}}}\Biggr)\Biggl({A \over 33}\Biggr)\Biggl({{\Delta t}
\over {1.5\ {\rm Gyr}}} \Biggr)\Biggl({0.1 \over \epsilon}\Biggr)$$
Our ‘normal’ galaxies have hard X-ray luminosities ranging from $0.1$ to $50\times 10^{42}\ {\rm erg\ s^{-1}}$, for which the above accretion masses are $10^6$ to $5\times 10^8\ M_\odot$. Thus, even for the maximum plausible efficiency, $\epsilon\sim 0.1$, we are seeing a substantial fraction of the growth of these supermassive black holes (Fabian & Iwasawa 1999).
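A quick numerical check of this scaling, using the fiducial values in the equation above (a sketch, not the authors' code):

```python
C_CM_S = 2.998e10    # speed of light, cm/s
M_SUN_G = 1.989e33   # solar mass, g
GYR_S = 3.156e16     # seconds per Gyr

def accreted_mass_msun(l_hx, bol_corr=33.0, dt_gyr=1.5, eps=0.1):
    """Black hole mass (in M_sun) accreted to sustain a 2-10 keV luminosity
    l_hx (erg/s) with bolometric correction A = bol_corr over a time dt at
    radiative efficiency eps: Delta M = A * L_HX * dt / (eps * c^2)."""
    return bol_corr * l_hx * dt_gyr * GYR_S / (eps * C_CM_S**2 * M_SUN_G)

print(f"{accreted_mass_msun(1e42):.1e} M_sun")   # ~1e7 M_sun, as in the text
```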
We may quantify this in a fairly model-independent way as follows. The bolometric surface brightness at the present time is simply related to the $2-10$ keV XRB by
$$I_{BOL}=0.85\ A\ I_{HX}$$
where the factor 0.85 is discussed in § \[seclum\]. The bolometric EBL is related to the energy production as
$$I_{BOL}={{\epsilon\ c^3\ \rho_{bh}} \over {4\ \pi\ (1+\bar{z})}}$$
where $\rho_{bh}$ is the universal mass density of supermassive black holes, and $\bar{z}$ is the mean redshift of the contributing sources. The $(1+\bar{z})$ factor reflects the adiabatic expansion loss. The hard X-ray flux-weighted mean redshift is $\bar{z}=1.3$ for the spectroscopically identified sources (including source 8) and $\bar{z}=1.5$ if we place the remaining sources at $z=2$.
We combine the two equations and normalize to the observed $2-10$ keV background from Vecchi et al. (1999), $I_{HX}=7.6\times 10^{-8}$ erg cm$^{-2}$ s$^{-1}$ sr$^{-1}$, to find
$$\rho_{bh} = 2\times 10^{-35}\ \Biggl({{1+\bar{z}}\over 2.5}\Biggr)\
\Biggl({0.1 \over \epsilon}\Biggr)\ {\rm g\ cm}^{-3}$$
where we have again conservatively used $A=33$. Inclusion of the bolometric optical light would raise this estimate by about 40%.
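Combining the two equations with the quoted normalization reproduces this density (again a sketch, in cgs units):

```python
import math

C_CM_S = 2.998e10   # speed of light, cm/s

def rho_bh(i_hx=7.6e-8, bol_corr=33.0, zbar=1.5, eps=0.1, opt_factor=0.85):
    """Accreted black hole mass density (g/cm^3) implied by the 2-10 keV
    background i_hx (erg/cm^2/s/sr): rho = 4 pi (1 + zbar) I_BOL / (eps c^3),
    with I_BOL = 0.85 * A * i_hx as in the text."""
    i_bol = opt_factor * bol_corr * i_hx
    return 4.0 * math.pi * (1.0 + zbar) * i_bol / (eps * C_CM_S**3)

print(f"rho_bh = {rho_bh():.1e} g/cm^3")   # ~2e-35 g/cm^3, matching the text
```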
If we adopt a spheroid mass density of $10^{-32}$ g cm$^{-3}$, which is uncertain by about a factor of two (Cowie 1988; Fukugita et al. 1998), the black hole-bulge relation expressed in mass terms
$${\rm M}_{bh} = 0.002-0.006\ {\rm M(bulge)}$$
gives a black hole mass density of $\rho_{bh} = 2-6\times 10^{-35}$ g cm$^{-3}$. Thus, even for high-end estimates of the radiative efficiency ($\epsilon\sim 0.1$) we are seeing a significant fraction of the black hole accretion. Conversely, since we are probably missing some fraction of the activity from sources that are too obscured to be seen in the $2-10$ keV sample, we must have a high radiative accretion efficiency ($\epsilon\gg 0.01$) or the required black hole mass density would be too high.
We may go one step further and ask when the black holes formed, since the above equations also apply to the sources divided into redshift slices. We know directly from the spectroscopic observations that at least 16% of the $2-10$ keV background arises below $z=1$, 18% between $z=1$ and 2, and 13% between $z=2$ and 3. The remaining 21% of the $2-10$ keV light seen in the current sample most plausibly lies in the $z=1.5-3$ range. Weighting by the mean redshifts in each interval, we find that at least 10% of the observed black hole mass formation occurs at $z<1$. This fraction could rise if there are more obscured hard X-ray sources missing from the $2-10$ keV sample which preferentially lie at low redshift.
Conclusions
===========
We carried out an extensive multi-wavelength observational program to determine the nature of the hard X-ray background sources. Our major conclusions are as follows:
$\bullet$ In the 57 arcmin$^2$ [*Chandra*]{} SSA13 field, we detected 20 sources with $2-10$ keV fluxes greater than $3.8\times 10^{-15}$ erg cm$^{-2}$ s$^{-1}$.
$\bullet$ We spectroscopically identified the 13 hard X-ray sources brighter than $I=23.5$ mag; all are in the redshift range $z=0-3$. The spectra fall into three general categories: (i) 2 quasars, (ii) 5 AGN, and (iii) 6 optically ‘normal’ galaxies. For the latter category, the AGN are either very weak or undetectable in the optical.
$\bullet$ The soft to hard X-ray flux ratios of the hard sample can be described by a rather narrow range of neutral hydrogen column densities from $N_H=2\times 10^{22}$ cm$^{-2}$ to $3\times 10^{23}$ cm$^{-2}$. At most three of the 14 sources at $z>1$ have column densities above $N_H=3\times 10^{23}$ cm$^{-2}$, which suggests that we are seeing most of the obscured AGN in the present sample.
$\bullet$ The hard X-ray sample is consistent with a constant surface density of $\sim 400$ sources per square degree per unit redshift. The redshift distribution is very similar to that of previous soft selected samples with similar limiting sensitivities.
$\bullet$ The hard X-ray luminosities of the spectroscopically identified sources range from just over $10^{41}$ erg s$^{-1}$ to $\sim 10^{45}$ erg s$^{-1}$. All but two are more X-ray luminous than the local galaxy populations.
$\bullet$ The integrated $2-10$ keV light of the 20 hard X-ray sources corresponds to an EBL of $1.34\times 10^{-11}$ erg cm$^{-2}$ s$^{-1}$ deg$^{-2}$, or between 58 and 84 percent of the hard XRB, depending on the XRB normalization. Most of this light is from the $z<2$ population.
$\bullet$ The colors of the spectroscopically unidentified sources suggest that they are likely early galaxies at $z=1.5-3$.
$\bullet$ The ‘normal’ galaxies and AGN follow the upper envelope of the star forming field galaxy population.
$\bullet$ Between 4 and 12 percent of $I<23$ galaxies with $-22.5>M_I>-24$ have hard X-ray fluxes above $1.0\times 10^{-15}$ erg cm$^{-2}$ s$^{-1}$. Nearly all of these lie in the redshift range $z=0.3-1.5$, for which the flux threshold corresponds to $L_{HX}\sim 0.3-3\times 10^{42}$ erg s$^{-1}$.
$\bullet$ Excluding the two quasars, the hard X-ray flux-weighted average bolometric optical to $2-10$ keV luminosity ratio for the hard X-ray sources is low (9), providing strong evidence that the sources are powered by AGN.
$\bullet$ Excluding the two quasars, the hard X-ray flux-weighted average bolometric correction from the $2-10$ keV luminosity to the bolometric light of the AGN is somewhere between 33 and 42, depending on the optical light contamination by stars.
$\bullet$ Of the 20 hard X-ray sources in our sample, only one (source 8) was significantly detected in the submillimeter. The millimetric redshift of source 8, obtained from the submillimeter to radio flux ratio, is in the range $1.2-2.4$. Its rest-frame SED is similar to that of the heavily dust obscured local AGN NGC 6240.
$\bullet$ Bolometric FIR luminosities or upper limits inferred from the radio data using the FIR-radio correlation or from the submillimeter data using the NGC 6240 or PG 1543+489 SEDs and luminosities are consistent.
$\bullet$ The $>2$ mJy submillimeter source population contributes very little to the hard XRB (6%).
$\bullet$ Excluding the one submillimeter source that had a significant hard X-ray detection, the FIR to hard X-ray luminosity ratio for the submillimeter selected sample has a $1\sigma$ lower limit of 1100; this is above the obscured AGN values and approaching that of Arp 220. Thus, it appears likely that most of the $>2$ mJy submillimeter sources are star formers.
$\bullet$ The radio selected sample contributes 26% to the hard XRB, nearly all of which comes from the observed hard X-ray sources.
$\bullet$ The hard X-ray sources contribute about 10% of the light at both UV/optical and submillimeter wavelengths. However, because there is stellar light contamination in the optical/NIR, it is likely that most of the AGN light is emerging in the FIR.
$\bullet$ The masses of the black holes in our ‘normal’ galaxies are estimated from the Magorrian relation to be in the range $5\times 10^{7}$ M$_\odot$ to $10^{9}$ M$_\odot$.
$\bullet$ Luminous hard X-ray sources are common in bulge dominated optically luminous galaxies with about 10% of the population showing activity at any given time. Since the hard X-ray emission is likely associated with episodes of accretion onto the central massive black holes in these galaxies, the 10% may represent the duty cycle of galaxies that are active at any given time. Even with a high-end estimate of the radiative efficiency ($\epsilon=0.1$), the black hole mass density required to account for the observed light is comparable to the local black hole mass density.
We thank Keith Arnaud for generating the X-ray images for this analysis, and we thank Nicolas Biver and James Deane for taking the SCUBA observations. We thank an anonymous referee for helpful comments about the manuscript. AJB and EAR acknowledge support from NASA through Hubble Fellowship grants HF-01117.01-A and HF-01123.01-A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA under contract NAS 5-26555. AJB and LLC acknowledge support from NSF through grants AST-0084847 and AST-0084816. The JCMT is operated by the Joint Astronomy Center on behalf of the UK Particle Physics and Astronomy Research Council, the Netherlands Organization for Scientific Research, and the Canadian National Research Council.
Allen, S.W., Di Matteo, T., Fabian, A.C. 2000, , 311, 493
Almaini, O., Lawrence, A., Boyle, B.J. 1999, , 305, 59
Arnaud, K.A., et al. 2001, in preparation
Barger, A.J., Cowie, L.L., Sanders, D.B., Fulton, E., Taniguchi, Y., Sato, Y., Kawara, K., Okuda, H. 1998, , 394, 248
Barger, A.J., Cowie, L.L., Sanders, D.B. 1999, , 518, L5
Barger, A.J., Cowie, L.L., Smail, I., Ivison, R.J., Blain, A.W., Kneib, J.-P. 1999, , 117, 2656
Barger, A.J., Cowie, L.L., Richards, E.A. 2000, , 119, 2092
Bautz, M.W., Malm, M.R., Baganoff, F.K., Ricker, G.R., Canizares, C.R., Brandt, W.N., Hornschemeier, A.E., Garmire, G.P. 2000, , 543, L119
Bernstein, R.A. et al. 1999, Low Surface Brightness Universe, ASP Conference Series 170, Eds. J.I. Davies, C. Impey, and S. Phillipps. Astronomical Society of the Pacific (San Francisco), ISBN: 1-886733-92-9 (1999), p. 341
Blain, A.W., Kneib, J.-P., Ivison, R.J., Smail, I. 1999a, , 512, L87
Brandt, W.N., et al. 2000, , 119, 2349
Campos, A., Yahil, A., Windhorst, R.A., Richards, E.A., Pascarelle, S., Impey, C., Petry, C. 1999, , 511, L1
Carilli, C.L., Yun, M.S. 2000, , 530, 618
Chen, L.-W., Fabian, A.C., Gendreau, K.C. 1997, , 285, 449
Coleman, G.D., Wu, C.-C., Weedman, D.W. 1980, , 43, 393
Comastri, A., Setti, G., Zamorani, G., Hasinger, G. 1995, A&A, 296, 1
Condon, J.J. 1992, , 30, 575
Cowie, L.L. 1988, The Post-Recombination Universe, NATO Advanced Science Institute Series, Eds. N. Kaiser, A.N. Lasenby, p. 1
Cowie, L.L., Gardner, J.P., Hu, E.M., Songaila, A., Hodapp, K.-W., Wainscoat, R.J. 1994, , 434, 114
Cowie, L.L., Songaila, A., Hu, E.M., Cohen, J.G. 1996, , 112, 839
Cowie, L.L., et al. 2001, in preparation
Crawford, C.S., Fabian, A.C., Gandhi, P., Wilman, R.J., Johnstone, R.M. 2000, , submitted (astro-ph/0005242)
Downes, D., Solomon, P.M. 1998, , 507, 615
Eales, S., Lilly, S., Gear, W., Dunne, L., Bond, J.R., Hammer, F., Le F[è]{}vre, O., Crampton, D. 1999, , 515, 518
Eales, S., Lilly, S., Webb, T., Dunne, L., Gear, W., Clements, D., Yun, M. 2000, , 120, 2244
Elvis, M., Schreier, E.J., Tonry, J., Davis, M., Huchra, J.P. 1981, , 246, 20
Fabbiano, G. 1989, ARA&A 27, 87
Fabian, A.C., Iwasawa, K. 1999, , 303, L39
Fabian, A.C., et al. 2000, , 315, L8
Ferland, G., Netzer, H. 1983, , 264, 105
Ferrarese, L., Merritt, D. 2000, , 539, L9
Fixsen, D.J., Dwek, E., Mather, J.C., Bennett, C.L., Shafer, R.A. 1998, , 508, 123
Fukugita, M., Hogan, C.J., Peebles, P.J.E. 1998, , 503, 518
Gardner, J.P., Cowie, L.L., Wainscoat, R.J. 1993, , 415, L9
Garmire, G.P., et al. 2001, in preparation
Gebhardt, K. et al. 2000, , 539, L13
Gendreau, K.C. et al. 1995, , 47, L5
Giacconi, R., et al. 2001, , submitted (astro-ph/0007240)
Gilli, R., Risaliti, G., Salvati, M. 1999, A&A, 347, 424
Gunn, K.F., Shanks, T. 2000, , in press (astro-ph/9909089)
Hasinger, G., Burg, R., Giacconi, R., Schmidt, M., Trumper, J., Zamorani, G. 1998, A&A, 329, 482
Hasinger, G. 2000, ISO Surveys of a Dusty Universe, Eds. D. Lemke, M. Stickel, K. Wilke (Springer), in press (astro-ph/0001360)
Ho, L.C., 1999, , 516, 672
Hodapp, K.-W. et al. 1996, NewA, 1, 177
Holland, W.S., et al. 1999, , 303, 659
Hornschemeier, A.E., et al. 2000, , 541, 49
Hughes, D.H. et al. 1998, , 394, 241
Ishisaki, Y., et al. 1998, Astron. Nach., 319, 68
Ivison, R., Smail, I., Le Borgne, J.-F., Blain, A.W., Kneib, J.-P., Bézecourt, J., Kerr, T.H., Davies, J.K. 1998, , 298, 583
Iwasawa, K. 1999, , 302, 96
Jenness, T., Lightfoot, J.F. 1998, Starlink User Note 216.3
Kaspi, S., Brandt, W.N., Schneider, D.P. 2000, , 119, 2376
Kauffmann, G., Haehnelt, M. 2000, , 311, 576
Keel, W.C., Kennicutt, R.C., Jr., Hummel, E., van der Hulst, J.M.1985, , 90, 708
Kim, D.-C., Sanders, D.B. 1998, , 119, 41
Klaas, U., Haas, M., Heinrichsen, I., Schulz, B. 1997, A&A, 325, L21
Kormendy, J., Ho, L.C. 2000, The Encyclopedia of Astronomy and Astrophysics (Institute of Physics Publishing), in press, \[astro-ph/0003268\]
Laor, A., Fiore, F., Elvis, M., Wilkes, B.J., McDowell, J.C. 1997, , 477, 93
Lightfoot, J.F., Jenness, T., Holland, W.S., Gear, W.K. 1998, SCUBA System Note 1.2
Madau, P., Ghisellini, G., Fabian, A.C. 1994, , 270, L17
Magorrian, J. et al. 1998, 115, 2285
Marshall, F. et al. 1980, , 235, 4
Matsumoto, H., Koyama, K., Awaki, H., Tsuru, T., Loewenstein, M., Matsushita, K. et al. 1997, , 482, 133
Matt, G., Fabian, A.C. 1994, 267, 187
McMahon, R.G., Priddey, R.S., Omont, A., Snellen, I., Withington, S. 1999, , 309, L1
Miyaji, T. et al. 1998, A&A, 334, L13
Miyaji, T., Hasinger, G., Schmidt, M. 2000, A&A, 253, 25
Moran, E.C., Halpern, J.P., Helfand, D.J. 1996, , 106, 341
Morrison, R., McCammon, D. 1983, , 270, 119
Mushotzky, R.F., Cowie, L.L., Barger, A.J., Arnaud, K.A. 2000, , 404, 459 (MCBA)
Oke, J.B., et al. 1995, , 107, 375
Polletta, M., Courvoisier, T.J.-L., Hooper, E.J., Wilkes, B.J. 2000, A&A, 362, 75
Puget, J.-L., Abergel, A., Bernard, J.-P., Boulanger, F., Burton, W.B., Desert, F.-X., Hartmann, D. 1996, A&A, 308, L5
Richards, E.A., et al. 2001, in preparation
Sanders, D.B., Mirabel, I.F. 1996, , 34, 749
Schmidt, M., et al. 1998, A&A, 329, 495
Setti, G., Woltjer, L. 1989, A&A, 224, L21
Severgnini, P. et al. 2000, A&A, 360, 457
Smail, I., Ivison, R.J., Blain, A.W. 1997, , 490, L5
Smail, I., Ivison, R.J., Blain, A.W., Kneib, J.-P. 1998, , 507, L21
Tananbaum, H., Tucker, W., Prestwich, A., Remillard, R. 1997, , 476, 83
Trentham, N. 2000, , submitted, \[astro-ph/0004370\]
Türler, M., Courvoisier, T.J.-L., Paltani, S. 1999, A&A, 349, 45
Vaughan, S., Reeves, J., Warwick, R., Edelson, R. 1999, MNRAS, 309, 113
Vecchi, A., Molendi, S., Guainazzi, M., Fiore, F., Parmar, A.N. 1999, A&A, 349, L73
Veilleux, S., Osterbrock, D. 1987, , 63, 295
Vignati, P. et al. 1999, , 308, L6
Wainscoat, R.J., Cowie, L.L. 2001, in preparation
Wandel, A. 1999, , 519, L39
Williams, R., et al. 1996, , 112, 1335
Wilman, R.J., Fabian, A.C. 1999, , 309, 862
Windhorst, R.A., Fomalont, E.B., Kellermann, K.I., Partridge, R.B., Richards, E., Franklin, B.E., Pascarelle, S.M., Griffiths, R.E. 1995, , 375, 471
Wright, E.L., Reese, E.D. 2000, , in press (astro-ph/9912523)
Zdziarski, A.A., Johnson, W.N., Done, C., Smith, D., McNaron-Brown, K. 1995, , 438, L63
Zheng, W., Kriss, G.A., Telfer, R.C., Grimes, J.P., Davidsen, A.F. 1997, , 475, 469
| |
Thyroid Diseases During Pregnancy
What Is Thyroid Disease During Pregnancy?

Thyroid disease is a disease that affects the thyroid gland. Sometimes the gland produces too little or too much thyroid hormone. Thyroid hormones regulate the body's energy use and metabolism. If too much thyroid hormone is produced, the condition is called hyperthyroidism, and most body functions speed up. If too little thyroid hormone is produced, the resulting condition is called hypothyroidism, in which body functions may slow down.
Thyroid hormone plays a critical role in maternal and infant health during pregnancy. Pregnant women with thyroid problems should be closely monitored for themselves and their babies. If the levels of thyroid hormones are low, medication should be given to increase them, and if they are high, to lower them.
Points to Remember
- Thyroid diseases can result in either hyperthyroidism or hypothyroidism.
- Pregnancy itself can alter normal thyroid gland function and can cause thyroid disease.
- Uncontrolled hyperthyroidism during pregnancy can cause serious health problems in the mother and unborn baby.
- Mild Hyperthyroidism during pregnancy does not require treatment. Severe Hyperthyroidism must be treated.
- While uncontrolled hypothyroidism during pregnancy causes serious health problems in the mother, it adversely affects the growth and brain development of the unborn baby.
- Hypothyroidism during pregnancy is treated by giving thyroxine (the T4 hormone).
- In postpartum thyroiditis, hyperthyroidism develops first, then hypothyroidism, and the thyroid then returns to normal. Sometimes the hypothyroidism is permanent.
Postpartum Thyroiditis (Postpartum Thyroid Gland Inflammation)
Postpartum thyroiditis is a type of thyroid inflammation that affects 4-10% of women in the first year after birth. In this condition, thyroid hormones stored in thyroid cells are released into the blood. In this disease, which is thought to be autoimmune, a mild hyperthyroidism picture occurs in the first few months. In many women, hypothyroidism then develops between 6 and 12 months after birth, before the thyroid tissue regains its normal function. During this period, external thyroid hormone treatment should be given. A small number of patients require lifelong thyroid hormone therapy. A woman who has had postpartum thyroiditis after one birth will generally have it again after subsequent births, and because recurrent attacks of thyroiditis destroy the thyroid gland, lifelong external thyroid hormone treatment eventually becomes inevitable.
One point to bear in mind is that postpartum hypothyroidism can sometimes be confused with postpartum depression. Symptoms that start after birth, such as moodiness, restlessness, fatigue, weakness, exhaustion, and lethargy, can be mistaken for postpartum depression. If the symptoms of hypothyroidism are evident, thyroid hormone should be given.
Diagnosis and Treatment of Hypothyroidism
The following symptoms occur in pregnant women with hypothyroidism;
- Extreme fatigue
- Intolerance to cold
- Muscle cramps
- Constipation
- Memory and concentration problems
TSH, T3, and T4 levels are checked for diagnosis. Hypothyroidism at any level must be treated for a healthy pregnancy and baby. Treatment consists of giving thyroid hormone externally.
Hypothyroidism (Underactive Thyroid) and Thyroid Diseases

Hypothyroidism during pregnancy is caused by a particular thyroid disease called Hashimoto's disease, which occurs in 5 out of every 1,000 pregnant women. Hashimoto's disease is a chronic, autoimmune inflammation of the thyroid gland in which the immune system attacks the cells of the thyroid gland.
Uncontrolled Hypothyroidism during pregnancy causes some problems in mother and baby;
- Preeclampsia
- Anemia; Adequate oxygen intake cannot be provided to the body.
- Miscarriage
- Low Birth Weight Baby
- Stillbirth
- Congestive Heart Failure
Uncontrolled Hypothyroidism can adversely affect your baby’s brain and nervous system development, as Thyroid hormones play a huge role in the baby’s brain and nervous system development in the first three months.
Hyperthyroidism Treatment and Thyroid Diseases During Pregnancy
Mild Hyperthyroidism during pregnancy is not treated. More severe Hyperthyroidism is treated with Antithyroid drugs.
Radioactive iodine treatment cannot be used during pregnancy, because radioactive iodine can destroy the baby's thyroid tissue. If the pregnant woman cannot tolerate antithyroid drugs, it may be necessary to remove the thyroid gland surgically.
Since antithyroid drugs may cross the placenta and cause hypothyroidism in the baby, the mother is given the smallest possible dose of medication so that the baby does not develop hypothyroidism.
Side Effects of Antithyroid Drugs
- Allergic reactions such as skin rash, itching.
- By reducing the white blood cells in the body, they can suppress the immune system.
- Rarely, they may cause Liver Failure.
If the following symptoms occur while using antithyroid medications, be sure to talk to your doctor to stop the medication;
- Tiredness
- Weakness
- Vague abdominal pain
- Loss of appetite
- Skin rash or itching
- Easy bruising, even from a slight bump
- Jaundice (in the skin and white parts of the eyes)
- Constant sore throat
- Fever
Mothers can safely use Antithyroid drugs during pregnancy and breastfeeding under supervision.
Diagnosis of Hyperthyroidism and Thyroid Diseases During Pregnancy
Some symptoms of hyperthyroidism are also present in a normal pregnancy. For example, an increased heart rate, increased body temperature, heat intolerance, and fatigue are all seen in a normal pregnancy.
However, a fast and irregular heartbeat, hand tremors, weight loss or an inability to gain weight, and severe nausea and vomiting may be associated with hyperthyroidism.
Such a patient should have TSH, T3, T4, and TSI tests in the blood.
Hyperthyroidism (Overactive Thyroid)
Hyperthyroidism during pregnancy is usually caused by Graves' disease and occurs in one out of every 500 pregnancies. Graves' disease is an autoimmune disease. Normally, the immune system is programmed to protect the body from pathogens such as bacteria, viruses, fungi, and parasites; in autoimmune diseases, the immune system attacks the body's own cells and organs. In Graves' disease, the immune system produces a substance called TSI (thyroid-stimulating immunoglobulin). Because TSI is structurally similar to TSH, it tricks the thyroid gland into releasing excessive amounts of thyroid hormone. These patients often have a characteristic appearance in which the eyes seem to bulge from their sockets.
Graves' disease may appear for the first time during pregnancy, or, when a woman who already has Graves' disease becomes pregnant, the symptoms may regress during the second or third trimester because the immune system is suppressed during pregnancy. The disease usually worsens again shortly after birth. In addition to routine pregnancy follow-up, a pregnant woman with Graves' disease should also be monitored monthly for the disease itself.
Rarely, hyperthyroidism causes problems such as nausea, vomiting, dehydration, and weight loss during pregnancy. Nausea and vomiting that is severe in the first half of pregnancy and then passes is thought to be caused by thyroid hormone levels raised by increased hCG.
Uncontrolled Hyperthyroidism and Thyroid Diseases during pregnancy causes the following conditions in the mother;
- Congestive Heart Failure
- Blood Pressure Rise That May Cause Preeclampsia in Late Pregnancy
- Thyroid storm, in which the symptoms of hyperthyroidism appear suddenly and very severely
- Miscarriage
- Preterm birth
- A low-birth-weight baby
The following can be seen in the baby of a mother with hyperthyroidism:

- Early closure of the fontanelles (the soft spots on the baby's head)
- Tachycardia (rapid heart rate)
- Heart failure caused by hyperthyroidism
- Irritability
- Respiratory failure due to compression of the windpipe by an enlarged thyroid gland
- Insufficient weight gain

Even if a woman with Graves' disease received radioactive iodine treatment or underwent surgery before pregnancy, the TSI still circulating in her blood can cross the placenta, stimulate the baby's thyroid gland, and produce signs of hyperthyroidism in the baby. However, if the mother is being treated with an antithyroid drug, the probability of these problems in the baby is extremely low. After birth, both the baby and the mother should be closely monitored.
How Does Pregnancy Affect Normal Thyroid Gland Functions?
Two hormones reach high levels in the blood during pregnancy: hCG (human chorionic gonadotropin) and estrogen. These two hormones increase the secretion of thyroid hormone from the thyroid gland, so it can sometimes be difficult to interpret these normal physiological changes.
Thyroid Hormone is very important for the brain and nervous system development of the baby. The mother provides Thyroid Hormone to the baby in the first three months. The thyroid hormone secreted from the mother’s thyroid gland passes to the baby through the placenta and fulfills its function. After the twelfth week, the baby begins to produce its own Thyroid hormone.
The thyroid gland normally grows somewhat during pregnancy, but not enough to be detected by hand. A thyroid gland enlarged enough to be palpable may be a symptom of thyroid disease. Since mildly increased thyroid hormone levels and a slightly enlarged thyroid gland can both occur in a normal pregnancy, care is needed not to miss genuine thyroid disease, which can be difficult to diagnose during pregnancy.
Thyroid gland
The thyroid gland is a butterfly-shaped gland about 2.5 cm in length and 25 grams in weight, located in the neck just below the larynx (voice box), in front of the windpipe. The thyroid gland is one of the glands that make up the endocrine system. Endocrine glands secrete the substances they produce, called hormones, into the blood. Hormones carry out their functions not in the gland that secretes them but in distant target cells, or throughout the whole body, after traveling through the bloodstream.
The thyroid gland secretes two types of Thyroid hormones. One of them is T3, Triiodothyronine, and the other is T4, Thyroxine. T3 is an active hormone and is derived from T4. Thyroid hormones; affect metabolism, brain development, respiratory, heart, and nervous system functions, body temperature, skin moisture level, muscle strength, menstrual cycle, and blood cholesterol levels.
Thyroid hormone production is regulated by a hormone called TSH (thyroid-stimulating hormone), which is made and secreted by the pituitary gland in the brain. When thyroid hormone levels in the blood are high, the pituitary gland responds by decreasing TSH secretion, and when thyroid hormone levels in the blood are low, the pituitary gland increases TSH secretion.
An important part of being effective at work involves knowledge of the forces at work in today's workplace.
There are a number of occupational and societal trends that you should understand to maximize your effectiveness at work. These trends involve changes in the work you do, how you do it, and where you do it.
Change in the Workplace

If there is one thing you can count on in the workplace, it is that change is everywhere. Change in the workplace has occurred in:

How You Do Your Work

The last two decades of the 20th century ushered in an era of great change in how Americans work. Some of these changes were brought about by changing technology; however, many were also brought about by demographic changes in the country.
Change in Technology: Computers and the Age of Information

- Computers have changed how most jobs are performed.
- Computers have had such a significant impact that they have driven a shift from a "manufacturing" economy to an "information" economy.
Increasingly, the most common product we are trading in is "information." In fact, the amount of information available to us is doubling every 5 years. The growth in information and in information technology has become such a critical issue that the U. S. Government has created an agency just to handle Information Technology Research and Development.
- The Third Wave: In their book "War and Antiwar," Heidi and Alvin Toffler divided human history into three distinct phases or "Waves". Phase One, which they termed the "Agrarian" phase, was called the First Wave; the Second Wave corresponded to the Industrial Revolution; while the present "Age of Information," characterized by the "digitalization" of society, was termed the Third Wave. As we begin the new millennium, we have truly embraced the Age of Information.
- Effective individuals understand that technology...
When we exercise, we often feel out of breath, which is likely because we are not breathing properly. Did you know that proper breathing technique can improve your performance?
Breathing is central to all the activities we do, but we don't really notice it when we eat, walk or talk, and especially not when we exercise. Sometimes, during physical activity, one has the impression that breathing is interrupted: this unpleasant feeling almost always occurs before the muscles are exhausted, and is mainly due to a reduced supply of oxygen. When the lungs cannot meet the oxygen demand of the skeletal muscles, a neurochemical reaction begins that reduces the supply of oxygen to the peripheral muscles, creating the familiar feeling of muscle fatigue and pain. Peripheral oxygenation is blocked, which inevitably leads to a decrease in the intensity of the activity performed. However, by increasing our breathing endurance, our performance capabilities can reach the same limits as the muscles involved in the given exercise.
That is why respiratory muscle training delays fatigue and increases endurance. And learning how to breathe properly can have beneficial effects on our bodies and minds.
Breathe better to run better
Take, for example, the relationship between breathing and running: Whether running outdoors or on a treadmill, all beginner runners lose their breath. But what are the secrets to good breathing while running? First, it is necessary to know the difference between thoracic and diaphragmatic breathing.
Chest breathing uses only the upper part of the lungs, and the inhaled air stays in the lungs only a short time, preventing them from filling completely and thus reducing the oxygen supply. Abdominal breathing, also called diaphragmatic breathing, instead allows the maximum amount of oxygen to enter the body. The diaphragm is the main respiratory muscle, and it allows the rib cage to expand and contract naturally and effortlessly. This type of breathing is very important when running, as it improves performance by reducing lactic acid build-up and the classic side stitches often experienced by those who don't train regularly.
To feel the abdominal breathing, lie on your back and put your hands on your stomach: inhale and exhale as deeply as possible, until you see your hands rising and falling. It may be difficult at first, but with a little practice, this type of breathing will soon feel natural to you, and it is very beneficial. The purpose of the exercise is to focus attention only on the expansion of the stomach while breathing.
Is it better to breathe through the nose or through the mouth?
When the body is under extreme stress, it needs more oxygen, and by breathing through the mouth we can take in more. In fact, we all know that when activity becomes more intense, breathing through the nose alone is no longer enough. Therefore, the best way to breathe while running is through both the nose and the mouth. Finally, regarding breathing rate, do not try to force it during a run; on the contrary, the best approach is to breathe as naturally as possible.
The role of food
Athletic performance is not only improved by training the muscles involved in breathing: nutrition also plays an important role. Breathing and eating are two interrelated processes: in respiration, we take in oxygen which, through our digestive system, is then used to oxidize the nutrients in the food we eat, thus creating the energy we need to live. Breathing deeply and properly means that our body has enough oxygen to generate the most energy from the food we eat, and to keep us healthy.
Air and food, properly processed and used, give the body all the energy it needs to meet the needs of a living organism. We cannot change the quality of the air we breathe, but we can certainly choose the most appropriate quality and quantity of food to meet our needs.
Statement of Purpose
In an age of increasing concerns about the privacy of individuals’ records, the Thomas Memorial Library is committed to safeguarding all borrowers’ records to the extent the law allows. The ethical responsibilities of librarians, the Maine State Statute regarding public library records, and the United States Constitution protect the privacy of library users.
Information Collected
The Thomas Memorial Library collects only information that is necessary to conduct efficient and effective library services.
This information includes:
- Name
- Address
- Email address
- Phone number
- Library card number
- Date of birth
- Materials currently checked out
- Billed items
Thomas Memorial Library maintains no record of those items that have been borrowed and returned unless a charge for a lost or damaged item was incurred. Access to patron information within the library is limited to a need-to-know basis, enforced by staff authorization passwords.
Requests for Library Records
- No records can be made available to any inquiries, including law enforcement, unless a subpoena or warrant has been served by a court of competent jurisdiction.
- Library representatives shall not honor requests from federal law enforcement officers unless a subpoena or search warrant is presented pursuant thereto.
- Records will be made available immediately upon presentation of a valid warrant.
- Records will be released in response to a subpoena after consultation with legal counsel.
It is the responsibility of all library staff to protect the privacy of the library’s circulation records and registration materials identifying the names of library users with regard to specific materials. Access to such records will be given only to the cardholder, and to staff who need to access the records for library purposes. Exceptions to this policy will be granted only with the express written permission of the patron involved, or as a result of a court order. This policy is in keeping with the Maine State Revised Statute, title 27, section 21 that states:
“Records maintained by any public municipal library, including the Maine State Library, which contain information relating to the identity of a library patron relative to the patron’s use of books or other materials at the library, shall be confidential. Those records may only be released with the express written permission of the patron involved or as the result of a court order. Public municipal libraries shall have up to 5 years from the effective date of this chapter to be in compliance with this section.” 1983, c. 208.
In accordance with the ALA Code of Ethics, confidentiality extends to “information sought or received, and materials consulted, borrowed, or acquired,”* and includes database search records, interlibrary loan records, and other personally identifiable uses of library materials, facilities, or services. Parents should be aware that, since juvenile patron records are not specifically exempted in this statute, in keeping with the Library Bill of Rights, the library trustees have interpreted the statute to cover all patron records without exception.
* ALA Code of Ethics, adopted by the ALA Council June 28, 1995.
Reading History
Patrons may choose to opt-in to have their reading history retained as part of their library record for their own use. Once they have opted-in to this service, library cardholders can access this information by logging into their account online. This service is provided as a convenience to library users who wish to have a record of items they have borrowed. Library staff do not have access to this information via the library’s circulation system.
Internet and Other Computer Resources
Personally identifiable usage information related to computer searches is automatically collected in “temporary internet files,” “history,” and “cookies” folders on the library’s public computers. These histories are cleared, and any downloaded information erased, daily when the public computers are shut down and restarted.
We invite applications from researchers with diverse backgrounds and professional experiences, who wish to contribute to the range of the institute’s transdisciplinary Internet research.
Located in the heart of Berlin, the HIIG provides a dynamic and intellectual environment for fellows to pursue their own research interests and actively shape their stay. We invite fellows to collaborate with an international and interdisciplinary team of researchers and offer a number of opportunities to share and discuss their ideas. These include, but are not limited to:
commenting on current developments in your field in form of HIIG blog posts
holding presentations in one of our (virtual) lunch talks
engaging in joint projects with other fellows and HIIG researchers
participating in webinars and skill sharing sessions
enjoying a (virtual) coffee, having some inspiring conversation and meeting our research directors and senior researchers during our regular fellow coffee talks
Key Areas
For our 2022 class of fellows, we consider applicants who intend to pursue topics that fall within one of our research programmes or groups. Please read the information on each of the linked websites closely, and position yourself and/or your project within the programme that suits you best.
The evolving digital society: How do discourses on and imaginaries about artificial intelligence (AI), anthropomorphised interfaces and robots shape the development of policies, technologies and theories – and vice versa? How can cross-cultural perspectives inform our understanding of technological change? How do we tackle the challenges that lie ahead for public discourse, such as competing media realities, misinformation or deep fakes?
Data, Actors, Infrastructures: How can we make data held by private and public organisations usable for scientific purposes and for the general good of society, and still take legal, ethical, economic or organizational challenges and legitimate interest of all stakeholders into account, without resorting to “silver bullets” such as “open data” or “data sharing”?
Innovation, Entrepreneurship & Society: Understanding, informing and co-creating innovation and entrepreneurship in a digital economy & society: How do organisations address grand challenges such as the climate crisis or inequality using digital technologies related to artificial intelligence, open innovation, and platforms? How do digital technologies impact organisational practices, processes, and purpose?
AI Lab & Society: Exploring the concrete changes and challenges artificial intelligence introduces and their societal implications: What profound social changes go hand in hand with an increasing integration of AI in political, social and cultural processes? How can AI infrastructures be developed and deployed in order to realise the public interest?
Things to consider
Time Frame: Fellowships may range from a minimum of 3 months to a maximum of 6 months within the time span from March 1 to December 31, 2022.
COVID-19: We cannot foresee the pandemic situation next year. Generally, there will be the options of completing the fellowship virtually, partly or even fully in person, depending on the state of the pandemic upon the start of the fellowship and your personal situation.
Financial Issues: The fellowship is unpaid.
Qualifications
Master’s degree, PhD in process/planned (Junior Fellow) OR
Advanced PhD, post-doctoral researcher (Senior Fellow)
Fluency in English
Research experience and a research project of your own that you plan to pursue
Application documents
A) your project and how it responds to one of our research programmes, B) the specific work you propose to conduct during the fellowship, C) deliverables, products or outcomes you aim to produce
Optional: one writing or work sample covering internet research (in English or German)
Please read our FAQ and review the information carefully before applying. If you have any questions, please send an email to [email protected].
Please submit your application via this form until November 24, 2021, 11:59 p.m.
About HIIG
The Alexander von Humboldt Institute for Internet and Society (HIIG) was founded in 2011 to research the development of the internet from a societal perspective and better understand the digitalisation of all spheres of life. As the first institute in Germany with a focus on internet and society, HIIG has established an understanding that centres on the deep interconnectedness of digital innovations and societal processes. The development of technology reflects norms, values and networks of interests, and conversely, technologies, once established, influence social values.
Posted Wednesday, August 19, 2020
A recent announcement from Chancellor Rishi Sunak has people across the country reconsidering their property habits. A stamp duty land tax (SDLT) "holiday" for property buyers in England and Northern Ireland (NI) is in place until 31st March 2021.
What is stamp duty, you ask?
Homebuyers pay this tax when purchasing "property or land over a certain price in England and NI".
The threshold price for SDLT has traditionally been £125,000 for residential properties and £150,000 for non-residential land and properties; however, a new threshold of £500,000 has been introduced during the pandemic.
The threshold was previously staggered according to property type and the following price ranges:
- 0%: up to £125,000
- 2%: £125,001 to £250,000
- 5%: £250,001 to £925,000
- 10%: £925,001 to £1,500,000
- 12%: over £1,500,000
For first time buyers, the rules used to be:
- 0%: up to £300,000
- 5%: £300,001 to £500,000
With the new holiday in place, nobody pays SDLT on any purchase below the £500,000 mark. Bands above this, however, remain the same.
What does this mean?
Prior to this, buyers purchasing a property worth £400,000 would have paid £10,000 in stamp duty, but today would pay nothing. The Chancellor says average savings are projected at £4,500 per new purchase. According to the Treasury, nine out of ten downsizers and first-time buyers will pay no stamp duty this year. The holiday is part of a government incentive scheme that also offers 50% discounts on eating out in August and vouchers of up to £5,000 for homeowners to improve their energy efficiency.
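Because each band taxes only the slice of the price that falls within it, the £400,000 example can be sketched in code. This is a hypothetical illustration rather than official guidance: the function and table names are my own, built from the rates listed in this article.

```python
# Banded SDLT sketch: each band taxes only the portion of the price
# that falls between its lower and upper limits.

def sdlt(price, bands):
    """bands: list of (upper_limit, rate) in ascending order; the last
    upper_limit may be None, meaning 'no upper limit'."""
    tax = 0.0
    lower = 0
    for upper, rate in bands:
        if upper is None or price <= upper:
            tax += (price - lower) * rate  # final, partial band
            break
        tax += (upper - lower) * rate      # full band consumed
        lower = upper
    return tax

# Standard residential bands before the holiday (from the rates above)
OLD_BANDS = [(125_000, 0.00), (250_000, 0.02), (925_000, 0.05),
             (1_500_000, 0.10), (None, 0.12)]

# Holiday bands: 0% up to £500,000; bands above that unchanged
HOLIDAY_BANDS = [(500_000, 0.00), (925_000, 0.05),
                 (1_500_000, 0.10), (None, 0.12)]

print(sdlt(400_000, OLD_BANDS))      # 10000.0
print(sdlt(400_000, HOLIDAY_BANDS))  # 0.0
```

Running the sketch on a £400,000 purchase reproduces the £10,000 pre-holiday figure quoted in the text, and £0 under the holiday bands.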
While this change was made in an effort to boost the property market, not everyone agrees. Sceptics think the holiday should also apply to homes still in development and bought "off-plan", to increase the wiggle-room for construction of new homes that are in desperate demand.
If you’re a Developer looking for advice or even finance solutions, we recommend Sentient SDLT contact Matthew Winder on 07838242999.
You can calculate your updated SDLT with this tool provided by the Money Advice Service:
Tree at my window, window tree,
My sash is lowered when night comes on;
But let there never be curtain drawn
Between you and me.
— Robert Frost
Calling the Element of Earth into your home creates stability, abundance, and healing. One of the most important ways you can work with the Guardian of the North and Earth for home blessings is to honor the trees that watch over you.
Yesterday, I recommended that you begin to get to know the tree guardians that watch over and bless your home. As I mentioned, there may well be more than one, and your tree need not be literally next to your building.
While you are discovering which tree wishes to be your home’s ally, try not to be prejudiced by its physical appearance or even species. Let it speak to you, heart to heart. But then, as you develop this relationship, you should learn more about its species. For, over the centuries, our ancestors discovered a lot about the energy of various kinds of trees.
For instance, as you may know, oak trees are sacred in many cultures for their longevity, beauty, and size. Of all the trees in Britain and Ireland, the oak is considered king. Famed for its endurance and longevity, even today it is synonymous with strength and steadfastness, as well as its long, miraculous transformation from acorn, to ancient giant.
John Evelyn in his work, Sylva, Or a Discourse of Forest-Trees, calls it the “pride and glory of the forest.” In the classic, The Fairy-Faith in Celtic Countries, W. Y. Evans-Wentz notes that “the oak is pre-eminently the holy tree of Europe.”
In the Classical world, it was regarded as the Tree of Life, as its roots penetrate as deeply into the Underworld as its branches soar to the sky. It was held sacred to Zeus and Jupiter, to Ares, the Dagda, and the Basque oak God, as well as to Herne, God of the hunt, and the Goddess Diana. In Scandinavian countries, the oak is the tree of the Thunder-God, Thor, as it similarly was to almost all other Gods of thunder and lightning.
For centuries, it was a crime to fell a living oak tree in Pagan Ireland. There was a grove of sacred oaks at Derry. And the name of Kildare, where the priestesses, and later the nuns, of Brigid have tended Her sacred fires, is from Cill Dara, which means “Shrine of the Oak.”
The ancient Prussians also revered sacred oak trees. The chief oak in the forest at Romove (in what is now the Kaliningrad region) had priests who tended a perpetual fire of oak wood. This tree, draped with a cloth, was considered the dwelling place of the God. They honored it by hanging gifts and images from it. There was also a sacred oak tree in Hesse called the Red Jove, from which omens were received and to which sacrifices were made. Holy oaks were preserved in Germany into modern times.
If your home guardian tree is an oak, you are fortunate indeed, for the oak gives powerful magic and medicine. It is the tree of truth, is usually male energy, and is known as the marriage tree that represents the divine wedding between the Goddess and the God.
To consecrate or rejuvenate your own marriage, you will receive its blessings by donning some of its leaves and dancing below it. Because oaks are well-known oracle trees, if you have a concern or question, sit with it and ask its spirit to guide you.
Its wood (which should never be cut unless already dead or threatening in some way), its leaves and its acorns are powerful tools for protection, prosperity, and focusing yourself.
Tomorrow, I’ll share some more lore about the mighty magic of oak trees.
As ancient participants in sacrifices must, on occasion or even habitually, have done, we can readily apprehend divine appetites for smoke and meat. In other words, the Greek gods were not vegetarians, or at least not strict “Ambrosians”.
What did Greek gods eat?
What is Ambrosia? In Greek mythology, ambrosia was considered the food or drink of the Olympian gods, and it was thought to bring long life and immortality to anyone who consumed it.
Did ancient Greeks eat a lot of meat?
Food in Ancient Greece consisted of grains, wheat, barley, fruit, vegetables, breads, and cake. The Ancient Greeks grew olives, grapes, figs and wheat and kept goats, for milk and cheese. … Meat was rarely eaten as the Greeks felt that just killing and eating a domesticated animal (like goats) was wrong.
Is there a Greek god of meat?
As the god of animal husbandry, Hermes was also the god of meat and feasting. Alongside Hestia (goddess of the hearth) he presided over the banquet.
What is Zeus favorite animal?
Zeus’ sacred animals were the eagle and the bull. In myth he abducted the youth Ganymede in the shape of an eagle and the maiden Europa in the guise of a bull. His sacred plants were the evergreen holm oak and the olive tree.
Who was the ugliest god?
Hephaestus. Hephaestus is the son of Zeus and Hera. Sometimes it is said that Hera alone produced him and that he has no father. He is the only god to be physically ugly.
Did the Greeks eat beef?
Greek Meat Dishes. The most common meats in Greece are pork, lamb, beef, goat, chicken, veal and rabbit, not necessarily in that order. Because it was expensive in the past, before the Greeks became affluent enough to eat it every day, meat was eaten perhaps twice a week, and usually with vegetables, pasta or grains.
Did Spartans eat meat?
The Spartans, noted among ancient writers for their austerity, prepared a black broth of blood and boiled pig’s leg, seasoned with vinegar, which they combined with servings of barley, fruit, raw greens, wine and, at larger dinners, sausages or roasted meat.
What did the poor eat in ancient Greece?
Poor families ate oak acorns (βάλανοι balanoi). Raw or preserved olives were a common appetizer. In the cities, fresh vegetables were expensive, and therefore, the poorer city dwellers had to make do with dried vegetables.
Is Kronos a cannibal?
Cronus sired the Olympians Hestia, Demeter, Hera, Hades, Poseidon, and Zeus. His wife/sister was Rhea. … Cronus may not have killed his kids, but he did eat them.
Who did Hermes fear?
As one of the immortal and powerful Olympian gods, Hermes the messenger had little to fear from anyone, except perhaps his father and ruler of the…
Who ate Greek children?
Cronus was the ruling Titan who came to power by castrating his father Uranus. His wife was Rhea. Their offspring were the first of the Olympians. To ensure his safety, Cronus ate each of the children as they were born.
What is the Greek god of dogs?
Dogs were closely associated with Hecate in the Classical world. Dogs were sacred to Artemis and Ares. Cerberus is a three-headed, dragon-tailed watchdog who guards the gates of Hades. Laelaps was a dog in Greek mythology.
Why did Zeus marry his sister?
Zeus courted Hera in the form of a bedraggled cuckoo. Fooled, Hera took the bird to her bosom to comfort it. Thus situated, Zeus resumed his male form and raped her. To hide her shame, Hera agreed to marry him.
How did Zeus die?
The Greek god Zeus didn’t die. It was only in the 4th century AD that the Roman Emperor Theodosius, a Christian ruler, forcibly seized the temples of Zeus. Poseidon also didn’t die. In Greek mythology, worship simply shifted allegiance as new centres of power arose.
The Myth of Jonas Salk
David Oshinsky’s Polio: An American Story is a richer and more complex book than Kluger’s. Oshinsky’s position on the mythmaking? “I am trying to stay away from it,” he said in a recent conversation.
Salk emerges here as a complex scientist. He was an outsider, Oshinsky writes. “Salk was marooned out there in Pittsburgh, fiddling with an old-fashioned killed-virus vaccine and doing the dog’s work that his betters refused to do.” Yet he was close to the National Foundation for Infantile Paralysis and to O’Connor. He was meticulous in his science. “It was a game of trial and error, testing and tinkering, and few knew it better than Jonas Salk.” He was confident about his work but aware of its dangers. “ ‘When you inoculate children with a polio vaccine,’ he said later, ‘you don’t sleep well for two or three months.’” He was sensible and accommodating, yet he could be insensitive and egotistic, especially when dealing with his laboratory team. “Once the goal was reached, the group would split apart amidst charges that Salk had not appreciated, much less acknowledged, the collaborative nature of his success.” He shied away from the media yet craved publicity. “One of his greatest gifts was a knack for putting himself forward in a manner that made him seem genuinely indifferent to his fame, a reluctant celebrity, embarrassed by the accolades, oblivious to the rewards.”
All this Oshinsky unfolds in the context of the National Foundation’s politicking and lobbying, and of the larger politics of the day. In the end, Oshinsky’s Salk emerges as someone we care to know something about, most notably his left-wing leaning early in life (which Oshinsky learned about from FBI files), his apolitical stance in midlife, and his mystical tendencies in old age. Yet Oshinsky’s account has problems of its own. Although early concerns about Salk’s vaccine were scientifically motivated, those at the end of the 1950s were broadly social. An immunity gap among different social and economic classes had developed; Oshinsky knows this but gives the subject only two pages.
In 1959, epidemiologists reported findings on the pattern of the disease. These suggested a shift in incidence according to age, geography, and race. By 1960, less than one-third of the population under 40 years of age had received the full course of three doses of the Salk vaccine plus a booster. Most of those who had were white and from the middle and upper economic classes. The disease raged on in urban areas among African Americans and Puerto Ricans and in certain rural locales among Native Americans and members of isolated religious groups.
The gap had to do with access to vaccination. Pediatricians were not well compensated. “This was the one thing they could do which was a guaranteed reasonable flow of cash,” explained Henderson. The physicians resisted losing that cash; they argued for a vaccine that required their professional training.
Late in 1960, at the mid-winter clinical session of the American Medical Association, the surgeon general of the United States presided over a symposium on the state of polio immunization. E. Russell Alexander, chief of the surveillance section at the Communicable Disease Center, said, “The residual pattern of disease represents a measure of our failures to apply vaccine completely enough.” A. D. Langmuir, chief of the epidemiological branch at the center, said, “[P]olio seems far from being eradicated. The dreamed-for goal has not been achieved. In fact, many students of the problem question that eradication of poliomyelitis infection with inactivated vaccine is a scientifically tenable concept.” One of the main concerns was that the Salk vaccine did not prevent infection in the gut and thus did not break the chain of transmission.
Beginning in January 1962, pediatricians in two Arizona counties, Maricopa and Pima, containing the state’s largest cities, Phoenix and Tucson, conducted separate but similar voluntary mass immunizations using Sabin’s vaccine. “Previous programs in the county, using the Salk vaccine, had failed to bring polio immunization to a satisfactory level,” they reported a year later in the Journal of the American Medical Association. The program was called SOS (Sabin Oral Sundays). More than 700,000 people were immunized–75 percent of the total population in both counties. The vaccine was given at the cost of 25 cents, for those who could pay. It was given to population groups that were socially, racially, and culturally diverse, on Indian reservations and military posts and in urban and rural areas. The program became a model for subsequent U.S. mass-immunization programs. By the mid-1960s, Sabin’s vaccine was the only one in use in the United States. It was the Sabin vaccine that closed the immunity gap and effectively put an end to polio in the States.
Yet Sabin’s vaccine, too, has a problem. Attenuated live virus can mutate back into a virulent form. This has happened in a small number of cases. In the United States, therefore, after the decades in which the Sabin vaccine extinguished polio, the Salk vaccine is, ironically, once again preferred for immunizations. But the Sabin vaccine, cheap and easy to administer, is still the one used in the current campaign to eradicate polio worldwide. This campaign has extinguished the disease in the rest of the Western Hemisphere and in Europe, and almost entirely in Asia, though recent flare-ups in central Africa remain ominous.
Angela Matysiak is completing her PhD at George Washington University, in history of science, and is writing a biography of Albert Sabin.
Bangladesh football team will face visiting Tajikistan in the AFC Asian Cup Qualifiers play-off round-2 at Bangabandhu National Stadium on Tuesday.
The qualifiers battle will begin at 4:30 pm, and state-run television BTV will telecast the match live. Ticket prices have been fixed at Tk 100 for the VIP stand and Tk 50 for the gallery.
The coach expressed his hopes at a press conference at the Bangladesh Football Federation (BFF) House on Monday.
However, Bangladesh suffered a humiliating 0-5 defeat to Tajikistan in their away play-off match of the AFC Asian Cup Qualifiers in the Tajik capital Dushanbe on Thursday night.
“We must forget the first result 5-0 and be prepared for the tomorrow’s match. Because this is the play-off round and everything can happen. We will do our best,” said Tajikistan coach Khakim Fuzaylov.
The winners of the two-leg play-off between Tajikistan and Bangladesh will advance to the group stage of the qualifying competition, while the losing team will have another chance to qualify through a further two-leg play-off against Bhutan on September 6 and October 11.
Schedule 1 of the Code prescribes the information that must be included in the offeror’s takeover notice and offer document. Clause 13 of Schedule 1 requires that, where the offer is a full offer for cash or an offer with a cash alternative, the offer document is to include a statement as to whether or not any person intends to acquire equity securities in the target company under rule 36. Clause 19 of Schedule 1 requires the offeror, or where the offeror is a company, the offeror’s Chief Executive Officer, Chief Financial Officer and two directors, to certify that to the best of their knowledge and belief the offer document is “true and correct and not misleading”.
(f) the acquisition is notified to the Panel immediately.
Some confusion has arisen because of an apparent inconsistency between clause 13 of Schedule 1, which refers to an intention to acquire, and condition (b) of rule 36 which (referring to clause 13) requires disclosure in the offer document of the possibility of an acquisition under that rule. The question arises whether clause 13 of Schedule 1, by using the word “intends”, requires the offeror to have formed a definite intention, at the time it despatches its offer document, that it or any of its related parties, will acquire, or will not acquire, equity securities using the exception provided in rule 36. Alternatively, does the wording of rule 36, which only requires that disclosure of the “possibility” of an acquisition under rule 36 be disclosed in the offer document, mean that the offeror and its associated parties do not have to have formed a definite intention to acquire or not to acquire securities outside of the offer at the time the offer document is despatched, but can legitimately form such an intention later?
The Panel recognises that while the offeror may not have formed the intention, at the time it despatches the offer document, of acquiring securities outside the offer under rule 36, it may not wish to be precluded from doing so as the takeover runs its course. Circumstances in a takeover can change rapidly, particularly in a contested takeover, and the Panel accepts that parties’ intentions can legitimately change.
5 non-obvious principles of product discovery processes
“Product discovery” is the name we give to the process of transforming a client’s ideas, thoughts, problems and insights into a high-fidelity prototype. Generally, the entrepreneur or product owner has an idea in mind. We facilitate the process of taking those thoughts, understanding the possible users, getting deeply immersed in the market, and then transforming all that information into a usable, self-explanatory prototype that looks good and is easy to use and understand. It is important to share and validate the concept, receive feedback, test the idea and learn from real users.
Implementing this process before developing the actual solution is not just economical and fast; it is also smart. The prototype will help product managers understand whether the need is real, giving users the chance to share their opinions on the solution. This information should be used to make decisions about priorities and the future roadmap.
Even though iterating is generally good, doing it early is doubly important. The other relevant element is the iteration pace: iterating during the design phase is cheaper, quicker and easier than iterating once the actual product is being built.
The objective of this article is to share some of the most important non-obvious lessons we have learned while designing products for our clients and for ourselves. Dozens, literally dozens, in the last few years.
1. Product discoveries are not all the same.
At first glance, all product discovery processes seem similar. Some words are repeated over and over again: wireframes, mockups, components, prototype, screens, user stories, pain-points, etc.
Reality is quite the opposite: every product we have built so far has had something different to discover, design or implement from scratch.
2. Processes are made to be followed – and challenged.
Let’s be clear: processes are made to be used and followed. The team leader is responsible for strictly protecting processes. Every step is important for the next one and for the overall goal.
But, as said before, every client and product is unique. When our guts are telling us something different than the process is saying, we should at least stop and listen to them – and think.
We welcome thoughts such as: “Ok, according to the process we should now implement some specific exercise… but what if, instead, we go two steps ahead or go back to the starting point?” It might enlighten the team, helping it understand the big picture or showing where the focus should be directed.
3. Breaks make the work faster and better.
Some might think that if they invest many hours in front of screens or make lots of wireframes and tests, the goal will be achieved fast and the prototype will be ready soon. Experience has taught us it does not work like that.
Sometimes breaks are necessary: asking external people for feedback on what we have done so far, looking for references outside the client’s market, or researching trends are powerful breaks. At first glance, interruptions seem distracting, but when they are planned, they are not. They add perspective to the team, feedback highlights errors, and the research process is generally enlightening, not just because of the findings, but also because the exercise feels like a breath of fresh air. Breaks are generally a positive and effective boost.
4. Iterations are necessary, but not infinite. Time is finite.
The product discovery phase generally takes between four and six weeks. It involves listening to stakeholders, analyzing competitors’ features and market trends, making presentations, receiving feedback, adjusting designs accordingly, and so on. We could invest months looking for the perfect product, trying to satisfy everyone involved, iterating again and again; the truth is that the most important stakeholder – maybe the only one – is the real user. Thus, finishing the product discovery phase is important, even when the product is not 100% defined. Previous assumptions and ideas are worth zero compared to real users’ feedback and insights. Product discovery processes should finish, because that’s when the real learning begins.
5. The more we know, the less we know.
Product discovery processes aim to find something unique: a powerful insight, some gesture, an innovative product. To that end, at Moove It we have incorporated practices from different parts of the world and from universities, studied multiple authors, gained experience from clients and colleagues, and implemented dozens of product discoveries. And yet we are still surprised by how we feel every time we start a new process: that feeling of not knowing where we are heading, that uncertainty, boosts our spirits and makes us enjoy the ride.
Tom's Guide writes: "The term Silicon Valley is forever intertwined with the history of both the microchip and the United States. It’s just south of the Bay Area and San Francisco, with San Jose as a capital and much of the Santa Clara valley as a domain. This high-tech hub acts as the unofficial Mecca for all things involving modern technology, from the giants like Google, Apple, and Intel to the social networking masters like Facebook and Twitter, to car companies like Tesla. Silicon Valley is a proving ground as well as a homestead for the established, a haven for startups and veterans alike. Sure, other cities in the United States, and some countries around the world (Japan, England, and Israel), are adding tech giants to their list of corporate citizens every day, but Silicon Valley has been, and will always be, center stage (unless some crazy tycoon tries to destroy it again)."
Jessica Wyeth is no longer a fugitive hiding under assumed identities. Through sheer grit, she has reclaimed her life only to discover what she fought for was an illusion. She is not the child of the picture-perfect New England family, but an unwanted castaway. Her frail and reclusive aunt died without exposing the secret that she was Jessica’s mother. Jessica travels to Ireland—her mother’s home—to learn why. When Jessica rides in a world-class steeplechase, she is unwittingly used as an accomplice in a devastating bombing in an English shopping mall. The group behind the bombing is the Charity, a generations-old support network of the IRA. Michael Conant, reluctant heir to the Charity and Jessica’s lover, must choose between allegiance to his violent family legacy and the woman he loves. Meanwhile, Jessica’s fight for her life leads her to uncover her mother’s secrets and the divided soul of the Irelands. "The Troubles" is a high-concept suspense novel that views the conflict in Northern Ireland through the prism of American involvement. This sweeping, multi-generational tale gives witness to the delicate and dangerous layers inside an ever-unfolding world.
Title: The Troubles
ISBN: 9780692417928
Format Type: Paperback
Number of Pages: 391 Pages
The Troubles Reviews
-
I had the pleasure of interviewing Connie Hambley on my show “Local Authors with Kameel Nasr”. She has written the first two books in the series, and she promised a third in the near future. You can start by reading The Troubles. It’s well written and interesting, containing three major elements. First, as the title implies, the story is about what has been referred to as “The Troubles” of Northern Ireland. Second, the protagonist has a troubled identity. She discovers that the woman who raised her is not her mother. Without giving away the story, she later discovers that the man she thought was her father was not her father either. Third, this book is about horses, and one of the most exciting chapters is a steeplechase straight out of a Ben-Hur movie. These three elements are woven into the story. The horse business is tied to the protagonist’s identity, and her identity is related to the terrorist battle in Northern Ireland. One thing struck me on page one: Hambley relates a bombing that actually took place in Manchester. The IRA left a truck of explosives in a shopping center, then called the police to evacuate the area. They wanted to make a statement, not kill innocent people (although they killed many in other circumstances), which made me reflect on how far we have descended, where now killing innocents is normal. Kameel Nasr is author of The Symphony Heist.
-
Loved this book. The back story really provides a lot of historical information as well as mystery into how all of the characters developed. I also can't help but root for Michael and Jessica. Can't wait for book 3.
-
The Troubles is my first foray into Connie Johnson Hambley’s tense world of identity, terrorism, and horses. No spoilers here, other than to say that it takes a deft hand to convey foreign culture and Anglo-Irish politics. Jessica Wyeth is a strong female character without the author having to resort to making her distant and difficult. The pacing is tense, the descriptions lyrical, and the action, realistic. Midway through the story, Jessica participates in a vivid race in which another jockey does something unsportsmanlike, but the real surprise is her discovery when she undresses after the steeplechase. Great writing. The author is an equestrian and her knowledge of horses is as accurate as it is harrowing. I’ve ridden polo ponies (read: fearless and fast thoroughbred-quarter horses), and I’ve experienced speed, chaos, and the terror of nearly losing control of the horse. Hambley captured that in hooves and heartbeats. The greater story arc, however, is a mystery about love, personal history and what to do with knowledge. The Troubles is also an intelligent and moral story about secrets, about illusions and the violence they create. Highly recommend.
-
“A female protagonist to be reckoned with” Ireland is one of a handful of countries in the world where certain elements are inextricably linked: religion and politics; dark, forbidding towns and cities scattered around a desolate, sprawling countryside that is both beautiful and mystical. Oh, and there are horses too. I haven’t read ‘The Charity’, the first book in this trilogy, but the author cleverly jogs the memory of the returning reader and leaves enough clues for newcomers to understand what went before. Jessica Wyeth is an incredibly strong character, who needs to uncover her roots, no matter what it takes. The story and the back story are superbly crafted together, and I had no problem switching between past and present as the whole picture was gradually revealed. Religion and politics are at the forefront of the plot. The 'troubles' that scarred the dark towns and cities play a role too. As for the magical countryside, the local folk and the horses – well, it wouldn't be Ireland without them, would it? I am neither a religious nor a political man, but I found this story to be extremely well written and captivating.
-
Hambley's novel is exciting and intriguing. She deftly weaves together stories across the decades into a final conclusion that satisfies and surprises. Her recreation of "the Troubles" in Ireland and across the Atlantic to the U.S., even across generations, beautifully evokes the history and feeling of these violent times by giving us thinking and feeling characters who ring true. The descriptions of place in Ireland make you want to book a ticket! She knows her setting! And being an "improver of the breed," I especially enjoyed her descriptions of racing and training horses. This is a thrilling read, with a strong female character I liked and rooted for.
-
Jessica Wyeth was new to me in reading The Troubles, even though she is undoubtedly the main character in this trilogy. Having not read The Charity, I was pleased to find that this did not detract from my reading experience of The Troubles. Jessica is a horse trainer by nature, and the beautiful American finds herself thrust into a harsh Irish political world after unwittingly getting involved in the affairs of Magnus Connaught and then heading to Ireland to trace her roots...More at http://equus-blog.com/the-troubles-by...
-
If you’re like me and love well-done mysteries and thrillers with a historical bent, you’ll love Connie Johnson Hambley’s "The Troubles." The second volume in Hambley’s “Jessica Trilogy,” but readable as a stand-alone, "The Troubles" combines a contemporary (romantic!) suspense plot featuring Jessica Wyeth, a compelling, complex heroine with identity issues, with flashbacks to the morally ambiguous entanglements and agendas of the Irish freedom fighters of the 1960s and 1970s who are, quite literally, in Jessica's blood.
-
Hooked on The Charity, I of course, had to read The Troubles. The story continues in a volatile Ireland centered around a woman's quest for her roots. Politics, religion, secrets, and personal conviction are all undercurrents here. The feeling is rich, but the writing is sparse -- cuts to the bone. No excess. No fluff. Strong story with surprising twists.
-
This is the second in a series of three books. I enjoyed it a lot! It delves into Irish politics and history and how they connect to the heroine. The characters are real, warts and all. There are many times you can't stop turning the pages to see what happens next. I also really enjoyed that many different characters' voices are heard.
Lori specializes in sports medicine and orthopedics, and has a particular interest in the pediatric and adolescent-age athlete. This age group has unique risks and unique injuries, and accounts for a significant number of musculoskeletal problems requiring medical diagnosis and treatment.
Lori has developed an understanding of a wide range of athletic injuries, injury prevention strategies, and rehabilitation programs to return the athlete to full athletic participation. In her practice, she focuses on identifying weaknesses, including problems from previous injuries that affect performance.
She assesses and identifies problems and provides a wide range of guidance during the rehabilitation process.
Lori understands the demand of athletics from recreational enthusiasts to youth athletes to professionals, including world class Olympians.
Sports Affiliation
United States Ski/Snowboard Teams
United States Diving Team
Numerous local club teams
Education
BS Physical Therapy University of New Mexico
Athletic Training Certification University of New Mexico
Extensive Postgraduate Coursework in Sports Medicine and Orthopedic Physical Therapy
Associations/Organizations/Societies
American Physical Therapy Association
National Athletic Trainers Association
American College of Sports Medicine
About Lori Mock
In addition to Lori’s special interest and care for the pediatric and adolescent-age athlete, Lori works with the United States Diving Team, and is a physical therapist/athletic trainer for the US Ski and Snowboard teams. Lori is passionate and active in the community. She is a former collegiate athlete in swimming, track and cross country, and is a native of Bellevue, Washington. | http://seattlepediatricsportsmedicine.com/about/directors/lori-mock/ |
“As a Representative of the State of Florida, which has almost 8,500 miles of coastline, I am proud to have voted in favor of H.R.729. This bipartisan legislative package is the start of a comprehensive effort to make our vulnerable coastal communities and economies responsive and resilient to the devastating effects of climate change. As we stand on the precipice of irreparable damage to our ecosystems, it is incumbent on us to respond aggressively and immediately to the threats of climate change.
“As one of our coastal safeguards of biodiversity and against severe oceanic events, coral reefs are currently under severe stress due to numerous threats such as pollution and warming waters. We must empower stakeholders to work to understand and combat the causes of the degradation and loss of coral. These habitats provide a site for biodiversity to flourish and an opportunity for communities to be directly involved in restoration and conservation projects that will bolster economic activity. My amendments ensure a clear message of Congressional intent to encourage partnerships to respond to the deterioration of coral reefs along our coasts. Additionally, I was pleased to join my colleagues, Representatives Charlie Crist (D-FL), Francis Rooney (R-FL), Suzanne Bonamici (D-OR), and Marcy Kaptur (D-OH) in offering Amendment #4 relating to the prevention of and response to algal blooms. These blooms are harmful to humans, wildlife, and impact South Florida’s economy, as multiple sectors rely on a healthy natural environment to flourish.
“Thousands of jobs and livelihoods in Florida are supported by tourism, fishing, and other aspects of the coastal economy. In Florida alone, coral reefs support 70,000 jobs and generate six billion dollars in local sales every year. Passage of this bill is a critical step in combating climate change and protecting the environment and economies of coastal communities in Florida and across our nation for generations to come.”
BACKGROUND:
Hastings Amendment #30 expands the list of eligible activities for the award of Coastal Climate Change Adaptation Project Implementation Grants to include projects to address the immediate and long-term degradation or loss of coral and coral reefs.
Hastings Amendment #31 includes coral reefs as eligible under the National Fish Habitat Conservation Through Partnerships program.
Crist/Rooney/Bonamici/Kaptur/Hastings Amendment #4 clarifies that Section 323, the Climate Change Adaptation Preparedness and Response Program, includes projects to address harmful algal blooms.
Congressman Alcee L. Hastings serves as Vice-Chairman of the House Rules Committee, Chairman of the U.S. Helsinki Commission, and Co-Chairman of the Florida Delegation. | https://alceehastings.house.gov/news/documentsingle.aspx?DocumentID=401282 |
As urban planners iron out a final plan to spark development in the northern section of Miami Beach, the City Commission voted on a few key changes that have been recommended.
At Wednesday’s meeting, commissioners gave initial approval to increasing height for developments along 71st Street and creating a zone where short-term rentals are regulated to encourage hotel-like uses. The process for developing historic districts began, and the commission voted to impose a six-month moratorium on demolition of historic structures while those districts are created.
The height change would increase height limits for a zone around 71st Street from 75 feet to 125 feet, which would allow for 12-story buildings in an area intended to be North Beach’s “town center.” The first four floors would have to be set back at least 10 feet from the property line, and for floors above that, a minimum of 25 feet — a suggestion made by planners to maintain a low-scale experience from the street while providing developers an incentive to build.
Commissioners Micky Steinberg and Kristen Rosen Gonzalez voted against the measure. Rosen Gonzalez said it was premature because the commission has not approved a final draft of the master plan, the guiding document for this neighborhood.
Commissioner John Elizabeth Alemán noted the draft plan recommends allowing taller buildings in the town center along with creating a program where owners of historic properties can sell development rights to builders in town center who would need the additional square footage. Since it will take longer to create such a program, she said she wanted to start now on implementing measures that will take less time.
“I want to point out there’s a lot of steps to creating a [transfer-of-development-rights] program,” she said. “It’s going to require districting. It’s going to require a referendum.”
Commissioners also gave initial approval to allowing short-term rentals in historic buildings that front Harding Avenue. The city hopes to encourage the rehabilitation of these old structures by allowing property owners to rent units for a week at a time. Both preservationists and developers support the measure in hopes it will give owners a more lucrative option than long-term leases and therefore encourage them to invest in their aging properties.
A six-month moratorium on demolition of historic buildings got final approval, as well. At the same time, the city will start finalizing boundaries for two potential local historic districts. The matter will now go before the Historic Preservation Board in September. | https://www.miamiherald.com/news/local/community/miami-dade/miami-beach/article89952807.html |
Country Profile: Legal and Institutional Environment
At national level, the Royal Decree of 1933 recognized water resources as a public good. In 1989, Law 183 established the river basin as the basic unit within which all regulatory actions concerning water resource management, water pollution control and soil protection are to be coordinated for economic and social development and for environmental protection. The law also established major basin authorities and entrusted them with planning responsibilities. In 1994, Law 36 introduced a reform under which municipal utilities were aggregated into Optimal Territorial Areas (OTAs), which are responsible for the management and supply of water services such as wastewater treatment, sanitation and drinking water provision. OTAs also have to draft Optimal Territory Plans (OTPs), which analyse the availability of water resources and plan for their current and future use. Basin authorities have the responsibility of verifying that the OTP is coherent with basin plans and objectives.
Legislative Decree 152 was introduced in 1999 to protect water resources by preventing and reducing pollution and improving water quality. It also required regions to classify water bodies (i.e. surface, ground and coastal waters) and establish limits for the pollution loads that can be discharged into the environment. The Water Protection Plan, which directly complements the basin plan required by Law 183, is the main instrument for implementing the laws enacted by Legislative Decree 152. This decree is considered a forerunner of the EU Water Framework Directive of 2000, as it also aims for a comprehensive action framework for water resources protection by introducing measures for specific uses (e.g. drinking water, bathing water) and for specific sources of pollution, such as agricultural and industrial effluents.
In 2006, a consolidated text on environmental protection, Decree 152, was approved. It includes rules for waste management, strategic environmental assessment and environmental impact assessment procedures, and water resources protection and management, as well as for dealing with environmental damage. The part concerning water resources protection and management formally adopts the contents of the EU Water Framework Directive, for example by creating river district authorities and assigning them the task of producing river basin management plans. As of October 2008, however, Legislative Decree 152 of 1999 was still in effect because the 2006 decree had not yet been implemented. | http://waterwiki.net/index.php?title=Italy |
Purification and properties of thermostable beta-xylosidase from immature stalks of Saccharum officinarum L. (sugar cane).
Thermostable beta-xylosidase was purified from immature sugar cane stalks to an electrophoretically homogeneous form by ammonium sulfate fractionation, ion-exchange chromatography on DEAE-cellulose and P-cellulose columns, heat treatment (70 degrees C, 20 min), and gel filtration on a Sephadex G-100 column. The purification was about 165-fold in specific activity, with a high recovery of 43%. The apparent molecular weight of the enzyme, as determined by gel filtration, was 62,000. In SDS-polyacrylamide gel electrophoresis, the purified enzyme was homogeneous and consisted of only one polypeptide, with a molecular weight of approximately 62,000. The optimum temperature and pH were found to be 75 degrees C and 4.85, respectively. The enzyme was thermostable and especially stable in the presence of D-xylose. It retained full activity after incubation at 70 degrees C for 60 min in the presence of 0.1% D-xylose, and when heated at 75 degrees C in the presence of 1% D-xylose it was stable for up to 30 min. Among the various sugars tested, D-xylose was the most effective stabilizer. The Km and Vmax values were 2.05 mM and 20.4 mumol/mg/min, respectively. The substrate specificity of the purified sugar cane beta-xylosidase was investigated with 16 substrates. The enzyme was unable to hydrolyze larch wood xylan, sugar cane, or any nitrophenyl glycopyranosides other than p- and o-nitrophenyl-beta-D-xylopyranosides, and it hydrolyzed p-nitrophenyl-beta-D-xylopyranoside more rapidly than o-nitrophenyl-beta-D-xylopyranoside. The hydrolysis of p-nitrophenyl-beta-D-xylopyranoside was markedly inhibited by AgNO3, HgCl2, and D-xylose. Competitive inhibition was shown to occur with both HgCl2 and D-xylose; AgNO3 was found to be a non-competitive inhibitor. The enzyme lost 20% of its activity after photo-oxidation in the presence of methylene blue for 8 h. By polyacrylamide disc gel electrophoresis, the enzyme was found to contain carbohydrate.
Upon hydrolysis of the enzyme, the carbohydrate content was found to be 13.5%, the constituent sugars being arabinose and galactose.
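The reported kinetic constants can be dropped directly into the Michaelis-Menten equation to estimate reaction rates at a given substrate concentration. The sketch below is illustrative only, using the Km and Vmax values from the abstract; it is not part of the original study.

```python
# Michaelis-Menten rate estimate using the kinetic constants reported above
# (illustrative only; Km = 2.05 mM and Vmax = 20.4 umol/mg/min are from the abstract).

def michaelis_menten_rate(s_mM, km_mM=2.05, vmax=20.4):
    """Return the reaction rate (umol/mg/min) at substrate concentration s_mM (mM)."""
    return vmax * s_mM / (km_mM + s_mM)

# At [S] = Km the rate is exactly half of Vmax:
half_vmax = michaelis_menten_rate(2.05)  # 10.2 umol/mg/min
```

This makes the meaning of Km concrete: the substrate concentration at which the enzyme runs at half its maximum velocity.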
3. Selection and interview
To make an objective and gender-neutral selection of the most appropriate candidate for a post, it is important to focus on merits and skills.
Information in Swedish on how to carry out gender-neutral recruitment can be found in Hundra möjligheter att rekrytera utan att diskriminera ("A hundred ways to recruit without discriminating").
Handling application documents
There are several ways to sort and select applications in our recruitment system: for example, you can mark applications by status, apply colour markers, or assign points. The user guide for the system provides more information in this area.
Interviews
The purpose of interviews is to evaluate the candidates’ abilities compared with the competencies named in the advertisement. Interviews are most effective if they are standardised and structured. This means that the questions are fixed in advance and the interviewer puts the same questions to all the candidates. All the candidates are then treated equally and we avoid any discrimination.
In common with many other authorities, as well as private companies, we recommend a competency-based method. By selecting interview questions carefully, you can assess how a person is likely to behave and act in different working situations.
The number of interviews required and their length are difficult to specify, since they vary with the post in question and other factors, such as how many qualified applications are received. One recommendation is to hold a first round of short interviews and then arrange a longer interview for fewer candidates.
Supplementary exercises at the time of the interview
Knowledge test
A test of knowledge can be used to establish the candidate’s learned knowledge of data, language or specialist areas, for example. There are three methods of doing this: a written test of knowledge, knowledge-related questions in the interview and simulation exercises. A test of knowledge is easy to administer and can be used in large groups. It is a good method of screening applicants and reducing the number invited to an interview.
Simulation exercises and work samples
Work samples are examples of work that the candidate has carried out on previous occasions. Simulation exercises are designed to mimic realistic work-related situations in a standardised form. Such exercises make it possible to evaluate knowledge and skills areas that are important for the post. For example, you can ask the applicant to write an abstract for a relevant article that you have selected. It is important that the tasks and instructions are standardised to ensure equivalent conditions for all the candidates.
Skills and talents test
The purpose of the skills and talents test is to assess the candidates’ underlying intellectual aptitude and skills. The information you obtain from this test may be useful to see if the person has the ability to process complex problems or learn new things.
Expert opinion for the recruitment of Assistant Professors
When recruiting Assistant Professors, always seek an expert opinion from at least one person. This is done when the application deadline has passed and any supplementary information has been submitted. The person giving the opinion must meet the following requirements:
- Expert in the subject.
- At least a lecturer or the equivalent.
- No connections with KI - must be external.
- If appointing two experts, they may not be working at the same academic institution.
- Both sexes must be represented in the selection. If there are specific reasons for this not being so, the head of department or the recruitment group must provide an explanation and document this in the file.
- Not be biased in respect of any of the applicants.
The Head of Department decides on the appointment of an expert.
If there is only one applicant and it is clear that an expert assessment of skills is not necessary, an exemption can be made from this requirement.
Obtaining references
Obtaining references is a compulsory part of the recruitment process and is to be seen as an important complement to other information. The purpose is to confirm the information that the applicant has stated in the application. It is also a way of following up the competence requirements set out in the advertisement.
You can request references for one or more final candidates. Keep in mind that you must do this before giving an oral or written promise of employment. The referees should preferably be two previous line managers or supervisors.
References for people who have previously worked at KI
If a candidate has previously worked at KI, you should ask for references if they come from another section or department. If the candidate does not provide them, it is important to ask for the most current and relevant references.
Documentation
Be thorough in documenting the selection process in a structured manner, such as by using an assessment matrix.
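As an illustration of what a structured assessment matrix might look like when kept as data rather than on paper, the sketch below scores candidates against weighted criteria. This is not a KI tool; the candidate names, criteria, weights, and scores are all hypothetical.

```python
# Hypothetical weighted assessment matrix for documenting a selection process.
# Criteria, weights, and scores (scale 1-5) are illustrative only.

criteria_weights = {"education": 0.3, "experience": 0.4, "interview": 0.3}

candidates = {
    "Candidate A": {"education": 4, "experience": 3, "interview": 5},
    "Candidate B": {"education": 5, "experience": 4, "interview": 3},
}

def weighted_score(scores, weights):
    """Weighted sum of per-criterion scores."""
    return sum(scores[c] * w for c, w in weights.items())

# Rank candidates from highest to lowest total weighted score.
ranking = sorted(
    candidates,
    key=lambda name: weighted_score(candidates[name], criteria_weights),
    reverse=True,
)
```

Recording the matrix this way makes the basis for the decision explicit and easy to save with the recruitment file.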
Information that KI obtains during the recruitment process should be noted and saved insofar as it has been a determining factor in the employment decision. For example, there may be two candidates with equivalent qualifications, and information about their personal qualities and skills noted during the interview determines the selection. Another example is where a reference is to a candidate’s disadvantage.
It is important to make a habit of carrying out structured recruitment processes with thorough documentation. Personal suitability is often judged on what comes up in the employment interview, which is why this must be well documented. | https://staff.ki.se/3-selection-and-interview |
Mahdi Yeganeh, Department of Materials Science and Engineering, Faculty of Engineering, Shahid Chamran University of Ahvaz, Ahvaz, Iran. E-mail: [email protected]
Institute for Tropical Technology, Vietnam Academy of Science and Technology, Hanoi, Vietnam.
Received: 14-05-2019
Accepted: 24-05-2019
Published: 27-05-2019
Citation: Mahdi Yeganeh, Tuan Anh Nguyen (2019) Methods for Corrosion Protection of Metals at the Nanoscale, Kenk Nanotec Nanosci 5:37-44
Copyright: © 2019, Mahdi Yeganeh et al.
Until recently there was a lack of literature on corrosion and its prevention at the nanoscale. Nanotechnology-based protective methods can offer many advantages over their traditional counterparts, such as early-stage protection, higher corrosion resistance, better corrosion control, and controlled release of corrosion inhibitors. This review explores how metals can be protected at the nanoscale using both nanotechnology and nanomaterials. It covers advanced methods using nano-alloys, nano-inhibitors, nano-coatings, nano-generators, and nano-sensors.
Keywords: Corrosion protection; nano-alloys; nano-inhibitors; nano-coatings; nano-generators; nano-sensors.
In 2016, the global cost of corrosion was estimated at US$2.5 trillion (~3.4% of the global Gross Domestic Product, GDP). For conventional metallic structures, the acceptable thickness loss due to metal degradation is about 100 µm/year. Nowadays, with the development of nanoscience and nanotechnology, small metallic parts (or nanostructured materials) are widely used in many products, such as printed electronics, contacts, interconnections, implants, nano-sensors, display units, ultrathin layers, and drug delivery systems. Their thickness loss should therefore be controlled to acceptable values in the range of 10 to 100 nm.
Traditional methods for the protection of metals include various techniques, such as coatings, inhibitors, electrochemical methods (anodic and cathodic protection), and metallurgical design. In practice, effective corrosion control is achieved by combining two or more of these methods. Highly corrosion-resistant materials are usually associated with high cost, and even then such materials can degrade in severe environments or under stress. The use of cheaper metallic materials along with proper corrosion control strategies is therefore economical for many applications. Nanomaterials and nanotechnology-based protective methods can offer many advantages over their traditional counterparts, such as early-stage protection, higher corrosion resistance, better corrosion control, and controlled release of corrosion inhibitors.
This mini review explores how metals can be protected at the nanoscale using both nanotechnology and nanomaterials (nano-alloys, nano-inhibitors, nano-coatings, nano-generators, and nano-sensors).
Nano-alloys (nanostructured alloys) are constructed from at least two different metallic nanomaterials in order to overcome the limits of single components, improve existing properties, achieve new properties, and/or achieve multiple functionalities for single metallic nanoparticles. Nano-alloys can also refer to the formation of nanocrystalline metal phases within metallic matrices.
It was reported in the literature that nano-alloys can offer many advantages over their conventional counterparts, such as higher corrosion resistance [1-7], high oxidation resistance [8,9], strong ductility enhancement, high hardness, and wear resistance.
To improve the corrosion resistance of Ti–6Al–4V alloy without modifying its chemical composition, Kumar et al. fabricated a surface nanostructure for this alloy using the ultrasonic shot peening (USSP) method. The USSP method was similarly used to fabricate surface-nanocrystallized AISI 409 stainless steel for higher corrosion resistance.
For high-strength aluminum alloys, thermo-mechanical treatment has been used to impart nanocrystalline structures and to control strength and ductility. Another study prepared 3D honeycomb nanostructure-encapsulated magnesium alloys: graphene oxide (GO) was incorporated into AZ61 alloy (at 1 wt.%) to form honeycomb nanostructure-encapsulated Mg alloys with higher corrosion resistance and better mechanical properties than the pure AZ61 alloy. The authors proposed four mechanisms: (i) GO promoted the nucleation of α-Mg grains, reduced their interconnections, and thus refined their sizes; (ii) the high anti-permeability of GO acted as a tight barrier against corrosion; (iii) GO reinforced the corrosion layer on the surface of the Mg matrix; and (iv) GO facilitated the formation of bone-like apatite due to its oxygen-containing groups.
For smart anticorrosion coatings, nano-inhibitors (nano-sized inhibitors) might refer to inhibitor-loaded nanocontainers. These nanocontainers exhibit smart release of their embedded inhibitors in response to external or internal stimuli (such as pH-controlled release, ion-exchange control, redox-responsive release, light-responsive controlled release, and release under mechanical rupture). In addition, the smart nanoshells can prevent direct contact between the inhibitors and both the coating matrices and the adjacent local environments. Nanocontainers can be divided into two categories: polymer nanocontainers (core-shell capsules, gels) and inorganic nanocontainers (porous inorganic materials). Polymer nanocontainers require multi-step fabrication technology and several pieces of equipment. On the other hand, readily available porous inorganic materials can be directly applied as inorganic nanocontainers for self-healing coatings. Inorganic nanocontainers include mesoporous silica or titania, ion-exchange nanoclays, and halloysite nanotubes.
For organic coatings, various inhibitors have been loaded into nanocontainers, such as benzotriazole, mercaptobenzothiazole [17, 18], mercaptobenzimidazole, hydroxyquinoline, dodecylamine, molybdate salts, cerium salts, fluoride salt, and zinc salts.
Nano-inhibitors can quickly respond to local environmental changes associated with corrosion processes, such as local pH, ionic strength, and potential, and release encapsulated corrosion inhibitors to retard the corrosion process. Such a system can: (i) prevent contact between the inhibitor and the coating; (ii) provide controlled release of the inhibitor after the initiation of corrosion; (iii) decrease the amount of inhibitor consumed; (iv) improve the durability of the coating; and (v) allow triggering mechanisms for releasing self-healing agents to be installed. Nanocontainers/microcapsules for self-healing coatings are expected to have the following characteristics: (a) mechanical and chemical stability; (b) compatibility with the coating material; (c) sufficient loading capacity; (d) impermeability of the shell wall; (e) the ability to sense corrosion in its early stages; and (f) the ability to release corrosion inhibitor on demand. The application of containers (capsules) is one of the most promising approaches to developing stimuli-responsive coatings with self-healing/active-protection functionalities.
In the case of metal coatings, we recently reported the use of cerium-loaded nanosilica for the electroplating of a Zn-Ni alloy coating on a steel substrate. In our study, electrochemical measurements suggested that the inhibitor could be released from the nanocontainer during the early electrodeposition of the alloy coating on the steel substrate. The salt spray test indicated that the inhibitor was also released during the corrosion of the coated steel, doubling the protective duration of the coating.
Nanocoatings (nanocomposite/nanostructured coatings) refer to the use of nanomaterials and nanotechnology to enhance coating performance. Nowadays a coating should not only serve as decoration and a physical barrier but also act as a smart multifunctional material.
For anticorrosion, the barrier performance of coatings can in general be significantly improved by the incorporation of nanoparticles, which decrease the porosity and zigzag the diffusion path for deleterious species. At the coating/metal interface, nanoparticles are expected to enhance coating adhesion and reduce the tendency of the coating to blister or delaminate [30-32]. In addition, nanoparticles or nanostructured coatings can act as a barrier against corrosive species.
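The barrier effect of plate-like nanofillers is commonly estimated with the Nielsen tortuosity model, in which the relative permeability is P_c/P_m = (1 − φ)/(1 + (α/2)φ) for filler volume fraction φ and platelet aspect ratio α. The sketch below applies this well-known model with hypothetical filler parameters; it is not taken from the review itself.

```python
# Nielsen tortuosity model for the relative permeability of a coating filled
# with plate-like nanoparticles (illustrative; filler values are hypothetical).

def nielsen_relative_permeability(phi, aspect_ratio):
    """P_composite / P_matrix for volume fraction phi and platelet aspect ratio."""
    return (1.0 - phi) / (1.0 + (aspect_ratio / 2.0) * phi)

# e.g. 2 vol% of platelets with aspect ratio 100 roughly halves the permeability:
rel_p = nielsen_relative_permeability(0.02, 100)  # 0.49
```

The calculation makes the "zigzagging diffusion path" argument quantitative: even a small loading of high-aspect-ratio platelets sharply lengthens the path a corrosive species must travel.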
It was reported in the literature that nanomaterials can be used as nanofillers to reinforce both organic and metallic coatings. Various organic matrices have been used to fabricate polymeric nanocomposite coatings, such as epoxy [30, 31], polyurethane, chitosan, polyethylene glycol [35, 36], polyaniline [37, 38], rubber-modified polybenzoxazine, ethylene tetrafluoroethylene, polyester, polyacrylic, polydimethylsiloxane, polypyrrole, and alkyds. For metallic nanocoatings, two main matrices have been used for anticorrosion: Ni matrices [46-50] and Zn-Ni matrices.
Nano-generators refer to the use of nanosized devices/materials to convert mechanical, thermal, or light energy into electricity. Self-powered cathodic protection using nanogenerators has been reported in the literature [51-55].
Guo et al. reported the use of a disk triboelectric nanogenerator (TENG) to provide self-powered cathodic protection for stainless steels. The TENG’s transferred charge and short-circuit current density were 0.70 C/min and 10.1 mA/m2, respectively, at a rotating speed of 1000 rpm. By coupling the negative pole of this TENG with stainless steels in 0.5 M NaCl solution, cathodic polarization potentials in the range from –320 mV to –5320 mV were obtained. A flexible TENG was similarly used to harvest the mechanical energy of wind and raindrops for the cathodic protection of iron in 0.1 M NaHSO3 + 0.1 M NaNO3 solution. Kinetic wave energy could also be harvested using a flexible TENG. Zhang et al. reported the use of a flexible hybrid nanogenerator (NG) for simultaneously harvesting thermal and mechanical energies; in their hybrid NG, a triboelectric NG was constructed below a pyro/piezoelectric NG. Recently, Cui et al. reported the use of polyaniline nanofibers to construct a wind-driven TENG. Their TENG exhibited high output values: a maximum output voltage of 375 V, a short-circuit current of 248 µA, and 14.5 mW of power under a wind speed of 15 m/s.
In another direction, photogenerated cathodic protection has been reported, achieved by coupling a nano-TiO2 photoanode with a metal electrode under simulated solar irradiation, white-light irradiation, or UV light. Recently, the hybridization of noble-metal (Au, Ag, Pd) nanoparticles with nano-TiO2 particles has emerged as the most promising approach, not only to enhance the visible-light sensitivity of TiO2 but also to reduce the recombination of photogenerated electron–hole pairs.
Nano-sensors (nanomaterial-based sensors) can offer many advantages over their micro counterparts, such as lower power consumption, higher sensitivity, detection of lower analyte concentrations, and a smaller interaction distance between object and sensor. Besides, with the support of artificial intelligence tools (such as fuzzy logic, genetic algorithms, neural networks, and ambient intelligence), sensor systems are nowadays becoming smarter.
For corrosion protection at the nanoscale using smart coatings, the early detection of localized corrosion is very important with regard to economy, safety, and ecology. The most promising approach is to embed smart nano-sensors, which are sensitive to changes in environmental pH, into the protective coating. Recently, Exbrayat et al. developed new nano-sensors for monitoring the early stages of steel corrosion in NaCl solution. Their nano-sensors were constructed using silica nanocapsules with a hydrophobic liquid core containing a fluorescent dye. In the case of steel corrosion, these nano-sensors were able to detect iron ions and low pH values. Another study used phenolphthalein-loaded mesoporous nanoparticles to detect the active cathodic zones in aluminum and magnesium alloys; this pH indicator changes color at high pH values.
To protect metals from corrosion at the nanoscale, various methods can be used effectively, such as nano-alloys, nano-inhibitors, nano-coatings, nano-generators, and nano-sensors. These advanced methods can not only protect metals at the early stage but also provide structural health monitoring and self-healing.
Besides the methods for corrosion protection at the nanoscale, several important studies should be carried out, such as (i) mathematical modeling and simulation of corrosion at the nanoscale, (ii) methods for testing and measuring metal corrosion at the nanoscale, and (iii) nanoscale simulation of cathodic protection.
The authors of this mini review are co-editors of the forthcoming Elsevier book titled “Corrosion protection at the nanoscale” (Published date: May 2020, ISBN: 978-0-12-819359-4).
S. Kumar, K. Chattopadhyay, V. Singh (2016) Effect of surface nanostructuring on corrosion behavior of Ti–6Al–4V alloy, Materials Characterization 121:23-30
J. Lei, W-F. Cui, X. Song, G. Liu, L. Zhou (2014) Effects of surface nanocrystallization on corrosion resistance of β type titanium alloy, Trans Nonferrous Met Soc Chin, 24:2529-2535
H. Garbacz, M. Pisarek, K. J. Kurzydłowski (2007) Corrosion resistance of nanostructured titanium, Biomol Eng 24:559-563.
D. Raducanu, E. Vasilescub, V.D. Cojocarua, I. Cinca, P. Drob et al. (2011) Mechanical and corrosion resistance of a new nanostructured Ti–Zr–Ta–Nb alloy, J Mech Behav Biomed Mater 4:1421-1430.
R. Huang, Y. Han (2013) The effect of SMAT-induced grain refinement and dislocations on the corrosion behavior of Ti–25Nb–3Mo–3Zr–2Sn alloy, Mater Sci Eng C 33:2353-2359.
S. Jelliti, C. Richard, D. Retraint, T. Roland, M. Chemkhi et.al (2013) Effect of surface nanocrystallization on the corrosion behavior of Ti–6Al–4V titanium alloy, Surf. Coat. Technol 224:82-87.
T. Balusamy, S. Kumar, T.S.N.S. Narayanan (2010) Effect of surface nanocrystallization on the corrosion behaviour of AISI 409 stainless steel, Corros Sci 52:3826-3834.
S. Swaminathan, S-M. Hong, M. Kumar, W-S. Jung, D-I Kim et al. (2019) Microstructural evolution and high temperature oxidation characteristics of cold sprayed Ni-20Cr nanostructured alloy coating, Surface and Coatings Technology, 362:333-344.
M. Kumar, H. Singh, N. Singh, S.-M. Hong, I.-S. Choi et al. (2015) Development of nano-crystalline cold sprayed Ni-20Cr coatings for high temperature oxidation resistance, Surf. Coat. Technol 266:122-133.
M.V. Markushev, E.V. Avtokratova, S.V. Krymskiy, O.Sh. Sitdikov (2018) Effect of precipitates on nanostructuring and strengthening of high-strength aluminum alloys under high pressure torsion, Journal of Alloys and Compounds 743:773-779.
I. Sabirov, M.Yu. Murashkin, R.Z. Valiev (2013) Nanostructured aluminium alloys produced by severe plastic deformation: New horizons in development, Materials Science and Engineering 560:1–24.
M. Jafari, M.H. Enayati, M.H. Abbasi, F. Karimzadeh (2010) Compressive and wear behaviors of bulk nanostructured Al2024 alloy, Materials and Design 31: 663–669.
C. Shuai, B. Wang, Y. Yang, S. Peng, C. Gao (2019) 3D honeycomb nanostructure-encapsulated magnesium alloys with superior corrosion resistance and mechanical properties, Composites Part B: Engineering 162: 611-620.
T. A. Nguyen, A. A. Assadi (2018) Smart Nanocontainers: Preparation, Loading/Release Processes and Applications, Kenkyu Journal of Nanotechnology and Nanoscience 4: 1-6.
E. Shchukina, H. Wang, D. G. Shchukin (2019) Nanocontainer-based self-healing coatings: current progress and future perspectives, Chem. Commun 55:3859-3867.
E. Abdullayev, R. Price, D. Shchukin, Y. Lvov (2009) Halloysite Tubes as Nanocontainers for Anticorrosion Coating with Benzotriazole, ACS Appl. Mater. Interfaces 1:1437–1443.
A. Chenan, S. Ramya, R.P. George, U. K. Mudali (2014) 2-Mercaptobenzothiazole-Loaded Hollow Mesoporous Silica-Based Hybrid Coatings for Corrosion Protection of Modified 9Cr-1Mo Ferritic Steel, Corrosion 70: 496-511.
D. Yu, J. Wang, W. Hu, R. Guo (2017) Preparation and controlled release behavior of halloysite/2-mercaptobenzothiazole nanocomposite with calcinedhalloysite as nanocontainer, Materials and Design 129:103-110.
E. Abdullayev, V. Abbasov, A. Tursunbayeva, V. Portnov, H. Ibrahimov et al. (2013) Self-Healing Coatings Based on Halloysite Clay Polymer Composites for Protection of Copper Alloys, ACS Appl. Mater. Interfaces 5:4464–4471.
I. Kartsonakis, I. Daniilidis, G. Kordas (2008) Encapsulation of the corrosion inhibitor 8-hydroxyquinoline into ceria nanocontainers, Journal of Sol-Gel Science and Technology 48: 24–31.
J. M. Falcón, L.M. Otubo, I.V.Aoki (2016) Highly ordered mesoporous silica loaded with dodecylamine for smart anticorrosion coatings, Surface and Coatings Technology 303:319-329.
A. Keyvani, M. Yeganeh, H. Rezaeyan (2017) Application of mesoporous silica nanocontainers as an intelligent host of molybdate corrosion inhibitor embedded in the epoxy coated steel, Progress in Natural Science: Materials International 27: 261-267.
R. Noiville, O. Jaubert, M. Gressier, J. Bonino, P. Taberna (2018) Ce(III) corrosion inhibitor release from and boehmite nanocontainers, Mater. Sci. Eng. B 229:144-154.
M. Yeganeh, M. Saremi (2015) Corrosion inhibition of magnesium using biocompatible Alkyd coatings incorporated by mesoporous silica nanocontainers, Prog. Org. Coatings. 79:25-30.
Y. Liu, J. Xu, J. Zhang, J. Hu (2017) Electrodeposited Silica Film Interlayer for Active Corrosion Protection, Corros. Sci 120:61-74.
D.G. Shchukin, H. Mohwald (2011) Smart nanocontainers as depot media for feedback active coatings, Chem. Commun 47:8730–8739.
X. Liu, W. Li, W. Wang, L. Song, W. Fan et al. (2018) Synthesis and characterization of pH-responsive mesoporous chitosan microspheres loaded with sodium phytate for smart water-based coatings, Mater. Corros 69:736–748.
K.A. Zahidah, S. Kakooei, M.C. Ismail, P. Bothi Raja (2017) Halloysite nanotubes as nanocontainer for smart coating application: A review, Prog. Org. Coatings. 111:175–185.
T. T. H. Nguyen, B. T. Le, T. A. Nguyen, Inhibitor-loaded Silica Nanoparticles for Self-healing Metal Coating, In: “Smart Nanocontainers: Fundamentals and Emerging Applications”.
X. Shi, T. A. Nguyen, Z. Suo, Y. Liu, R. Avci (2009) Effect of Nanoparticles on the Anticorrosion and Mechanical Properties of Epoxy Coating”, Surface and Coatings Technology 204 : 237-245.
T. A. Nguyen, T. H. Nguyen, T. V. Nguyen, H. Thai, X. Shi (2016) Effect of Nanoparticles on the Thermal and Mechanical Properties of Epoxy Coatings, Journal of Nanoscience and Nanotechnology 16 : 9874-9881.
P. Nguyen-Tri, T. A. Nguyen (2018). Nanocomposite Coatings: Preparation, Characterization, Properties, and Applications, International Journal of Corrosion18:19.
L .P. Sung, J. Comer, A. M. Forster, H. Hu, B. Floryancic et al. (2008) Scratch Behavior of Nano- Alumina/Polyurethane Coatings, Journal of Coatings Technology and Research 5:419–430.
L. Al-Naamani, S. Dobretsov, J. Dutta, J. G. Burgess (2017) Chitosan-zinc oxidenanocomposite coatings for the prevention of marine biofouling, Chemosphere, 168: 408-417.
S. Lowe, N. M. O’Brien-Simpson, L. A. Connal (2015) Antibiofouling polymerinterfaces: poly(ethylene glycol) and other promising candidates, Polymer Chemistry 6: 198–212.
W-Y. Wang, J-Y Shi, J-L Wang, Y-L Li, N-N Gao, Z-X Liu (2015) Preparation andcharacterization of PEG-g-MWCNTs/PSf nano-hybrid membranes with hydrophilicity and antifouling properties, RSC Advances 5:84746–84753.
U. Bogdanović, V. Vodnik, M. Mitrić, S. Dimitrijević, S. D. Skapin (2015) Nanomaterial with High Antimicrobial Efficacy -Copper/Polyaniline Nanocomposite, ACS Applied Materials and Interfaces, 7:1955−1966.
M. Shabani-Nooshabadi, S. M. Ghoreishi, Y. Jafari, N. Kashanizadeh (2014) Electrodeposition of polyaniline-montmorrilonite nanocomposite coatings on 316L stainless steel for corrosion prevention, Journal of Polymer Research 21:416.
E. B. Caldona, Al C. C. De Leon, P. G. Thomas, D. F. Naylor, B. B. Pajarito et al. (2017) Superhydrophobic Rubber-Modified Polybenzoxazine/SiO2, Nanocomposite Coating with Anticorrosion, Anti-Ice, and Superoleophilicity Properties, Industrial and Engineering Chemistry Research 56:1485−1497.
R. Yuan, S. Wu, P. Yu, B. Wang, L. Mu et al. (2016) Superamphiphobic and Electroactive Nanocomposite toward Self-Cleaning, Antiwear, and Anticorrosion Coatings, ACS Applied Materials and Interfaces 8:12481−12493.
A. Golgoon, M. Aliofkhazraei, M. Toorani, M. H. Moradi, A. Sabour Rouhaghdam, (2015) Corrosion and Wear Properties of Nanoclay- Polyester Nanocomposite Coatings Fabricated by Electrostatic Method, Procedia Materials Science 11:536 – 541.
S. A. Sajjadi, M. H. Avazkonandeh-Gharavol, S. M. Zebarjad, M.Mohammadtaheri, M. Abbasi et al.(2013) A comparative study on the effect of type of reinforcement on the scratch behavior of a polyacrylic-based nanocomposite coating, Journal of Coatings Technology and Research 10 : 255–261.
D. S. Facio, M. J. Mosquera (2013) Simple Strategy for Producing Superhydrophobic Nanocomposite Coatings In Situ on a Building Substrate, ACS Applied Materials and Interfaces 5:7517−7526.
M. Saremi, M. Yeganeh (2014) Application of mesoporous silica nanocontainers as smart host of corrosion inhibitor in polypyrrole coatings, Corros. Sci 86:159-170.
M. Yeganeh, M. Saremi (2015) Corrosion inhibition of magnesium using biocompatible Alkyd coatings incorporated by mesoporous silica nanocontainers, Prog. Org. Coatings 79:25-30.
T. Rabizadeh, S. R. Allahkaram (2011) Corrosion resistance enhancement of Ni–P electroless coatings by incorporation of nano-SiO2 particles, Materials and Design, 32:133–138.
H. Ashassi-Sorkhabi, H. Aminikia,R. Bagheri (2014) Electroless deposition of Ni-Cu-P coatings containing nano-Al2O3 particles and study of its corrosion protective behaviour in 0.5 M H2SO4, International Journal of Corrosion.
H. M. Jin, S. H. Jiang, L. N. Zhang (2008) Microstructure and corrosion behavior of electroless deposited Ni–P/CeO2 coating, Chinese Chemical Letters 19:1367–1370.
L. Y. Wang, J. P. Tu, W. X. Chen, Y. C. Wang, X. K. Liu et al. (2003) Friction and wear behavior of electroless Ni-based CNTcomposite coatings, Wear 254:1289–1293.
Y. Zhou, H. Zhang, B. Qian (2007) Friction and wear properties of the codeposited Ni-SiC nanocomposite coating, Applied Surface Science 253:8335-8339.
W. Guo, X. Li, M. Chen, L. Xu, L. Dong et al. (2014) Electrochemical Cathodic Protection Powered by Triboelectric Nanogenerator, Adv. Funct.Mater 24:6691–6699.
H. R. Zhu, W. Tang, C. Z. Gao, Y. Han, T. Li et al. (2015) Self-powered metal surface anti-corrosion protection using energy harvested from Rain drops and wind, Nano Energy 14:193–200.
X. J. Zhao, G. Zhu, Y. J. Fan, H. Y. Li, Z. L. Wang (2015) Triboelectric Charging at the Nanostructured Solid/Liquid Interface for Area-Scalable Wave Energy Conversion and Its Use in CorrosionProtection, ACS Nano 9:7671–7677.
H. Zhang, S. Zhang, G. Yao, Z. Huang, Y. Xie et al. (2015) Simultaneously Harvesting Thermal and Mechanical Energies basedon Flexible Hybrid Nanogenerator for Self-Powered Cathodic, ACS Appl. Mater. Interfaces 7:28142−28147.
S. Cui, Y. Zheng, J. Liang, D. Wang (2018) Triboelectrification based on double-layered polyaniline nanofibers for self-powered cathodic protection driven by wind, Nano Research 11:1873–1882.
K. Sun, S. Yan, T. Yu, K. Wang, C. Feng et al. (2019) Highly enhanced Photoelectrochemical cathodic protection performance of the preparation of magnesium oxides modified TiO2 nanotube arrays, Journal of Electroanalytical Chemistry 834:138-144.
Z-Q Lin, Y-K Lai, R-G Hu, J. Li, R-G. Du, C-J. Lin (2010) A highly efficient ZnS/CdS@TiO2 photoelectrode for photogenerated cathodic protection of metals, Electrochimica Acta, 55:8717–8723.
J. Zuo, H. Wu, A. Chen, J. Zhu, M. Ye et al. (2018) Shape-dependent photogenerated cathodic protection by hierarchically nanostructured TiO2 films, Applied Surface Science, 462:142-148.
S. Mohapatra, T. A. Nguyen, P. Nguyen-Tri et al. (2018) Noble Metal-Metal Oxide Hybrid Nanoparticles: Fundamentals and Applications; Elsevier 978:2.
L. Exbrayat, S. Salaluk, M. Uebel, R. Jenjob, B. Rameau et al. (2019) Nanosensors for Monitoring Early Stages of Metallic Corrosion, ACS Appl. Nano Mater 2:812–818.
F. Maia, J. Tedim, A. C. Bastos, M. G. S. Ferreira, Mikhail L.Zheludkevich (2014) Active sensing coating for early detection of corrosion processes, RSC Advances 4:17780-17786. | http://www.kenkyugroup.org/article/29/188/Methods-for-Corrosion-Protection-of-Metals-at-the-Nanoscale |
- Is your Marine Protected Area (MPA) successful? If so, how will you continue to improve it?
- Is it struggling to achieve the goals that you set? And how will you adapt?
- Is this because of biological conditions, community involvement, or complex political decisions?
- What is the likely success of doing what others have done in their MPA? Will it work only under certain conditions?
Limitations of Existing Ideas
No single factor answers these questions. MPAs are common marine management tools used within a complex social, political and ecological context, yet most studies of the factors behind MPA success have not looked across all three fields of study. By gathering such broad-based data, this research will be able to show in scientific terms that there is much more to an MPA than simply coral coverage, fish counts and participation. Decision-makers will then know that carrying out procedures “A” and “B” will increase their probability of success by “X” percent.
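The kind of quantitative guidance described above could, in principle, take the form of a fitted statistical model. The sketch below is purely illustrative — the coefficients, variable names and values are invented for this example, not results of the study — but it shows how a simple logistic model could combine biological, social and political measurements into an estimated probability of success:

```python
import math

# Hypothetical coefficients for illustration only; the real study would
# estimate these from field data gathered at the 30 reserves.
COEF = {
    "intercept": -2.0,
    "coral_cover": 0.04,              # percent live coral cover
    "community_participation": 1.1,   # 0/1: active local involvement ("procedure A")
    "enforcement": 1.5,               # 0/1: rules actively enforced ("procedure B")
}

def success_probability(coral_cover, community_participation, enforcement):
    """Logistic model: estimated probability that an MPA meets its goals."""
    z = (COEF["intercept"]
         + COEF["coral_cover"] * coral_cover
         + COEF["community_participation"] * community_participation
         + COEF["enforcement"] * enforcement)
    return 1.0 / (1.0 + math.exp(-z))

# Same reef (30% coral cover), with and without procedures "A" and "B"
baseline = success_probability(30, 0, 0)
with_a_b = success_probability(30, 1, 1)
print(f"baseline: {baseline:.2f}, with A and B: {with_a_b:.2f}")
```

With these made-up numbers, adding community participation and enforcement raises the estimated probability of success from roughly 0.31 to roughly 0.86 — exactly the kind of concrete comparison the study aims to support with real data.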
Comprehensive and Interdisciplinary Field Research
This research project identifies biological, social and political factors leading to the success of MPAs, specifically coral reef no-take reserves in the wider-Caribbean region. Field research is being conducted at 30 no-take coral reef reserves over three years (2006-2008). Similar work was previously conducted in Southeast Asia (see Pollnac, 2000, available on the CRC website).
Project Timeline
- Field Research 2006-2008
- Regional Workshops 2009-2010
- What will communities and MPA managers gain?
- Involvement — During our research we will rely heavily on the wisdom and perspectives of local citizens. We will spend time interviewing people to hear their views on the marine reserve. We are also available to answer any questions you may have about the project or about MPAs in general.
- Information — During the study we will provide each MPA manager and community with information related to our work. When the analysis is complete, participants will see how their MPA is performing, what factors lead to successful MPAs, how successful MPAs manage and govern their resources, what alternative-income options they have created for themselves, what laws have been passed or may be needed, and more.
- Connectedness — By participating in this study, an MPA will gain access to researchers and other managers through a broader network of 30 MPAs being studied.
- Project Partners — A key component of the project is to involve local professionals and community members in the data gathering and outreach. The project is led by the University of Rhode Island’s Department of Marine Affairs, Department of Natural Resources Science, and the Coastal Resources Center (CRC). The interdisciplinary team is composed of ecologists, marine policy analysts, anthropologists and coastal management practitioners. The U.S. National Science Foundation provides the funding.
For more information about this research, please contact: