Each type of defect produced a unique acoustic signal behavior, as evident in its time and frequency domains. For the straight pinhole, the faster the gas leakage through the defect, the greater the amplitude of the generated AE signals. Scattered AE signals occurred above the critical pressure due to large-scale flow instability. For the stepwise pinhole, the amplitude is higher for a thicker wall and shows a common characteristic: a sudden drop in amplitude over the flow transition. Similarly, the cone-type pinhole shows distinct characteristics for the two dimensions tested; at pressures above 370 kPa an unusual sound is generated for the 1.0 mm wall thickness. Only the straight pinhole shows a decreasing peak frequency after reaching the critical pressure, which is attributed to screech-tone characteristics, while for the stepwise pinhole the peak frequency is generally higher for the 1.0 mm wall thickness than for the 2.0 mm wall thickness. The cone-type orifice has a similar peak-frequency pattern to the stepwise pinhole. Overall, the dissimilarity of the generated AE signals reveals a distinct characteristic for every type of pipe defect, and the effect of the stepwise deviation is clearly evident at various depths. | https://www.jstage.jst.go.jp/article/matertrans/49/10/49_MRA2008175/_article |
“Old diplomacy does not work anymore.” And so begins this film on the life and work of Dr. Vamik Volkan, a psychoanalyst, Nobel Prize nominee and emeritus professor of Psychology at the University of Virginia. The film explores severe societal-political divisions, interference with democratic processes, and human rights issues in many locations around the world, including the United States; it examines the role of leader-follower relationships in such developments. The term "large group identity," used throughout the film, describes hundreds, thousands, or millions of people, most of whom will never see or even know about each other as individuals, but who share many of the same sentiments.
Large group identity has, for many centuries, been the catalyst for continued animosity and unity between groups: Germans and Jews; Palestinians and Jews; Greeks and Turks; racial and gender discrimination, to name a few. The film explores Dr. Volkan’s expertise in analyzing the phenomenon of large group identities and its impact on what the group calls ‘shared trauma’; it examines its relationship to massive traumas at the hands of the “other,” its role in national and international affairs, and the substantial barriers it raises to peaceful co-existence between “enemy” groups. The film also describes what Dr. Volkan calls “large-group psychology” in its own right.
Dr. Volkan points out that throughout human history many world leaders have used words to mobilize the populace toward the best and the worst that people can do. September 11th and, more recently, January 6th, 2021, are clear examples. The experience many Americans, naturalized or native, felt was extreme. How do we deal with large group identities as they relate to traumatic events like Pearl Harbor, the Manzanar War Relocation Center, Auschwitz, September 11th, or George Floyd?
Dr. Volkan further points out that the ‘group violence’ experienced in each of these incidents, rooted in ‘blind trust’, continues to occur even without a long history of defining ‘us’ and ‘them’. Chosen trauma, according to Dr. Volkan, is at the heart of chosen identity and group conflict. Serbia and Bosnia are examples of this continued conflict, relived by many Serbians and Bosnians through an ancient battle between the two nations. The film is both intriguing and troubling; it rips at the heart and asks, “How can we as human beings reconcile if we cannot heal, individually and collectively, from the atrocities we have visited upon one another as groups?”
Awards: Winner, Gradiva Award, National Association for the Advancement of Psychoanalysis; Winner, Sydney Halpern Award, Clio's Psyche Journal; Global Health Film Festival; Montreal Independent Film Festival; New Haven Documentary Film Festival; The North Carolina Film Festival
Published and licensed under the Creative Commons Attribution 4.0 license. Anyone can use these reviews, so long as they comply with the terms of the license. | https://emro.libraries.psu.edu/record/index.php?id=7331 |
Activities provide practical ways to embrace the Great Commission as a core vision of family discipleship, influencing new traditions and strategic lifestyle choices. Published weekly, activities unpack key concepts surrounding how and why we do missions, near and far. We focus on four key areas: Discover, Explore, Connect, and Live.
Why do we do missions? Bible-based activities trace God’s global heart, woven throughout Scripture. Discussion questions focus on the person and eternal purposes of Christ and what this means for us, as disciples of Jesus. Allow God to align your family’s hearts and lives with His desire to be glorified in all peoples.
How do we do missions? Experiential learning activities introduce aspects of missions: God’s movement in history, barriers and bridges to the gospel, unreached peoples, and the role of the Church. Step into your family’s place in history and use your God-given blessings to reach out to others both near and far.
Who is our focus when we do missions? First-person stories provide a window into the lives of boys and girls in unreached people groups. Stories include an interactive cultural activity, suggested recipe, and prayer focus. Help enlarge your children’s world while developing a lifestyle of prayer for their unreached peers around the globe.
How does my family do missions? Right-where-you-live activities encourage new traditions, intentional lifestyle changes, and ministry involvement. It’s not about adding more to your plate, but making the most of everyday routines. Discover and embrace your family’s unique role in God’s kingdom and live it out in intentional ways.
We invite people to things all the time – to the movies, to dinner, to a sporting event, etc. We invite people to what we care about, what we enjoy, what we are passionate about.
Invite a family or individual who does not know Jesus to church. Go out to lunch after church. Engage in simple conversation about the sermon.
Why is it important to invite others in?
Read Luke 19:10. How does God view those who are lost and don’t know Jesus?
Pray for the salvation of the family or individual you invited to attend church with your family.
What did your family learn by inviting others to church? | https://weavefamily.org/thread/inviting-others-in/ |
(2013) 'Green Management: Does it create a competitive advantage within a business in the service sector? - An investigation into 'smaller' service businesses, in comparison to a world-wide organization - The Hard Rock Café', no. 102.
Abstract
This is an exploratory study into 'Environmental Management Systems' within the service industry and whether the execution of these green management practices creates a competitive advantage in business. The project has three main aims, sub-categorized to create a solid research investigation: to identify whether integrating green management systems with other business strategies creates a competitive advantage within a business; to explore whether businesses within the service sector are ethically 'environmentally green' or whether green management is simply an effective cost-cutting measure; and, finally, to explore whether it matters if an organization is 'large' or 'small' in creating sound green management, and what organizations really gain from it. A qualitative research method will be employed, using in-depth interviews with a group of small businesses and further interviews with managers from a large organization, the Hard Rock Café. In addition, during the research stages an extensive review of the literature will be conducted to form the foundation from which all primary research can be built and expanded. The main outcomes of this investigation conclude that environmental management systems have a place in both large and small organizations. It is also concluded that a competitive edge can be created in a business through standardized green practices, but only when they are executed with careful training and planning that all staff adhere to. | https://eresearch.qmu.ac.uk/handle/20.500.12289/7736 |
Understanding the Honeymoon Phase in Type 1 Diabetes
A terrible illness struck me a few months after my diagnosis. The story is fairly complicated, but here are the essential elements: I participated in a Phase II Clinical Trial for a drug that was supposed to make diabetic life easier; the drug made me sick; a flurry of illnesses followed; I ended up with mono.
During those bed-ridden days, my fever climbed to 105 degrees Fahrenheit and my grip on reality became weak. I don’t think I actually hallucinated, but it makes a better story to say that I did. So, I hallucinated. The most interesting part of the experience, however, was the fact that my blood sugars stayed almost-perfect the whole time.
This fact contradicted the diabetic wisdom that states that sickness and fever will drive one’s blood sugars into the stratosphere. I never bolused for what little food I could manage, and still my blood sugar levels hovered around 140. Hardly any insulin entered my body. Was I hallucinating? No: I was in the Honeymoon Phase.
The Honeymoon Phase, or “Honeymoon Period,” which can last for as long as a year, occurs when the body makes a partial recovery from its autoimmune attack. If you want to approach a thorough comprehension of type 1 diabetes (T1D), or if you know anyone who was diagnosed recently, it’s important to understand what’s going on here.
But in order to understand the Honeymoon Phase, we must take a look at T1D’s pathogenesis, or, how the disease develops. Side note: I will give one whole dollar to anyone who can use “pathogenesis” in a Scrabble game.
As you probably know, type 1 diabetes strikes when the body’s immune system decides that the pancreas’s Islets of Langerhans (more particularly, the beta cells within those Islets) are enemies and must be destroyed. Why the immune system decides that the beta cells are enemies that must be destroyed is a complicated question with no clear answer.
Beta cells secrete insulin, and so when they’re slain by T-cells and various other lymphocyte tough guys, the person with diabetes lacks the basic amount of insulin required to go on living. Unless she introduces artificial insulin into her body, the person with diabetes will die.
This sounds dire, but T1D diagnosis does not necessarily mean that the Islets of Langerhans are toast. Most people with diabetes retain pockets of living beta cells for a long time after their diagnosis. At the time of diagnosis, they are suffering terribly. When artificial insulin lowers the diabetic’s blood sugars, her surviving beta cells wake up, and the Honeymoon Phase can begin.
This lowering of blood sugars does two things. First, it puts less stress on the surviving beta cells by removing the need for them to compensate for missing insulin. Second, since glucose is toxic to your cells, and produces an inflammatory response, lowering blood-glucose allows the beta cells to function in an environment that is literally less toxic.
In these ways, artificial insulin causes the body to partially recover. Once the glucose toxicity around the Islets of Langerhans weakens, the beta cells begin to release larger puffs of endogenous insulin again. While this lasts, a person with diabetes is in the Honeymoon Phase.
My Honeymoon Phase spanned a school year during which I played three sports. The shock and attrition of my new diabetic life certainly hit me, but my rejuvenated beta cells softened the blow. The Honeymoon Phase felt like a small mercy. For a few months, the cellular bottle rockets that sent my blood sugars screaming into the four or five hundreds burned slowly, and possessed longer fuses. During this period, diabetes seemed eminently manageable.
For some people with diabetes, the instability of the Honeymoon Phase can become exhausting. As the beta cells make their partial recovery, one’s insulin needs decrease, and the new person with diabetes can find herself chasing low blood sugars. Indeed, some of the most fearsome lows of my life struck early in the Honeymoon Phase.
Certain blog posts articulate a desire for the honeymoon phase to end as quickly as possible so that the person with diabetes can get down to the real business of calculating their more permanent insulin-carb ratios. In the straightforwardly titled article “I Hate My Son’s Honeymoon Period,” Tara Bryant-Gray suggests that dealing with an “erratic, failing pancreas” is too much trouble. She says that if she could “turn off his pancreas,” she would.
It’s clear, then, that the Honeymoon Phase is not an uncomplicated blessing. Still, despite its discomforts, I think it’s better to hope that those last beta cells cling to life for as long as possible. People with diabetes can use all the help they can get, and the rarer one’s high blood sugars, the lower the risk of later complications.
Some researchers are looking for ways to prolong the Honeymoon Phase. Some results have suggested that a gluten-free diet will do the trick (NCBI); others have indicated that Vitamin D3 might help (JAMA). These measures are only momentary stays against T1D’s full onset.
The Honeymoon Phase possesses a name that is misleading at best; at worst, it’s bitterly ironic. Besides the knowledge that one’s self-care is slightly easier right now, there isn’t much to enjoy in it. Still, for those of us who have gone through the Honeymoon Phase, I think our brave, dying beta cells deserve a modicum of gratitude for continuing to do their thing while suffering under a brutal chemical siege. Until we find a cure, diabetes has given us this strange, small mercy. | https://beyondtype1.org/understanding-the-honeymoon-phase-in-type-1-diabetes/ |
Background {#Sec1}
==========
Prediction of gene structures is one of the important tasks in genome sequencing projects, and the prediction of exon-intron boundaries or splice sites (*ss*) is crucial for predicting the structures of genes in eukaryotes. It has been established that accurate prediction of eukaryotic gene structure depends highly on the ability to accurately identify the *ss*. The *ss* at the exon-intron boundaries are called the donor (5′) *ss*, whereas those at the intron-exon boundaries are called the acceptor (3′) *ss*. The donor and acceptor *ss* with consensus GT (at intron-start) and AG (at intron-end) respectively are known as canonical *ss* (GT-AG type; Fig. [1](#Fig1){ref-type="fig"}). Approximately 99 % of the *ss* are canonical GT-AG type *ss* \[[@CR1]\]. As GT and AG are conserved in donor and acceptor *ss* respectively, every GT and AG in a DNA sequence could be a donor or acceptor *ss*; however, each needs to be predicted as either real (true) or pseudo (false) *ss*.

Fig. 1 Pictorial representation of donor and acceptor *ss*. Donor *ss* have di-nucleotides GT at the beginning of the intron and acceptor *ss* have di-nucleotides AG at the end of the intron
During the last decade, several computational methods have been developed for *ss* detection that can be grouped into several categories viz., probabilistic approaches \[[@CR2]\], ANNs \[[@CR3], [@CR4]\], SVM \[[@CR5]--[@CR7]\] and information theory \[[@CR8]\]. These methods seek the consensus patterns and identify the underlying relationships among nucleotides in *ss* region. ANNs and SVMs learn the complex features of neighborhood nucleotides surrounding the consensus di-nucleotides GT/AG by a complex non-linear transformation, whereas the probabilistic models estimate the position specific probabilities of *ss* by computing the likelihood of candidate signal sequences. Roca et al. \[[@CR9]\] identified the di-nucleotide dependencies as one of the main features of donor *ss*. Although the above mentioned methods are complex and computationally intensive, it is evident that position specific signals and nucleotide dependencies are pivotal for *ss* prediction.
In the class of ensemble classifiers, RF \[[@CR10]\] is considered as highly successful one that consists of ensemble of several tree classifiers (Fig. [2](#Fig2){ref-type="fig"}). The wide application of RF for prediction purposes in biology can be seen from literature. Hamby and Hirst \[[@CR11]\] utilized the RF algorithm for prediction of glycosylation sites and found significant increase in accuracy for the prediction of "Thr" and "Asn" glycosylation sites. Jain et al. \[[@CR12]\] assessed the performance of different classifiers (fifteen classifiers from five different categories of pattern recognition algorithms) while trying to solve the protein folding problem. Their experimental results showed that RF achieved better accuracy as compared to the other classifiers. Later on, Dehzangi et al. \[[@CR13]\] demonstrated that the RF classifier enhanced the prediction accuracy as well as reduced the time consumption in predicting the protein folds. In the recent past, Khalilia et al. \[[@CR14]\] used RF to predict disease risk for eight disease categories and found that the RF outperformed SVM, Bagging and Boosting.Fig. 2Flow diagram shows the step involved in prediction using ensemble of tree classifiers. Initially, *B* number of samples were drawn from the original training set and a tree was grown using each sample. The final predictions were made by combining all the classifiers
Keeping the above in view, an attempt has been made to develop a computational approach for donor *ss* prediction. The proposed approach involves sequence encoding procedures and application of the RF methodology. For the given encoding procedures, RF outperformed SVM and ANN in terms of prediction accuracy. RF also achieved higher accuracy than existing approaches when compared on an independent test dataset.
Methods {#Sec2}
=======
Collection and processing of splice site data {#Sec3}
---------------------------------------------
The true and false *ss* sequences of *Homo sapiens* were collected from HS3D \[[@CR15]\] (<http://www.sci.unisannio.it/docenti/rampone/>). The downloaded dataset contains a total of 2796 True donor Splice Sites (TSS) (<http://www.sci.unisannio.it/docenti/rampone/EI_true.zip>) and 90924 False donor Splice Sites (FSS) (<http://www.sci.unisannio.it/docenti/rampone/EI_false_1.zip>). The sequences are 140 bp long, with the conserved GT at the 71^st^ and 72^nd^ positions.
Both introns and exons have important role in the process of pre-mRNA splicing. To be more specific, presence of conserved-ness at both 5′ and 3′ ends of intron as well as exonic splicing enhancers \[[@CR16], [@CR17]\] is vital from splicing point of view. Besides, the length of an exon is also an important property for proper splicing \[[@CR18]\]. It has been shown in vivo that internal deletion of consecutively recognized internal exons that are below \~50 bp may often lead to exon skipping \[[@CR19]\]. As far as the length of an intron is concerned, Zhu et al. \[[@CR20]\] carried out the functional analysis of minimal introns ranging between 50-100 bp and found that minimal introns are conserved in terms of both length and sequence. Hence, the window length of 102 bp \[50 bp at exon-end + (GT + 50 bp) at intron-start\] is considered here (Fig. [3](#Fig3){ref-type="fig"}).Fig. 3Pictorial representation of *ss* motif. The di-nucleotides GT conserved at 51^st^ and 52^nd^ positions in the *ss* motif of length 102 having 50 nucleotides flanking on both sides of GT
Though identical sequences are less likely at a longer window length, we still performed a redundancy check to remove identical TSS sequences from the dataset. To train the model efficiently, the same number of unique FSS (equal to the number of unique TSS) was drawn at random from the 90924 FSS. A sequence similarity search was then performed to analyze the sequence distribution, where each TSS sequence was compared with the remaining TSS sequences as well as with all FSS sequences, and vice versa. The percentage of similarity between any two sequences was computed by assigning a score of 1 for every nucleotide match and 0 for every mismatch, as explained below for two sample sequences.
Sequence 1: ATTCGTCATG
Sequence 2: TCTAGTTACG
Score : 0010110101
Similarity (%)=(5/10)\*100=50
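As a minimal illustration of this similarity calculation (not part of the original HS3D processing; the function name is ours), the per-position match scoring can be written in R as:

```r
# Minimal sketch: percentage similarity between two equal-length sequences,
# scoring 1 for every position-wise nucleotide match and 0 for every mismatch.
seq_similarity <- function(s1, s2) {
  a <- strsplit(s1, "")[[1]]
  b <- strsplit(s2, "")[[1]]
  stopifnot(length(a) == length(b))
  100 * sum(a == b) / length(a)
}

seq_similarity("ATTCGTCATG", "TCTAGTTACG")   # returns 50, matching the example above
```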
Further, we prepared a highly imbalanced dataset consisting of 5 % TSS and 95 % FSS to assess the performance of RF and to compare it with that of SVM and ANN.
Computation of Position Weight Matrix (PWM) {#Sec4}
-------------------------------------------
The sequences of both TSS as well as FSS were position-wise aligned separately, using the di-nucleotide GT as the anchor. This position-wise aligned sequence data was then used to compute the frequencies and probabilities of nucleotides at each position. From a given set S of *N* aligned sequences each of length *l*, *s*~*1*~,..., *s*~*N*~, where *s*~*k*~ = *s*~*k1*~,..., *s*~*kl*~ (*s*~*kj*~ є {A, C, G, T}, *j* = 1, 2, ..., *l*), the PWM was computed as

$$ p_{ij}=\frac{1}{N}\sum_{k=1}^{N} I_i\left(s_{kj}\right), \quad i = A, C, G, T;\; j = 1, 2, \ldots, l, \qquad \text{where } I_i(q)=\begin{cases}1 & \text{if } i=q\\ 0 & \text{otherwise}\end{cases} $$
The PWM with four rows (one for each of A, C, G, and T) and 102 columns, i.e., equal to the length of the sequence (Fig. [4](#Fig4){ref-type="fig"}), was then used for computing the di-nucleotide association scores.

Fig. 4 Graphical representation of the PWM for the TSS. The graph shows the probability distribution of the four nucleotide bases (A, T, G, C) around the splicing junction
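A minimal R sketch of this computation is given below; the function name and data layout are illustrative assumptions rather than the authors' code. It returns a 4 × *l* matrix of position-wise nucleotide probabilities from position-wise aligned sequences.

```r
# Minimal sketch: position weight matrix (PWM) from N aligned sequences of length l.
compute_pwm <- function(seqs) {
  mat <- do.call(rbind, strsplit(seqs, ""))   # N x l character matrix
  # For each position, the relative frequency of A, C, G and T
  apply(mat, 2, function(col) {
    table(factor(col, levels = c("A", "C", "G", "T"))) / length(col)
  })                                          # 4 x l matrix (rows A, C, G, T)
}

pwm <- compute_pwm(c("ATACGTCATG", "TGTAGTTTCG", "ATGCGTACAC",
                     "GACTGTTGCT", "CCTGGTGAGA"))
pwm[, 1]   # e.g. P(A) = 0.4, P(C) = 0.2, P(G) = 0.2, P(T) = 0.2 at the first position
```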
Di-nucleotide association score {#Sec5}
-------------------------------
The adjacent di-nucleotide association scores are computed under the proposed encoding procedures as follows:

- In the *first* procedure (P-1), the association between any two nucleotides occurring at two adjacent positions was computed as the ratio of the observed frequency to the frequency due to random occurrence of the di-nucleotide. For *N* position-wise aligned sequences, the numerator is the observed number of times the di-nucleotide occurs together, whereas the denominator is *N* times 0.0625 (=1/16, the probability of occurrence of any di-nucleotide at random).
- In the *second* procedure (P-2), the association was computed as the ratio of the observed frequency to the expected frequency, where the expected frequency was computed from the PWM under the assumption of independence between the positions.
- In the *third* procedure (P-3), the di-nucleotide association was computed as the absolute value of the relative deviation of the observed frequency from the expected frequency, where the expected frequency was computed as outlined in P-2.
In all the three procedures, the scores were transformed to logarithm scale (base 2) to make them uniform. The computation of the di-nucleotide association scores is explained as follows:
Let *p*~*j*~^*i*^ be the probability of occurrence of the *i*^*th*^ nucleotide at the *j*^*th*^ position, *p*~*j′*~^*i′*^ be the probability of occurrence of the *i′*^*th*^ nucleotide at the *j′*^*th*^ position, and *n*~*j,j′*~^*i,i′*^ be the frequency of occurrence of the *i*^*th*^ and *i′*^*th*^ nucleotides together at the *j*^*th*^ and *j′*^*th*^ positions respectively. Then the di-nucleotide association scores between the *i*^*th*^ and *i′*^*th*^ nucleotides occurring at the *j*^*th*^ and *j′*^*th*^ positions under P-1, P-2 and P-3 were computed using the following formulae

$$ \mathrm{(P\text{-}1)} \quad s_{(j,j')}^{(i,i')}=\log_2\left(\frac{n_{j,j'}^{i,i'}}{N \times 0.0625}\right) $$

$$ \mathrm{(P\text{-}2)} \quad s_{(j,j')}^{(i,i')}=\log_2\left(\frac{n_{j,j'}^{i,i'}}{N \times p_j^i \times p_{j'}^{i'}}\right) $$

$$ \mathrm{(P\text{-}3)} \quad s_{(j,j')}^{(i,i')}=\log_2\left|\frac{n_{j,j'}^{i,i'} - N \times p_j^i \times p_{j'}^{i'}}{N \times p_j^i \times p_{j'}^{i'}}\right| $$

respectively, where *s*~*(j,j′)*~^*(i,i′)*^ is the association score, *N* is the total number of sequence motifs in the data set, *i*, *i′* є {A, T, G, C}, *j* = 1, 2, ..., (window length − 1) and *j′* = *j* + 1. A pseudo count of 0.001 was added to avoid the logarithm of zero in the frequency. For a clear understanding, computation of di-nucleotide association scores is given below, through an example with 5 different sequences.
Positions : 0123456789
Sequence 1: ATAC **GT** CATG
Sequence 2: TGTA **GT** TTCG
Sequence 3: ATGC **GT** ACAC
Sequence 4: GACT **GT** TGCT
Sequence 5: CCTG **GT** GAGA
Using these sequences, the random, observed and expected (under independence) frequencies for di-nucleotide AT occurring at positions 0, 1 respectively are computed as follows:
Observed frequency = Number of times AT occurs together at 0^th^ and 1^st^ positions respectively
=2
Random frequency = Number of sequences × Probability of occurrence of any of the 16 combinations of di-nucleotides at random (=1/16)
=5\*0.0625
=0.3125
Expected frequency under independence = Number of sequences × Probability of independent occurrence of A at 0^th^ position × Probability of independent occurrence of T at 1^st^ position
=5\*(2/5)\*(2/5)
=0.8
In a similar way, the frequencies can be calculated for the other possible di-nucleotide combinations (AA, AG, AC, TA, ..., CC) occurring at all possible adjacent positions. The association scores for the three procedures P-1, P-2 and P-3 can then be calculated from the formulae given above as

$$ \mathrm{P\text{-}1} \to s_{(0,1)}^{(A,T)}=\log_2\left(\frac{\mathrm{Observed}}{\mathrm{Random}}\right)=\log_2\left(\frac{2}{0.3125}\right), $$

$$ \mathrm{P\text{-}2} \to s_{(0,1)}^{(A,T)}=\log_2\left(\frac{\mathrm{Observed}}{\mathrm{Expected}}\right)=\log_2\left(\frac{2}{0.8}\right) $$

and

$$ \mathrm{P\text{-}3} \to s_{(0,1)}^{(A,T)}=\log_2\left|\frac{\mathrm{Observed}-\mathrm{Expected}}{\mathrm{Expected}}\right|=\log_2\left|\frac{2-0.8}{0.8}\right| $$
Construction of scoring matrices {#Sec6}
--------------------------------
For a sequence *l* bp long, *l*-*1* combinations of two adjacent positions are possible. Again, in each combination, 16 pairs of nucleotides (AA, AT, ..., CG, CC) are possible. Thus, scoring matrices, each of order 16 × (*l*-*1*), were constructed using the di-nucleotide association scores under all three procedures. Figure [5](#Fig5){ref-type="fig"} shows a sample scoring matrix for the 102 bp window length.

Fig. 5 A sample scoring matrix. There are 101 columns for the different combinations of positions and 16 rows for all possible combinations of nucleotides. This scoring matrix was prepared under all three encoding procedures
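The sketch below illustrates how such a 16 × (*l*-*1*) scoring matrix could be assembled in R under procedure P-2 (observed versus expected under positional independence); the function names are assumptions, and the pseudo count is applied to both numerator and denominator here purely as a defensive simplification.

```r
# Minimal sketch: 16 x (l-1) di-nucleotide scoring matrix under procedure P-2.
# 'pwm' is a 4 x l probability matrix such as the one from compute_pwm() above.
dinuc_scores_p2 <- function(seqs, pwm, pseudo = 0.001) {
  mat   <- do.call(rbind, strsplit(seqs, ""))
  N     <- nrow(mat); l <- ncol(mat)
  bases <- c("A", "C", "G", "T")
  pairs <- as.vector(outer(bases, bases, paste0))            # the 16 di-nucleotides
  scores <- matrix(NA, nrow = 16, ncol = l - 1, dimnames = list(pairs, NULL))
  for (j in 1:(l - 1)) {
    obs <- table(factor(paste0(mat[, j], mat[, j + 1]), levels = pairs))
    for (p in pairs) {
      i1 <- substr(p, 1, 1); i2 <- substr(p, 2, 2)
      expected <- N * pwm[i1, j] * pwm[i2, j + 1]             # expected under independence
      scores[p, j] <- log2((obs[p] + pseudo) / (expected + pseudo))
    }
  }
  scores
}
```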
Ten-fold cross-validation and encoding of splice site sequence {#Sec7}
--------------------------------------------------------------
TSS and FSS sequence datasets were separately divided into 10 random non-overlapping sets for the purpose of 10-fold cross validation. In each fold, one set of TSS and one set of FSS together were used as the test dataset and the remaining 9 sets of TSS and 9 sets of FSS together were used as the training dataset. This was done because the 10-fold cross validation procedure is a standard experimental technique for determining how well a classifier performs on a test data set \[[@CR21]\]. For each training set, scoring matrices for TSS and FSS were constructed independently and the difference matrix was then derived by subtracting the TSS scoring matrix from the FSS scoring matrix. The training and test datasets were then encoded by passing the corresponding sequences through the difference matrix (Fig. [6](#Fig6){ref-type="fig"}), where each sequence was transformed into a vector of scores of length *l*-*1*. A detailed explanation of the encoding of the sequence is provided in Additional file [1](#MOESM1){ref-type="media"}.

Fig. 6 Diagrammatic representation of the preparation of encoded training and test datasets from TSS and FSS sequences. For each training set in the 10-fold cross validation procedure, TSS and FSS scoring matrices were constructed, followed by the construction of the difference scoring matrix. The encoded training and test sets were obtained after passing the *ss* sequence data of the training and test sets through the difference matrix
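A minimal sketch of this encoding step is shown below, assuming a difference scoring matrix has already been built on the training fold; object and function names are illustrative, not taken from the authors' implementation.

```r
# Minimal sketch: encode one sequence as a vector of l-1 scores by looking up each
# adjacent di-nucleotide (rows) at its position pair (columns) in the difference matrix.
encode_sequence <- function(seq, diff_matrix) {
  s <- strsplit(seq, "")[[1]]
  sapply(seq_len(length(s) - 1), function(j) diff_matrix[paste0(s[j], s[j + 1]), j])
}

# diff_matrix <- fss_scores - tss_scores   # TSS matrix subtracted from the FSS matrix (see text)
# train_x <- t(sapply(train_seqs, encode_sequence, diff_matrix = diff_matrix))
# test_x  <- t(sapply(test_seqs,  encode_sequence, diff_matrix = diff_matrix))
```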
Classification using Random Forest {#Sec8}
----------------------------------
Let *L*(**y**, **x**) be the learning dataset, where **x** is a matrix of *n* rows (observations) and *p* columns (variables) and **y** is the response variable that takes values from *K* classes. The RF then consists of an ensemble of *B* tree classifiers, where each classifier is constructed upon a bootstrap sample of the learning dataset. Each classifier of the RF votes each test instance into one of the pre-defined *K* classes, and each test instance is finally predicted by the label of the winning class. As the individual trees are constructed upon a bootstrap sample, on average 36.8 % $\left[\left(1-\frac{1}{n}\right)^n \approx \frac{1}{e},\; e \approx 2.718\right]$ of the instances do not play any role in the construction of each tree; these are called Out Of Bag (OOB) instances. The OOB instances are the source of data used in RF for estimating the prediction error (Fig. [7](#Fig7){ref-type="fig"}). RF is computationally very efficient and offers high prediction accuracy with little sensitivity to noisy data. For classification of TSS and FSS, RF was chosen over the other classifiers because it is a non-parametric method (i.e., it makes no assumption about the probability distribution of the dataset) and because of its ability to handle large data sets. For more details about RF, one can refer to \[[@CR10]\].

Fig. 7 Diagrammatic representation of the steps involved in RF methodology
Tuning of parameters {#Sec9}
--------------------
There are two important parameters in RF viz., number of variables to choose at each node for splitting (*mtry*) and number of trees to grow in the forest (*ntree*). Tuning of these parameters is required to achieve maximum prediction accuracy.
*mtry*
A small value of *mtry* produces less correlated trees, which consequently results in lower variance of prediction. Though an integer number of predictors per node of about log~2~(*p*) + 1 has been recommended by Breiman \[[@CR10]\], this may not always provide the best possible result. Thus, the RF model was executed with different *mtry* values, i.e., *1*, √*p*, *20* %\**p*, *30* %\**p*, *50* %\**p* and *p*, to find the optimum one. The parameterization that generated the lowest and most stable OOB Error Rate (OOB-ER) was chosen as the optimal *mtry*.
*ntree*
Often the number of trees that must be grown in the forest to obtain a stable OOB-ER is not known in advance. Moreover, the OOB-ER depends strongly on the type of data, with stronger predictors leading to quicker convergence. Therefore, the RF was grown with different numbers of trees, and the number of trees after which the error rate stabilized was taken as the optimal *ntree*.
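As a hedged illustration of how this tuning and training could be run with the "randomForest" R package (the object names train_x, train_y and test_x are placeholders, not the authors' script):

```r
library(randomForest)

# Minimal sketch: try a grid of mtry values and inspect the OOB error, then train
# the final forest with the optimum values reported in the paper (mtry = 50, ntree = 200).
p <- ncol(train_x)
for (m in unique(round(c(1, sqrt(p), 0.2 * p, 0.3 * p, 0.5 * p, p)))) {
  fit <- randomForest(x = train_x, y = as.factor(train_y), mtry = m, ntree = 500)
  cat("mtry =", m, " OOB error =", fit$err.rate[nrow(fit$err.rate), "OOB"], "\n")
}

rf_model <- randomForest(x = train_x, y = as.factor(train_y), mtry = 50, ntree = 200)
pred <- predict(rf_model, test_x)                    # predicted class labels
prob <- predict(rf_model, test_x, type = "prob")     # class membership probabilities
```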
Margin function {#Sec10}
---------------
Margin function is one of the important features of RF that measures the extent to which the average vote for the right class exceeds the average vote for any other class. Let (**x**, y) be the training set having *n* observations, where each vector of attributes (**x**) is labeled with class y~j~ (where j = 1, 2 for the binary class), i.e., the correct class is denoted by y (either y~1~ or y~2~). Further, let *prob* (y~j~) be the probability of class y~j~; then the margin function of the labeled observation (**x**, y) is given by

$$ m(\mathbf{x}, y) = prob\left[h(\mathbf{x}) = y\right] - \max_{\substack{j = 1, 2 \\ y_j \neq y}} prob\left[h(\mathbf{x}) = y_j\right] $$
If *m* (**x**, y) \> 0, then *h* (**x**) correctly classifies y, where *h* (**x**) denotes a classifier that predicts the label y for an observation **x**. The value of the margin function always lies between −1 and 1.
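For a trained forest, the margin of each test instance can be approximated from the predicted class-membership probabilities; the small helper below is an assumed illustration (not the authors' code), using a probability matrix such as the `prob` object returned by `predict(..., type = "prob")` above.

```r
# Minimal sketch: margin m(x, y) = P(true class) - max P(other class) for each test
# instance, computed from a matrix of RF class probabilities (columns named by class).
margin_fn <- function(prob, true_class) {
  true_class <- as.character(true_class)
  sapply(seq_len(nrow(prob)), function(i) {
    prob[i, true_class[i]] - max(prob[i, colnames(prob) != true_class[i]])
  })
}

# m <- margin_fn(prob, test_y)   # m >= 0 marks correctly classified instances
```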
Implementation {#Sec11}
--------------
The RF code was originally written in Fortran by Breiman and Cutler and also included as a package "randomForest" in R \[[@CR22]\] and this package was implemented (for execution of RF model) on a windows server (82/GHz and 32 GB memory). Run time was dependent on data size and *mtry*, ranging from 1 second per tree to over 10 seconds per tree.
Performance metrics {#Sec12}
-------------------
The performance metrics viz., Sensitivity or True Positive Rate (TPR), Specificity or True Negative Rate (TNR), F-measure, Weighted Accuracy (WA), G-mean and Matthews Correlation Coefficient (MCC), all of which are functions of the confusion matrix, were used to evaluate the performance of RF. The confusion matrix contains information about the actual and predicted classes. Figure [8](#Fig8){ref-type="fig"} shows the confusion matrix for a binary classifier, where TP is the number of TSS being predicted as TSS, TN is the number of FSS being predicted as FSS, FN is the number of TSS being incorrectly predicted as FSS and FP is the number of FSS being incorrectly predicted as TSS. The different performance metrics are defined as follows:

Fig. 8 Diagrammatic representation of the confusion matrix. TP, FP, TN and FN are the numbers of true positives, false positives, true negatives and false negatives respectively

$$ \mathrm{TPR\ or\ Sensitivity}=\frac{TP}{TP+FN} \quad (\text{same as recall for binary classification}) $$

$$ \mathrm{TNR\ or\ Specificity}=\frac{TN}{TN+FP} $$

$$ \mathrm{F\text{-}measure}^{(\alpha)}=\frac{(1+\alpha)\times recall\times precision}{(\alpha\times recall)+precision} \quad (\alpha\ \text{takes discrete values}), \qquad \mathrm{Precision}=\frac{TP}{TP+FP} $$

$$ \mathrm{F\text{-}measure}^{(\beta)}=\frac{(1+\beta^2)\times recall\times precision}{(\beta^2\times recall)+precision} \quad (\beta\ \text{takes discrete values}) $$

$$ \mathrm{WA}=\frac{1}{2}\left(\frac{TP}{TP+FN}+\frac{TN}{TN+FP}\right) $$

$$ \mathrm{G\text{-}mean}=\sqrt{\left(\frac{TP}{TP+FN}\right)\left(\frac{TN}{TN+FP}\right)} $$

$$ \mathrm{MCC}=\frac{(TP\times TN)-(FP\times FN)}{\sqrt{(TP+FP)(TP+FN)(TN+FP)(TN+FN)}} $$
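A minimal sketch computing these metrics from the four confusion-matrix counts is given below; the helper name and the example counts are illustrative only.

```r
# Minimal sketch: performance metrics from the confusion-matrix counts (positive class = TSS).
perf_metrics <- function(TP, FP, TN, FN) {
  recall      <- TP / (TP + FN)          # TPR / sensitivity
  specificity <- TN / (TN + FP)          # TNR
  precision   <- TP / (TP + FP)
  list(
    TPR    = recall,
    TNR    = specificity,
    F1     = 2 * recall * precision / (recall + precision),   # F-measure with alpha = 1
    WA     = (recall + specificity) / 2,
    G_mean = sqrt(recall * specificity),
    MCC    = (TP * TN - FP * FN) /
             sqrt((TP + FP) * (TP + FN) * (TN + FP) * (TN + FN))
  )
}

str(perf_metrics(TP = 260, FP = 20, TN = 255, FN = 13))   # illustrative counts only
```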
Comparison of RF with SVM and ANN {#Sec13}
---------------------------------
The performances of RF was compared with that of SVM \[[@CR23]\], ANN \[[@CR24]\] using the same dataset that was used to analyze the performance of RF. The "e1071" \[[@CR25]\] and "RSNNS" \[[@CR26]\] packages of R software were used for implementing the SVM and ANN respectively. The SVM and ANN classifiers were chosen for comparison because these two techniques have been most commonly used for prediction purpose in the field of bioinformatics. In classification, SVM separates the different classes of data by a hyper-plane. In terms of classification performance, the optimal hyper-plane is the one that separates the classes with maximum margin (a clear gap as wide as possible). The sample observations on the margins are called the support vectors that carry all the relevant information for classification \[[@CR23]\]. ANNs are non-linear mapping structures based on the function of neural networks in the human brain. They are powerful tools for modeling especially when the underlying relationship is unknown. ANNs can identify and learn correlated patterns between input datasets and corresponding target values. After training, ANNs can be used to predict the outcome of new independent input data \[[@CR24]\]. The SVM model was trained with the radial basis function (gamma = 0.01) as kernel. In the ANN model, multilayer perceptron was used with "Randomize_Weights" as initialization function, "Std_Backpropagation" as learning function and "Act_Logistic" as hidden activation function. The 10-fold cross validation was performed for SVM and ANN, similar to RF. All the three techniques were then compared in terms of performance metrics. Also, the MCC values of RF, SVM and ANN were plotted to analyze the consistency over 10 folds of the cross validation. A similar kind of comparison between RF, SVM and ANN was also made using the imbalanced dataset. To handle the imbalanced data, one additional parameter i.e., *cutoff* was used in RF, where 90 % cutoff was assigned to the major class (class having larger number of observations) i.e., FSS and 10 % to the minor class (class having lesser number of observations) i.e., TSS, based on the degree of imbalanced-ness in the dataset. Similarly, one additional parameter i.e., *class.weights* was used in SVM model, and the weights used were 19 and 1 for TSS and FSS respectively (keeping in view the proportion of TSS and FSS in the dataset). However, no parameter to handle imbalanced-ness was found in "RSNNS" package, therefore the same model of ANN was trained using imbalanced data.
In the case of the imbalanced test dataset, the performance metrics were computed by assigning weight w~1~ to TP & FN and weight w~2~ to FP & TN. Here, $w_1 = n^{FSS}/\left(n^{TSS}+n^{FSS}\right)$ and $w_2 = n^{TSS}/\left(n^{TSS}+n^{FSS}\right)$, where *n*^*TSS*^ is the number of TSS and *n*^*FSS*^ is the number of FSS in the test dataset. Further, the Mann Whitney *U* test at the 5 % level of significance was performed to evaluate the difference among the prediction accuracies of RF, SVM and ANN, by using the *stats* package of R-software.
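The Mann Whitney *U* test itself is available in base R as `wilcox.test`; a hedged example on fold-wise accuracy values (placeholder numbers, not the study's results) would be:

```r
# Minimal sketch: Mann-Whitney U test comparing fold-wise MCC values of two classifiers.
mcc_rf  <- c(0.88, 0.87, 0.89, 0.88, 0.86, 0.88, 0.87, 0.89, 0.88, 0.87)   # placeholders
mcc_svm <- c(0.86, 0.85, 0.83, 0.86, 0.84, 0.87, 0.82, 0.85, 0.84, 0.86)   # placeholders
wilcox.test(mcc_rf, mcc_svm)   # p-value < 0.05 indicates a significant difference at the 5 % level
```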
Comparison with other prediction tools {#Sec14}
--------------------------------------
The performance of the proposed approach was also compared with other splice site prediction tools such as MaxEntScan (<http://genes.mit.edu/burgelab/maxent/Xmaxentscan_scoreseq.html>), SpliceView (<http://bioinfo4.itb.cnr.it/~webgene/wwwspliceview_ex.html>) and NNSplice (<http://www.fruitfly.org/seq_tools/splice.html>) using an independent test set. Besides, three more methods viz., Maximal Dependency Decomposition (MDD), Markov Model of 1^st^ order (MM1) and Weighted Matrix Method (WMM) given under MaxEntScan were also used for comparison. The independent test set was prepared using two different genes (AF102137.1 and M63962.1) downloaded from Genbank (<http://www.ncbi.nlm.nih.gov/genbank/>) randomly. Comparison among the approaches was made using the values of performance metrics.
Web server {#Sec15}
----------
A web server for the prediction of donor splice sites was developed using HTML and PHP. The developed R-code was executed in the background using PHP script upon the submission of sequences in FASTA format. The web page was designed to facilitate the user for a sequence input, selection of species (human) and encoding procedures. In the server, the model has been trained with human splice site data and the user has to supply only the test sequence (s) of his/her interest to predict the donor splice sites.
Results {#Sec16}
=======
Analysis of sequence distribution {#Sec17}
---------------------------------
The removal of the identical sequences from the TSS dataset resulted in 2775 unique TSS. A graphical representation of degree of similarity within TSS, within FSS and between TSS & FSS is shown in Fig. [9](#Fig9){ref-type="fig"}. It is observed that each sequence of TSS is 40 % (blue) similar with an average of 56 (0.02\*2775) sequences of TSS (Fig. [9a](#Fig9){ref-type="fig"}) and 15 (0.005\*2775) sequences of FSS (Fig. [9c](#Fig9){ref-type="fig"}). On the other hand, each sequence of FSS is 40 % (blue) similar with an average of 17 (0.006\*2775) sequences of FSS (Fig. [9b](#Fig9){ref-type="fig"}) and 17 sequences of TSS (Fig. [9d](#Fig9){ref-type="fig"}). Similarly, each sequence of TSS is 30 % (green) similar with an average of 1276 (0.46\*2775) sequences of TSS (Fig. [9a](#Fig9){ref-type="fig"}) and 805 (0.29\*2775) sequences of FSS (Fig. [9c](#Fig9){ref-type="fig"}). On the other hand, each sequence of FSS is 30 % (green) similar with an average of 832 (0.30\*2775) sequences of FSS (Fig. [9b](#Fig9){ref-type="fig"}) and 805 (0.29\*2775) sequences of TSS (Fig. [9d](#Fig9){ref-type="fig"}). Further, more than 90 % of sequences of entire dataset (both TSS and FSS) are observed to be at least 20 % similar with each other.Fig. 9Graphical representation of sequence distribution in the dataset. **a**. Similarities of each sequence of TSS with rest of the sequences in TSS. **b**. Similarities of each sequence of FSS with rest of the sequences in FSS. **c**. Similarities of each sequence of TSS with all the sequences in FSS. **d**. Similarities of each sequence of FSS with all the sequences in TSS. X-axis represents the sequence entries and Y-axis represents fraction of similar sequences
Optimum values of parameters {#Sec18}
----------------------------
The graph of OOB error against *ntree* (500) for different *mtry* values is shown in Fig. [10](#Fig10){ref-type="fig"}. From Fig. [10](#Fig10){ref-type="fig"} it is observed that the OOB errors stabilize after 200 trees for all *mtry* values, and that this holds in all three encoding procedures. Besides, it is observed that the OOB error is minimum at *mtry* = 50, irrespective of the encoding procedure. Hence, the optimum values of *mtry* and *ntree* were determined as 50 and 200 respectively, and the final prediction was made with these optimum parameter values.

Fig. 10 Graphical representation of OOB-ER with different *mtry* and *ntree*. Graphs **a**, **b** and **c** represent the trend of error rates with varying *mtry* for the three encoding procedures, P-1, P-2 and P-3. The OOB-ER was minimum for *mtry* = *50* and stabilized with 200 trees (*ntree*)
Performance analysis of random forest {#Sec19}
-------------------------------------
The plot of the margin function for all 10 folds of the cross-validation under P-1 is shown in Fig. [11](#Fig11){ref-type="fig"}. The points in red in Fig. [11](#Fig11){ref-type="fig"} indicate the predicted FSS and the points in blue the predicted TSS. The same for P-2 and P-3 are provided in Additional files [2](#MOESM2){ref-type="media"} and [3](#MOESM3){ref-type="media"} respectively. The instances having values of the margin function greater than or equal to zero are correctly predicted test instances and those less than zero are incorrectly predicted test instances. From Fig. [11](#Fig11){ref-type="fig"} it is observed that most of the values of the margin function are above zero for both TSS and FSS, i.e., the RF achieved high prediction accuracy. Similar results are also found in the case of P-2 and P-3. Further, the performance of RF measured in terms of the performance metrics is presented in Table [1](#Tab1){ref-type="table"}. From Table [1](#Tab1){ref-type="table"} it is seen that the number of correctly predicted TSS is higher than that of FSS in all three encoding approaches. Also, it is observed that the average prediction accuracies are \~93 %, \~91 % and \~92 % under P-1, P-2 and P-3 respectively.

Fig. 11 Graphical representation of margin functions for ten-fold cross-validation. Red points denote FSS and blue points denote TSS. Instances with a margin value greater than or equal to zero are correctly predicted test instances; instances with a value below zero are incorrectly predicted

Table 1 Performance metrics of RF for the three encoding procedures

| Approach | TPR | TNR | F (β = 2) | F (α = 1) | WA | G-mean | MCC |
|---|---|---|---|---|---|---|---|
| P-1 | 0.9539 | 0.9236 | 0.9313 | 0.9397 | 0.9387 | 0.9386 | 0.8782 |
| P-2 | 0.9373 | 0.9009 | 0.9108 | 0.9205 | 0.9191 | 0.9189 | 0.8383 |
| P-3 | 0.9398 | 0.9077 | 0.9163 | 0.9250 | 0.9238 | 0.9236 | 0.8483 |
Comparative analysis among different classifiers {#Sec20}
------------------------------------------------
The performance metrics of RF, SVM and ANN under P-1, P-2 and P-3 for both balanced and imbalanced training datasets are presented in Table [2](#Tab2){ref-type="table"}. The plots of MCC for RF, SVM and ANN are shown in Fig. [12](#Fig12){ref-type="fig"}. From Table [2](#Tab2){ref-type="table"} it is observed that the prediction accuracies of RF are higher than that of SVM and ANN under both balanced and imbalanced situations. It is further observed that for the balanced training dataset the performances of RF and SVM are not significantly different in P-1 but significantly different in P-2 and P-3 (Table [3](#Tab3){ref-type="table"}). However, the RF performed significantly better than that of ANN in all the three procedures. Furthermore, all the three classifiers achieved higher accuracies in case of balanced training dataset as compared to the imbalanced training dataset. Besides, RF achieved consistent accuracy over the 10 folds under all the three encoding procedures (Fig. [12](#Fig12){ref-type="fig"}). On the other hand, SVM and ANN could not achieve consistent accuracies in P-2 and P-3 over different folds of the cross validation.Table 2Comparison of the performance of RF, SVM and ANN under all encoding procedures with both balanced and imbalanced training datasetEPMLABalanced DatasetImbalanced DatasetTPRTNRF (α = 1)F (β = 2)G-meanWAMCCTPRTNRF (α = 1)F (β = 2)G-meanWAMCCP-1RF0.9540.9240.9400.9320.9390.9390.8780.8420.8960.8650.8800.8690.8690.739(0.014)(0.014)(0.010)(0.012)(0.010)(0.010)(0.020)(0.064)(0.018)(0.032)(0.049)(0.030)(0.028)(0.043)SVM0.9350.9300.9330.9310.9330.9330.8650.1040.9820.1850.3490.3200.5430.180(0.015)(0.017)(0.015)(0.015)(0.016)(0.016)(0.031)(0.027)(0.018)(0.041)(0.031)(0.040)(0.013)(0.061)ANN0.8920.8960.8940.8950.8940.8940.7870.0320.9880.0610.1360.1780.5100.068(0.064)(0.080)(0.063)(0.062)(0.066)(0.065)(0.129)(0.026)(0.010)(0.046)(0.032)(0.065)(0.011)(0.055)P-2RF0.9370.9010.9200.9110.9190.9190.8380.8830.8940.8880.8910.8880.8890.777(0.020)(0.016)(0.016)(0.018)(0.016)(0.016)(0.033)(0.038)()(0.025)(0.030)(0.019)(0.019)(0.035)SVM0.7200.7730.7400.7520.7460.7460.4930.3210.9890.4820.6890.5630.6550.417(0.029)(0.106)(0.041)(0.026)(0.049)(0.051)(0.108)(0.051)(0.008)(0.055)(0.053)(0.043)(0.025)(0.048)ANN0.7750.7770.7760.7760.7760.7760.5520.3050.9780.4600.6610.5460.6420.383(0.067)(0.037)(0.049)(0.059)(0.048)(0.045)(0.090)(0.049)(0.014)(0.052)(0.051)(0.043)(0.022)(0.046)P-3RF0.9400.9080.9250.9170.9240.9240.8480.8790.8910.8840.8880.8850.8850.770(0.017)(0.015)(0.012)(0.014)(0.012)(0.012)(0.246)(0.044)(0.022)(0.029)(0.034)(0.022)(0.022)(0.042)SVM0.7890.8070.7960.8000.7980.7980.5950.2490.9880.3950.6090.4960.6190.352(0.044)(0.068)(0.042)(0.042)(0.046)(0.045)(0.090)(0.052)(0.008)(0.062)(0.056)(0.049)(0.026)(0.055)ANN0.7570.7600.7580.7590.7590.7590.5170.2720.9790.4210.6260.5160.6260.355(0.118)(0.099)(0.067)(0.098)(0.057)(0.048)(0.086)(0.066)(0.009)(0.081)(0.072)(0.064)(0.034)(0.076)The values inside the brackets () are the standard errors*EP* encoding procedure, *MLA* machine learning approachesFig. 12Graphical representation of MCC of the RF, SVM and ANN. 
MCC is consistent in all the three procedures for the RF over the tenfold cross-validationTable 3*P*-values of Mann Whitney U statistic for testing the significant difference between RF-SVM, RF-ANN and SVM-ANN at 5 % level of significance for all the performance measures under both balanced and imbalanced training datasets\$DEPMLATPRTNRF (α = 1)F (β = 2)G-meanWAMCCBalancedP-1RF-SVM0.020080.424730.325570.041170.405500.383780.32557RF-ANN0.003560.732860.018540.005200.023230.025690.01854SVM-ANN0.016960.305850.075260.035460.075260.096050.10512P-2RF-SVM0.000180.025640.000010.000010.000010.000010.00001RF-ANN0.000180.000180.000010.000010.000010.000010.00001SVM-ANN0.058690.545050.165490.063010.143140.140170.24745P-3RF-SVM0.000180.000660.000010.000010.000010.000010.00001RF-ANN0.000180.001290.000010.000010.000010.000010.00001SVM-ANN0.939610.161500.105120.684210.075260.069540.07526ImbalancedP-1RF-SVM0.000180.000180.000010.000010.000010.000010.00001RF-ANN0.000170.000170.000010.000010.000010.000010.00001SVM-ANN0.000480.467780.000080.000080.000080.000080.00021P-2RF-SVM0.000180.000170.000010.000010.000010.000010.00001RF-ANN0.000180.000180.000010.000010.000010.000010.00001SVM-ANN0.648540.051300.393050.528850.481250.325570.05243P-3RF-SVM0.000180.000180.000010.000010.000010.000010.00001RF-ANN0.000180.000180.000010.000010.000010.000010.00001SVM-ANN0.494830.052100.630530.578740.578740.739360.91180\$*D* type of dataset (balanced or imbalanced), *EP* encoding procedures (P-1, P-2, P-3), *MLA* machine learning approaches
Though RF performed better than SVM and ANN, its performance was further compared with that of Bagging \[[@CR27]\], Boosting \[[@CR28]\], Logistic regression \[[@CR29]\], *k*NN \[[@CR30]\] and Naïve Bayes \[[@CR29]\] classifiers to assess its superiority. The functions *bagging ()*, *ada ()*, *glm ()*, *knn ()* and *NaiveBayes ()* available in R-packages "class" \[[@CR31]\], "klaR" \[[@CR32]\], "stats" \[[@CR33]\], "ada" \[[@CR34]\] and "ipred" \[[@CR35]\] were used to implement Bagging, Boosting, Logistic regression, *k*NN and Naïve Bayes classifiers respectively. The values of performance metrics, their standard errors and P-values for testing the significance are provided in Table [4](#Tab4){ref-type="table"}, Table [5](#Tab5){ref-type="table"} and Table [6](#Tab6){ref-type="table"} respectively. It is observed that the performance of RF is not significantly different from that of Bagging and Boosting in case of balanced dataset (Table [6](#Tab6){ref-type="table"}). On the contrary, RF outperformed both Bagging and Boosting classifiers under imbalanced situation (Table [6](#Tab6){ref-type="table"}). It is also noticed that the classification accuracies (performance metrics) of RF are significantly higher than that of Logistic regression, *k*NN and Naïve Bayes classifiers under both the balanced and imbalanced situations (Table [4](#Tab4){ref-type="table"}, Table [6](#Tab6){ref-type="table"}).Table 4Performance metrics of Bagging, Boosting, Logistic regression, *k*NN and Naïve Bayes classifiers for all the three encoding procedures under both balanced and imbalanced situationsEPMDBalancedImbalancedTPRTNRF (α = 1)F (β = 2)G-meanWAMCCTPRTNRF (α = 1)F (β = 2)G-meanWAMCCP-1BG0.9440.9210.9340.9400.9330.9330.8660.0690.9960.1270.0840.2580.5330.172BS0.9520.9190.9360.9450.9350.9350.8720.0410.8980.0790.0510.1920.4700.129LG0.8950.8820.8890.8920.8880.8880.7770.0080.9930.0160.0100.0870.5020.012NB0.8350.8360.8360.8350.8340.8350.6740.2020.8380.2970.2310.4090.5200.067KN0.8560.8400.8470.8520.8470.8480.6970.0480.8540.0870.0580.2000.4510.012P-2BG0.9270.8820.9070.9190.9040.9040.8100.1120.9920.1980.1350.3300.5520.216BS0.9340.9010.9180.9280.9170.9170.8350.0900.9960.1630.1090.2960.5430.200LG0.7420.7340.7390.7410.7370.7380.4780.1120.9810.1980.1350.3300.5470.190NB0.7720.7580.7670.7700.7640.7650.5320.1590.8840.2500.1860.3730.5210.073KN0.8130.6780.7600.7900.7390.7460.5020.1730.9810.2900.2070.4120.5770.262P-3BG0.9240.9040.9150.9200.9140.9140.8280.1250.9910.2200.1510.3510.5580.230BS0.9410.8980.9220.9330.9200.9200.8410.0950.9950.1710.1150.3050.5450.205LG0.8130.7750.7980.8070.7930.7940.5890.1200.9830.2100.1440.3420.5510.202NB0.7840.7610.7750.7800.7710.7720.5470.1780.9450.2890.2100.4100.5620.196KN0.7950.7000.7560.7780.7420.7470.5010.0650.9890.1200.0800.2470.5270.142*MD* methods, *EP* encoding procedures, *BG* bagging, *BS* boosting, *LG* logistic regression, *NB* naïve bayes, *KN K* nearest neighborTable 5Standard errors of different performance metrics for Bagging, Boosting, Logistic regression, *k*NN and Naïve Bayes classifiers for all the three encoding procedures under both balanced and imbalanced situationsEPMDBalancedImbalancedTPRTNRF (α = 1)F (β = 2)G-meanWAMCCTPRTNRF (α = 1)F (β = 
2), G-mean, WA, MCC

[Table continued from above: the flattened numeric entries for BG, BS, LG, NB and KN under P-1, P-2 and P-3 are omitted here because the column layout cannot be recovered from this excerpt.]

*MD* methods, *EP* encoding procedures, *BG* bagging, *BS* boosting, *LG* logistic regression, *NB* naïve bayes, *KN* K nearest neighbor

Table 6 *P*-values of the Mann Whitney statistic to test the significant difference between the performance of RF with that of Bagging, Boosting, Logistic regression, *k*NN and Naïve Bayes classifiers in all the three encoding procedures under both balanced and imbalanced situations

| D | EP | CLs | TPR | TNR | F (α = 1) | F (β = 2) | G-mean | WA | MCC |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Balanced | P-1 | RF-BG | 0.343066 | 0.676435 | 0.272856 | 0.212122 | 0.185711 | 0.240436 | 0.272856 |
| | | RF-BS | 0.820063 | 0.939006 | 0.314999 | 0.795936 | 0.314999 | 0.383598 | 0.314999 |
| | | RF-LG | 0.001672 | 0.053092 | 0.002879 | 0.000725 | 0.005196 | 0.009082 | 0.005196 |
| | | RF-NB | 0.000242 | 0.002796 | 1.08E-05 | 1.08E-05 | 1.08E-05 | 0.000181 | 1.08E-05 |
| | | RF-KN | 0.053182 | 0.087051 | 0.028806 | 0.063013 | 0.035463 | 0.025581 | 0.028806 |
| | P-2 | RF-BG | 0.41319 | 0.594314 | 0.356232 | 0.356232 | 0.277512 | 0.315378 | 0.356232 |
| | | RF-BS | 0.837765 | 0.367844 | 0.968239 | 0.968239 | 0.842105 | 0.743537 | 0.842105 |
| | | RF-LG | 0.000275 | 0.000439 | 2.17E-05 | 2.17E-05 | 2.17E-05 | 2.17E-05 | 2.17E-05 |
| | | RF-NB | 0.000275 | 0.004216 | 2.17E-05 | 2.17E-05 | 2.17E-05 | 2.17E-05 | 2.17E-05 |
| | | RF-KN | 0.000376 | 0.000273 | 2.17E-05 | 2.17E-05 | 2.17E-05 | 0.000278 | 2.17E-05 |
| | P-3 | RF-BG | 0.171672 | 0.879378 | 0.14314 | 0.14314 | 0.165494 | 0.15062 | 0.14314 |
| | | RF-BS | 0.494174 | 0.381613 | 0.970512 | 0.528849 | 0.853428 | 0.820197 | 0.911797 |
| | | RF-LG | 0.000181 | 0.000181 | 1.08E-05 | 1.08E-05 | 1.08E-05 | 1.08E-05 | 1.08E-05 |
| | | RF-NB | 0.000182 | 0.000279 | 1.08E-05 | 1.08E-05 | 1.08E-05 | 0.000182 | 1.08E-05 |
| | | RF-KN | 0.000182 | 0.000181 | 1.08E-05 | 1.08E-05 | 1.08E-05 | 0.000182 | 1.08E-05 |
| Imbalanced | P-1 | RF-BG | 0.000269 | 0.000251 | 2.17E-05 | 2.17E-05 | 2.17E-05 | 0.000278 | 2.17E-05 |
| | | RF-BS | 0.000176 | 0.002555 | 0.000181 | 0.000181 | 0.000181 | 0.000178 | 0.000181 |
| | | RF-LG | 0.000263 | 0.000268 | 2.17E-05 | 2.17E-05 | 2.17E-05 | 0.000263 | 2.17E-05 |
| | | RF-NB | 0.000271 | 0.177338 | 2.17E-05 | 2.17E-05 | 2.17E-05 | 2.17E-05 | 2.17E-05 |
| | | RF-KN | 0.000175 | 0.025526 | 1.08E-05 | 1.08E-05 | 1.08E-05 | 0.000182 | 0.000179 |
| | P-2 | RF-BG | 0.000179 | 0.000173 | 0.000182 | 0.000182 | 0.000182 | 0.000181 | 0.000182 |
| | | RF-BS | 0.000181 | 0.000158 | 1.08E-05 | 1.08E-05 | 1.08E-05 | 0.000181 | 1.08E-05 |
| | | RF-LG | 0.00018 | 0.000178 | 1.08E-05 | 1.08E-05 | 1.08E-05 | 0.00018 | 1.08E-05 |
| | | RF-NB | 0.000182 | 0.733634 | 1.08E-05 | 1.08E-05 | 1.08E-05 | 0.000182 | 1.08E-05 |
| | | RF-KN | 0.000181 | 0.000174 | 1.08E-05 | 1.08E-05 | 1.08E-05 | 0.000182 | 1.08E-05 |
| | P-3 | RF-BG | 0.000176 | 0.000168 | 0.000182 | 0.000182 | 0.000182 | 0.000181 | 0.000182 |
| | | RF-BS | 0.000179 | 0.000149 | 1.08E-05 | 1.08E-05 | 1.08E-05 | 1.08E-05 | 1.08E-05 |
| | | RF-LG | 0.000179 | 0.000177 | 1.08E-05 | 1.08E-05 | 1.08E-05 | 0.000182 | 1.08E-05 |
| | | RF-NB | 0.000177 | 0.009082 | 1.08E-05 | 1.08E-05 | 1.08E-05 | 1.08E-05 | 1.08E-05 |
| | | RF-KN | 0.00018 | 0.00018 | 1.08E-05 | 1.08E-05 | 1.08E-05 | 0.000178 | 1.08E-05 |

*D* data type, *RF* random forest, *CLs* classifiers, *BG* bagging, *BS* boosting, *LG* logistic regression, *NB* naïve bayes, *KN* K nearest neighbor
Comparison of RF with other prediction tools {#Sec21}
--------------------------------------------
The performance metrics of the proposed approach and the considered existing methods, computed using an independent test dataset, are presented in Table [7](#Tab7){ref-type="table"}. It is seen that none of the existing approaches achieved a TPR above 90 %. On the other hand, all the other approaches (except SpliceView) achieved higher values of TNR than the proposed approach (Table [7](#Tab7){ref-type="table"}). Furthermore, the proposed approach achieved more than 90 % accuracy in terms of the different performance metrics (Table [7](#Tab7){ref-type="table"}).

Table 7 The performance metrics for the proposed approach and other published tools using the independent test set

| Methods | TPR | TNR | F (α = 1) | F (β = 2) | G-mean | WA | MCC |
| --- | --- | --- | --- | --- | --- | --- | --- |
| MaxEntScan | 0.627 | 0.990 | 0.766 | 0.884 | 0.788 | 0.809 | 0.662 |
| MDD | 0.651 | 0.991 | 0.784 | 0.894 | 0.803 | 0.821 | 0.682 |
| MM1 | 0.581 | 0.988 | 0.730 | 0.862 | 0.758 | 0.785 | 0.623 |
| WMM | 0.415 | 0.986 | 0.581 | 0.764 | 0.640 | 0.701 | 0.488 |
| NNSplice | 0.733 | 0.954 | 0.824 | 0.891 | 0.837 | 0.844 | 0.705 |
| SpliceView | 0.888 | 0.879 | 0.884 | 0.882 | 0.883 | 0.884 | 0.767 |
| Proposed | 0.977 | 0.922 | 0.951 | 0.936 | 0.949 | 0.949 | 0.900 |
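To make the metrics in Table 7 concrete, the sketch below computes them from raw confusion-matrix counts. This is an illustrative example only: the counts are hypothetical, and the standard textbook formulas assumed here for F (α = 1), F (β = 2) and WA (F1, F-beta with β = 2, and the mean of TPR and TNR) may differ in detail from the exact definitions given in the Methods section of the paper.

```java
// Illustrative computation of the performance metrics reported in Table 7
// from confusion-matrix counts. Counts and formula details are assumptions.
public class SpliceMetrics {
    public static void main(String[] args) {
        // Hypothetical counts on an independent test set (not the paper's data).
        double tp = 90, fn = 10, tn = 85, fp = 15;

        double tpr = tp / (tp + fn);                 // sensitivity (TPR)
        double tnr = tn / (tn + fp);                 // specificity (TNR)
        double precision = tp / (tp + fp);

        double f1 = 2 * precision * tpr / (precision + tpr);        // F (alpha = 1), assumed to be F1
        double beta = 2.0;
        double fBeta = (1 + beta * beta) * precision * tpr
                / (beta * beta * precision + tpr);                  // F (beta = 2), assumed F-beta

        double gMean = Math.sqrt(tpr * tnr);                        // G-mean
        double wa = (tpr + tnr) / 2.0;                              // WA, assumed mean of TPR and TNR

        double mcc = (tp * tn - fp * fn)                            // Matthews correlation coefficient
                / Math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn));

        System.out.printf("TPR=%.3f TNR=%.3f F1=%.3f F2=%.3f G-mean=%.3f WA=%.3f MCC=%.3f%n",
                tpr, tnr, f1, fBeta, gMean, wa, mcc);
    }
}
```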
Online prediction server-MaLDoSS {#Sec22}
--------------------------------
The home page of the web server is shown in Fig. [13](#Fig13){ref-type="fig"} and the result page after execution of an example dataset is shown in Fig. [14](#Fig14){ref-type="fig"}. Separate help pages are provided as links in the main menu, with a complete description of the encoding procedures and the input–output format. The gene name, the start and end coordinates of the splice sites, the splice site sequences and the probability of each splice site being predicted as TSS are given in the result page. Since RF was observed to be superior to the other classifiers, only RF is included in the server for prediction. The prediction server is freely available at <http://cabgrid.res.in:8080/maldoss>.
Fig. 13 Snapshot of the server page
Fig. 14 Snapshot of the result page after execution of an example dataset with all the three encoding procedures
Discussion {#Sec23}
==========
Many statistical methods, such as Back Propagation Neural Networks (BPNN), Markov models, SVM etc., have been used for the prediction of *ss* in the past. Rajapakse and CaH \[[@CR4]\] introduced a complex *ss* prediction system (a combination of a 2^nd^ order Markov model and BPNN) that achieved higher prediction accuracy than Genesplicer \[[@CR36]\], but at the same time it required longer sequence motifs to train the model. Moreover, BPNN is computationally expensive, and the cost may increase further with the inclusion of the 2^nd^ order Markov model. Baten et al. \[[@CR6]\] reported improved prediction accuracy by using SVM with the Salzberg kernel \[[@CR37]\], where the empirical estimates of the conditional positional probabilities of the nucleotides around the splicing junctions are used as input to the SVM. Sonnenburg et al. \[[@CR7]\] employed the *weighted degree* kernel method in SVM for the genome-wide recognition of *ss*, which is based on a complex nonlinear transformation. In the present study we applied RF as it is computationally feasible and user friendly. Furthermore, fine tuning of the RF parameters helps in improving the prediction accuracy.
Most of the existing methods capture position-specific signals as well as nucleotide dependencies for the prediction of *ss*. In particular, Roca et al. \[[@CR9]\] explained the pivotal role played by nucleotide dependencies in the prediction of donor *ss*. Therefore, the proposed encoding procedures are based on di-nucleotide dependencies. Further, earlier *ss* prediction methods such as the Weighted Matrix Method (WMM) \[[@CR38]\], the Weighted Array Model (WAM) \[[@CR39]\] and Maximal Dependency Decomposition (MDD) \[[@CR40]\] considered only the TSS, and not the FSS, to train the prediction model. However, FSS are also necessary \[[@CR41]\], and hence RF was trained with both TSS and FSS datasets.
There is a chance that the same *ss* motifs occur in both TSS and FSS when the length of the *ss* motif is small. To avoid such ambiguity, instead of a 9 bp long motif (3 from exons and 6 from introns) \[[@CR42]\], a longer *ss* motif (102 bp long) was considered in this study. Further, duplicate sequences were removed and a similarity search was performed to analyze the sequence distribution. It was found that each TSS sequence is 40 % similar to an average of 0.5 % of the FSS sequences (Fig. [9c](#Fig9){ref-type="fig"}) and each FSS sequence is 40 % similar to an average of 0.6 % of the TSS sequences (Fig. [9d](#Fig9){ref-type="fig"}). Also, the sequences were found to be similar (20 % similarity) within the classes (Fig. [9a-b](#Fig9){ref-type="fig"}). This implies the presence of within-class dissimilarities and between-class similarities in the dataset. Thus the performance of the proposed approach is not overestimated.
The procedure followed in the present study includes the WMM and WAM procedures to some extent in finding the weights for the first-order dependencies. Besides, the difference matrix captured the difference in the variability pattern existing among the adjacent di-nucleotides in the TSS and FSS. Li et al. \[[@CR43]\] have also used the di-nucleotide frequency difference as one of the positional features in the prediction of *ss*.
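As a rough illustration of the di-nucleotide ideas discussed above, the sketch below builds a position-wise di-nucleotide frequency table for a set of TSS and FSS sequences, takes their difference, and then uses the difference values as positional features for a query sequence. It is only a simplified stand-in for the paper's encoding procedures P-1, P-2 and P-3 (whose exact weighting schemes are defined in the Methods section), and the toy 8 bp motifs are hypothetical; the real motifs are 102 bp long.

```java
import java.util.List;

// Simplified illustration of a position-specific di-nucleotide difference matrix
// (TSS frequency minus FSS frequency). Not the paper's exact P-1/P-2/P-3 encodings.
public class DinucleotideDifference {

    static final String BASES = "ACGT";

    // freq[pos][d] = relative frequency of di-nucleotide d at adjacent positions (pos, pos+1)
    static double[][] dinucleotideFrequencies(List<String> seqs) {
        int len = seqs.get(0).length();          // e.g. 102 bp in the paper
        double[][] freq = new double[len - 1][16];
        for (String s : seqs) {
            for (int pos = 0; pos < len - 1; pos++) {
                int d = BASES.indexOf(s.charAt(pos)) * 4 + BASES.indexOf(s.charAt(pos + 1));
                freq[pos][d] += 1.0 / seqs.size();
            }
        }
        return freq;
    }

    public static void main(String[] args) {
        // Hypothetical 8-bp toy motifs (illustration only).
        List<String> tss = List.of("AAGGTAAG", "CAGGTGAG", "AAGGTAAG");
        List<String> fss = List.of("AAGATCCG", "CTGGCGAG", "TAGGTCAG");

        double[][] diff = dinucleotideFrequencies(tss);
        double[][] fssFreq = dinucleotideFrequencies(fss);
        for (int pos = 0; pos < diff.length; pos++)
            for (int d = 0; d < 16; d++)
                diff[pos][d] -= fssFreq[pos][d];   // difference matrix, as in the discussion above

        // A sequence can then be encoded by looking up, at every adjacent position pair,
        // the difference value of the di-nucleotide it actually contains.
        String query = "AAGGTAAG";
        double[] features = new double[query.length() - 1];
        for (int pos = 0; pos < features.length; pos++) {
            int d = BASES.indexOf(query.charAt(pos)) * 4 + BASES.indexOf(query.charAt(pos + 1));
            features[pos] = diff[pos][d];
        }
        System.out.println(java.util.Arrays.toString(features));
    }
}
```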
The optimum value of *mtry* was observed to be 50, determined on the basis of the lowest and stable OOB-ER. This may be due to the fact that each position is represented twice (except the 1^st^ and 102^nd^ positions) in the set of 101 variables (1_2, 2_3, 3_4, ..., 100_101, 101_102). Further, the OOB-ER was found to stabilize with a small number of trees (*ntree* = 200), which may be due to the existence of di-nucleotide dependencies in the *ss* motifs that leads to high correlation between the trees grown in the forest. However, we set *ntree* to 1000 as (i) the computational time was not much higher than that required for *ntree* = 200, and (ii) the prediction accuracy may increase with an increase in the number of trees. Hence, the final RF model was executed with *mtry* = 50 and *ntree* = 1000. The classification accuracy of the RF model was measured in terms of the margin function over 10 folds of cross-validation. It was found that the probability of instances being predicted as the correct class over the wrong class is very high (Fig. [11](#Fig11){ref-type="fig"}), which is a strong indication that the proposed approach with the RF classifier is well defined and capable of capturing the variability pattern in the dataset.
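The margin check described above can be made concrete with a few lines of code. For a two-class RF, the margin of an instance is the fraction of trees voting for its true class minus the fraction voting for the other class, so a positive margin means a correct classification. The vote fractions below are hypothetical and are not values from the paper.

```java
// Illustrative margin computation for a two-class random forest:
// margin = P(votes for the true class) - P(votes for the other class).
// A positive margin means the instance is classified correctly.
public class MarginCheck {
    public static void main(String[] args) {
        // Hypothetical fraction of trees voting "TSS" for five test instances,
        // together with their true labels.
        double[] tssVoteFraction = {0.94, 0.81, 0.40, 0.73, 0.12};
        boolean[] isTrueSpliceSite = {true, true, true, false, false};

        for (int i = 0; i < tssVoteFraction.length; i++) {
            double pTrueClass = isTrueSpliceSite[i] ? tssVoteFraction[i] : 1.0 - tssVoteFraction[i];
            double margin = pTrueClass - (1.0 - pTrueClass);
            System.out.printf("instance %d: margin = %+.2f (%s)%n",
                    i, margin, margin > 0 ? "correct" : "wrong");
        }
    }
}
```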
As far as the encoding procedures are concerned, the analysis shows that the dependencies between adjacent nucleotide positions in the *ss* positively influenced the prediction accuracy. Out of the three procedures (P-1, P-2 and P-3), P-1 was found to be superior with respect to the different performance metrics. Though the accuracy of P-2 was observed to be lower than that of P-3, the difference is negligible. Therefore, it is inferred that the ratio of the observed frequency to the random frequency of a di-nucleotide is an important feature for discriminating TSS from FSS.
Among the classifiers, RF achieved above 91 % accuracy in all the three encoding procedures, while SVM showed a similar trend only for P-1 and ANN could not achieve above 90 % under any of the encoding procedures (Table [2](#Tab2){ref-type="table"}). The MCC values of RF, SVM and ANN also supported the above finding. Though SVM and ANN performed well in P-1, their consistency was relatively low in P-2 and P-3 over the 10 folds of cross validation. On the other hand, RF was found to be more consistent in all the three encoding procedures. Further, the prediction accuracy of RF was not significantly different (*P*-value \> 0.05) from that of SVM, whereas it was significantly higher (*P*-value \< 0.05) than that of ANN in the balanced training set under P-1. However, under P-2 and P-3, RF performed significantly better than SVM and ANN in both balanced and imbalanced situations (Table [3](#Tab3){ref-type="table"}). Further, the performance of SVM was not significantly different from that of ANN in P-1, whereas it was significantly different in P-2 and P-3 under both balanced and imbalanced datasets (Table [3](#Tab3){ref-type="table"}). In the case of the imbalanced dataset, RF performed better than SVM and ANN in terms of sensitivity and overall accuracy (Table [2](#Tab2){ref-type="table"}). Besides, the performances of SVM and ANN were biased towards the major class (FSS), whereas RF performed in an unbiased way. Furthermore, all the classifiers performed better under P-2 and P-3 as compared to P-1 in the case of the imbalanced dataset (Table [2](#Tab2){ref-type="table"}).
Besides SVM and ANN, the performances of the Bagging, Boosting, Logistic regression, *k*NN and Naïve Bayes classifiers were also compared with that of RF. Though the performance of RF was found to be at par with that of Bagging and Boosting in the balanced situation, it was significantly higher than that of the Logistic regression, *k*NN and Naïve Bayes classifiers. However, in the case of the imbalanced dataset, RF performed significantly better than Bagging, Boosting, Logistic regression, *k*NN and Naïve Bayes in all the three encoding procedures. Thus, RF can be considered a better classifier than the others.
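The pairwise comparisons behind Table 6 use the Mann-Whitney (Wilcoxon rank-sum) statistic on fold-wise performance values. The sketch below is a bare-bones version of that test for two made-up samples of 10-fold cross-validation accuracies; it uses the normal approximation without a tie correction, which is a simplification of what a statistics package would report.

```java
// Bare-bones Mann-Whitney U test (normal approximation, no tie correction)
// for comparing two classifiers' fold-wise accuracies, as in Table 6.
public class MannWhitney {

    static double uStatistic(double[] a, double[] b) {
        double u = 0;
        for (double x : a)
            for (double y : b)
                u += x > y ? 1 : (x == y ? 0.5 : 0);   // count pairs where a beats b
        return u;
    }

    public static void main(String[] args) {
        // Hypothetical 10-fold CV accuracies for RF and another classifier.
        double[] rf    = {0.95, 0.94, 0.96, 0.93, 0.95, 0.94, 0.96, 0.95, 0.93, 0.94};
        double[] other = {0.91, 0.90, 0.92, 0.89, 0.91, 0.90, 0.92, 0.91, 0.88, 0.90};

        double u = uStatistic(rf, other);
        double n1 = rf.length, n2 = other.length;

        // Normal approximation to the null distribution of U.
        double mean = n1 * n2 / 2.0;
        double sd = Math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0);
        double z = (u - mean) / sd;

        // Two-sided p-value: 2 * P(Z > |z|) = erfc(|z| / sqrt(2)).
        double p = erfc(Math.abs(z) / Math.sqrt(2));
        System.out.printf("U = %.1f, z = %.2f, two-sided p ~ %.4g%n", u, Math.abs(z), p);
    }

    // Numerical approximation of the complementary error function
    // (Abramowitz & Stegun 7.1.26), valid for x >= 0.
    static double erfc(double x) {
        double t = 1 / (1 + 0.3275911 * x);
        double poly = t * (0.254829592 + t * (-0.284496736
                + t * (1.421413741 + t * (-1.453152027 + t * 1.061405429))));
        return poly * Math.exp(-x * x);
    }
}
```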
RF achieved the highest prediction accuracy under P-1 as compared to the other combinations of encoding procedures (P-2, P-3) and classifiers (SVM, ANN, Bagging, Boosting, Logistic regression, *k*NN and Naïve Bayes). Therefore, the performance of RF under P-1 was compared with different existing tools, i.e., MaxEntScan (Maximum Entropy Model, MDD, MM, WMM), SpliceView and NNSplice, using an independent test set. The overall accuracy of the proposed approach (RF with P-1) was found to be better than that of the other considered (existing) tools.
The purpose of developing the web server is to facilitate easy prediction of donor splice sites by users working in the area of genome annotation. The developed web server provides flexibility to the users in selecting the encoding procedures and the machine learning classifiers. As the test sequences belong to two different classes, instances with probability \>0.5 are expected to be true splice sites. Besides, the higher the probability, the greater the strength of an instance being a donor splice site. Though RF achieved higher accuracy under P-1 as compared to the other combinations, all combinations are provided in the server for the purpose of comparative analysis by the user. To the best of our limited knowledge, this is the first time RF has been used in *ss* prediction.
Conclusion {#Sec24}
==========
This paper presents a novel approach for donor splice site prediction that involves three splice site encoding procedures and the application of RF methodology. The proposed approach discriminated the TSS from the FSS with high accuracy. Also, RF outperformed the SVM, ANN, Bagging, Boosting, Logistic regression, *k*NN and Naïve Bayes classifiers in terms of prediction accuracy. Further, RF with the proposed encoding procedures showed high prediction accuracy in both balanced and imbalanced situations. Being a supplement to the commonly used *ss* prediction methods, the proposed approach is believed to contribute to the prediction of eukaryotic gene structure. The web server will help users easily predict donor *ss*.
Availability and requirement {#Sec25}
----------------------------
MaLDoSS, the donor splice site prediction server, is freely accessible to the non-profit and academic biological community for research purposes at <http://cabgrid.res.in:8080/maldoss>.
Additional files
================
{#Sec26}
Additional file 1: **An example of the proposed sequence encoding approach.** Description of the data: A precise description of the sequence encoding procedure is provided with an example. (PDF 72 kb)
Additional file 2: **Plotting of margin function for encoding procedure 2 (P-2).** Description of the data: Each dot in the plot is the value of the margin function for an observation (TSS or FSS). Ten different plots correspond to the 10 test sets of the 10-fold cross validation. Red and blue points are the values of the margin function for FSS and TSS, respectively. Values above zero indicate that the instances are correctly classified. (PDF 110 kb)
Additional file 3: **Plotting of margin function for encoding procedure 3 (P-3).** Description of the data: Each dot in the plot is the value of the margin function for an observation (TSS or FSS). Ten different plots correspond to the 10 test sets of the 10-fold cross validation. Red and blue points are the values of the margin function for FSS and TSS, respectively. Values above zero indicate that the instances are correctly classified. (PDF 108 kb)
ANN
: artificial neural network
BPNN
: back propagation neural network
EP
: encoding procedure
FN
: false negative
FSS
: false splice sites
HS3D
: homo sapiens splice dataset
MCC
: Matthews correlation coefficient
MDD
: maximal dependency decomposition
MEM
: maximum entropy modeling
MLA
: machine learning approaches
MM1
: markov model of 1^st^ order
MM2
: markov model of 2^nd^ order
OOB
: out of bag
OOB-ER
: out of bag-error rate
PWM
: position weight matrix
RF
: random forest
SVM
: support vector machine
TNR
: true negative rate
TP
: true positive
TPR
: true positive rate
TSS
: true splice sites
WA
: weighted accuracy
WAM
: weighted array model
WMM
: weighted matrix model
**Electronic supplementary material**
The online version of this article (doi:10.1186/s13040-016-0086-4) contains supplementary material, which is available to authorized users.
**Competing interests**
The authors declare that they have no competing interests.
**Authors' contributions**
PKM and TKS collected the data, developed and implemented the methodology and drafted the manuscript. TKS and PKM developed the web server. ARR conceived the study and finalized the manuscript. All authors read and approved the final manuscript.
The grant (Agril. Edn.4-1/2013-A&P dated 11.11.2014) received from Indian Council of Agriculture Research (ICAR) for Centre for Agricultural Bioinformatics (CABin) scheme of Indian Agricultural Statistics Research Institute (IASRI) is duly acknowledged.
| |
In collaboration with:
Carmen Merino, Analyst of Resolution Mechanisms; and
Carmen Iriarte, Computer Analyst for Strengthening and Resolution Mechanisms
The COVID-19 pandemic posed major social and economic challenges worldwide and, more urgently, in developing countries. As a result, urgent relief is needed to reactivate the general economy. Above all, however, the most vulnerable and economically affected sectors need to be prioritized and their resilience managed with both sustainable social inclusion and environmental responsibility.
In the same spirit, it should be noted that climate change constitutes a critical risk for Ecuador. Its effects are evident in the melting of the mountain glaciers, the increasing temperature throughout the country, changes in rainfall patterns, the disappearance of some of its natural water sources and, regardless of its magnitude, the El Niño phenomenon affecting the country. This, in turn, has resulted in the rapid decline, migration and even risk of extinction of some of its flora and fauna species, as well as the proliferation of diseases and natural disasters such as floods and fires. All these events have aggravated the poverty, unemployment, inequality and insecurity that still persist in the country. This devastation is also accompanied by displacement and migration, which can reach alarming magnitudes because the agricultural and maritime sectors are the ones affected by these environmental impacts.
Globally, the cost of extreme weather events caused by climate change is estimated to reach USD5.6 billion by 2025.
Today, 25 percent of the planet's biodiversity is concentrated in the Andean countries. As a consequence, the Superintendencia de la Economía Popular y Solidaria de Ecuador (SEPS), whose principles consider social and environmental responsibility, believes the implementation of measures that help mitigate the effects of climate change is of utmost importance, promoting the development of green finance as the key to financing the transition to a low-carbon economy and to reducing dependence on non-renewable resources. We also believe in enhancing adaptation to climate change and strengthening primary sectors, such as agriculture, while preserving the natural environment.
The cooperative sector in Ecuador, whose presence is strongest in rural areas, accounts for more than 60 percent of the total microcredit portfolio in the country. These customers, located in rural areas where financial resources mainly revolve around agricultural activities, are the part of society most affected by climate change.
SEPS, as a technical entity for supervision and control, is not immune to the effects of climate change and its impact on both the environment and society as a whole, and we have therefore determined that one of our priorities is mitigation of and adaptation to these climatic risks, in order to promote a sustainable financial system capable of building greater resilience to climate change. With this in mind, SEPS is currently coordinating the implementation of inclusive green financing as a mechanism for social inclusion.
With the technical assistance of AFI, SEPS is developing environmental and social risk management (ESRM) guidelines for its supervised entities. The guidelines will help reduce the transactional risks associated with the economic activities of their partners and clients, while prioritizing environmental protection. Implementing such ESRM guidelines aims to establish a minimum standard for actively evaluating environmental and social issues. At the same time, it also enables us to assess the probability of default on the credits/investments of cooperatives and other financial institutions and to promote sustainable business practices in Ecuador.
Regulators are continuing to focus their green finance discussions on improving the quality, targets, and effectiveness of social programs and public spending as a response to climate change challenges posed by natural scarcity, financial instability, and the increasing vulnerability of citizens.
AFI's Inclusive Green Finance (IGF) workstream is part of the International Climate Initiative (IKI), supported by the German Federal Ministry for the Environment, Nature Conservation and Nuclear Safety (BMU), based on a decision of the German Bundestag. | https://www.afi-global.org/blog/2020/06/inclusive-green-finance-mechanism-social-inclusion |
JADA Foundational Science is a cross-disciplinary, open-access journal that will bring research in basic and applied sciences to readers who are experts in clinical dentistry.
The American Dental Association (ADA) has launched a new research journal, JADA Foundational Science. This new cross-disciplinary, open-access journal reportedly will allow researchers in basic and applied sciences to make their work visible to experts in clinical dentistry and medicine.
According to the association, the entirely online journal is now taking submissions for original research articles and reviews in the areas of biology, chemistry, engineering, materials science, computer science and informatics, advanced imaging and processing, and other technologies. JADA Foundational Science will “aggressively seek out new scientific work to bring to a wide and diverse audience and use a rigorous peer review process,” the ADA said in a press release.
JADA Foundational Science differs from JADA in that it will be entirely online and open access. As an open-access journal, JADA Foundational Science seeks to break down the barriers to access for scientists of all disciplines who have work relevant to the oral health sciences and aims to create a forum for collaboration and interaction among investigators and clinicians in all disciplines.
“The goal will be to recruit timely and innovative work of high scientific rigor that is applicable to many of the health sciences and presented in a way that is engaging and instructive to both research scientists and clinicians,” says Jack L. Ferracane, PhD, editor of JADA Foundational Science. Ferracane is the department chair for restorative dentistry and director of biomaterials and biomechanics at Oregon Health & Science University School of Dentistry in Portland, Ore. | https://orthodonticproductsonline.com/industry-news/association-news/ada-launches-new-research-journal-jada-foundational-science/ |
Q:
Cannot find Inherited class
I had an excellent response to an earlier question so I'm going to push my luck here.
While there is already an excellent post about this (What does a "Cannot find symbol" compilation error mean?), I am struggling to find which reason applies and why.
The error occurs when creating the FirstClassTicket object, on both sides of the equals sign where FirstClass is referenced. I assume I have made an error when I inherited FirstClass from Ticket, as this was originally running fine calling Ticket before I began to include the different Ticket types.
class Booking{
public static class Ticket{
int Price;
int Amount=1;
int Discount=0;
public int WhatCost(){
int Cost =Amount*(Price-Discount);
return Cost;
}
//Inheriting Ticket classes.
class FirstClass extends Ticket{ //Defines standard class tickets.
int Price =10;
}
}
public static void main(String args[]) {
FirstClass FirstClassTicket=new FirstClass();
Scanner scanner = new Scanner(System.in);
System.out.println("Welcome to the advance booking service.");//Introduce program.
System.out.println("A first class ticket costs:");
System.out.println(FirstClassTicket.WhatCost());
}
}
A:
You have defined FirstClass as an inner class of Ticket. Make FirstClass static and access it as shown below.
Booking.Ticket.FirstClass FirstClassTicket=new Booking.Ticket.FirstClass();
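Alternatively, a minimal rearrangement that compiles and prints a sensible price is sketched below: FirstClass is pulled out of Ticket and made a static nested class of Booking, and it sets the inherited Price field in a constructor instead of shadowing it with a new field (shadowing would leave WhatCost() reading Ticket's own Price, which is still 0). This is just one way to restructure the example, not the only fix.

```java
import java.util.Scanner;

class Booking {
    public static class Ticket {
        int Price;
        int Amount = 1;
        int Discount = 0;

        public int WhatCost() {
            return Amount * (Price - Discount);
        }
    }

    // Sibling of Ticket rather than an inner class of it, so it can be
    // referenced directly from main without an enclosing Ticket instance.
    public static class FirstClass extends Ticket {
        public FirstClass() {
            Price = 10;   // set the inherited field instead of shadowing it
        }
    }

    public static void main(String[] args) {
        FirstClass firstClassTicket = new FirstClass();
        Scanner scanner = new Scanner(System.in);   // kept from the original example
        System.out.println("Welcome to the advance booking service.");
        System.out.println("A first class ticket costs:");
        System.out.println(firstClassTicket.WhatCost());   // prints 10
    }
}
```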
| |
This application is a Divisional of Application Ser. No. 09/009,184, filed Jan. 20, 1998.
BACKGROUND OF THE INVENTION
SUMMARY OF THE INVENTION
1. Field of the Invention
The present invention relates to the general field of Liquid Crystal Displays, and more specifically to a method of increasing the viewing angle and response speed of a two-domain Chiral Homeotropic LCD with negative compensator.
2. Description of the Prior Art
Currently, the market for LCDs is increasing rapidly. However, the viewing angle and contrast ratio of LCDs are still insufficient for their use in large-screen products. For high image quality, these characteristics need to be improved. Fujitsu Ltd. has recently proposed a vertical-alignment-mode LCD; please see the reference "K. Ohmuro, S. Kataoka, T. Sasaki, Y. Koike, SID'97 DIGEST p845˜p848". The VA-LCD (vertically aligned LCD) has been implemented by optimizing a vertically aligned mode with a domain-divided structure and an optical compensator. This vertical-alignment-mode LCD has a wide viewing angle of over 70°, a fast response (<25 ms), and a high contrast ratio of over 300, but some problems still remain. For example, the formation of the two-domain structure using mask and rubbing processes is complex and expensive, the rubbing process also produces ESD (Electrostatic Damage) and particles, and, further, mask rubbing will result in image sticking.
It is an object of the present invention to solve the aforementioned problems. Another object of this invention is to provide a method of increasing the viewing angle and response speed of a two-domain Chiral Homeotropic LCD with negative compensator without a rubbing treatment.
These objects have been achieved by forming a two-domain vertically aligned mode LCD. Further, the liquid crystal molecules in each of the liquid crystal domains are oriented nearly perpendicular to the surfaces of the substrates, with a small pre-tilt angle to the normal of the substrates, when an electric field is not applied, and the tilt angle projected on the azimuthal plane of the substrate between the orientations of the liquid crystal molecules in the two domains is not equal to 180 degrees.
To achieve the aforementioned liquid crystal molecule conditions, two methods can be provided. One uses an oblique incident linearly polarized ultraviolet light beam process with a mask shift to expose the orientation layer twice, forming the two-domain structure. The other uses two linearly polarized ultraviolet light beams incident from different directions and a fixed mask to expose the orientation layer at the same time.
These methods can increase the response speed of the liquid crystal molecules and provide a rapid and clean process that avoids the disadvantages mentioned in the background.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1a is a fragmentary sectional view of an UV-type two-domain Chiral Homeotropic LCD with negative compensator in the OFF state according to the present invention;
FIG. 1b is a fragmentary sectional view of an UV-type two-domain Chiral Homeotropic LCD with negative compensator in the ON state according to the present invention;
FIG. 2a is a fragmentary sectional view of a two-domain VA (Vertical Aligned) mode LCD with negative compensator according to the present invention;
FIG. 2b is a top view of a two-domain VA (Vertical Aligned) mode LCD with negative compensator according to the present invention;
FIG. 3 and FIG. 4 are the viewing angle characteristics according to the invention;
FIG. 5a and FIG. 5b are the first embodiment of the manufacturing process for the LPUV method according to the present invention;
FIG. 5c shows the azimuthal angle φ between two incident LPUV light beams according to the present invention; and
FIG. 6 is the second embodiment of the manufacturing process for the LPUV method according to the present invention.
DESCRIPTION OF THE PREFERRED EMBODIMENT
The present invention will be described in detail with reference to the drawings. The purpose of the present invention is to provide a method of increasing the viewing angle of a two-domain Chiral Homeotropic LCD with negative compensator. The detailed processes are described as follows.
FIG. 1a shows the panel structure of an UV-type (Ultra Violet-type) two-domain Chiral Homeotropic LCD with negative compensator in the OFF state.
Referring to FIG. 1a, the basic parts of a liquid crystal display are illustrated in cross section. A number of layers are involved, the outermost being a pair of light polarizers, polarizer 101 and analyzer 102. In their most commonly used configuration, the polarizer 101 and analyzer 102 are arranged so as to have their optic axes orthogonal to one another. That is, in the absence of anything else between them, light passing through the polarizer 101 would be blocked by the analyzer 102, and vice versa.
Below the polarizer 101 is an upper transparent insulating substrate 103 (usually glass), and immediately above the analyzer 102 is a negative compensator 104. A similar lower substrate 105 is located above the negative compensator 104; transparent conducting lines, usually comprising indium tin oxide (ITO), run orthogonal to one another and are located on the lower surface of the upper substrate and the upper surface of the lower substrate, respectively.
Sandwiched between, and confined there by means of suitable enclosing walls (not shown), is a layer of liquid crystal 106, such as Chiral Homeotropic liquid crystal molecules. The Chiral Homeotropic liquid crystal molecules together with the substrates form a vertically aligned cell 10. The orientation of these molecules can be controlled by coating such a surface with a suitable orientation layer formed by an LPUV (Linearly-Polarized Ultra-Violet) process. The orientation layers are formed on the substrates and contact the liquid crystal layer 106.
The panel structure shown in FIG. 1a is in the dark state (off state); the liquid crystal molecules are homeotropically oriented. After a negative compensator 102 is formed, we found that the viewing-angle-dependent light leakage is small.
FIG. 1b shows the panel structure of an UV-type (Ultra Violet-type) two-domain Chiral Homeotropic LCD with negative compensator in the ON state. The liquid crystal molecules are chiral nematic. Because the liquid crystal molecules are chiral nematic oriented, the color dispersion is small. Further, the two-domain structure can make the gray-scale viewing angle very large with no inversion.
FIG. 2a is a fragmentary cross-sectional view of a two-domain VA (Vertical Aligned) mode LCD with negative compensator. Still referring to FIG. 2a, there is a detailed illustration of the LCD panel structure 206, especially of the orientation of the liquid crystal molecules. In this figure the LCD panel is in the on-state, the liquid crystal molecules are chiral nematic oriented, and there are two domains with an overlap region in one pixel. The tilt direction (azimuthal) of the liquid crystal molecules in the overlap region 41 has an angle φ that is not equal to 90 degrees (it can be greater or less than 90 degrees) with respect to the tilt direction of the liquid crystal molecules in domains 40 and 42.
FIG. 2b is a top view of a two-domain VA (Vertical Aligned) mode LCD with negative compensator 204; the tilt angle φ projected on the azimuthal plane of the substrate between the orientations of the liquid crystal molecules in domains 40 and 42 is not equal to 180 degrees (it can be greater or less than 180 degrees). In this embodiment, the liquid crystal molecules in each domain are oriented nearly perpendicular to the surface of the substrates, with a small pre-tilt angle to the normal of the substrates, when an electric field is not applied (OFF state). The tilt angle of the liquid crystal molecules projected on the azimuthal plane of the substrate between the orientations of the liquid crystal molecules in the two domains is not equal to 180 degrees.
The pre-tilt angle of the two-domain vertically aligned liquid crystal molecules in domains 40, 42 and the overlap area 41 strongly affects the response time of the liquid crystal molecules. Methods for achieving these response characteristics of domain-divided VA-cells will be demonstrated in FIG. 5a˜FIG. 6.
FIG. 3 is the equal contrast ratio contour. It illustrates the viewing angles of the two-domain VA mode cell according to the present invention. In FIG. 3, reference number 311 represents the polarizing axis and reference number 312 represents the analyzing axis. We have achieved a super-wide viewing angle (>70°) in all directions. In this figure φ represents the azimuthal angle, and θ represents the viewing angle.
FIG. 4 shows the evaluation of the viewing angle characteristics of the present invention. In this figure, the regions 413 with gray-level inversion are small. This means the newly developed two-domain VA mode LCD has very wide viewing angle characteristics with no gray-level inversion. In this figure φ represents the azimuthal angle, and θ represents the viewing angle.
FIG. 5a shows the fabrication step with a first oblique incident linearly-polarized ultraviolet light beam 510a used to expose the orientation layer 512 (some kind of polymer, such as polyimide). The orientation layer 512 is located on the transparent substrate (glass) 505. Using a mask 514 to cover about half a pixel of the panel, the first oblique linearly-polarized ultraviolet light beam 510a exposes the area 40 and area 41 on the orientation layer (polyimide) 512.
FIG. 5b shows the fabrication step with a second oblique incident linearly-polarized ultraviolet light beam 510b used to expose the orientation layer 512 (some kind of polymer, such as polyimide). The orientation layer 512 is located on the transparent substrate (glass) 505. The mask 514 used in the first step is shifted by about half a pixel of the panel, and then the second oblique linearly-polarized ultraviolet light beam 510b exposes the area 41 and area 42 on the orientation layer 512. In this preferred embodiment, the orientation layer can be chosen from the group of polyimide, photo-aligned polymer, etc. According to the results reported in "Y. Limura, S. Kobayashi SID 97 Digest P311˜P314", the pre-tilt angle of polymer films depends on the exposure energy of the ultraviolet light beam, and the liquid crystal alignment direction on a UV-exposed polymer film is parallel to the exposed UV polarization.
The photo-alignment technique described above can control the liquid crystal alignment. The area 41 is exposed to the LPUV light beam two times, so its pre-tilt angle will be a little greater than in area 40 and area 42; thus the pre-tilt angle of the liquid crystal can be controlled by the LPUV exposure time. The liquid crystal molecules in area 40, area 41, and area 42 form a continuous pattern, and the response time will become shorter than in the prior art.
In the present invention we use a two-step LPUV-exposure (Linearly Polarized UltraViolet-exposure) process to form the two-domain VA mode LCD.
Referring to FIG. 5c, we control the azimuthal angle φ between the two incident LPUV light beams (510a and 510b) so that it is not equal to 180 degrees, so the liquid crystal molecules in the overlap region 41 of domains 40 and 42 will incline in the same direction. The response time of the liquid crystal molecules therefore has a great chance to become short.
FIG. 6 shows the second embodiment of the manufacturing process for the LPUV method according to the present invention. In this embodiment, two LPUV light beams (610a and 610b) are incident simultaneously to expose the orientation layer 612 (some kind of polymer, such as polyimide). The orientation layer 612 is located on the transparent substrate (glass) 605. The mask 614 with an aperture is arranged so that beam 610a can expose the area 41 and area 42, and beam 610b can expose the area 40 and area 41, simultaneously. Also, we control the azimuthal angle φ between the two incident LPUV light beams so that it is not equal to 180 degrees, so the liquid crystal molecules in the overlap region of the two domains (i.e. area 41) will incline in the same direction. The response time of the liquid crystal molecules has a great chance to become fast. The exposure light source is not limited to linearly polarized ultraviolet light; an ultraviolet light also has the same effect. Besides, for an ultraviolet light source the orientation layer can also be chosen from the group of PVC, PVMC, PVCN-F, polysiloxane, etc.
With the UV-type two-domain VA mode LCD that we developed and illustrated above, the orientations of the LC molecules projected on the surface of the two domains differ by an angle that is not equal to 180 degrees, and this structure makes the response speed of the LC molecules in the overlap area fast. Further, the linearly polarized ultraviolet manufacturing process provides a fast and clean way to form domain-divided VA-cells without rubbing steps, and avoids the generation of electrostatic charges and dust during the process.
As is understood by a person skilled in the art, the foregoing preferred embodiments of the present invention are illustrative of the present invention rather than limiting of the present invention. It is intended to cover various modifications and similar arrangements included within the spirit and scope of the appended claims, the scope of which should be accorded the broadest interpretation so as to encompass all such modifications and similar structure. | |
The relationship between biodiversity and ecosystem function has often been explored by examining species diversity in relation to producer productivity. Despite the importance of soil microbial communities in forest ecosystem functioning (nutrient and carbon cycling), little is known about how these communities respond to changes in aboveground plant species diversity. In the present study we examined how soil microbial communities respond to tree species richness (SR).
We used a common garden experiment with high-density tree communities near Montreal, Canada, consisting of conifer and hardwood species mixtures, to characterize microbial communities in both monocultures and species mixtures. We tested the response of the communities along an SR gradient: 1, 2, 4 and 12 species. Phospholipid fatty acid (PLFA) analysis and the MicroResp™ system were used to assess community structure and community-level physiological profiles (CLPP).
Results/Conclusions
Our results showed a significant effect of SR on microbial parameters. Basal respiration, glucose-induced respiration (active microbial biomass) and the metabolic quotient (qCO2) were significantly higher in SR mixtures compared to monocultures. Principal component analysis based on CLPP revealed that communities associated with SR mixtures use higher amounts of carbon sources than do monocultures, while communities associated with deciduous trees generally use higher amounts of carbon than those associated with evergreen species. Nevertheless, there were no differences in total PLFA abundance, nor in bacterial, fungal, or any other tested soil microbial PLFA abundances or ratios, between monocultures and SR mixtures. Likewise, there were no differences in microbial structure among monoculture plots, with the exception of Gram-positive bacterial species, which were more abundant in certain monoculture plots. These results indicate that the identity of the aboveground species rather than their diversity may influence the soil microbial community structure, while both the identity and diversity drive their functioning. | https://eco.confex.com/eco/2015/webprogram/Paper54129.html |
Amenities:
Air-Cond
Heating System
Wi-Fi
TV/DVD player & DVD’s
Microwave
Toaster/Kettle
Fully Equipped Kitchenette
Coffee Machine
Hairdryer
Iron & Ironing board
Juice Maker
Baby Cot on request
Description:
On the first floor, without a lift, of a historic building (one of the most ancient in the area), the Campo De' Fiori Apartment transmits a sensation of pleasure and quietude due to its warm and pleasant atmosphere, and to the quiet zone where it is located. The entrance leads straight into the romantic living room, which has a comfortable double sofa bed suitable for an extra person, LCD TV, Stereo System, iPod dock station, books and DVDs, an elegant wooden dining table and a huge window that overlooks a courtyard. The very high, wooden ceiling is splendid and there is an original Renaissance decoration on the wall. The night area is reachable through the stylish ironwork staircase and has a queen size bed and a chest of drawers for your clothes. The kitchen is separate from the living area; it is modern and fully equipped with everything you need during your stay, making cooking a pleasure whenever you choose to cook some of the wonderful fresh vegetables bought at the local outdoor market. Furnished in a very soft and functional way, this apartment also boasts a very efficient independent Heating System/Air Conditioning which ensures the guests' comfort all the time. The comfortable bathroom has a pleasant shower and bidet.
Location:
This charming 45 sq.m Open Loft Style apartment is located, next to Campo Dè Fiori ( home of the most famous street market in Rome by day and a lively spot at night) and the well-known Piazza Farnese. Situated on one of the ancient streets of Rome, famous for its numerous antiquary shops, once part of the via “recta papalis”, which led from the Lateran to St. Peter. The street is lined with 15th and 16th-century houses and small workshops. The apartment is within easy walking distance to many of the most famous sights in Rome; a short 2 minute walk will have you in Piazza Navona, 5 minutes to the Pantheon, 15 minutes to the Spanish Steps, Trevi Fountain and the Colosseum. St. Peter’s Basilica is just a 10 minute walk away, while by crossing the nearby pedestrian bridge Ponte Sisto you will reach the characteristic district of Trastevere in 5 minutes. Undoubtedly…..a perfect location!!!! Public transport is nearby. Nowadays “Campo De’ Fiori” represents one of the places where Rome shows its most authentic character from the early morning with its fabulous open air market, to the deepest night with entertainment offered by numerous bars, restaurants and “trattorie” of the area. “Yet another fabulous place to pull up a chair, order an espresso or a glass of wine and do some serious people watching. From here you should wander around the wonderful tiny cobblestone streets of the local neighbourhood – it’s the perfect area to get lost in and make your own Rome discoveries as you are truly in the pulsing heart of the city”.
Click here to locate the apartment
Click here to Find the nearest carpark
Gallery: | http://www.unicapropertygroup.com/property/vacation-rentals-campo-de-fiori-apartment/ |
Modeling integrative nephron function
With a unique 3D organization of functional units, the kidney is a complex organ. While much has been learned about the molecular processes occurring in specific cell types and nephron segments, little is known about the complex interplay that forms the basis for the integrity and function of the kidney.
Using computational modeling, our lab aims to better understand the kidney in health and disease.
Renal hypoxia in hypertension and diabetes
Despite intense research, the mechanism underlying the development of renal hypoxia and chronic kidney diseases remains incompletely understood.
Our lab develops computational models and conducts simulations to assess the following hypothesis: In diabetes and hypertension, oxidative stress reduces tubular transport efficiency and causes mitochondrial uncoupling, and thus gives rise to a mismatch between changes in renal oxygen supply and consumption, leading to renal hypoxia and eventually chronic kidney diseases. Furthermore, reduced NO formation increases tubular transport but reduces oxygen supply. In addition, tubular hypertrophy, Na+-glucose cotransport and hyperfiltration increase transport work in the diabetic kidney.
Sex differences in blood pressure regulation
Hypertension is a global health challenge with known sexual dimorphism in pathophysiology and in responses to drug treatments, yet men and women are typically treated with the same approach. We conduct model simulations to gain mechanistic insights into how sex differences in the renin-angiotensin system and in renal phenotype impact kidney function in hypertension. | https://uwaterloo.ca/scholar/a2layton/research |
PALOS HEIGHTS, Ill.— Trinity Christian College is pleased to announce the WorldView schedule of speakers for the Fall Semester. This semester’s WorldView program begins on Sept. 14 with a visit by Dr. John Inazu, Ph.D., of Washington University in St. Louis.
“Trinity is excited to host another season of WorldView, featuring engaging and thought-provoking speakers who will offer critical insights into a variety of interesting topics,” said Trinity Christian College President Kurt D. Dykstra. “All are welcome to join in our community for these events.”
The series features a diverse group of guest speakers:
- Thursday, Sept. 14—Dr. John Inazu
Dr. Inazu, the Sally D. Danforth Distinguished Professor of Law and Religion at Washington University in St. Louis, will speak on “Confident Pluralism” at 7 pm in the Fireside Room. In his book Confident Pluralism: Surviving and Thriving Through Deep Difference, he argues that Americans can and must live together peaceably in spite of significant differences over politics, religion, sexuality, and other important matters. His talk is in conjunction with Trinity’s annual observance of Constitution Day.
- Wednesday, Sept. 20—Josh Larsen ’96
Larsen, who is co-host of the radio show and podcast “Filmspotting” and film critic of the faith and culture magazine “Think Christian,” will discuss “Movies as Prayers of Anger and Reconciliation” at 7 pm in the Grand Lobby of Ozinga Chapel. Larsen will highlight select scenes from Spike Lee’s Oscar-nominated 1989 film “Do the Right Thing,” which is notorious as an expression of righteous social anger while offering a prayerful hint of reconciliation.
Larsen will also speak at Chapel at 10 am on Sept. 20 in Ozinga Chapel Auditorium.
- Monday, Oct. 16—Dr. Suzanne McDonald
- Wednesday, Oct. 18—Dr. Eduardo J. Echeverria ’73
To commemorate the 500th anniversary of the Reformation, Trinity is holding a series of lectures to provide insights into the impact of the Reformation on both Reformed Protestantism and Roman Catholic viewpoints. Dr. McDonald is a professor of systematic and historical theology at Western Theological Seminary and Dr. Echeverria is a professor of philosophy and systematic theology at Sacred Heart Major Seminary. The lectures will take place at 3:30 pm in the Grand Lobby of Ozinga Chapel.
- Monday, Nov. 27—Wayne Messmer
Messmer is a local legend—famed singer, exciting speaker, acclaimed author, and talented broadcaster. The “Voice of the National Anthem” will bring his timeless message of hope and resilience to Trinity during a 7 pm lecture in the Grand Lobby of Ozinga Chapel. Messmer will share the accounts of his true-life story, canvassing his rise to fame as the premier singer of the National Anthem, his career as a broadcaster, speaker, and sports personality and his near-fatal experience of being shot in the throat during an armed robbery attempt.
In addition to his Worldview address, Messmer will also present “Damien,” a one-man play based on the life of Father Damien de Veuster, on Tuesday, Nov. 28, in the Marg Kallemeyn Theatre. Proceeds from the performance will benefit the work of Dr. David Weinstein, the world’s leading researcher for a cure for Glycogen Storage Disease. Messmer’s granddaughter suffers from this disease. Visit http://www.trnty.edu/worldview for more information and to purchase tickets to “Damien.”
WorldView is Trinity’s annual community and college series for film, word, and music. All events are free, and the public is welcome. All events take place on Trinity’s campus at 6601 W. College Dr., in Palos Heights, Ill. | https://www.trnty.edu/press-release/trinity-announces-fall-semester-worldview-speakers/ |
So I have a question about an experiment we did in class.
We used a fan to blow air at a little wind turbine to generate electricity, we connected the turbine to a reversible PEM fuel cell to electrolyse water and generate some hydrogen and oxygen. We timed this process and recorded the average power output from the turbine to calculate the total energy produced by the turbine: 59.4 J.
For the second part of the experiment we passed the hydrogen through another PEM fuel cell which would power a little electric motor until all the hydrogen was consumed, and measured the power and time it ran for with the same device. Again, calculated the energy consumed and it gave only 2.4 J.
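(For scale, that corresponds to an overall round-trip efficiency of roughly 2.4 J / 59.4 J ≈ 4 %, so about 96 % of the turbine's electrical energy never shows up at the motor measurement.)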
This would make it seem like a lot of energy was lost, mainly, I guess, during the electrolysis, in the fuel cell and in the motor itself. But can so much energy be lost during this process? If so, how? | https://physics.stackexchange.com/questions/700931/energy-lost-to-hydrogen-fuel-cell/700959#700959 |
This bill would require analysis of the following:
Impacts on communities that host expanded gambling facilities and other communities within a 25-mile radius;
Short-term, medium-term and long-term impacts that factor in the potential effect of gambling facilities being built in other states along the Massachusetts border, the impact of a potential casino on tribal land in Massachusetts, and the potential impact of the legalization of internet gambling;
The effect on the economy that occurs because of jobs created by the gambling industry but also the adverse impacts on jobs whose existence pre-dates expanded gambling;
The increase in tax revenue to the state, but also the anticipated loss or diminution of revenues from businesses damaged by the introduction of gambling; and,
The costs to Massachusetts or communities hosting gambling facilities of establishing a new regulatory structure to oversee the industry, addressing any increase in criminal activity that results from gambling, incarcerating those found guilty of violating new gambling laws, capital improvements necessary to accommodate new gambling facilities, compulsive gambling, increased bankruptcies, and increased health care utilization to address gambling addiction issues.
Co-sponsors of Brewer's bill include: Rep. Ellen Story (D-Amherst), a House Floor Division leader; Rep. William Brownsberger (D-Belmont); Rep. Thomas Conroy (D-Wayland); Rep. James Lyons (R-Andover); Rep. Elizabeth Malia (D-Jamaica Plain), co-chair of the Committee on Mental Health and Substance Abuse; Rep. Ryan Fattman (R-Sutton); Rep. Frank Smizik (D-Brookline); Rep. Ruth Balser (D-Newton); Rep. Carolyn Dykema (D-Holliston); Rep. Todd Smola (R-Palmer); Rep. Brian Ashe (D-Longmeadow); Rep. Denise Provost (D-Somerville); Sen. Patricia Jehlen (D-Somerville); Sen. Susan Fargo (D-Lincoln); Sen. Cynthia Creem (D-Newton); Sen. Richard Moore (D-Uxbridge); Sen. Michael Knapik (R-Westfield); Sen. James Eldridge (D-Acton); and Sen. Benjamin Downing (D-Pittsfield).
PLEASE URGE YOUR STATE REPRESENTATIVE AND STATE SENATOR TO SUPPORT THIS COMMON SENSE LEGISLATION
Previous studies on expanded gambling in Massachusetts
COMPREHENSIVE ANALYSIS: Projecting and Preparing for Potential Impact of Expanded Gaming on Commonwealth of Massachusetts
Spectrum Gaming Final Report for the Commonwealth of Massachusetts
August 1, 2008
Cost to Taxpayers: $189,000
Spectrum Gaming Market Analysis, Gross Gaming Revenue Projections
An update to its 2008 Massachusetts gross gaming revenue estimates.
March 31, 2010
Speaker DeLeo held a fundraiser in the spring with gambling industry representatives and commissioned this benefit analysis by the gambling industry firm within 60 days of the fundraiser. The cost of the Speaker's study was not released.
Massachusetts Statewide Gaming Report
Benefit Analysis requisitioned by the Massachusetts Senate, and Prepared by Innovation Group
June 2010
Cost to Taxpayers: $80,000
Recommended complimentary drinks and smoking to increase gamblers' playing and losses.
New Hampshire Gaming Study Commission Final Report of Findings
A review of various models for expanded gaming and their potential to generate state revenues, as well as an assessment of the social, economic and public safety impacts of gaming options on the quality of life in New Hampshire. May 18, 2010
In the News
BREWER FILES BILL TO REQUIRE GAMBLING COSTS-BENEFIT ANALYSIS
By Kyle Cheney
STATE HOUSE NEWS SERVICE
STATE HOUSE, BOSTON, FEB. 5, 2011…..Sen. Stephen Brewer, appointed to the Senate's most powerful budget-writing post last month by Senate President Therese Murray, is backing legislation championed by anti-gambling forces that requires a detailed cost-benefit analysis as a prerequisite to the introduction of slot parlors or casinos.
"We want a data-driven discussion," said Kathleen Conley Norbut, an anti-gambling advocate who previously led United to Stop Slots Massachusetts. "We don't want benefit studies done by the gambling industries having that kind of disinformation continue to be spread."
Brewer, who voted against expanded gambling legislation last session, has declared himself open-minded on the issue and was appointed to chair the Senate Ways and Means Committee despite voting last year to reject a proposal that would have sanctioned three casinos and two slot parlors. The bill passed the House and Senate in July but died when Gov. Deval Patrick sent it back to the Legislature after lawmakers had adjourned formal sessions for the year.
Expanded gambling critics have pushed for a cost-benefit analysis, arguing other studies commissioned by the governor and Legislature omitted details about the potential downside of expanded gambling.
According to the legislation, if the study concludes that the expanded gambling costs outweigh its benefits, any expanded gambling legislation would require two-thirds approval in the House and the Senate to reach the governor's desk.
Norbut, who is also listed as a co-sponsor of the legislation, blistered House Speaker Robert DeLeo, who has vowed a renewed push for expanded gambling this session.
"The speaker borders on being obsessed with this issue, and that's unfortunate for the comprehensive needs of the commonwealth," she said. "It's no surprise to hear him continuing to go with supporting misinformation."
DeLeo has supported expanded gambling, particularly slot parlors at existing racetracks, as an immediate source of local aid for cities and towns who are bracing for a cut next fiscal year.
"The speaker's made his concerns clear in the dire fiscal situation about the need for revenue and the need for jobs," said DeLeo spokesman Seth Gitell.
Asked whether he'd be open to a cost-benefit analysis, Gitell said, "At this time we haven't seen the bill in the House."
DeLeo and Gov. Patrick have indicated they might try to work out their differences on gambling behind closed doors to prevent the issue from consuming the legislative agenda. But both have also recently acknowledged they haven't had a substantive conversation about gambling since talks collapsed last July.
Sen. Jennifer Flanagan (D-Leominster) filed a three-casino proposal in early January, hoping to spur debate on the issue earlier in the session and prevent a repeat of last years race against the clock.
DEMPSEY: LITTLE SUPPORT IN HOUSE FOR NEW CASINO ANALYSIS
By Shawn Regan
The Eagle Tribune
HAVERHILL - FEB. 13, 2011…..A proposal backed by the Senate's top budget writer that would require a detailed cost-versus-benefit analysis before the Legislature takes up casinos again isn't likely to find much support in the House of Representatives, state Rep. Brian Dempsey said.
According to Sen. Stephen Brewer's proposal, if the study concludes that the costs of expanded gambling outweigh its benefits, any bill that legalizes casinos or slot parlors would require two-thirds approval in the House and Senate to reach the governor's desk. Brewer, D-Barre, was recently appointed to chair the Senate Ways and Means Committee.
Dempsey, D-Haverhill, Brewer's Ways and Means counterpart in the House, co-authored last session's proposal on behalf of House Speaker Robert DeLeo that would have created three casinos and two slot parlors at Massachusetts racetracks. The bill passed the House and Senate, but died over Gov. Deval Patrick's opposition to slot parlors at racetracks.
DeLeo and Dempsey support expanded gambling, particularly slot parlors at existing racetracks, as a quick source of local aid for cities and towns bracing for state aid cuts this summer. Both lawmakers have said the House will push for expanded gambling this session.
Dempsey said he respects Brewer's proposal, but he sees it as unnecessary because there have already been several studies on the potential impacts of casinos and slot parlors. He also questioned the practicality of the bill because it calls for analyzing the impacts of casinos and slot parlors on host communities and cities and towns within a 25-mile radius.
"That would be a hypothetical exercise, because no one knows where casinos would be," said Dempsey, who favors what he called a "competitive bidding process," in which a new gaming commission would select the host communities based on specific proposals.
"We want to put the onus on the operators to tell us how they are going to work with local restaurants and local businesses and promote the state lottery as part of their proposals, and then let the gaming commission pick the best ones," Dempsey said. "We won't know ahead of time where they will be, so I don't know how we can do the kind of study ahead of time that Senator Brewer proposes."
As for the question of when the House might take up casinos, Dempsey said most members he has talked to want it to happen "sooner rather than later."
"It's definitely on the front-burner," he said. "I think we'll know more about a timetable in a few weeks." | https://uss-mass.org/independent_analysis.html |
Wealth and Power in Provincial Mexico: Michoacan from the Late Colony to the Revolution.
Most of us regard it as a sociological truism that the economic and political spheres of life affect each other intimately. It is surprisingly rare, however, for a locally or regionally focused study actually to portray convincingly the relationship of ideology and public action to economic structure. Margaret Chowning's excellent book on the economic elite of the state of Michoacan, in western-central Mexico, during the nineteenth century accomplishes this through deep research, judicious generalization, and graceful writing. Although highly quantitative in its approach, Chowning's study is never off-putting in the deployment of its technical apparatus, while the author is very honest about the limits of her data, even making a virtue of them. Among the many delights of the book, for example, are the passages in which she contrasts the often lugubrious views of contemporary observers of economic life--be they liberal politicians or famous travelers like Fanny Calderon de la Barca--with the actual facts as revealed by notary records, testaments, inventories, and other primary sources, making of the slippage itself an indicator of cultural and political attitudes. While the book does not make a breakthrough in a theoretical or methodological sense, it does represent a highly successful exemplar of a well established and fruitful genre in Latin American and Mexican history--the study of the economic, social, and political reproduction of a regional elite. Where it is most innovative, even playfully so, is in certain aspects of its narrative organization, in the periodization that frames the narrative, and in its portrayal of the cycles of Mexican economic history in the century after 1780 or so. It is a work of accomplished historical imagination, broad vision, and intellectual sophistication, well worth reading not only for specialists in the Latin American field, but also by students of elite groups, nineteenth-century politics, and economic and social history more broadly, especially within the Euro-Atlantic world.
Chowning opens her narrative with a delightful description of the regional capital of Valladolid (later rechristened Morelia, for an Independence-era hero) as it would have appeared in about 1800, two decades before independence from Spain, to a traveler coming from Mexico City, some 200 kilometers to the southeast. She portrays the easy rhythms of late colonial life in a major provincial city dominated by an elite of landowning and merchant families, mostly creole but with a substantial number of Old World-born members, situating these groups in the upper two-ninths of the city's population and justifying the boundary with reasonable criteria based on wealth. Chowning describes the well established social hierarchy, the place of the church as an institutional entity and of religious sensibility in the everyday lives of the city's populace, a civil society dominated by the essentially conservative political values of the Iberian colonial world, and the burgeoning influence of Enlightenment ideas. Her description of the city and its region closes, and each of the following chapters is preceded and the book as a whole concluded, with a highly evocative essay based on the history of an important local family, the Huartes and their descendants, whose rising and falling fortunes exemplified with reasonable fidelity the ebb and flow of the region's economic and political fortunes over the century 1780-1880 or so. This is a period more conventionally broken by historians at independence from Spain (1821), the advent of the great mid-century liberal Reform (1855), or the rise of the modernizing dictator Porfirio Diaz (1876), although in emphasizing cyclical movement and continuity Chowning by no means discounts the importance of political events. Alternating within them between economic and political matters, subsequent chapters deal with the Spanish imperial crisis eventuating in Mexican Independence, the substantial collapse of the regional economy in the fifteen years or so after Independence, the revival of the 1830s and 1840s, the political ferment, upward middle-class mobility, and elite stasis of the 1840s and 1850s, the economic downturn and economic realignments of the 1850s and 1860s, and the railroad- and banking-stimulated revival of the "Porfirian boom" from the middle-1870s or so.
Chowning's extremely interesting conclusions help us to fill in, albeit within a specific regional context, much of the historiographic black hole constituted by the half-century between Independence and the rise of Diaz. Among her most striking findings is the hitherto little suspected economic dynamism of the western-central regional economy during the two cycles of economic recovery she delineates in the 1830s-40s and 1870s-80s. Another is her acute analysis of the changes in political alignments, powerholding, and wealth distribution that occurred with the liberal Reform of the 1850s, processes that stimulated the rise of a new political and property-holding middle class, and the later bifurcation of the liberals into a more populist, rural, caudillo-dominated group and an urban-based, republican (though not radically democratic) group of progressive modernizers. In a book as ambitious as this one it is not difficult to find some flaws, of course. For example, so elite-centered is the research that common people in city and country revert to the position of political objects, as though the new social history or subaltern studies had never developed. To be fair, however, the author delimits her area of inquiry quite clearly in the beginning, so this emphasis is hardly a surprise. On the other hand, even with the wave-like oscillations Chowning portrays in the region's economic fortunes, it is difficult to accept completely the revisionist slant of her findings, or fully to give up the conventional wisdom that the period in Mexico between 1820 and about 1880 was primarily one of contraction (or "decompression," as another scholar of modern Mexico has called it) interspersed with brief reversals, rather than one embracing such distinct cycles. Still, these caveats shrink to relatively minor criticisms in the face of Chowning's considerable accomplishment in knitting together economic and political developments for an important but understudied area of Mexico, framing her study with a subtle research agenda and a relatively novel periodization, and delivering her findings with scrupulous attention to detail and in thoughtful, clear, and evocative writing. | https://www.thefreelibrary.com/Wealth+and+Power+in+Provincial+Mexico%3A+Michoacan+from+the+Late+Colony...-a079151309
Steve Finnan auctions off Liverpool Champions League medal and jersey
Former Ireland international defender Steve Finnan is auctioning off his 2005 Champions League winners medal as well as his Liverpool jersey from the final.
Steve Finnan was a part of Liverpool’s Champions League winning squad in 2005, starting the final as the Reds came from 3-0 down against AC Milan to beat the Italian side on penalties.
He also started the final two years later against AC Milan once again but this time Liverpool found themselves on the losing end.
Finnan went on to play for Espanyol and Portsmouth after leaving Liverpool; however, he has kept a relatively low profile since his retirement from the game, setting up a property business with his brother.
Memorabilia from Finnan’s career have shown up on auction website Graham Budd Auction, with the former Irish international looking to offload medals and jerseys.
His Champions League winning medal is listed as having an estimated value of between £12,000-£15,000, with his signed jersey from Istanbul listed between £2,000-£2,500.
He is also auctioning off a small replica trophy (£5,000-£7,000) from the Champions League final as well as his 2006 FA Cup winners medal (£5,000-£7,000).
Finnan started the 2006 final at the Millennium Stadium in Cardiff as Liverpool defeated West Ham on penalties following a 3-3 draw after extra time.
The 44-year-old joined Liverpool in 2003 after a hugely successful spell with Fulham which saw him named in the PFA Team of the Year in his first year in the Premier League.
Finnan was also a mainstay of the Irish national team during his career, winning 53 caps for the Boys in Green and scoring two goals.
He was part of the Ireland squad at the 2002 World Cup, which reached the competition's last 16 before losing on penalties to Spain.
An exclusive villa under construction, located in the historic city center.
On the ground floor there is a large terrace; two more terraces are located on the first floor. The terraces offer breathtaking views of the old town and the island of La Gomera. The villa also offers a pool. Near the main entrance there is a garage.
This is a unique opportunity to live in the old and beautiful part of Costa Adeje.
Extra opportunities
We can help you to get a mortgage on favourable terms. A residence permit can be obtained with the purchase of this property. Please ask our team for more information.
Interior
The house has three bedrooms, three bathrooms, a living room, a dining room and a kitchen.
Location and nearby infrastructure
Costa Adeje is a place that is ideal for families. Over the past few years, the resort town has changed a lot thanks to the large number of hotel complexes built, the rapid development of infrastructure and the level of service. This, without a doubt, attracts tourists from Europe who want to enjoy not only wonderful weather, but also a wide variety of leisure activities. | https://tranio.com/spain/adt/1851311/ |
Source: The Conversation
Zambia, a country in southern Africa, has approximately 1,200 lions, one of the largest lion populations on the continent. More than 40% of the U-shaped country is protected land, with over 120,000 square miles of national parks, sanctuaries and game management areas for lions to roam.
Zambian lions are split into two subpopulations, with one in the Greater Kafue Ecosystem in the west and the other in the Luangwa Valley Ecosystem in the east. Between these two geographically different regions lies Lusaka, Zambia’s largest city, which is surrounded by farmland.
People had assumed that the two groups of lions did not – even could not – mix. After all, they’re separated by a geographical barrier: the two regions feature different habitats, with the east an offshoot of the Great Rift Valley system and the west part of the southern savannas. The lions are also separated by what’s called an anthropogenic barrier: a big city that lacks wildlife protection, making it seemingly unsuitable for lions.
So my colleagues and I (authors from the Conversation) were surprised when we found that a small number of lions are in fact moving across the area in between presumed to be uninhabitable by lions. These sneaky lions – and their mating habits – are causing the high levels of genetic diversity we found in the entire Zambian lion population.
Identifying which genes are where
Working with the Zambian Wildlife Authority, biologist Paula White collected hundreds of biological samples from lions across Zambia between 2004 and 2012. Eventually a box of this hair, skin, bone and tissue, meticulously packaged and labeled with collection notes and sampling locations, arrived at my lab at Texas A&M University.
Our goal was to investigate genetic diversity and the movement of various genes across Zambia by extracting and analyzing DNA from the lion samples.
From 409 lions found inside and outside of protected lands, I looked at two kinds of genes, mitochondrial and nuclear. You inherit mitochondrial DNA only from your mom, while you inherit nuclear DNA from both of your parents. Because of these differences, mitochondrial and nuclear genes can tell different genetic stories that, when combined, paint a more complete picture of how a population behaves.
My mitochondrial analysis verified that, genetically, there are two isolated subpopulations of lions in Zambia, one in the east and one in the west. However, by also looking at the nuclear genes, we found evidence that small numbers of lions are moving across the “unsuitable” habitat. Including nuclear genes provided a more complex picture that tells us not only which lions were moving but also where.
Lion data can help manage wildlife overall
Human-lion conflict is a big issue in Zambia, particularly outside of protected land. If lions were moving across human-dominated areas, you'd think they'd be seen and reported. But these lions are sneaking through virtually undetected – until we look at their genes.
Because lions are large, charismatic carnivores, research on and conservation of lions also influence many other species that share their habitat.
Wildlife managers can use these findings to help with lion conservation and other wildlife management in and around Zambia. Now that we generally know where lions are moving, managers can focus on these areas to find the actual route the big cats are taking and work to maintain or even increase how many lions can move across these areas. One of the ways of doing this is by creating more protected land, like corridors, to better connect suitable habitat. | https://vicfalls.travel/sneaky-lions-in-zambia/ |
Sommer, Christoph ; Bargel, Hendrik ; Raßmann, Nadine ; Scheibel, Thomas:
Microbial repellence properties of engineered spider silk coatings prevent biofilm formation of opportunistic bacterial strains.
In: MRS Communications. Vol. 11 (2021) Issue 3 . - pp. 356-362.
ISSN 2159-6867
DOI: https://doi.org/10.1557/s43579-021-00034-y
Abstract in another language
Bacterial infections are well recognised to be one of the most important current public health problems. Inhibiting adhesion of microbes on biomaterials is one approach for preventing inflammation. Coatings made of recombinant spider silk proteins based on the consensus sequence of Araneus diadematus dragline silk fibroin 4 have previously shown microbe-repellent properties. Concerning silicone implants, it has been further shown that spider silk coatings are effective in lowering the risk of capsular fibrosis. Here, microbial repellence tests using four opportunistic infection-related strains revealed additional insights into the microbe-repellent properties of spider silk-coated implants, exemplarily shown for silicone surfaces. | https://eref.uni-bayreuth.de/id/eprint/65690/ |
The San Joaquin County Historical Museum reveals the rich heritage of the region, from the Miwok and Yokuts Indians, to the development of modern agriculture and to Charles Weber – the founder of Stockton and the first farmer in the area. Nestled on 18-acres in beautiful Micke Grove Regional Park, the atmosphere and the charm of the Museum encourages friendly family outings, picnic lunches and strolls through our many exhibits and buildings while you gain valuable knowledge of the rich history of San Joaquin County. The Museum offers educational experiences that showcase the county’s traditions of ingenuity, innovation, and invention – with emphasis on San Joaquin County’s singular contributions in agriculture – to promote community pride, continued learning, and an appreciation of regional history among County residents and visitors.
Exhibits
Pollinators: Keeping Company with Flowers is an exhibit portraying the relationship between flowers and pollinators. The exhibit is based around 70-some photographs of pollinators in wild and garden settings, primarily taken by Northern California plantsman and naturalist, John Whittlesey. These images vividly portray the intriguing lives of many kinds of pollinators. While many people recognize the European honeybee as an important pollinator, Keeping Company with Flowers primarily highlights native pollinators, which play a key role in the ecology of California.
Pollinators: Keeping Company with Flowers aims to increase awareness and appreciation of the incredible beauty and diversity of pollinators in California. Of the 4,000 known bee species in the US, 1,600 occur in California. Through close-up photographs and supplemental materials, this exhibit introduces a diversity of pollinators, the various processes of pollination, the needs of pollinators, the obstacles their populations are facing, and what can and is being done to support them. This exhibit is on view until October 16, 2022.
Participation in Museum Day is open to any tax-exempt or governmental museum or cultural venue on a voluntary basis. Smithsonian magazine encourages museum visitation, but is not responsible for and does not endorse the content of the participating museums and cultural venues, and does not subsidize museums that participate. | https://www.smithsonianmag.com/museumday/venues/museum/san-joaquin-county-historical-museum-YZW/ |
This module provides the identity relationship management capabilities of CitizenOne. The functionality allows a user to connect their profile to other roles they may play in their daily lives (e.g. business owners, employees, parents etc.); an example would be a business owner inviting an employee to act on the business owner’s behalf with a service provider, or a family member accessing services on behalf of their children.
Allowing a user to access services in context provides the ability to do more from one single profile in the way that is most relevant. The module interprets the user’s relationship to the service being accessed to ensure only appropriate accesses are granted to approved individuals. The service allows a business/family/entity to establish a hierarchy of who is approved to act on their account (e.g. Power of Attorney, authorizing an accountant to file a tax return etc.).
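As a rough illustration of the idea (not CitizenOne's actual data model or API; the entity names, roles, and helper functions below are assumptions made for this sketch), a relationship-aware access check might look something like this:

```python
# Illustrative sketch only; names and structures are assumptions,
# not the actual CitizenOne data model or API.
from dataclasses import dataclass, field

@dataclass
class Relationship:
    subject: str   # the person acting, e.g. "carol"
    entity: str    # the party acted for, e.g. "Acme Ltd" or "child:bob"
    role: str      # e.g. "owner", "accountant", "parent", "power_of_attorney"

@dataclass
class Profile:
    user_id: str
    relationships: list = field(default_factory=list)

    def personas(self):
        """Each relationship yields a persona the user can switch into."""
        return [("self", self.user_id)] + [(r.role, r.entity) for r in self.relationships]

def can_access(profile: Profile, service_entity: str, required_role: str) -> bool:
    """Grant access only if the user holds an approved role for that entity."""
    return any(r.entity == service_entity and r.role == required_role
               for r in profile.relationships)

# Example: a business owner has authorised an accountant to act for the firm.
accountant = Profile("carol", [Relationship("carol", "Acme Ltd", "accountant")])
print(can_access(accountant, "Acme Ltd", "accountant"))  # True
print(can_access(accountant, "Acme Ltd", "owner"))       # False
```

Against that background, the module's capabilities include the following.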
- Powers ‘personas’ that allow a user to log in to a single profile but be presented with services in the most relevant manner (e.g. a user accessing services in their business owner or employee role will be presented with different services than accessing services as a citizen).
- Allows for relationship understanding that helps deliver services in the most user-focused way possible and can lead to insights to improve service delivery.
- Allows any type of user to access services from government (eliminating system duplication and delivering services to all types of users from one platform).
- Allows users and government to establish relationship hierarchies that allow approved parties to interact with service providers (e.g. allow a business to authorize and manage individuals to interact with government on the business's behalf, authorizing a lawyer, an accountant, power of attorney, etc.). | https://www.vivvo.com/identity-relationship-management/
Telstra has secured a five-year agreement with the ABC for managed data, mobile and voice services.
The agreement, which extends a "long relationship" between the two firms, also makes the ABC the first Australian organisation to adopt Telstra's digital video network product.
The product is said to enable "newsrooms around the world to more efficiently send and receive content".
"The technology will allow the ABC to acquire, convert, manage and distribute a more comprehensive suite of broadcast media formats, such as HD video, audio or data within a single, highly scalable network platform," the carrier said in a statement. | https://www.itnews.com.au/news/telstra-secures-five-year-abc-deal-326366 |
The Parish Pastoral & Administrative Assistant is responsible for assisting the Parish, the Priest and the Parish Council with any and all duties requested in day-to-day management of the Holy Apostles Church.
Responsibilities
Administrative
- Handle communications for the Church including phone, mail and email
- Manage donor correspondence
- Church website content management
- Create and distribute weekly bulletin and monthly newsletter
- Update and manage church’s social media and livestream content
- Monitor and manage building and Parish vendors
- Maintain various Parish database records and provide requested reports
- Provide support for Fundraising and Building Construction projects
- Administer Parish calendar
- Create letters and spreadsheets
- Monitor church supply inventory
- Run errands as needed
- Other duties as assigned
Pastoral
- Youth Ministry
- Lead K-12 youth ministry program
- Teaching
- When required preach
- Lead small group learning
- Chant Ministry
- Teach Byzantine chant
- Lead choir for services
Knowledge, Skills and Abilities
Working knowledge and proficiency in Microsoft Office (Word, PowerPoint, Excel, Outlook) and other computer programs; interpersonal skills; knowledge of filing systems; professional etiquette and phone skills; excellent written and verbal communication skills; organizational skills; ability to be innovative and solution-oriented; ability to work under time constraints; knowledgeable in and ability to observe standard office policies and procedures.
Competencies:
Sensitivity to cultural conditions and practices and the church’s hierarchy and protocols. Competent in computer programs and telephone handling, customer service driven, and self-driven and able to work with minimal supervision.
Ability to teach and lead groups (both adults and children).
Knowledge of Byzantine Chant and Music is preferred.
Education: | https://www.orthodoxjobs.com/postings/2021/holy-apostles-cheyenne-wy-pastoral-assistant-office-manager |
At the request of the Central Texas Community Emergency Response Team Coordinator TEXSAR’s CERT trainers met with the Village of Point Venture Emergency Management team and interested citizens on Wednesday evening, May 30. Point Venture is launching their first Community Emergency Response Team training and the meeting was to provide information about the program and how TEXSAR would deliver the training and support the program. City officials will begin scheduling the classes and we will provide the training and CERT program management support. Great City leadership, hospitality and...
On Saturday morning, May 19th, TEXSAR team members headed to New Braunfels to participate in a training “play day” with Wesley Meyers of Rescue Training International (RTI). The morning started at a heavily canopied canyon preserve where TEXSAR members, along with members from TMAR and first responders from across Texas, were provided with an overview of Land Search and Rescue Concepts and Lost Person Behavior Patterns. Participants then practiced and refined their line search and echo location skills. Next was...
Beginning at 4:20 am on Wednesday, May 16, 2012, our team began preparing to assist the Killeen Police Department (KPD) with a search for a missing 8-year-old boy. The team and our friends responded quickly. Team member Matt Boettner lives in Killeen and was at the KPD Headquarters before 8 am discussing the case with the lead detective. Our Operations and Planning units began running lost person behavior and likely find area identification based on the information we had. The Logistics unit began planning for required equipment and...
TEXSAR is honored to have been selected for study by two University of Texas classes from the Red McCombs Business School. These classes, one a Team-based Communication class and the other a Business Communication class, studied the history and the future goals of our organization. The classes, each split into three to five person teams, produced reports and guidance on topics such as fundraising, grant applications, philanthropic foundations, website improvements, the use of social media and relationship building and maintenance. These students...
On Thursday evening May 03, 2012 TEXSAR team members were invited to speak to the Pflugerville Citizens on Patrol volunteer group. The folks are also CERT team members. We spoke with them about personal and family preparedness and also discussed ways that TEXSAR could assist them to train a new CERT class for the Pflugerville Police Department. It was a great opportunity to share ideas and meet the Pflugerville team members. ...
Four of our members completed the Community Emergency Response Team (CERT) Train the Trainer course this week held by the Texas Department of Emergency Management (TDEM) and FEMA. This new training capability allows TEXSAR to assist rural jurisdictions and volunteer fire departments as well as “Teen Cert” programs in area high schools to create, train and maintain CERT programs that will provide them with “force multipliers” to respond to large incidents and assist with... | https://www.texsar.org/news/page/18/ |
Workshop / Summer / Professional Development Workshop for Artists & Teachers / Approved by the Georgia Department of Education for 6 PLU's
When Art is added to the STEM educational agenda that emphasizes Science, Technology, Engineering and Math, STEM + Art becomes STEAM. The TreeSmART workshop will provide experiences for both artists and teachers to incorporate these principles into their practice using trees as a theme. An introduction to local tree ecology by UGA Costa Rica naturalists, on-site sketching and journaling, resource sharing, field trips, and several hands-on STEAM projects will be included.
Participants will complete several TreeSmART projects, integrating each of the STEAM disciplines. They will be introduced to resources on tree ring science and champion trees, and will engage in campus activities that incorporate design thinking and problem-solving. Viewing tree cross-sections, they will assess tree rings for type, age, environmental factors, and visual patterns to create radial composition drawings. From these, small groups will work collaboratively using the Engineering Design Process to plan and test their prototype for a walking labyrinth inspired by their selected tree ring. Throughout, the STEAM approach will be emphasized as an interdisciplinary path for creativity and wonder.
The program begins in the capital city of San José, making a visit to the National Museum, but will spend the majority of its time in the Monteverde Cloud Forest at the UGA Costa Rica campus. It will then conclude at Playa Sámara, a coastal jewel in Costa Rica's Guanacaste region.
Objectives: To prepare and engage teachers to include Art within the STEM approach, and to encourage an integrative conservation ethic through focusing on tree ecology.
Need: House Resolution 51, introduced in February 2013, states that adding Art into the Federal STEM program encourages economic growth and innovation.
Background: STEM to STEAM is an initiative first led by the Rhode Island School of Design to add Art and Design to the STEM educational agenda. The goal is to provide real solutions for our everyday lives where artists and designers can effectively communicate complex data and scientific information. Their tools and methods offer new models for creative problem solving and interdisciplinary partnerships in a changing world.
Guiding Principles:
Program courses are taught both in state-of-the-art classrooms and through "living classroom" excursions. All in all, the 155-acre campus, located in San Luis de Monteverde, comprises 45,000 square feet of built space, which includes computer labs, spacious cabinas, a student union, and recreational facilities. Additionally, there are nearly 3 miles of hiking trails throughout the property, along with a working organic farm and reforestation nursery.
April 15 2014
$1,800*
July 17 - July 30, 2014
Enrollment is limited to 15 qualified participants, who will earn 6 professional development units (PLU) through the Georgia Department of Education, transferable to other states with approval.
To apply, prospective participants must submit a complete UGA Costa Rica program application. A $300 deposit will be required 14 days after your official acceptance, which will be refunded if the applicant withdraws by the program's withdrawal deadline (see below).
For more information, contact June Julian. You may also reach the UGACR Office at (706) 542-6203.
Program fees are approximately $1,800*, which covers all lodging (both on- and off-campus), three meals per day, travel insurance, and all sponsored in-country transportation and entry fees. Additional costs include personal items and international airfare.
* The final program fee is subject to enrollment.
All payments will be made to the University of Georgia Foundation:
UGA Foundation Milledge Centre, Suite 100 34 S. Milledge Ave. Athens, GA 30602-5582
Application Deadline: April 15, 2014
Withdrawal deadline: April 29, 2014
After this date, students who withdraw from the program will forfeit their initial deposit and will be responsible for any program costs that have already been spent on their behalf (e.g., hotel reservations, transportation, and payments to local vendors). To withdraw, a student must submit the official withdrawal form.
Final Payment Deadline: June 15, 2014
| http://www.inaea.org/post/2014/02/12/FULL-STEAM-AHEAD-WITH-TREESMART-(WORKSHOP).aspx
Armed conflict has been in the news this year: Syria, Iraq, Ukraine-Russia, and now Israel-Palestine. But on July 15’s Glenn Beck Program, head writer Stu Burguiere suggested that on the whole, the modern world is particularly peaceful.
A caller got Beck gushing on the United States’ decision to bomb Nagasaki and Hiroshima to bring about the end of World War II and the effectiveness of nuclear deterrence. "I love that it’s probably stopped a war between the United States and Russia," Beck said. "We had a cold war instead of a hot war because we both knew, oh, if you push that button I’ll push that button." Beck put the number of lives saved by the nuclear bomb at potentially "hundreds of millions."
Burguiere chimed in, acknowledging that nuclear weapons are still plenty dangerous. Even so, he continued, "so far, I don’t think you can look at it as anything other than a positive."
"Violence around the world is dropping. There are less wars, there are less people dying in wars than there have been in quite some time. It’s odd to think about, but it’s true."
With violence making newspapers’ front page almost every day, we were surprised enough to check: Are the number of wars and the number of deaths from wars on the downswing?
The big picture
Burguiere was a little vague with his time frame, so we’re going to start broad and zoom in.
Compared to prehistoric, pre-state, and even Medieval man, Harvard psychology professor Steven Pinker argues, the world has become incredibly peaceful. "Violence has been in decline for thousands of years," Pinker said in the Wall Street Journal, "and today we may be living in the most peaceable era in the existence of our species."
"The rate of documented direct deaths from political violence (war, terrorism, genocide and militias) in the past decade is an unprecedented few hundredths of a percentage point."
In his book, The Better Angels of Our Nature: Why Violence Has Declined, Pinker attributes our species’ decline in violence to the creation of states, trade interests, the Enlightenment and the dearth of major interstate war since World War II.
It’s worth noting that even though rates of violence are down, the exponential increase in population means that the absolute number of violent crimes and deaths every year is nowhere near an all-time low.
"Think about the vast loss of life in World War II," said Robert Epstein, professor of psychology at the University of the South Pacific. Epstein compared violent deaths to AIDS insofar as both were causes of death that were strictly behavioral. "If we use the same logic with, let’s say malaria or polio or any other cause of death, that relatively speaking we’re doing better, I don’t think a lot of people would accept that. They’d want to know, is it possible to eradicate that."
Battle deaths
After World War II -- and after the nuclear bombings Beck and Burguiere were discussing -- both the absolute number of direct deaths from war and the per capita rate of direct deaths dropped. Battle deaths are difficult to pin down precisely, but the Uppsala Conflict Data Program puts the number of battle deaths in 2013 at about 30,000.
According to the Peace Research Institute Oslo, which produced the graph above, there were between 557,000 and 851,000 battle deaths in 1950, the highest since the end of World War II. Despite some peaks and troughs, there have been fewer battle deaths over the last decade than any other 10-year average since World War II.
"The number of people killed in wars is close to its lowest point since 1946," Pinker said. "There was a slight uptick in 2012, and I expect it will continue to rise a bit in 2013 and 2014 because of Syria and Iraq, but not nearly enough to bring us to the levels seen during the Chinese Civil War, Korea, Vietnam, India/Pakistan/Bangladesh, Iran-Iraq, USSR-Afghanistan, and the many now-quiescent regions of Africa."
The experts we spoke to generally understood deaths due to war to mean direct battle deaths, as applying a consistent standard to war deaths outside of battle is difficult. "Direct deaths are the only ones that can be counted with confidence," wrote Pinker in Better Angels.
By most measures, war deaths are lower now than they’ve been for most of the post-World War II period, even as burgeoning conflicts in the Middle East have driven the number of battle deaths up in 2012 and 2013.
Defining war
Burguiere said fewer people today are dying in wars, but what constitutes a war is somewhat up for debate.
The definition of war that most political scientists use is a conflict with at least 1,000 battle casualties in a year, according to Joshua Goldstein, author of Winning the War on War. The Uppsala Conflict Data Program -- which Goldstein called "the expert on battle deaths" -- refer to conflicts that kill 25-999 people as "armed conflicts," not wars.
"Interstate wars, between regular armies, have dropped to zero currently and have become much less frequent and also shorter over recent decades," Goldstein said. He acknowledged, though, that "the world has slipped backward a bit in the last 5 years, mostly because of the war in Syria."
Armed conflicts have also seen a slight dip in recent years, although they’ve generally risen in frequency since World War II, even as they’ve become less fatal.
There are other ways, though, to define war. Mark Harrison, in his paper "Frequency of Wars," took an inclusive definition of war that includes even displays of force between countries. Based on this definition, as the number of countries has increased from less than 50 in 1870 to more than 180 today, the number of wars has also steadily increased.
"One way to think about this could be the difference between fire and friction," Harrison said. "Burguiere says there are fewer fires and fire deaths, and this is true. Our work shows that there is increasing friction."
Pinker, Goldstein and Harrison all attributed the decrease in major armed conflict to factors other than nuclear deterrence, although they acknowledged it's an open debate. In Winning the War on War, Goldstein attributes the decline to the United Nations' international peacekeeping and the fall of Communism, and in Better Angels, Pinker points out that democratic countries have never fought in a war against each other.
Neither, too, have capitalist countries. Commerce is "a game in which everybody can win," Pinker said. "Though the relationship today between America and China is far from warm, we are unlikely to declare war on them or vice versa. Morality aside, they make too much of our stuff, and we owe them too much money."
Our ruling
In the course of praising nuclear deterrence, Burguiere said, "there are less wars, there are less people dying in wars than there have been in quite some time."
According to the most common definition of war -- an armed conflict with more than 1,000 battle deaths annually -- Burguiere is right, although other definitions exist. Battle deaths are also down since World War II. However, they have begun creeping up in the last couple of years, mostly because of the conflicts in Syria and Iraq. The record when it comes to "people dying in wars" generally is less clear, as is the role of nuclear deterrence.
Those caveats -- that "war" and "people dying in wars" are fraught terms, and that recent Middle Eastern conflicts are increasing the number of battle deaths -- don’t detract much from the overall point that violence is down. We rate Burguiere’s claim Mostly True. | https://www.politifact.com/punditfact/statements/2014/jul/21/stu-burguiere/fewer-wars-fewer-people-dying-wars-now-quite-some/ |
CROSS-REFERENCE TO RELATED APPLICATION
This application claims the priority benefit of Taiwan application serial no. 93140723, filed on Dec. 27, 2004. All disclosure of the Taiwan application is incorporated herein by reference.

BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to a method and an apparatus for detecting image movement, and more particularly to a method and an apparatus for detecting image movement capable of enhancing displacement and small angle detection.
2. Description of the Related Art
With the advance of computer functions and the development of internet and multimedia technology, image data are transmitted in digital rather than analog form. To meet the modern life style, computer peripherals have become slim, precisely controllable, and multi-functional. For example, a conventional mouse uses a ball to track position. Though the structure is simple, such a mouse must be cleaned frequently to remove accumulated dust, and its low positioning resolution is another disadvantage. The more recently introduced optical mouse resolves these issues, controlling position precisely and reducing errors.
The traditional optical mouse uses the block match method to determine the displacement of the frame detector. By comparing the acquired sampling window with the reference window using a mean-squared error (MSE) method or a mean absolute difference (MAD) method, the displacement of the frame detector can be obtained.
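To make the comparison step concrete, the sketch below scores every candidate window in a sampling frame against a reference window using the MAD criterion and returns the best match. It is only an illustrative, exhaustive-search reading of block matching; the array sizes, the toy data, and the search strategy are assumptions, not the implementation of any particular sensor.

```python
# Hedged sketch of MAD-based block matching; illustrative only.
import numpy as np

def mad_block_match(reference_window: np.ndarray, sampling_frame: np.ndarray):
    """Return ((y, x), mad) of the window in sampling_frame with minimum MAD."""
    n = reference_window.shape[0]
    m = sampling_frame.shape[0]
    best, best_offset = np.inf, (0, 0)
    for y in range(m - n + 1):
        for x in range(m - n + 1):
            candidate = sampling_frame[y:y + n, x:x + n]
            mad = np.mean(np.abs(candidate.astype(int) - reference_window.astype(int)))
            if mad < best:
                best, best_offset = mad, (y, x)
    return best_offset, best

# Toy 6x6 frame with a 2x2 reference window, as in the example below.
frame = np.arange(36).reshape(6, 6)
ref = frame[2:4, 2:4]
shifted = np.roll(frame, shift=(0, 1), axis=(0, 1))  # simulate a 1-pixel shift (with wraparound)
print(mad_block_match(ref, shifted))                 # ((2, 3), 0.0)
```

The offset of the winning window relative to the reference window's original position gives the displacement estimate discussed below.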
FIG. 1 is a flowchart showing a conventional block match method for determining the displacement of a frame detector. Referring to FIG. 1, the frame detector first captures a reference frame at a first location in step S10. In step S112, the detector moves to a second location and captures a sampling frame. If both the reference frame and the sampling frame are 6×6, then a smaller 2×2 reference window is captured from the reference frame, and the frame is divided into 9 non-overlapping frame windows. Then, a window having the same gray level as that of the frame window is sequentially sought in the sampling frame according to the block match method. The correlation values of the frame window and the sampling frame are calculated in step S13. In step S14, the displacement and the moving vector between the windows having the smallest values are calculated. In step S16, it is determined whether the frame window has reached the margin of the frame. If the frame window has reached the margin of the frame, the original reference frame is replaced by the sampling frame as a new reference frame in step S18. In fact, the frame detector has several noise sources. In addition to variation in the semiconductor process, power-supply noise, and signal noise, the frame detector is also affected by the temperature and brightness of the outside environment. Accordingly, only the window whose gray level is most similar to that of the previous frame can be found, and detection errors are unavoidable.
In the traditional technology, due to the restrictions of the frame detector size and the frame speed, the moving speed of the frame detector is limited. Suppose the frame captured by the frame detector is M×M, the frame window is N×N, the pixel dimension of the frame detector is PN×PN, and the frame speed is FN frames/sec. In the case where the frame window is located at the center of the acquired frame array, the displacement ranges allowed in the horizontal and the vertical directions are ±(M−N)/2, and the maximum moving speeds allowed in the horizontal and the vertical directions are PN×[±(M−N)/2]×FN. When the moving speed of the frame detector is higher than the limit allowed for the hardware, the window having a gray level similar to that of the frame window cannot be captured by the block match method. This results in incorrect determination in the block match method, and the displacement of the frame detector cannot be correctly calculated.
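As a quick numerical illustration of this hardware limit, the snippet below evaluates the bound for one set of assumed values (a 20×20 frame, a 6×6 window, a 60 µm pixel pitch, and 1500 frames per second, all example figures chosen for illustration rather than values taken from this document).

```python
# Worked example of the speed limit P_N * ((M - N) / 2) * F_N; all values are assumptions.
M, N = 20, 6     # frame is M x M pixels, window is N x N pixels
P_N = 60e-6      # pixel pitch in metres (60 micrometres, assumed)
F_N = 1500       # frame rate in frames per second (assumed)

max_shift_pixels = (M - N) / 2           # +/- 7 pixels per frame with these numbers
max_speed = P_N * max_shift_pixels * F_N
print(f"max trackable shift: ±{max_shift_pixels} px/frame")
print(f"max trackable speed: {max_speed:.3f} m/s")   # 0.630 m/s with these numbers
```

Any motion faster than this bound pushes the matching window outside the captured frame, which is exactly the failure mode described above.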
Accordingly, a method for detecting displacement of a frame detector is provided in the US Patent Application Publication 2003/0081129, entitled “METHOD FOR DETECTING MOVEMENT OF IMAGE SENSORS”, filed by Lin et al. In this patent publication, the frame detector captures a specific part of the reference frame at the first location. The frame detector then moves to the second location to capture a sampling frame. By comparing the reference frame and the sampling frame, the frame with the same specific part of the reference frame is obtained, and the displacement of the frame detector is determined. Based on the moving direction of the detector, the location of a new reference frame captured by the frame detector at the third location is determined so as to calculate the displacement of the frame detector. This patent dynamically adjusts the location of the reference window of the next reference frame to enhance the maximum displacement. Though increasing the maximum displacement, this method cannot enhance the moving angle detection.
The U.S. Pat. No. 5,729,008 provides a method of detecting frame displacement. This method uses a quadric surface as the correlation surface model, and the point of minimum value is used to calculate the integer part of the displacement. Based on the correlation values of that point and the surrounding points, the fractional part of the displacement is calculated by interpolation. When the correlation values are calculated, the difference at each point must be measured, and additional hardware is required to calculate the fractional displacement. As a result, a large amount of complex calculation is unavoidable.

Accordingly, while keeping the hardware structure simple and reducing the amount of calculation, the conventional methods of detecting image movement still leave room for improvement.

SUMMARY OF THE INVENTION
Accordingly, the present invention is directed to a method of detecting image movement. In the same reference frame, different reference windows can be dynamically captured in the direction reverse to the direction of the moving vector based on the previously detected frame. Accordingly, the detecting capability of single-direction maximum displacement can be enhanced. In addition, the capability of detecting minor displacement in various directions can also be improved so that the minor angle variation of the frame can be detected. When the detected frame displacement is larger than the maximum detectable displacement, the moving vector is compensated according to the principle of inertia.
The present invention is also directed to an apparatus for detecting image movement. For keeping the hardware structure simple and reducing the calculation, the present invention enhances the capabilities of detecting the maximum displacement and the minor displacement. Accordingly, the sensitivity of detecting the frame angle variation doubles that of the conventional technology.
In an embodiment of the present invention, the steps of detecting the image movement are described below. First, a first frame is captured as the reference frame, and a second frame serves as a first sampling frame. The center of the reference frame is a reference window. The windows in the sampling frame are sampled. According to the relative locations of the reference frame and the sampling windows, the moving speed and the moving vector are calculated. When the detected image movement displacement is higher than the maximum detectable displacement, the moving vector is compensated according to the principle of inertia. Then, the present invention determines whether both the reference window and the sampling window have reached the margin of the frame. If not, a new window is captured in the reverse direction of the moving vector based on the moving speed and the moving vector. If so, the reference frame is renewed. Then, the steps described above are repeated until the reference window and the sampling window have reached the margin of the frame. The sampling frame in which the sampling window has reached the margin of the frame would replace the original reference frame to become a new reference frame. According to the new reference frame, a new reference window is captured and the steps described above are repeated.
Before the sampling window reaches the margin of the frame, in the same reference frame, different reference windows are dynamically captured in the reverse direction of the image movement vector according to the previously detected frame. Accordingly, the capability of detecting the maximum displacement is enhanced. In addition, the reference frame is not renewed until both the reference window and the sampling window have reached the margin of the frame. Accordingly, the capability of detecting minor displacement in horizontal and vertical directions is improved. The minor frame angle variation can be obtained by calculating the displacement in different directions. For example, in the 8×8 frame with 2×2 reference windows, the slope of the minimum moving angle detectable in the present invention is 0.5/6, i.e., about 4.8 degrees.
In addition, the present invention also includes calculating the variation of acceleration. When the acceleration is a stable and reasonable value, the next displacement can be predicted. When the image movement speed is higher than the reasonably detectable displacement, the detection errors will result in drastic variation of the acceleration, and the acceleration value cannot be stable. Then, the previously predicted moving vector is used as the present movement, and the detected result will be used as the moving vector after the acceleration is stable.
The present invention also provides an apparatus for detecting image movement. In the present embodiment, the apparatus can be an optical mouse. The apparatus comprises an optical-sensing unit, an auxiliary calculating unit, a micro-processing unit, an analog/digital conversion unit, and a signal filter unit. The optical-sensing unit is coupled to the analog/digital conversion unit. The analog/digital conversion unit is coupled to the signal filter unit. The optical-sensing unit accesses the frame and outputs a digital signal through the analog/digital conversion unit. The noise of the digital signal is removed by the signal filter unit, and the digital signal is then transmitted to the micro-processing unit. The micro-processing unit is coupled to the auxiliary calculating unit. The micro-processing unit accesses a reference frame through the optical-sensing unit, and captures a reference window from the reference frame. The micro-processing unit accesses another reference frame. The micro-processing unit searches a sampling window which matches the reference window, and calculates a moving vector based on the reference window in the sampling frame. The micro-processing unit calculates the variation of the acceleration. When the acceleration value is stable and reasonable, the displacement is recoded. When the image movement speed is higher than a detectable displacement, the detection errors will result in drastic variation of the acceleration. As a result, the acceleration cannot be in a reasonable status. The micro-processing unit will then use the previously predicted displacement as the present displacement, and uses the detected result as the moving vector after the acceleration is stable. If the reference window has not reached a margin of the reference frame, a new window in the reference frame is captured to replace the reference window. If the reference window has reached a margin of the reference frame, and the sampling window has reached a margin of the sampling frame, the sampling frame replaces the reference frame as a new reference frame. Then, the reference window is captured in the new reference frame.
Accordingly, when detecting image movement, the present invention does not require additional hardware and can reduce the calculation, while enhancing the capability of detecting the maximum displacement. In addition, the present invention can enhance the capability of detecting the minor displacement to calculate the image movement angle variation. Moreover, when the detected frame displacement is more than the maximum detectable displacement, the moving vector is compensated according to the principle of inertia.
The above and other features of the present invention will be better understood from the following detailed description of the preferred embodiments of the invention, provided in conjunction with the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a flowchart showing a conventional block match method to determine displacement of a frame detector.

FIG. 2 is a flowchart showing a method of an embodiment of the present invention.

FIGS. 3A-3F are drawings showing steps of detecting image movement according to an embodiment with an 8×8 frame of the present invention, wherein the left column represents reference frames, and the right column represents sampling frames.

FIG. 4 is a drawing showing a functional block of the apparatus for detecting image movement according to an embodiment of the present invention.

FIG. 5A is a drawing showing a circular track formed by a conventional apparatus for detecting image movement.

FIG. 5B is a drawing showing a circular track formed by an apparatus for detecting image movement according to an embodiment of the present invention.
DESCRIPTION OF SOME EMBODIMENTS

FIG. 2 is a flowchart showing a method of an embodiment of the present invention. FIGS. 3A-3F are drawings showing steps of detecting image movement according to an embodiment with an 8×8 frame of the present invention, wherein the left column represents reference frames, and the right column represents sampling frames. Referring to FIGS. 2 and 3A-3F, the value of the black squares is 0, and the value of the white squares is 8. The present invention can be applied to an apparatus which can detect image movement, such as an optical mouse. In this embodiment, the mouse moves to the left and then upward. The moving angle is about 7 degrees (a slope of ⅛), and the frame moves to the bottom right. In this embodiment of the present invention, the steps of detecting image movement are described below. In step S20, a first frame is captured to serve as the reference frame. The center of the reference frame is captured as the reference window. In step S22, a second frame is captured as the sampling frame. In FIGS. 3A and 3B, for the two squares in the window of the reference frame, the value of the top square is 8, and the value of the bottom square is 0. When the frame moves horizontally and then slightly downward, the moving angle is about 7 degrees, or ⅛; that is, when the sampling frame moves right by one square, the frame moves downward by ⅛ of a square. The difference of gray levels between the corresponding windows will be 1. Then a window having a size similar to that of the reference window, and whose gray level is most similar to that of the reference window, is sought in the sampling frame so as to form the sampling window. Referring to FIG. 3C, when the frame moves horizontally and slightly downward, the moving speed and the moving vector are calculated according to the relative locations of the reference frame and the sampling window. Referring to FIG. 2, in step S24 it is then determined whether the reference window and the sampling window have reached the margin of the frame. If the reference window has not reached the margin of the frame, steps S25 and S26 are performed. According to the moving speed and the moving vector, a new window is captured to replace the original reference window as a new reference window, in the direction reverse to the moving vector, i.e., in the left direction in FIG. 3B.
A set of variables is used to record the detected displacement and to compensate the moving frame. Referring to step S261 in FIG. 2, when the moving speed is higher than a predetermined value, the displacement cannot be measured and the acceleration of the image movement becomes unreasonable. The predicted displacement is then output as the present displacement in step S265. It is determined whether to reset the count of acceleration values within the allowed range, so as to determine whether the acceleration is in a stable status. After the image movement speed returns to the detectable range, it is determined in step S262 whether the count of acceleration values within the allowed range reaches a predetermined number. For example, if the detected acceleration is 0, or smaller than 1 pixel/sec², for more than three times, the acceleration of the image movement is in a stable status; the next displacement is then predicted and the present displacement is output in steps S263 and S264. If the detected acceleration is not 0 or smaller than 1 pixel/sec² for more than three times, only the present displacement is output in step S264. In other words, the variation of the acceleration is used to determine whether the maximum moving speed is exceeded. If so, the previously predicted displacement is output according to the principle of inertia. If not, the present displacement is output.
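A compact sketch of this compensation logic is given below. The class layout, variable names, and the exact stability test are assumptions made for illustration; only the overall idea (reuse the last predicted displacement while the measured acceleration is implausible, and trust the measurement again once the acceleration has stayed small for a few consecutive frames) follows the description above.

```python
# Hedged sketch of the inertia-based compensation described above; structure is assumed.
class MotionPredictor:
    def __init__(self, stable_threshold=1.0, stable_count_needed=3):
        self.prev_velocity = (0.0, 0.0)   # last trusted displacement per frame
        self.predicted = (0.0, 0.0)       # displacement to reuse when tracking fails
        self.stable_count = 0
        self.stable_threshold = stable_threshold
        self.stable_count_needed = stable_count_needed

    def update(self, measured, valid):
        """measured: (dx, dy) from block matching; valid: False when the
        displacement exceeded the detectable range (the match failed)."""
        if not valid:
            # Principle of inertia: reuse the previously predicted displacement.
            self.stable_count = 0
            return self.predicted

        ax = measured[0] - self.prev_velocity[0]
        ay = measured[1] - self.prev_velocity[1]
        if abs(ax) < self.stable_threshold and abs(ay) < self.stable_threshold:
            self.stable_count += 1
        else:
            self.stable_count = 0

        self.prev_velocity = measured
        if self.stable_count >= self.stable_count_needed:
            # Acceleration has settled, so the next displacement can be predicted.
            self.predicted = measured
        return measured
```

In use, each frame's block-match result (or a failure flag) would be fed to `update`, and its return value reported as the cursor displacement.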
Referring to FIGS. 3D and 3E, the reference window reaches the margin of the frame. Moving right following the frame, the sampling window gradually shifts toward the margin of the frame. When the reference window and the sampling window both reach the margin of the frame, as shown in step S27 in FIG. 2 and in FIG. 3F, the sampling frame in which the sampling window has reached the margin replaces the original frame to serve as a new reference frame, as shown in step S28 in FIG. 2. A reference window is captured from the new frame, and step S22 is then repeated.
In order to point out the feature of detecting a small angle, the mean absolute difference (MAD) method is used as the method of detecting the movement, as shown in FIGS. 3D-3F. The values of the two top squares of the compared windows give the MAD value in the vertical direction without displacement, i.e., dy=0, while the values of the two bottom squares give the MAD value in the vertical direction with a downward displacement, i.e., dy=−1. In FIG. 3D, the MAD value of dy=0 is 4, and the MAD value of dy=−1 is 8. Because the MAD value of dy=0 is less than that of dy=−1, the vertical direction is selected without displacement. In FIG. 3E, the MAD value of dy=0 is 5, and the MAD value of dy=−1 is 6. Because the MAD value of dy=0 is less than that of dy=−1, the vertical direction is again selected without displacement. In FIG. 3F, the MAD value of dy=0 is 6, and the MAD value of dy=−1 is 4. Because the MAD value of dy=0 is larger than that of dy=−1, the vertical direction is selected with a downward displacement. In FIG. 3F, the system has thus detected the downward displacement. Comparatively, the conventional method automatically renews or resets the reference frame, and thus cannot detect the downward displacement.
In this embodiment, the frame moves in a horizontal and then slightly downward direction. Accordingly, the moving speed and the moving vector can be calculated according to the relative locations of the reference frame and the sampling window. By determining whether the reference window and the sampling window have reached the margin of the frame, the present invention can determine whether to replace the original reference frame with the sampling frame, and to use the new reference frame to capture another reference window. The present invention, however, is not limited thereto. One of ordinary skill in the art would know that the reference frame is renewed when the reference window and the sampling window have reached the margin of the frame. Accordingly, the present invention allows the frame to move vertically or in any direction so as to enhance the capability of detecting minor displacement. By calculating the displacement in any direction, the present invention can calculate the minor angle variation of the image movement.
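Taken together, the steps above form a small tracking loop: match, accumulate the motion, slide the reference window against the motion while it still has room, and renew the reference frame only once both windows have reached a margin. The sketch below is one simplified reading of that loop; it reuses the `mad_block_match` helper from the earlier sketch, and the margin test, recentering policy, and window size are illustrative assumptions rather than the literal claimed procedure.

```python
# Simplified sketch of the dynamic reference-window strategy; not the literal claims.
import numpy as np

def at_margin(top_left, win, size):
    y, x = top_left
    return y == 0 or x == 0 or y + win == size or x + win == size

def track(frames, win=2):
    """frames: iterable of equally sized square greyscale arrays."""
    frames = iter(frames)
    ref_frame = next(frames)
    size = ref_frame.shape[0]
    ref_pos = ((size - win) // 2, (size - win) // 2)   # start at the centre
    total = np.zeros(2, dtype=int)

    for sample in frames:
        ref_win = ref_frame[ref_pos[0]:ref_pos[0] + win, ref_pos[1]:ref_pos[1] + win]
        (sy, sx), _ = mad_block_match(ref_win, sample)   # helper from the earlier sketch
        motion = np.array([sy - ref_pos[0], sx - ref_pos[1]])
        total += motion

        if at_margin(ref_pos, win, size) and at_margin((sy, sx), win, size):
            # Both windows hit a margin: renew the reference frame and recentre.
            ref_frame = sample
            ref_pos = ((size - win) // 2, (size - win) // 2)
        elif not at_margin(ref_pos, win, size):
            # Slide the reference window against the motion to extend the range.
            new_y = int(np.clip(ref_pos[0] - motion[0], 0, size - win))
            new_x = int(np.clip(ref_pos[1] - motion[1], 0, size - win))
            ref_pos = (new_y, new_x)
    return total
```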
Referring to FIGS. 3A-3F and FIG. 4, the present invention provides an apparatus for detecting image movement. The connection of the internal units, the steps of detecting image movement, and the related operations are as follows. The apparatus comprises an optical-sensing unit, an auxiliary calculating unit, a micro-processing unit, an analog/digital signal conversion unit, and a signal filter unit. The optical-sensing unit is coupled to the analog/digital signal conversion unit, which is coupled to the signal filter unit. When the optical-sensing unit accesses the frame, a digital signal is output through the analog/digital signal conversion unit. The noise of the digital signal is removed by the signal filter unit, and the digital signal is then transmitted to the micro-processing unit.
The steps of detecting image movement and the related operation of the apparatus of the present invention are described in the following. The micro-processing unit captures the first frame as the reference frame, and the second frame as the sampling frame, through the optical-sensing unit. The micro-processing unit also captures the center of the reference frame as the reference window. With the operation of the auxiliary calculating unit, the micro-processing unit searches for the window having a size similar to that of the reference window and whose gray level is most similar to that of the reference window, to serve as the sampling window. Referring to FIGS. 3A and 3B, when the frame moves horizontally and slightly downward, or from the center to the right as in FIG. 3A, the moving speed and the moving vector are calculated based on the reference frame and the sampling window. As stated in the previous embodiment, the micro-processing unit compensates the moving frame: the variation of the acceleration is used to determine whether the maximum moving speed is exceeded. If so, the predicted displacement is output according to the principle of inertia; if not, the detected displacement is output. Referring to FIG. 3C, the micro-processing unit determines whether the reference window and the sampling window have reached the margin of the frame. If the reference window has not reached the margin of the frame, a new window is captured within the same reference frame to replace the original reference window, as a new reference window, in the direction reverse to the frame movement, or from the center to the left as in FIG. 3B. Referring to FIGS. 3D and 3E, the reference window has reached the margin of the frame. Moving right following the frame, the sampling window gradually shifts to the margin of the frame. If the reference window and the sampling window have both reached the margin of the frame, as in FIG. 3F, the micro-processing unit replaces the original reference frame with the sampling frame in which the sampling window has reached the margin, to serve as a new reference frame. The micro-processing unit then captures a reference window from the new reference frame.
FIG. 5A is a drawing showing a circular track formed by a conventional apparatus for detecting image movement. FIG. 5B is a drawing showing a circular track formed by an apparatus for detecting image movement according to an embodiment of the present invention. The apparatus for detecting image movement of the present invention can be, for example, an optical mouse. Referring to FIGS. 5A and 5B, a comparison of the corresponding sections of FIGS. 5A and 5B shows that the circular track formed by the apparatus of the present invention is smoother.
In the conventional technology, the reference frame is renewed when the sampling window reaches the margin of the frame. That is, when the sampling frame is detected to have reached the margin in the X(Y) direction, movement of the sampling frame in the Y(X) direction has not yet been detected. This means the conventional method cannot detect the minor angle displacement. As a result, the smoothness of the circular track is not desirable. If the size of the array of the apparatus for detecting image movement is N, and the size of the reference window is M, the maximum displacement of the apparatus for detecting image movement is ±(N−M)/2, and the minimum detectable moving angle is tan⁻¹(1/(N−M)). In the embodiment with the 8×8 frame and 2×2 reference windows, the minimum detectable moving angle is tan⁻¹(1/(N−M)) = tan⁻¹(1/(8−2)), which corresponds to a slope of about ⅙, or 9.5 degrees. Therefore, the conventional method cannot detect movement whose slope is smaller than 9.5 degrees. The moving angles of 0±9.5, 90±9.5, 180±9.5, and 270±9.5 degrees detected by the conventional method would become the angles 0, 90, 180, and 270 degrees, respectively.
In the present invention, before the reference window reaches the margin of the frame, different reference windows are dynamically captured in the same reference frame, in the direction reverse to the direction of the image movement, so as to enhance the capability of detecting the maximum horizontal displacement. In addition, the present invention does not renew the reference frame until the reference window and the sampling window have reached the margin of the frame. Accordingly, the present invention also enhances the capability of detecting minor displacement in the vertical direction. With the horizontal and vertical displacement, the minor angle variations of the image movement can be calculated. In the embodiment with the 8×8 frame and 2×2 reference windows as an example, the minimum detectable moving angle in the present invention is tan⁻¹(0.5/(N−M)), which represents a slope of about 0.5/6, or 4.8 degrees. Compared with the conventional technology, the sensitivity of detecting angle variation of the image movement in the present invention is doubled.
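The two angle figures quoted above follow directly from the arctangent expressions. A quick numerical check (a sketch only; N and M are simply the 8×8 frame size and 2×2 window size of the example):

```python
import math

N, M = 8, 2  # frame size and reference-window size from the example

conventional = math.degrees(math.atan(1.0 / (N - M)))  # minimum detectable angle, conventional method
proposed = math.degrees(math.atan(0.5 / (N - M)))      # minimum detectable angle, proposed method

print(round(conventional, 1), round(proposed, 1))      # 9.5 4.8
```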
As described above, the present invention is not limited to these embodiments. Within the scope of the present invention, one of ordinary skill in the art would know that the reference frame is renewed when the reference window and the sampling window have reached the margin of the frame. When the frame moves vertically or in any direction, the present invention can detect the minor displacement in any direction. With the displacement in any direction, the minor angle variation of the image movement can be calculated.
Accordingly, the present invention has at least the following advantages:
1. Besides keeping the hardware structure simple and reducing the amount of calculation, the present invention also enhances the capability of detecting the maximum displacement in any direction.
2. By enhancing the capability of detecting minor displacement, the present invention doubles the sensitivity of detecting angle variation of the image movement compared with the conventional technology.
3. When the moving speed exceeds the detectable range, the present invention compensates for image movement with unreasonable acceleration by using the predicted moving vector, obtaining better stability according to the principle of inertia.
Although the present invention has been described in terms of exemplary embodiments, it is not limited thereto. Rather, the appended claims should be construed broadly to include other variants and embodiments of the invention which may be made by those skilled in the field of this art without departing from the scope and range of equivalents of the invention. |
The end of analytical philosophy?
Philosophical analysis is a method of inquiry in which one seeks to assess complex systems of thought by ‘analysing’ them into simpler elements whose relationships are thereby brought into focus. This method has a long history, but became especially prominent at the start of the twentieth century and, by becoming integrated into Russell’s development of logical theory, acquired a greater degree of sophistication than before. The logical positivists developed the method further during the 1930s and, in the context of their anti-metaphysical programme, held that analysis was the only legitimate philosophical inquiry. Thus for them philosophy could only be ‘analytical philosophy’.
After 1945 those philosophers who wanted to expand philosophical inquiries beyond the limits prescribed by the positivists extended the understanding of analysis to include accounts of the general structures of language and thought without the earlier commitment to the identification of ‘simple’ elements of thought. Hence there developed a more relaxed conception of ‘linguistic analysis’ and the understanding of ‘analytical philosophy’ was modified in such a way that a critical concern with language and meaning was taken to be central to it, leading, indeed, to a retrospective re-evaluation of the role of Frege as a founder of analytical philosophy. At the same time, however, Quine propounded influential arguments which suggest that methods of analysis can have no deep significance because there is no determinate structure to systems of thought or language for the analytical philosopher to analyse and assess. Hence some contemporary philosophers proclaim that we have now reached ‘the end of analytical philosophy’. But others, who find Quine’s arguments unpersuasive, hold that analytical philosophy has virtues quite sufficient to ensure it a role as a central philosophical method for the foreseeable future.
Baldwin, Thomas. Analytical philosophy, 1998, doi:10.4324/9780415249126-DD091-1. Routledge Encyclopedia of Philosophy, Taylor and Francis, https://www.rep.routledge.com/articles/thematic/analytical-philosophy/v-1. | https://www.rep.routledge.com/articles/thematic/analytical-philosophy/v-1 |
Nursing Satisfaction Impacts Patient Outcomes, Mortality
For a long time, it has been the contention of many in healthcare that there is a correlation between nurse satisfaction and patient outcomes. It makes sense when you think about it – nursing professionals who work longer shifts in poor conditions and have high numbers of patients to care for are more likely to make an error than those who have more resources and adequate staffing to prevent burnout.
Moreover, there are several bodies of research on the topic that support the notion of "happy nurses, healthier patients." Take a look at some of the key findings that make the case for hospitals and medical institutions to put programs in place to improve the quality of their employee experiences.
Long Shift Work Has Negative Effects
At short-staffed institutions, it’s quite common for nurses to be asked to stay on for a double-shift, but it could be to the detriment of quality patient care. One study sponsored by the National Institute of Nursing Research found that as the proportion of hospital nurses working shifts of more than 13 hours increased, patients’ dissatisfaction with care increased.
In addition, nurses working beyond 10 hours were found to be 2.5 times more likely than nurses working shorter hours to report job dissatisfaction and symptoms of burnout. Nurses often do not practice self-care, resulting in high stress. In many situations, this leads to high job turnover for the healthcare institution.
Nursing Staff Sizes Affect Mortality Rates
The number of patients per nurse can have a huge effect on not only the work environment but on how well patients fare. It can even be a matter of life or death, according to research out of the University of Pennsylvania.
The researchers examined statistics from 550 hospitals in California, New Jersey, Pennsylvania, and Florida; this included 25 Kaiser Permanente hospitals in California and 56 Magnet hospitals.
Because Magnet hospitals are typically very well-run healthcare systems identified as providing good workplaces for nurses (as per the American Nurses Credentialing Center), there were some clear differences between such institutions and other hospitals.
Part of the research also included nurse surveys, focused on work environment, their level of education, job satisfaction, and the average number of patients they attend to daily. Lastly, the researchers examined the mortality data of each hospital.
The result? Nursing differences were shown to play a large role in mortality rates.
Job Satisfaction Equals Better Patient Care
Anecdotally, it might seem obvious that people who enjoy their work perform better. In the nursing field, that is especially true, and there’s research to support it.
Using data from the American Nurses Association’s National Database of Nursing Quality Indicators, researchers discovered that a 25 percent increase in nurse job enjoyment over a two-year span was linked with an overall quality of care increase between 5 and 20 percent.
Better Work Environment, Fewer Errors
One case study found a direct correlation between employee satisfaction and patient care. When a large military hospital system, with 70 hospitals, had problems retaining its nursing staff, it turned to McKinsey, a global management consulting firm, for help. The firm helped its client discover that nurses were dissatisfied with their working conditions, and felt like there were no career advancement opportunities available.
They were tasked with implementing a number of strategies to help turn things around. With new programs in place, the results were not only great from an employer standpoint but for the patients as well.
There was an increase in patient communication and improvements within a number of quality-of-care indicators. Productivity increased, with the percentage of pain reassessments completed going from 90 percent to 99 percent. The best result, though, was that nurse medication-administration errors decreased by 1.2 per full-time employee per 100 patient days.
The Takeaway
While none of these results may seem particularly surprising, they do serve to confirm what most nursing professionals already know – the more support they have from their employers, the better their performance and the higher the quality of care will be.
Hospitals should realize that investing in nursing satisfaction and having adequate staff benefits the hospital, the nurses, and the patients. Making sure their nursing professionals do not reach burnout level by offering opportunities for self-care and keeping schedules manageable is a good start. Providing them with tools, technology, and support staff to help them do their jobs more effectively is equally as important. And finally, as with any career, listening to nurses who wish to advance or use their skills in different ways and providing opportunities for them to do so will help employers retain talent.
In the end, it’s a win-win for everyone.
Next Up: Pros & Cons Of Nursing Degrees: LPN, ADN, BSN, MSN.
Nurse.org's Popular Guides and Resources
Student Loan Forgiveness for Nurses?
Read about the top student loan forgiveness programs for nurses and find out if you qualify.
15 Highest Paying Nursing Jobs in 2021
You know all nursing jobs aren’t created (or paid!) equally, but do you know which nurses are making the most money in 2020?
Earn CEUs Online!
Need to renew your license soon? Check out our favorite free online CEU courses.
2021’s Best Nursing Schools
We’ve looked at programs nationwide and determined these are our top nursing schools. | https://nurse.org/articles/nursing-satisfaction-patient-results/ |
Who is MST?
Michelle Scott Tucker is the author of Elizabeth Macarthur: A Life at the Edge of the World – a fascinating biography of the woman who established the Australian wool industry (although her husband received all the credit).
Elizabeth Macarthur was shortlisted for the 2019 NSW State Library Ashurst Business Literature Prize and the 2019 CHASS Australia Book Prize.
Michelle is a freelance writer and consultant, having had a successful career in government, business and the arts – including a recent stint as Executive Director of the Stella Prize, Australia’s pre-eminent literary prize for women writers. She has served as Vice Chair of the Writers Victoria board and currently serves on the board of the Macedon Ranges Literary Association.
Michelle is a graduate of the Australian Institute of Company Directors and has completed a Master of Business Administration and a Bachelor of Arts (English and History). She consults to government, business and not-for-profits to deliver reviews and organisational improvement. In addition to her own writing, Michelle also delivers ghostwriting, corporate histories, and business book development services.
Passionate about Australian literature, history and storytelling, Michelle lives in regional Victoria with her family and too many pets.
A Note from MST
I have a family and a demanding day job but for twelve years – in my spare (!) time – I worked on a biography of Elizabeth Macarthur.
The first edition (pictured right) was published by Text, April 2018. The second edition (left) in April 2019. You can read more about my writing process here. Or find out what drew me to Elizabeth Macarthur here in a popular post called ‘Never just a farmer’s wife.’
Writing the book, finding a publisher and launching Elizabeth Macarthur’s biography into the world has been a wonderful adventure. I’ve met some wonderful people, learned interesting things and travelled to some amazing places. This website – and in particular my blog – is my attempt to share the wonder, interest and amazement.
MST’s so-called literary career
In addition to delivering LOTS of talks and interviews, so far the highlights (and adventures) include:
2020 ‘Writers Residencies – Pros and Cons‘ published by Meanjin.
2019 Elizabeth Macarthur shortlisted for the CHASS Australia Prize for a book (Council for the Humanities, Arts and Social Sciences)
2019 Elizabeth Macarthur shortlisted for the State Library NSW Ashurst Business Literature Prize. More details here.
2018 Executive Director of The Stella Prize, August 2018-July 2019.
2018 ‘Invisible Women‘, an article published by Inside Story Magazine, about the gaps in the story of colonial Australia.
2018 For all my author talks and events, please refer to News & Events. Past activities are listed at the bottom of the News & Events page.
2018 Elizabeth Macarthur: A life at the edge of the world was published in April 2018. Huzzah!
2017 Was one of the judges for the Grace Marion Wilson Emerging Writers Competition. I wrote about my experience here.
2017 Elected to the Writers Victoria Committee of Management.
2016 Publication deal with Text Publishing. More details here.
2016 Signed with a literary agent, Jacinta di Mase Management.
2015 Selected for the non-fiction HARDCOPY Professional Development Program (a program hosted by the ACT Writers Centre and assisted by the Australian Government through the Australia Council). You can find a compilation of all my posts about HARDCOPY here.
2014 Recorded a short piece for ABC Radio National’s 360 Documentary program called Bedside Manners.
2012 Shortlisted: Hazel Rowley Literary Fellowship (for work-in-progress biography of Elizabeth Macarthur)
2012 Varuna Fellowship for Writing Retreats.
What else?
I’m available for speaking engagements, book signings, interviews, and festivals of all kinds. Please contact my publicist – [email protected]
All images and text used in this blog are my own unless otherwise stated. Please don’t use any of my photos or text without crediting me and linking back to this site.
Contact Me
If you’d like to contact me directly (for example to tell me about dead links or to suggest more effective hair styling techniques) then please use this contact form. | https://michellescotttucker.com/about/ |
Each project comes with 2-5 hours of micro-videos explaining the solution.
Get access to 50+ solved projects with iPython notebooks and datasets.
Add project experience to your Linkedin/Github profiles.
Sequence classification is frequently required when developing applications, and standard approaches often fall short on accuracy. Hence, as an example, let's take the IMDB movie review dataset and create some benchmarks using an RNN, an RNN with LSTM and dropout, an RNN with CNN, and an RNN with CNN plus dropout to build a composite sequence classification workflow. We can then compare the accuracy of the models.
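A minimal sketch of one such benchmark — an RNN (LSTM) combined with a CNN front end and dropout — might look as follows, assuming TensorFlow/Keras is available; the vocabulary size, sequence length, layer widths and training settings are illustrative choices rather than prescribed values.

```python
from tensorflow.keras.datasets import imdb
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, Conv1D, MaxPooling1D, Dropout, LSTM, Dense

vocab_size, max_len = 5000, 500
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=vocab_size)
x_train = pad_sequences(x_train, maxlen=max_len)
x_test = pad_sequences(x_test, maxlen=max_len)

model = Sequential([
    Embedding(vocab_size, 32),                          # word embeddings
    Conv1D(32, 3, padding="same", activation="relu"),   # CNN feature extractor
    MaxPooling1D(pool_size=2),
    Dropout(0.2),                                       # dropout rate
    LSTM(100),                                          # recurrent layer
    Dense(1, activation="sigmoid"),                     # positive/negative review
])
model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, batch_size=64, validation_data=(x_test, y_test))
```

Dropping the Conv1D/MaxPooling1D pair, the Dropout layer, or both yields the other benchmark variants mentioned above, so their test accuracies can be compared on the same data split.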
The goal of this TensorFlow project is to identify handwritten digits using a model trained on the MNIST dataset. The MNIST dataset contains a large number of handwritten digits together with the corresponding labels (the correct digit for each image).
In this data science project, we are going to work on video recognition data and on robust image recognition using MNIST data.
In this deep learning project, we are going to predict which team will win the upcoming 2017 NCAA basketball tournament based on historical data.
Night Light is a mixed parkour map by pieceofcheese. It rewards 12 FFA Points upon completion, and can be joined directly using “/c join ntl”.
General
Gameplay
The map starts with a very long endurance section in which the player must parkour across lamps in a long, twisted hallway. It then throws the player into a dark, urban setting in which the player must complete a long series of jumps, with a small maze in the middle. Skipping jumps isn't possible because barriers prevent it. Finally, the player must find their way around a maze / parkour fusion to various emerald platforms until they reach the end.
The map revolves around the gimmick of redstone lamps. Every redstone lamp platform has a pressure plate on top of it, lighting the lamp, and therefore the surroundings, when stood on. This may cause a bit of lag on slower computers.
Aesthetics
The beginning section uses many iron bars in a room made out of various types of stone. The platforms are redstone lamps atop nether brick(?) fences. The urban area consists of floating redstone lamp platforms leading around buildings made out of varying materials. Glowstone surrounded by glass is present here as well, adding a bit of light to the map. The final section is much less detailed, consisting of a large coal-block room containing floating redstone lamps and the occasional lapis / emerald platform.
Challenges
This map is featured in 4 challenges: Cheesefest, 2016 Assortment, The Path of Heroes, and Hexa.
Trivia
- Falling during the last section drops the player into the void, where they will be able to see another map by pieceofcheese, Canvas, by looking up. | https://wiki.minr.org/Night_Light |
Summary of Week 2
This Week, we have seen that the digital revolution, along with the process of globalisation, challenges traditional cultural institutions conceived as national instruments. Cultural policies today also have to do with having a voice in the global landscape, which is saturated with information.
We also presented crowdfunding, a new and innovative tool that offers promising perspectives for culture, especially because it can be combined with other sources of finance such as the system of matchfunding. In addition, we saw that crowdfunding also offers artists and cultural producers the opportunity to engage with their community and develop projects that meet their values and demands.
Finally, we discussed the effects of digital technologies on the art world. As we now know, given the importance of real-life interaction in the aesthetic experience and in the construction of the symbolic value of art, there are several limitations on the adoption of such technologies in the art world. In addition, the main players of the art market have resisted what they saw as an evolution that would reduce the value of art and harm their control over information. Nevertheless, the digital revolution seems to be making its way into the art world as well, with the rise of promising initiatives like Artsy or the Google Art Project.
Next Week, we will take a look at the threats that emerge in the digital era.
Don’t hesitate to leave us your feedback! | https://www.futurelearn.com/courses/culture-in-digital-age/0/steps/62489 |
Whether for the EU fuel and electricity market or for the food industry and oleochemistry: Sustainable products are in demand and compliance with certain standards is more necessary than ever, but demand also increases the expectations of customers and legislators that a product has to meet – along the entire supply chain.
The EU Directive 2009/28/EC on the "Promotion of the use of energy from renewable sources" links energy from biomass to the following sustainability criteria:
- Protection of biologically valuable areas
- Compliance with greenhouse gas limits
- Traceability
Companies along the production and value chain are obliged to meet this requirement and to confirm adherence to it.
An ISCC or REDcert certification or ISCC PLUS or REDcert 2 certification gives you easy access to all EU fuel and electricity markets and the respective subsidies.
Certification helps you to prove credibly that sustainability criteria are met and thus secures the corresponding market access. We are happy to support you along the way: with current news and developments, delivered quickly and with a customer focus, we are at your side to help you achieve your goals.
# Hifz-ur-Rehman
Hifz-ur-Rehman (died 1970) was a Pakistani archaeologist, historian and linguist.
## Career
Hifz-ur-Rehman donated his life-long collection of nearly 1,500 antiquities to the Lahore Museum, including three Quranic manuscripts of historical significance written by Imam Hussain (grandson of the Islamic prophet Muhammad), many decrees, Chinese porcelain, rare coins, glass objects, miniatures, ivory objects and specimens of calligraphy and Islamic art objects.
## Awards and recognition
Hifz-ur-Rehman died on December 31, 1970. Forty years later, on 23 March 2011, President Asif Ali Zardari posthumously honoured him with the "Sitara-i-Imtiaz" (The Star of Excellence) Award for his services in the fields of archaeology, history and linguistic research. | https://en.wikipedia.org/wiki/Hifz-ur-Rehman |
Term papers are utilised to assess the academic performance by writing an essay which answers the question posed in the term paper. A term paper is typically written by students on an academic term, usually for a major class, for the purpose of earning a grade. Merriam Webster defines it as a major written assignment within an educational course representative of a pupil’s academic accomplishment during a specific term. It is often required that students complete a minimum number of term papers prior to taking the final exam.
Plagiarism is the unauthorized copying or adapting of another individual’s work without getting express permission from the original writer. Students are cautioned to be careful when writing term papers since plagiarism might be detected through a number of methods. The most obvious method of discovering plagiarism is checking for word repetition; however, sometimes cues of plagiarism are tough to detect. What’s more, it is not merely the use of similar words that reveals plagiarism; sometimes plagiarism is also demonstrated by a similarity of structure and language. In order to prevent being found out, students should consult with a writing adviser to ascertain which specific writing format and style will present fewer problems while maintaining structural and visual integrity.
Pupils should always begin their term papers by writing an introduction. An intriguing introduction sets the stage for the newspaper and provides the reader a concise overview of the paper’s major points. An effective introduction brings the reader into reading the newspaper completely. When beginning a paper, students should also consider adding an proper introductory paragraph.
A powerful introduction determines the tone of this paper and provides a context for further research essays and papers. Pupils should avoid plagiarizing someone else’s work if their aim is to make a high grade. A credible writing adviser can help with determining whether a term paper writing format and style is very similar to another student’s or if a summary or summary will provide better support to the writing.
After the introduction, the principal points of the paper ought to be covered in an organized and informative way. The key points must be supported by at least one bibliographic reference. Students should avoid copying entire passages from another source or an essay without citing the source as necessary. A student should search for supporting information beyond the principal points in the paper. Supporting information can come from an essay, a very simple webpage or references that are not mentioned in the main body of their writing. When studying term papers, a student should be able to follow the paper’s flow and understand the paper’s main points.
Students should produce a bibliography before they begin writing term papers. A pupil should compile a bibliography prior to reading any word papers. A fantastic bibliographic record contains the name of the author, the publisher, the date and page where the paper was published, the citing company’s name and the date of this publication. A student should compile and attach a reference listing prior to reading any term papers. | http://www.indiacafemn.com/term-papers-writing-and-knowing/ |
His long list of other work includes young adult novels that tell tales of fatherhood, heroism, violence, racism, and the sea. Almost all his writing is set in Hawaii, where Salisbury grew up.
We’ll talk with him about boys becoming men and his own beloved memories of childhood. Salisbury says he is driven to write about the emotional paths children take through our turbulent world to become adults. He writes for kids because
I like them. I like being around them. I love their energy and the sparkle in their eyes.
Salisbury is also a musician – make that a world famous rock star. Seriously. Well, to a degree. He’s now put out an album credited to Little Johnny Coconut, the father of Salisbury’s young fictional protagonist Calvin. Salisbury loved music much earlier than he loved reading, let alone writing, books. He says he didn’t really start reading until he was in his 30s. He credits Alex Haley’s Roots for getting him into books.
Did you hate reading as a kid? What got you going? How was your emotional journey through our turbulent world? Have you read, or taught, any of Salisbury’s books?
Editor’s Note: Novelist Willy Vlautin was a double winner at this year’s Oregon Book Awards. You can hear our interview from last year with him right here.
| |
In managed rangelands periods of low primary productivity determine troughs of forage availability, constraining animal production year-round. Although alternative tools to increase forage availability during critical seasons exists, most of them are unaffordable and short-lived in marginal areas. We explore the potential benefits of deciduous tree...
• Plant light interception efficiency is a crucial determinant of carbon uptake by individual plants and by vegetation. Our aim was to identify whole-plant variables that summarize complex crown architecture, which can be used to predict light interception efficiency. • We gathered the largest database of digitized plants to date (1831 plants of 12...
Both engineered hydraulic systems and plant hydraulic systems are protected against failure by resistance, reparability, and redundancy. A basic rule of reliability engineering is that the level of independent redundancy should increase with increasing risk of fatal system failure. Here we show that hydraulic systems of plants function as predicted...
1.Research linking functional traits to competitive ability of co‐existing species has largely relied on rectilinear correlations, yielding inconsistent results. Based on concepts borrowed from natural selection theory, we propose that trait–competition relationships can generally correspond to three univariate selection modes: directional (a recti...
Explanations of leaf size variation commonly focus on water availability, yet leaf size also varies with latitude and elevation in environments where water is not strongly limiting. We provide the first conclusive test of a prediction of leaf energy balance theory that may explain this pattern: large leaves are more vulnerable to night‐time chillin...
Desert areas represent heterogeneous environments where animals must reproduce under extreme conditions, and where a combination of environmental factors may contribute to trigger or inhibit reproduction. Microcavia australis is a caviomorph rodent that occurs in arid and semiarid habitats of Argentina. We examined how reproductive activity in male...
Forage production in silvopastoral systems of the Flooding Pampa is based on cool season grasses with a relatively asynchronous phenology regarding their accompanying deciduous trees. However, the productivity of cool season grasses in these systems is usually low. The hypothesis of this work is that the low productivity of cool season grasses is c...
Tree establishment can have multiple effects on the production and biodiversity of rangelands. In mixed (C3-C4) grasslands, winter deciduous trees could favor cold-season species in the understory, improving forage availability in the most critical time of the year. Yet, they could also promote local extinctions and invasions, risking native biodiv... | https://www.researchgate.net/profile/Marisa-Nordenstahl-2 |
What is Mental Health?
According to the World Health Organization (WHO):
“Mental health is a state of well-being in which an individual realizes their own abilities, can cope with the normal stresses of life, can work productively, and is able to make a contribution to their community.” In other words, the quality of the “Live, Laugh, Love” in our lives.
Just as our lives are hindered or enriched by our physical health, the same is true for mental health. Not being sick does not necessarily mean that someone is in peak physical shape; good mental health is not just the absence of a mental disorder. Think of it as a continuum where individuals can be anywhere across a broad spectrum.
Mental health problems are complex issues and are never the result of the presence of one single risk factor or one single protective factor. Someone who has several risk factors could have more resiliency towards problems than someone else who experiences fewer risk factors. It is important to remember that each individual person experiences stress, pain, and risk in different ways. Both risk factors and protective factors exist in multiple contexts and may be biological, environmental, social, and personal or psychological.
Determinants of Adverse Mental Health Conditions:
There are many determinants that increase a person’s risk for developing a mental health disorder. They include, but are not limited to:
- Genetic load (history of mental illness in the family)
- Sexual orientation
- Adverse childhood experiences
- Traumatic life experiences and exposure to violence
- Dependency or abuse of substances
- Discrimination/social exclusion
- Poverty/income inequality, including food and/or housing insecurity
- Poor education, leading to underemployment/unemployment and employment insecurity
- Geographic factors, especially rural areas with poor access to care
- Stigma surrounding accessing mental health resources
Protective Factors Minimizing Adverse Mental Health Conditions:
Protective factors are characteristics associated with a lower likelihood of adverse mental health conditions or that reduce a risk factor’s impact. Protective factors may be seen as positive countering events. They include, but are not limited to:
- Supportive, stable family environment
- Positive peer group
- A sense of belonging and connectedness
- Economic security
- Safe environment, free of violence
- Experiences of achievement, a sense of self-worth, and a feeling of optimism
- Healthy diet, sleep, and healthy use of alcohol and substances.
Promoting good mental health includes strategies to create living conditions and an environment supportive of mental health that will allow people to both adopt and maintain healthier lifestyles. The range of available choices has the added benefit of increasing opportunities for everyone to experience the benefits of good mental health or improve their mental health. | https://www.safespacesd.org/what-is-mental-health/ |
The benefits of Schisandra (Schisandra chinensis) may not be evident when looking at this common garden shrub, a type of Magnolia vine. Native to Eastern Asia, the berries of the Schisandra plant have long been used by the Chinese for their calming benefits, as well as for the ability to increase stamina, concentration and focus. Strangely, it is used as both a stimulant and a calming agent.
The Chinese still prescribe it for things such as:
- sleeping problems
- sleep that is disturbed by dreams
- increasing stamina
- bringing blood sugar to acceptable levels
- increasing of concentration and focus
- treatment of mild depression
- irritability
- forgetfulness
- strengthens and tones kidney and liver function
- alleviates night sweats
- night urination
- hives
- eczema
Research has been done on animals with Schisandra and there is strong evidence suggesting that it has properties that protect against skin cancer.
Other traditional uses of Schisandra include:
- The use of antioxidant properties to boost the body
- Increase of immune system function
- Assists in increasing the ease of breathing and aids the respiratory system
- Aphrodisiac properties and the increase of sex drive
- A boost of energy and rejuvenation of the mind
- A remedy for exhaustion and extreme fatigue
- Stress and any associated depression
- Increased lung and liver function
- Coughs, colds, sore throats and allergies
- Increase the ability to remember
- Vision that has been impaired can be improved
- Circulation and blood pressure
- Quicker healing after surgery and procedures
While the benefits of Schisandra have been proven, there are some instances when you should avoid the herb: | http://www.herbalremediesinfo.com/schisandra-benefits.html |
When on the lookout, photographers require a hunter’s instinct—not unlike the instincts possessed by some of our favorite subjects: cats make excellent models to follow. Their patience ensures the successful pursuit of a mouse or bird. Patience is also necessary for photography: sometimes you have to wait for just the right moment to capture. Practicing patience and adhering to the five tips for capturing the right moment listed below will yield excellent photographs.
1 –Prepare Your Equipment. When you head out intending to take pictures, make sure your camera’s battery is fully charged and that you have not left your memory card in your computer. Keeping spare batteries on hand will prevent frustrating situations.
2 – Utilize Burst Mode. Quickly moving subjects make capturing well-composed photos more difficult. One way to deal with this challenge is to use your camera’s burst mode. Doing so means you will be able to choose the picture with the best composition from several options.
3 – Use Your Reflexes. The performers were photographed in the main train station in Berlin (below) and on the beach of Conil de la Frontera in Andalusia (top of post). In the train station, I watched a performer execute an aerial somersault; I was able to work patiently and wait for him to repeat the somersault to get my desired picture. The picture on the beach was another story: I had to act in the blink of an eye—and with a great deal of good luck—to capture the image at just the right moment.
4 – Keep Photographs Candid. Candid snapshots of people who know you are taking their picture are no longer candid. Asking permission to photograph them in advance defeats the purpose. In general, you can tell whether a photograph has been staged. This issue is an unsolved problem that can be addressed only with empathy and intuition.
5 – Keep Your Eyes Peeled. I am convinced that it is possible to train yourself to look for subjects. Regular attention to works of art can help you improve your sensibility for quality. Learning by seeing also hones your ability to understand and use compositional design principles. The beech’s elephant-like foot (below) was a picture-worthy subject noticed with attentive eyes.
These helpful tips were written by Albrecht Rissler. Find more like this in Rissler’s book, Photographic Composition: Principles of Image Design. | https://rockynook.com/article/five-tips-for-capturing-the-right-moment/ |
Outlines the roles and responsibilities of scientific journal editors.
Responsibilities of the Newsletter Editor
Society for Technical Communication
List of expectations of newsletter editors for the society.
Editor’s Column
Payne, R.M. (1998). Journal of Southern Religion, Louisiana State University
Describes the work of an editor, provides guidelines, and references.
How to Establish an Association Editorial Board for a New Periodical
McHugh, J.B. (2010). Publishing Management Consultant
Outlines a process for establishing a board.
Conflict of Interest Guidelines
The American Society for Nutrition
Outlines expectations, conflicts of interest that may arise, and what to do in case of a conflict of interest for authors, reviewers and editors of its journals.
Critical Reviews
The Writing Center, University of Wisconsin Madison
How to write a review of a non-fiction book or article.
How to Review
Bieber, M., New Jersey Institute of Technology
Guide for refereeing conference and journal articles, with the philosophy that the review process benefits the author as much as the editor.
Peer Review in Academic Promotion and Publishing: Its Meaning, Locus, and Future
Harley, D., Krzys Acord, S., & Earl-Novel, S. (April, 2010)
Drafts of four working papers as part of the Future of Scholarly Communication Project.
Scholarship is Changing, So Must Tenure Review
Hurtado, S. & Sharkness, J. (September / October, 2008). Academe Online
The survival of the tenure process largely depends on the capacity to develop review processes that recognize emerging forms of scholarship.
The Shelter of Tenure is Eroding and for Faculty of Color Gaining Membership May be Tougher than Ever
Ruffins, P. (October, 1997). Black Issues in Higher Education
Indicates many of the issues that faculty of color face in higher education.
Purpose of review
The purpose of peer review is to examine the work of a scholar through review by other experts in the same field. The process is common for studies, articles, grant proposals, programs and accreditation proceedings. It is an integral part of scholarship, and serves as a system of checks and balances to ensure quality and integrity of academic endeavors. Participation in the review process is a common form of service for faculty members, and accomplished faculty members seek opportunities to engage in review activities. More detail about and history of the peer review process is outlined in the Scholarly Research – Peer Review section of the Community Commons.
Grant or Conference proposal reviewer
Although scholarly research, writing up findings, and presenting or publishing those findings is an integral part of the work of a faculty member, they take a tremendous amount of time and energy. As a result it is expected that faculty members will apply for grants to release them from other responsibilities and pursue their research agenda. Each grant involves construction of a proposal based on an RFP or request for proposal prepared by the granting agency that outlines the expectations and guidelines to follow. Often faculty members serve on committees to review such proposals and make decisions about which projects to fund. Similarly professional conferences have RFP’s that outline the sorts of papers / presentations they invite proposals for, and faculty members serve on committees to review the proposals and determine which individuals will present at the conference. New faculty members should seek out opportunities to serve the profession in this way, building a reputation and honing a critical eye for quality work.
Brochure / newsletter editor
There are many audiences for research findings, and multiple purposes for communicating what a course, program, school, college or campus has to offer. Since scholarly journals are primarily aimed at scholars working in the discipline of focus, many departments, programs, colleges, or campuses develop more informal newsletters, brochures, or public relations materials for a broader audience. These publications may assist in recruitment of students, help explain the commitments of scholars to particular issues for potential donors, or communicate with the community about expertise of benefit to practitioners or other community members. Examples may include education research findings that offer insights to K-12 teachers, public health findings that offer strategies to community health organizations, or agricultural findings that offer innovations to local farmers. Whatever the purpose or purposes of these informal publications, often faculty members serve as reviewers or editors so that the information is up to date, accurate, and articulated effectively. New faculty members may be interested in participating in this sort of review for existing materials, or even imagine the sorts of materials that might be effective in their discipline area when none currently exist.
Article, journal, chapter or book reviewer
An important part of the tenure process is publication of articles in scholarly journals. Each article, chapter or book is put through a rigorous process of anonymous review that relies on the service of thousands of faculty members. It is part of the professional obligation of a faculty member to engage in this review process, and the level of responsibility and quality of publication should gradually increase with experience and tenure. New faculty members might begin by learning about the process, familiarizing themselves with publication and review guidelines, and seeking mentors who serve in this capacity to begin development of an appropriate level of expertise in critique.
Editorial boards
There are countless means of publishing, sharing, going public with our work as academics. The variety of journals from scholarly to trade and practitioner, the growing array of online archives and websites, and the numerous informal newsletters and brochures often have editorial boards. This group of individuals often represents various viewpoints, perspectives, and institutions. They meet regularly to discuss the way the publication is laid out, set policy about what issues or themes to use to frame upcoming issues, to design or redesign publications, and make final decisions about which approved pieces to run and when. The more prestigious the publication, the more esteemed the editorial board. New faculty members who aspire to such board positions need to be strategic about publishing in the appropriate places, including those they hope to work for in the future. They should work hard on developing a pattern of using their expertise to serve as a reviewer for increasingly well known publications.
Archive editor
While current publications offer a sliver of the work currently going on in a field, only a fraction of what is happening on campuses is included in the discipline. For a more comprehensive look at the literature in a field there are archives of previously published materials, and some are hard to find gems that for one reason or another haven’t surfaced recently. For a broader look at the trends, history or full body of work in a field it is helpful to utilize these archives, which are similarly governed by boards and overseen by editors. This type of service may help new faculty members become critical reviewers while at the same time familiarizing them with the broad array of work in their field.
Peer review committee member
An integral part of the tenure process involves peer review. On some campuses it is a multilevel process, involving department or college level review followed by a university review process. Others have different configurations. Whatever the process details, faculty members are asked to review the body of work completed by their colleagues in order to determine if the scope, pattern of accomplishment, and depth of their work warrants retention, tenure or promotion. The process differs among campuses and even among departments on the same campus, but basically involves review of a personnel file prepared by the individual under review. Typically there is a personal statement outlining the body of work accomplished, accompanied by a notebook or file of evidence to support the claims made in the personal statement, including a CV and specific evidence chosen according to guidelines outlined in the tenure / promotion documents provided to each candidate for tenure. New faculty members should expect that after they complete the process themselves, they will be asked to serve for the colleagues coming up behind them.
External review teams
Program evaluation is a critical process that provides accountability for the quality and appropriateness of curriculum and instruction at a campus. Periodic reviews of campuses as part of accreditation requires the formation of teams with expertise in the disciplinary fields under review. Faculty members serve an important role on such teams, traveling to campuses, meeting with key personnel, and reviewing key documents to assess the outcomes of each program systematically.
Possible questions to ask about being a Reviewer / Referee / Editor:
CDIP Community Commons by Dr. Robin D. Marion is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.
Based on a work at www.merlot.org. | https://cdip.merlot.org/facultyservice/reviewerreferee.html |
Peer review is the evaluation of work by one or more people with similar competencies as the producers of the work. Peer review methods are used to maintain quality standards, improve performance, and provide credibility. In academia, scholarly peer review is often used to determine an academic paper's suitability for publication.
The resources on the right are arranged by type: journal articles, videos, and database tutorials. They all relate to peer reviewed articles in some way, though individual resources may cover other aspects of research methodology.
Peer reviewed journals are scholarly journals which publish only articles that have been reviewed and approved by scholars in the author's field or area of expertise. Peer reviewed journals are also called refereed journals.
All peer reviewed journals are scholarly, but not all scholarly journals are peer reviewed. Scholarly journals contain articles written by experts in the field and are directed toward an academic audience, but may not require that these articles undergo the review process. Scholarly journals are published by academic organizations and their main purpose is to report research in the field. Many, but not all, are peer reviewed.
The library has several journal databases, including the ProQuest and EBSCOhost, databases, that contain articles from peer reviewed journals, as well as other types of publications. You can find peer reviewed articles by following these steps:
Mark the "Scholarly Journals" option.
EBSCOhost calls this "Scholarly (Peer Reviewed) Journals" but don't be confused by this; it really means scholarly journals including peer reviewed. ProQuest has a "Peer Reviewed" option, but like EBSCOhost, it often includes scholarly journals as well as peer reviewed.
Enter your search terms and mark any other options you desire, such as "Full Text."
Choose an article from the results list and find the journal title in the article information.
Check this title in UlrichsWeb to see if it is peer reviewed. Access this by clicking on A-Z Database List under Find on the library homepage, then click on UlrichsWeb. Now enter the journal title in the search box.
If you see the refereed symbol next to the title name, the journal is peer reviewed.
All articles contained in the PsycARTICLES database are from peer reviewed journals, so there is no need to mark any options or check UlrichsWeb. You can also find PsycARTICLES on the A-Z Database List . | https://libguides.bellevue.edu/c.php?g=1150494&p=8915563 |
Chrysler Group is recalling an estimated 21,000 pickup trucks, SUVs and sedans to inspect and, if necessary, replace the shocks, struts or both.
The global recall involves 2014 model-year Ram 1500 pickups, 2015 MY Jeep Cherokee SUVs and 2015 MY Chrysler 200 sedans assembled within a 16-day period ending June 6 of this year.
An estimated 14,300 vehicles are in the U.S., 5,300 are in Canada, 160 are in Mexico and 2,000 are outside the NAFTA region.
Chrysler Group said it’s staging the recall campaign to identify vehicles that may have been assembled using a shipment of shocks and struts that don’t meet quality standards. The components may break free from their mounts, which could lead to reduced shock damping and loss of vehicle control.
Chrysler said its parts supplier identified the problem. The automaker added that it isn’t aware of any related injuries, accidents or complaints. Vehicle owners will be notified when the recall remedy is available, so they can schedule a service appointment. There will be no charge for the repairs.
Vehicle owners can reach Chrysler at 1-800-853-1403. | https://www.automotive-fleet.com/119342/chrysler-recalls-21k-vehicles-globally-for-shocks-struts |
A new report presented recently at the European Parliament shows alarming trends of violations of artistic freedom in Europe.
Several countries are censoring and sentencing artists in all kinds of art forms for allegedly insulting heads of state, hurting religious feelings or glorifying terrorism.
The report by Freemuse, an international organisation founded in 1998 with headquarters in Copenhagen, is based on an analysis of 380 cases of violations of artistic freedom in Europe over the last two years.
In the majority of cases, government authorities were responsible for the violations. More than 800 artists and artworks were censored. 50 artists were detained in 5 countries (Turkey, Russia, Belarus, Georgia, Poland) and 31 artists imprisoned in 4 countries (Spain, Turkey, Russia, United Kingdom).
However, the actual number of cases is much higher according to Freemuse, which advocates and defends freedom of artistic expression. While a previous global report shows the usual suspects, dictatorships and authoritarian regimes suppressing artistic freedom, this regional report draws attention to the fact that artistic freedom is also threatened in EU member states.
The conference was opened on Tuesday (21 January) by Juan Fernando López Aguilar, a Spanish MEP (S&D) who is chair of the Committee on Civil Liberties, Justice and Home Affairs. Himself a law professor, he underlined that artistic freedom is a fundamental right and can bring about change in society.
“But it has its enemies, that’s why we need to listen to the concerns of artists and protect them,” he said. Despite the protection of artistic freedom in article 20 of the Spanish constitution, Spain figures in the report as the country with the highest number of sentenced rappers, found guilty of insulting politicians and members of the Spanish royal family.
In fact, music is the most censored artform in the report, accounting for 39 % of all cases. “Music can be a very powerful art form and connects people emotionally,” explains Dr Srirak Plipat, executive director of Freemuse. “It often conveys a social message or protest against injustice which might upset the authorities. It’s also accessible almost everywhere.”
Fundamental right
MEP Domenec Ruiz Devesa (S&D), who moderated the conference, underlined that the report was timely as it follows a recent debate in the European Parliament on the rule of law in Poland and Hungary. The European Commission is concerned about the rule of law situation in these countries, where the independence of the judiciary has been undermined.
Elzbieta Podlesna, a civil rights activist from Poland, told that she was arrested last year by police and had her apartment searched because she had put up posters of a famous icon of the Black Madonna showing a rainbow halo in the colours of the LGBTQ flag. The decision to arrest her was apparently taken by the interior minister.
Her action was a protest against the attitudes of the Polish church against LGBTQ people. She is accused of having offended religious feelings, a crime punishable with up to two years in prison, and is awaiting trial. A court has already ruled that the arrest was legal though unproportional.
Even Sweden, a country known for its generous and liberal attitude to artists, has seen censorship of artistic expressions on local level. “It’s a worrying trend which needs to be taken seriously,” said Jessica Nordström from the Swedish Postcode Foundation, which co-financed the study.
“The EU has a responsibility to ensure the safety of artists and protect everyone’s freedom of artistic freedom,” says Srirak Plipat. “Its role is to create an enabling environment for artistic freedom for everyone in the Member States by improving legislation related to freedoms and fundamental rights so that they are consistent with European and international national human rights standards.”
Artistic freedom falls under the broad remit of freedom of expression and is a fundamental human right, protected in several international and European conventions. A former UN special rapporteur in the field of cultural rights, Farida Shaheed, has highlighted the importance of the arts for “the development of vibrant cultures and the functioning of democratic societies.”
In 2005, the UNESCO Convention on the Protection and Promotion of the Diversity of Cultural Expressions was adopted. It is a binding legal instrument which has been ratified by all EU Member States. In 2019, new reporting requirements were added. States are now explicitly obliged to report on the state of artistic freedom and measures taken to promote and protect this right.
Asked by The Brussels Times if this also included an obligation to fund art, Luise Haxthausen of the UNESCO Liaison Office in Brussels replied that it was of equal importance and followed from the obligation to provide an enabling environment for artistic expression. Artists have a right to have their artistic work supported, distributed and remunerated.
Limitations
That said, the international conventions do establish limits to artistic expression by stipulating that hatred that constitutes incitement to discrimination, hostility or violence shall be prohibited. The European Convention on Human Rights states that the exercise of freedom of expression may be subject to restrictions on certain grounds prescribed by law.
However, such restrictions should only be enforced according to strict international human rights standards and must be necessary and proportional for the protection of a legitimate aim. According to Srirak Plipat, this is not always the case in EU.
“Blasphemy laws and laws on the protection of national symbols are not consistent with international standards,” says Plipat, “but EU is reluctant to take a stand against them since they fall under the competency of individual member states.”
A new challenge to artistic freedom is the growth of anti-terrorism legislation, criminalising the “glorification” of terrorism. An artist might support terrorism and his art could encourage hate crime or inspire other people to carry out terrorist acts. Should it not be banned?
“We need to distinguish clearly between artists and those who perpetrate terrorist acts,” Plipat replies. “Society has a right and duty to defend itself against terrorism but that does not mean that artists should also be targeted. In fact, this is one of our biggest concerns today.”
“In a democratic society, artists should have a right to express whatever they want, whether or not we agree with them. And any legal response must pass the test of necessity and proportionality. Otherwise, anti-terrorism legislation can easily be manipulated to silence debate and censor artistic expression.”
The report does not explicitly mention cartoons that might incite racism and hate crimes - do they also have a right of protection under artistic freedom?
“It’s an artistic form in its own right and linked to the right of expression, especially satirical cartoons mocking public figures and politicians,” he replied. “If you don’t like a cartoon, you should question and criticise it, but not ban it.”
But he admitted that cartoons which express racist stereotypes should be assessed on a case by case basis. “Very often we find ourselves in a grey area. Generally, we should allow a cartoon, however we may dislike it, rather than censor it.”
The report ends with a number of recommendations towards a new agenda on artistic freedom. Among others, Freemuse states that artistic expression may entail the appropriation of religious and national symbols, as part of a response to official narratives, unless the art work contains an essential element of incitement to hatred. Laws penalising insults to heads of state should be abolished.
Anti-terror legislation is misused in some countries against both journalists and artists to silence free debate and to falsely accuse them of being members of a terrorist organisation. Freemuse writes that laws should only criminalise expressions that encourage others to commit a recognisable criminal act with the intent to incite them to commit such an act.
Air Quality Fund
To deliver a cleaner network and improve the health of our neighbours and customers.
Competitions: we need your innovative ideas
Developing digital roads and improving air quality
We’re currently running two competitions to find solutions and ideas to develop digital technologies and improve air quality on and around our network.
We’re looking to invest in creative solutions covering six themes:
- Design, construction and maintenance
- Connected and autonomous vehicles
- Customer mobility
- Energy and the environment
- Operations
- Air quality
We are committed to improving air quality (NO2) alongside the roads we manage. We will contribute to improving air quality arising from the use of our network and help protect our neighbours from the health consequences through our Air Quality Fund.
There are a number of challenges associated with improving air quality. Despite the tightening of emission standards, we still have a large number of older polluting vehicles on the network directly impacting the air quality on and around our network. As responsible neighbours, we need to reduce the associated pollution impact of our improvement and expansion schemes. In addition, it can often be difficult to identify and deliver better, large-scale air quality solutions and quantify their true benefits.
With these challenges in mind, the fund is balancing investment in a variety of areas:
- Mitigating, where appropriate, and designing-out air pollution from any new schemes we build
- Exploring new and innovative approaches to air quality through pilots and feasibility studies with the anticipation some of these can be effectively scaled across parts of the network
- Tackling vehicle emissions
- Monitoring of NO₂ and oxides of nitrogen (NOₓ) across the network to build a better understanding of this issue
- Supporting local authorities as they lead the delivery of the Government’s national air quality plan, including implementing (where required) Clean Air Zones
We welcome ideas and proposals from existing and prospective partners to help us solve air quality challenges and work with us as we implement solutions. So far, we have been collaborating with local authorities to shape how this Fund is used and to inform how we address this vitally important issue. We work with our partners to make progress on reducing the Strategic Road Network’s impact on air quality to support wider government initiatives. We are also applying measures that will help us work towards the zero breaches target set out in air quality regulations.
Through our fund, we are supporting the government’s commitment for nearly every car and van to be a zero-emission vehicle by 2050. Working with operators, we will ensure that 95% of our network has a charging point every 20 miles for Ultra Low Emission Vehicles (ULEVs). The aim of this is to encourage more people to purchase ULEVs which, in turn, will reduce the pollution impact of our network.
CRITERIA:
The programme seeks to prioritise investment that supports the delivery of Road Period 1 schemes, alongside the evaluation of, and investment in, innovative solutions that offer the possibility of benefits in the longer term.
To be considered for funding under the Air Quality Fund, proposals must meet legislative and policy requirements as a minimum. | https://highwaysengland.co.uk/designated-funds/our-funds/air-quality-fund/ |
Quite possibly the most astonishing aspect of the world around us is that so much of it can be understood by using a relatively small number of physical laws. Theoretical physicists devote themselves to uncovering the most appropriate mathematical laws for deducing the essence of physical phenomena on all scales, from the quantum world of microscopic matter and nanomaterials to the geometry of curved space-time and the large scale structure of the cosmos.
The core curriculum includes subjects such as Quantum Physics and Electromagnetism in your first year, Quantum Mechanics and Relativity in your second year, and Particle Physics, Atomic Physics and Condensed Matter Physics in your third year. In addition, in years two and three you take specialised modules on Quantum Theory, Electromagnetism, Condensed Matter Physics, Gravitation and Cosmology, and Elementary Particle Physics. You also have a choice of options such as Quantum Information and Advanced Gravity and Relativity.
Students will gain an insight into general integral relations between current and charge sources and the electromagnetic potentials in free space. This module explores energy and momentum of electromagnetic fields and the use of the Poynting vector to calculate radiated power. Students will investigate the electromagnetic power radiated by an accelerating charge and oscillating dipole, and explore wave solutions of Maxwell’s equations in free and bounded space.
This module will provide an understanding of the behaviour of electromagnetic modes in perfectly conducting rectangular and cylindrical waveguides and cavities. On completion, students will be able to describe EM fields with simple sources and boundary conditions, and will write down conservation laws in differential and integral form. In addition, students will develop the ability to calculate the power radiated from accelerating charges, in particular from an oscillating dipole, in addition to reinforcing their understanding of the mode structure of EM fields in simple bounded regions, such as waveguides and cavities.
The module develops students’ knowledge of Newton’s laws, central forces, integrals of motion and dynamics and orbits. Students will gain an insight into generalised coordinates and momenta, Hamiltonian function, Poisson brackets and canonical transformations. The module additionally features lectures on important analytical methods used both in classical mechanics and in broader areas of theoretical and mathematical physics.
Students will develop the ability to integrate equations of motion for dynamical problems in classical mechanics, and will have experience in using variational methods, in addition to gaining the knowledge required to relate the Hamiltonian and Lagrangian approaches to theoretical mechanics and canonical transformations. Students will be able to exploit the generality of Lagrangian and Hamiltonian techniques by using appropriate generalised coordinates.
In the Theoretical Physics Group Project, students will work as part of a team and will receive guidance on project management, planning and meetings. The module will culminate in a writing-up stage in which the groups will prepare a group report, and students will be presented with an opportunity to give an individual talk at the physics mini-conference.
This module requires students to undertake an independent study in various aspects of theoretical physics. It provides an opportunity for students to extend their preliminary studies by undertaking open-ended investigations into various aspects/problems of theoretical physics. Students will write up their findings in a report.
This module aims to teach analytical recipes of theoretical physics used in quantum mechanics, with the focus on the variational functions method, operator techniques with applications in perturbation theory methods and coherent states of a quantum harmonic oscillator. Students will be trained in the use of the operator algebra of 'creation' and 'annihilation' operators in the harmonic oscillator problem, which will develop a basis for the introduction of second quantisation in many-body systems. In addition, the module will introduce the algebra of creation and annihilation operators for Bose and Fermi systems, along with second-quantised representation of Hamiltonians of interacting many-body systems.
Students will learn to apply a mathematical basis of complex analysis in order to solve problems in mathematical and theoretical physics. They will also analyse Bose-Einstein condensation in one-, two-, and three-dimensional systems and will develop the ability to describe the condensate using the method of coherent states. Additionally, the module will reinforce students’ knowledge of the Ginzburg-Landau theory and the vortices in a superfluid.
With approximately 375 high-quality contact hours a year, you can always rely on our staff for guidance and assistance. In a typical week, you will have 12-15 hours of lectures and take part in 3-4 hours of seminars where the lecturer will explain solutions and help with any difficulties. | https://www.lancaster.ac.uk/study/undergraduate/courses/theoretical-physics-bsc-hons-f340/ |
The classical limit or correspondence limit is the ability of a physical theory to approximate or "recover" classical mechanics when considered over special values of its parameters. The classical limit is used with physical theories that predict non-classical behavior.
A heuristic postulate called the correspondence principle was introduced to quantum theory by Niels Bohr: in effect it states that some kind of continuity argument should apply to the classical limit of quantum systems as the value of Planck's constant normalized by the action of these systems becomes very small. Often, this is approached through "quasi-classical" techniques (cf. WKB approximation).
More rigorously, the mathematical operation involved in classical limits is a group contraction, approximating physical systems where the relevant action is much larger than Planck's constant ħ, so the "deformation parameter" ħ/S can be effectively taken to be zero (cf. Weyl quantization.) Thus typically, quantum commutators (equivalently, Moyal brackets) reduce to Poisson brackets, in a group contraction.
In quantum mechanics, due to Heisenberg's uncertainty principle, an electron can never be at rest; it must always have a non-zero kinetic energy, a result not found in classical mechanics. For example, if we consider something very large relative to an electron, like a baseball, the uncertainty principle predicts that it cannot really have zero kinetic energy, but the uncertainty in kinetic energy is so small that the baseball can effectively appear to be at rest, and hence it appears to obey classical mechanics. In general, if large energies and large objects (relative to the size and energy levels of an electron) are considered in quantum mechanics, the result will appear to obey classical mechanics. The typical occupation numbers involved are huge: a macroscopic harmonic oscillator with ω = 2 Hz, m = 10 g, and maximum amplitude x0 = 10 cm, has S ≈ E/ω ≈ mωx0²/2 ≈ 10⁻⁴ kg·m²/s = ħn, so that n ≃ 10³⁰. Further see coherent states. It is less clear, however, how the classical limit applies to chaotic systems, a field known as quantum chaos.
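A quick order-of-magnitude check of those figures, as a minimal Python sketch (treating the quoted 2 Hz as an angular frequency is an assumption of this sketch):

hbar = 1.054571817e-34   # J*s, reduced Planck constant

m = 10e-3                # kg   (10 g)
omega = 2.0              # rad/s (treating the quoted 2 Hz as angular frequency)
x0 = 0.10                # m    (10 cm maximum amplitude)

S = m * omega * x0**2 / 2   # characteristic action, E/omega
n = S / hbar                # typical occupation number

print(f"S ~ {S:.1e} kg*m^2/s")   # roughly 1.0e-04
print(f"n ~ {n:.1e}")            # roughly 9.5e+29, i.e. ~10^30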
Quantum mechanics and classical mechanics are usually treated with entirely different formalisms: quantum theory using Hilbert space, and classical mechanics using a representation in phase space. One can bring the two into a common mathematical framework in various ways. In the phase space formulation of quantum mechanics, which is statistical in nature, logical connections between quantum mechanics and classical statistical mechanics are made, enabling natural comparisons between them, including the violations of Liouville's theorem (Hamiltonian) upon quantization.
In a crucial paper (1933), Dirac explained how classical mechanics is an emergent phenomenon of quantum mechanics: destructive interference among paths with non-extremal macroscopic actions S » ħ obliterate amplitude contributions in the path integral he introduced, leaving the extremal action Sclass, thus the classical action path as the dominant contribution, an observation further elaborated by Feynman in his 1942 PhD dissertation. (Further see quantum decoherence.)
One simple way to compare classical to quantum mechanics is to consider the time-evolution of the expected position and expected momentum, which can then be compared to the time-evolution of the ordinary position and momentum in classical mechanics. The quantum expectation values satisfy the Ehrenfest theorem. For a one-dimensional quantum particle moving in a potential V(x), the Ehrenfest theorem says

m d⟨x⟩/dt = ⟨p⟩  and  d⟨p⟩/dt = −⟨V′(x)⟩.
Although the first of these equations is consistent with classical mechanics, the second is not: If the pair (⟨x⟩, ⟨p⟩) were to satisfy Newton's second law, the right-hand side of the second equation would have read −V′(⟨x⟩).
But in most cases, ⟨V′(x)⟩ ≠ V′(⟨x⟩).
If, for example, the potential V(x) is cubic, then V′(x) is quadratic, in which case we are talking about the distinction between ⟨x²⟩ and ⟨x⟩², which differ by (Δx)².
An exception occurs when the classical equations of motion are linear, that is, when V(x) is quadratic and V′(x) is linear. In that special case, V′(⟨x⟩) and ⟨V′(x)⟩ do agree. In particular, for a free particle or a quantum harmonic oscillator, the expected position and expected momentum exactly follow the solutions of Newton's equations.
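A minimal numerical illustration of that distinction, modelling the wave packet simply as a probability distribution over position (an assumption made only for this sketch, not a quantum simulation): for a nonlinear function such as x², the expectation of the function differs from the function of the expectation by the variance, while for a linear function the two agree.

import numpy as np

rng = np.random.default_rng(0)

# A "wave packet" modelled only as a spread of positions:
# a Gaussian centred at x = 1.0 with width (Delta x) = 0.5.
x = rng.normal(loc=1.0, scale=0.5, size=1_000_000)

print(np.mean(x**2) - np.mean(x)**2)    # ~0.25 = (Delta x)^2, so <x^2> != <x>^2
print(np.mean(2 * x) - 2 * np.mean(x))  # ~0: for a linear function the two agree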
For general systems, the best we can hope for is that the expected position and momentum will approximately follow the classical trajectories. If the wave function is highly concentrated around a point x0, then V′(⟨x⟩) and ⟨V′(x)⟩ will be almost the same, since both will be approximately equal to V′(x0). In that case, the expected position and expected momentum will remain very close to the classical trajectories, at least for as long as the wave function remains highly localized in position.
Now, if the initial state is very localized in position, it will be very spread out in momentum, and thus we expect that the wave function will rapidly spread out, and the connection with the classical trajectories will be lost. When Planck's constant is small, however, it is possible to have a state that is well localized in both position and momentum. The small uncertainty in momentum ensures that the particle remains well localized in position for a long time, so that expected position and momentum continue to closely track the classical trajectories for a long time.
Other familiar deformations in physics involve: | https://www.knowpia.com/knowpedia/Classical_limit |
Electronics Components: Parallel Resistors
So how do you calculate the total resistance for resistors in parallel on your electronic circuit? Put on your thinking cap and follow along. Here are the rules:
- First, the simplest case: Resistors of equal value in parallel. In this case, you can calculate the total resistance by dividing the value of one of the individual resistors by the number of resistors in parallel. For example, the total resistance of two, 1 kΩ resistors in parallel is 500 Ω and the total resistance of four, 1 kΩ resistors is 250 Ω.
Unfortunately, this is the only case that’s simple. The math when resistors in parallel have unequal values is more complicated.
- If only two resistors of different values are involved, the calculation isn’t too bad:

R_total = (R1 × R2) ÷ (R1 + R2)
In this formula, R1 and R2 are the values of the two resistors.
Here’s an example, based on a 2 kΩ and a 3 kΩ resistor in parallel: R_total = (2,000 × 3,000) ÷ (2,000 + 3,000) = 6,000,000 ÷ 5,000 = 1,200 Ω, or 1.2 kΩ.
- For three or more resistors in parallel, the calculation begins to look like rocket science:

1/R_total = 1/R1 + 1/R2 + 1/R3 + …
The dots at the end of the expression indicate that you keep adding up the reciprocals of the resistances for as many resistors as you have.
In case you’re crazy enough to actually want to do this kind of math, here’s an example for three resistors whose values are 2 kΩ, 4 kΩ, and 8 kΩ: 1/R_total = 1/2,000 + 1/4,000 + 1/8,000 = 0.000875, so R_total = 1 ÷ 0.000875 ≈ 1,142.857 Ω.
As you can see, the final result is 1,142.857 Ω. That’s more precision than you could possibly want, so you can probably safely round it off to 1,142 Ω, or maybe even 1,150 Ω.
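If you would rather let a computer juggle the reciprocals, here is a minimal Python sketch of the same calculation (the function name is just an illustration, not something from the original article):

def parallel_resistance(*resistances_ohms):
    """Total resistance, in ohms, of any number of resistors wired in parallel."""
    return 1 / sum(1 / r for r in resistances_ohms)

# The worked example above: 2 kΩ, 4 kΩ, and 8 kΩ in parallel.
print(parallel_resistance(2000, 4000, 8000))        # ~1142.857 ohms

# Equal-value case: four 1 kΩ resistors -> 1,000 / 4 = 250 ohms.
print(parallel_resistance(1000, 1000, 1000, 1000))  # 250.0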
The parallel resistance formula makes more sense if you think about it in terms of the opposite of resistance, which is called conductance. Resistance is the ability of a conductor to block current; conductance is the ability of a conductor to pass current. Conductance has an inverse relationship with resistance: When you increase resistance, you decrease conductance, and vice versa.
Because the pioneers of electrical theory had a nerdy sense of humor, they named the unit of measure for conductance the mho, which is ohm spelled backward. The mho is the reciprocal (also known as inverse) of the ohm.
To calculate the conductance of any circuit or component (including a single resistor), you just divide the resistance of the circuit or component (in ohms) into 1. Thus, a 100 Ω resistor has 1/100 mho of conductance.
When circuits are connected in parallel, current has multiple pathways it can travel through. It turns out that the total conductance of a parallel network of resistors is simple to calculate: You just add up the conductances of each individual resistor.
For example, suppose you have three resistors in parallel whose conductances are 0.1 mho, 0.02 mho, and 0.005 mho. (These are the conductances of 10 Ω, 50 Ω, and 200 Ω resistors, respectively.) The total conductance of this circuit is 0.125 mho (0.1 + 0.02 + 0.005 = 0.125).
One of the basic rules of doing math with reciprocals is that if one number is the reciprocal of a second number, the second number is also the reciprocal of the first number. Thus, since mhos are the reciprocal of ohms, ohms are the reciprocal of mhos.
To convert conductance to resistance, you just divide the conductance into 1. Thus, the resistance equivalent to 0.125 mho is 8 Ω (1 ÷ 0.125 = 8).
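The same numbers, expressed the way the text describes (convert ohms to mhos, add them up, and convert back to ohms), might look like this in Python:

# Convert each resistance to conductance, add them up, convert back to ohms.
resistors = [10, 50, 200]                  # ohms
conductances = [1 / r for r in resistors]  # 0.1, 0.02, 0.005 mho
total_conductance = sum(conductances)      # 0.125 mho
total_resistance = 1 / total_conductance   # 8.0 ohms
print(total_conductance, total_resistance)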
It may help you remember how the parallel resistance formula works when you realize that what you’re really doing is converting each individual resistance to conductance, adding them up, and then converting the result back to resistance. In other words, convert the ohms to mhos, add them up, and then convert them back to ohms. That’s how — and why — the resistance formula actually works. | https://www.dummies.com/programming/electronics/components/electronics-components-parallel-resistors/ |
A great reference to the theory that has been implemented in practice through WIPL‑D code can be found in the book “Electromagnetic Modeling of Composite Metallic and Dielectric Structures” written by Professor Branko M. Kolundzija and Professor Antonije R. Djordjevic.
Contents: Introduction. Formulation of the Problem. Method of Moments. Geometric Modeling of Structures. Approximation of Currents. Treatment of Excitation. Solution of Equations for Current Distribution. Post Processing of Data. Numerical Examples.
This resource provides a much wider choice of analytical solutions to the everyday problems encountered in electromagnetic modeling. The book enables usage of cutting‑edge method‑of‑moments procedures, with new theories and techniques that help to optimize computer performance in numerical analysis of composite metallic and dielectric structures in the complex frequency domain.
For the first time, comparisons and unique combinations of techniques bring the elements of flexibility, ease of implementation, accuracy, and efficiency into clear view. Numerous examples are given – from simple to complex – including scatterers, antennas and microwave circuits. One can get an in-depth presentation of intricate models, including TV UHF panels, horn, parabolic, microstrip patch antennas, and many others. More than 800 equations and 150 illustrations support key topics.
Publications describing theoretical basis, technical aspects of using the code, application papers and finally papers referencing the use of WIPL-D products before 2012 can be downloaded from the links below.
Most relevant IEEE publications that describe numerical methods/algorithms embedded into our software are shown below.
Implementation of max-ortho basis functions is proposed in a method for analysis of axially symmetric metallic antennas based on exact kernel of electric field integral equation in combination with Galerkin testing. High-precision evaluation of matrix elements is enabled by: a) representing them as a linear combination of impedance integrals due to the Legendre polynomials and their first derivatives; b) using the singularity cancelation techniques; and c) evaluating the Legendre polynomials and their first derivatives by well-known recurrent formulas. Applicability of max-ortho bases up to expansion order of n =128 is illustrated on a full-wave thick dipole antenna.
This paper presents a novel method for evaluating potential and impedance integrals appearing in the method of moment analysis of arbitrary axially symmetric metallic structures based on exact wire kernel and higher order bases. Due to new variable transforms proposed for singularity cancellation and smoothing the integrands, high accuracy up to machine precision is achieved using relatively small number of integration points. Simple formulas are determined for predicting a number of integration points needed for prescribed accuracy. Benefits of high-precision evaluation of impedance integrals are illustrated on a number of numerical examples.
This paper presents a general theory of maximally orthogonalized div- and curl-conforming higher order basis functions (HOBFs) over generalized wires, quadrilaterals, and hexahedra. In particular, all elements of such bases, necessary for fast and easy implementation, are listed up to order n=8. Numerical results, given for div-conforming bases applied in an iterative method of moments solution of integral equations, show that the condition number and the number of iterations are a) much lower than in the case of other HOBFs of polynomial type and b) practically not dependent on the applied expansion order.
Authors: Tasic, M.S., Kolundzija, B.M.
Authors: Kolundzija, B.M., Petrovic, V.V.
Authors: Kolundzija, B. M., Ognjanović, J. S., Sarkar, T. K.
This software seeks to make the job easier, cut design time, and reduce costs for designers developing an antenna embedded in a material body or passive microwave circuit components, or determining electromagnetic scattering from complex, lossy/dielectric structures. Now featuring a Windows-based interface, it delivers a powerful program for analysis of electromagnetic radiation and scattering from composite metallic and/or finite-sized dielectric/magnetic structures.
Authors: Kolundzija, B. M., and Djordjević, R. A.
WIPL is a program which allows fast and accurate analysis of antennas. The geometry of any metallic structure (even a very large structure) is defined as a combination of wires and plates. WIPL’s analysis features include evaluations of the current distribution, near and far field, and impedance, admittance and s-parameters. The program uses an entire-domain Galerkin method. Efficiency of the program is based on the flexible geometrical model, and sophisticated basis functions. In this paper, the basic theory implemented in the program, and some results concerning TV UHF panel antennas and large horn antennas are given. | https://wipl-d.com/support/theoretical-background/ |
International Asteroid Day 2020: History, significance and other details you should know
New Delhi: June 30 is celebrated as International Asteroid Day which aims to raise public awareness about the asteroid impact hazard and to spread awareness about the importance and role played by asteroids in our solar system.
The United Nations General Assembly adopted a resolution in December 2016 declaring 30 June as International Asteroid Day in order to "observe each year at the international level the anniversary of the Tunguska impact over Siberia, Russian Federation, on 30 June 1908, and to raise public awareness about the asteroid impact hazard."
The Tunguska asteroid event on 30 June 1908, was the Earth's largest asteroid impact in recorded history.
Near-Earth objects (NEOs) are asteroids and comets that orbit the Sun, but their orbits bring them into Earth's neighbourhood - within 30 million miles of Earth's orbit.
The United Nations Office for Outer Space Affairs tweeted that the number of known NEOs was 22,800 as of May 2020, with over 2,000 asteroids now catalogued whose orbits bring them within 8 million km of Earth’s orbit.
Asteroids are small, rocky objects that orbit the sun. They are left over from the formation of our solar system. Although they orbit the sun like planets, they are much smaller than planets. Most of the asteroids live in the main asteroid belt—a region between the orbits of Mars and Jupiter.
How were Asteroids formed?
About 4.6 billion years ago our solar system began when a big cloud of gas and dust collapsed. During this, most of the material fell to the center of the cloud and formed the sun. Some of the condensing dust in the cloud became planets. The objects in the asteroid belt never had the chance to be incorporated into planets. They are leftovers from that time long ago when planets formed. | |
Q:
How to calculate the right and left speed for a tank-like rover?
I am trying to control the Rover 5 robot using an Android app with a touch-based joystick control in the app UI. I want to calculate the speed of the left and right motors in the rover when the joystick is moved.
From the joystick, I get two values, pan and tilt. I convert them into the polar coordinate system with r and theta, where r ranges from 0 to 100 and theta from 0 to 360. I want to derive an equation which can convert the (r, theta) to (left_speed, right_speed) for the rover. The speed values are also in the [0, 100] range.
Now, here is what I have figured out till now. For any value of r,
If theta = 0 then left_speed = r, right_speed = -r (turning right on spot)
If theta = 90 then left_speed = r, right_speed = r (moving forward at speed r)
If theta = 180 then left_speed = -r, right_speed = r (turning left on spot)
If theta = 270 then left_speed = -r, right_speed = -r (moving backwards at speed r)
For other values, I want it moving and turning simultaneously. For example,
If theta = 45 then left_speed = alpha*r, right_speed = beta*r (moving forward while turning right)
So, basically for any (r, theta), I can set speeds as,
(left_speed, right_speed) = (alpha*r, beta*r)
I need to formulate an equation where I can generalize all these cases by finding alpha and beta based on theta.
How can I do this? Is there any existing work I can refer to?
A:
You're trying to find a formula to convert a given $(r, \theta)$ to left and right thrust percentages, where $r$ represents your throttle percentage. The naive implementation is to base your function on 100% throttle:
At $0 ^{\circ}$, left and right thrust are equal to $r$
At $\pm45 ^{\circ}$, one side's thrust equals $r$ and the other side's equals 0
At $\pm 90 ^{\circ}$, one side's thrust equals $r$ and the other side's equals $-r$
At $180 ^{\circ}$, left and right thrust are equal to $-r$
This produces a function like the following:
The problem with this implementation is that you are only providing the desired $r$ when $\theta$ is at an exact multiple of $90 ^{\circ}$. At all other points, you sacrifice total speed for control. You can see this in the graph: "Absolute Thrust", the amount of thrust being delivered by both motors, is not (and cannot be) constant in this regime. Simply scaling this function is not optimal.
The maximum absolute thrust that can be sustained over the entire range of $\theta$ occurs when $r=50\%$ -- this is what your function should be based on.
In this implementation, total thrust remains constant between $\pm45 ^{\circ}$, and absolute thrust is constant no matter what direction is chosen -- the motors trade off their thrust to maintain the desired $r$. Above $r=50\%$, the range of angles where your total thrust can satisfy $r$ begins to shrink -- you begin to sacrifice control for speed.
This produces a python function like the following:
# assumes theta in degrees and r = 0 to 100 %
# returns a tuple of percentages: (left_thrust, right_thrust)
def throttle_angle_to_thrust(r, theta):
    theta = ((theta + 180) % 360) - 180  # normalize value to [-180, 180)
    r = min(max(0, r), 100)              # normalize value to [0, 100]
    v_a = r * (45 - theta % 90) / 45     # falloff of main motor
    v_b = min(100, 2 * r + v_a, 2 * r - v_a)  # compensation of other motor
    if theta < -90: return -v_b, -v_a
    if theta < 0: return -v_a, v_b
    if theta < 90: return v_b, v_a
    return v_a, -v_b
The result is the corresponding left/right thrust mapping (the accompanying plot is omitted in this text version).
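As a quick spot check, here are a few sample outputs computed from the function as written above (illustrative values; the convention in this answer is that theta = 0 means straight ahead):

for r, theta in [(100, 0), (100, 45), (100, 90), (100, 180), (50, 45)]:
    left, right = throttle_angle_to_thrust(r, theta)
    print(f"r={r:3d} theta={theta:4d} -> left={left:6.1f} right={right:6.1f}")

# r=100 theta=   0 -> left= 100.0 right= 100.0   (straight ahead)
# r=100 theta=  45 -> left= 100.0 right=   0.0   (forward while turning right)
# r=100 theta=  90 -> left= 100.0 right=-100.0   (turning right on the spot)
# r=100 theta= 180 -> left=-100.0 right=-100.0   (straight back)
# r= 50 theta=  45 -> left= 100.0 right=   0.0   (thrust traded between motors, total unchanged)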
The financial markets’ low volatility underscores investors’ conviction that the long-term global economic trends of modest growth and tepid inflation will also define shorter-term cycles. But risks lie in mistaking the trend for the cycle.
The most pronounced risk in our 2018 outlook is that already tight global labor markets will grow tighter, finally leading to a cyclical uptick in inflation. A wage or inflation spike in 2018 could lead markets to anticipate a more aggressive normalization from historically low interest rates just as central banks are either normalizing monetary policy or contemplating doing so, thereby producing a market-rattling shock.
Global investment outlook: Higher risks, lower returns
For 2018 and beyond, our investment outlook is modest, at best. Elevated valuations, low volatility, and secularly low interest rates are unlikely to be allies for robust financial market returns over the next five years. Downside risks are more elevated in the equity market than in the bond market.
In our view, the solution to this challenge is not shiny new objects or aggressive tactical shifts. Rather, our market outlook underscores the need for investors to remain disciplined and globally diversified, armed with reasonable return expectations and low-cost strategies.
What follows is a brief overview of our economic and investment outlook for 2018.
Economic growth: Unemployment, not growth, is the key
We expect economic growth in developed markets to remain moderate in 2018, while strong emerging-market growth should soften a bit. Yet at this stage of the cycle, investors should pay more attention to low unemployment rates than to GDP growth when gauging the prospects for either higher capital expenditure or wage pressures. We see low unemployment rates across many economies declining further. Improving fundamentals in the United States and Europe should help offset weakness in the United Kingdom and Japan. China’s ongoing efforts to rebalance from a capital-intensive exporter to a more consumer-based economy remain a risk, as does the need for structural business-model adjustments across emerging-market economies. We do not anticipate a Chinese “hard landing” in 2018, but the Chinese economy should decelerate.
Inflation: Secularly low, but cyclically rising
Previous Vanguard outlooks have rightly anticipated that the secular forces of globalization and technological disruption would make achieving 2% inflation in the United States, Europe, Japan, and elsewhere more difficult. Our trend view holds, but the cycle may differ.
In 2018, the growing impact of cyclical factors such as tightening labor markets, stable and broader global growth, and a potential nadir in commodity prices is likely to push global inflation higher from cyclical lows. The relationship between lower unemployment rates and higher wages, pronounced dead by some, should begin to re-emerge in 2018, beginning in the United States.
Monetary policy: The end of an era
The risk in 2018 is that a higher-than-expected bounce in wages—at a point when 80% of major economies (weighted by output) are at full employment—may lead markets to price in a more aggressive path or pace of global monetary policy normalization. The most likely candidate is in the United States, where the Federal Reserve is expecting to raise rates to 2% by the end of 2018, a more rapid pace than anticipated by the bond market. The European Central Bank is probably two years away from raising rates or tapering bond purchases, although a cyclical bounce may lead to a market surprise. Overall, the chance of unexpected shocks to the economy during this tightening phase is high, as is the chance that balance-sheet shrinkage will have an unpredictable impact on asset prices.
Investment outlook: A lower orbit
The sky is not falling, but our market outlook has dimmed. Since the depths of the 2008–2009 global financial crisis, Vanguard’s long-term outlook for the global stock and bond markets has gradually become more cautious—evolving from bullish in 2010 to formative in 2012 to guarded in 2017—as market returns have risen with (and even exceeded) improving fundamentals. Although we are hard-pressed to find compelling evidence of financial bubbles, risk premiums for many asset classes appear slim. The market’s efficient frontier of expected returns for a unit of portfolio risk now hovers in a lower orbit.
Based on our “fair-value” stock valuation metrics, the medium-run outlook for global equities has deteriorated a bit and is now centered in the 4%–6% range. Expected returns for the U.S. stock market are lower than those for international markets, underscoring the benefits of global equity strategies in the face of lower expected returns.
And despite the risk for a short-term acceleration in the pace of monetary policy normalization, the risk of a material rise in long-term interest rates remains modest. For example, our fair-value estimate for the benchmark 10-year U.S. Treasury yield remains centered near 2.5% in 2018. Overall, the risk of a correction for equities and other high-beta assets is projected to be considerably higher than for high-quality fixed income portfolios; balanced portfolios are expected to stunt a rise in return volatility. | http://news.capitalwise.com.au/rising-risks-to-the-status-quo/ |
Throughout the year, we highlighted a wide variety of thought leaders in our On Point column. Each interviewee shared their insight on the next great technology trend, and we’ve compiled a list of their top trends to watch as we prepare to kick off a new year.
“Using them produces significant gains in sensitivity and thus signal-to-noise ratio. I admit, however, that I am less excited about quantum key distribution, believing that post-quantum public key cryptography that is resistant to cracking by quantum computation may prove to be more reliable.” —Vint Cerf, vice president and chief Internet evangelist at Google
Read our On Point Q&A with Vint Cerf
“Convergence is a broad trend that is going to impact every element of society. It is the synergistic impact of new capabilities feeding off of each other. Think of the mission impact of all these topics converged: cloud computing, artificial intelligence, mobility, big data analytics, robotics, IoT [Internet of Things], cybersecurity, quantum computing, virtual reality, augmented reality, additive manufacturing, space sensing, advanced communications (especially 5G, Wi-Fi6). So, imagine the power of transformation by considering them all at once!” —Bob Gourley, chief technology officer and co-founder of OODA LLC, a due diligence and cybersecurity consultant who also publishes OODAloop.com and CTOvision.com
Read our On Point Q&A with Bob Gourley
“The ethical and responsible use of consumer data. Particularly as emerging technologies such as artificial intelligence, machine learning, edge computing and 5G continue to grow, the conversation around data privacy and data usage is not going away anytime soon. This notion of “what do we do with data privacy” will continue to determine the way we approach legislation, enforcement and consumerization. We’ll need to figure out how to unblur the lines around data privacy responsibly.” —Juliana Vida, chief technical advisor, public sector at Splunk Inc., and a former Navy deputy chief information officer
Read our On Point Q&A with Juliana Vida
“Advanced nanotechnology capabilities that could repair damaged cells could be tooled into the next generation of vaccine delivery systems to target viruses at the cellular level. Intelligent systems could review and assess situations and deliver recommendations with a predictive probability of success of 97.999999%. For multifactor authentication, the fusion of cybersecurity technology and biometrics could increase that to include aspects of one’s blood type, hair color, DNA and even hundreds of other aspects. The cross-fertilization of varying technologies across numerous disciplines will integrate with information technology advancements to open up brand-new fields of research, development and capabilities.” —Col. Karlton Johnson, USAF (Ret.), chairman of the Cybersecurity Maturity Model Certification (CMMC) Accreditation Body board of directors
Read our On Point Q&A with Col. Johnson
“We virtualize compute, storage and networking. We’ll extend abstraction to include all data itself. Data virtualization systems could underlie more capable and usable handling of classification levels, ideally including “the anywhere SCIF.” And predictably, generative neural nets will exploit data virtualization for good and malign purposes, so keep on your toes.” —Lewis Shepherd, senior executive at VMware and the vice chair of AFCEA’s Intelligence Committee and an advisor to several government agencies
Read our On Point Q&A with Lewis Shepherd
“The mainstreaming of artificial intelligence/machine learning algorithms in the core of our businesses and the platforms that we use. We have to understand: are the algorithms sound behind the decisions; were they developed without bias with good and broad and diverse data sets; are we allowing these algorithms to make decisions for us and/or increase our bias or proclivity to things?” —Melissa Hathaway, president of Hathaway Global Strategies and former cyber advisor to President George W. Bush and President Barack H. Obama
Read our On Point Q&A with Melissa Hathaway
“Current encryption technology (AES 256) is more than 20 years old. While quantum computing is still in its infancy, it offers the promise of providing better security and encryption technologies. We will be seeing more innovations in the next few years in the areas of post-quantum cryptography, quantum key distribution and homomorphic encryption.” —Srini Iyer, chief technology officer and head of ManTech’s Innovation & Capabilities Office
Read our On Point Q&A with Srini Iyer
“Needing to remember dozens of long, complex passwords is difficult and cumbersome. Weak passwords are easily targeted by malicious actors, resulting in 679 password attacks every second (more than 18 billion each year). My company is moving to a passwordless future. The new availability of passwordless sign-in to Microsoft accounts for enterprise accounts will enhance security of millions of users.” —Rick Wagner, corporate vice president for Microsoft Federal
Read our On Point Q&A with Rick Wagner
“Advances in computing techniques and biomedicine will unlock physiological potential, while advances in manufacturing will unlock potential for our tooling. We started innovating with fire and wheels, and now we’re working on interplanetary space flight, factories that can make suggestions about what they should produce next, and finding new ways to overcome the human body’s inherent limitations.” —David Benhaim, Chief Technology Officer, Markforged
Read our On Point Q&A with David Benhaim
“It is how the digital revolution is driving an Age of Consciousness. This changes everything.” —Dr. William Halal, Professor emeritus of Management, Technology and Innovation at The George Washington University and author of the book Beyond Knowledge
Read our On Point Q&A with Dr. William Halal
We address impacts and potential mitigation of threats to the health of forests in Pennsylvania and beyond. Threats include forest loss, climate change, invasive species, pollution, and more.
Pennsylvania at its core is a forest state. We are the department addressing the challenges that must be met to make sure our forests stay healthy for future generations. The ecological and social value of forests and natural areas is affected both by the loss of forests and by changes in the health and condition of forests. The loss of forests is most often attributed to fragmentation; however, climate change, invasive species, and pollution also represent substantial stressors on our forests.
Our faculty in research, teaching, and extension use an interdisciplinary approach to understanding the complexity of forest health and resilience at multiple scales, from an individual site to landscapes and from single species to communities including plants, animals, and the aquatic ecosystems within forested streams and wetlands. Faculty study how past and current management practices, biological invasions such as chronic wasting disease in deer and chestnut blight in American chestnut, and invasive insects such as hemlock woolly adelgid, emerald ash borer, and spotted lanternfly can cause changes in the forest ecosystem, resulting in direct and indirect effects on the abundance and distribution of co-occurring plants, birds, and other wildlife. Our faculty also work to develop innovative conservation and management strategies for mitigating these disturbances while incorporating societal needs and values into mitigation efforts, including the needs and values of private forest landowners.

Most forests in the eastern United States are privately owned and managed by a large number of individuals acting independently, and their cumulative actions can affect forest health either positively or negatively. The effective conservation and management of forests to ensure ecosystem health and resilience will therefore require the coordination of diverse groups of individuals and agencies in both the private and public sectors. As such, ongoing and future research efforts will also examine public perceptions of ecosystem health and biological invasions, and the roles people are willing to take to address these issues.
Researchers in the Department of Ecosystem Science and Management working in the area of Forest Health. | https://ecosystems.psu.edu/research/areas/forest-health |
Systemic bias and institutional marginalization of individuals and communities is pervasive throughout all aspects of our society based on, but not limited to, race, gender identity, age, ethnicity, sexual orientation, disability, socioeconomic status, cultural practices, and religion. Working at the science-policy interface, we see these inequalities reflected in the marine and coastal science landscape, including disparities in the education pipeline, access to ocean recreation, resilience to climate change impacts, and in coastal and ocean policy engagement, among much else. These inequalities are highly visible within the two fields we span as a boundary organization; both marine science and marine policy are not representative of California’s diversity. It is critical that we challenge these systemic problems through detailed and obtainable commitments to diversity, equity, and inclusion.
At California Ocean Science Trust (OST), we see diversity as central to our organizational philosophy and culture and imperative to our mission to accelerate progress towards a healthy and productive ocean future for California. Serving the most diverse state in the country, and recognizing that diverse groups are more productive, more innovative, and healthier than their homogenous counterparts, we believe that it is impossible to develop efficient and equitable solutions to climate change and its disproportionate impact on frontline communities of color without including a broad set of perspectives, experiences, and knowledge. We also recognize that diversity does not always guarantee equity or inclusion, thus we have committed to foster a culture of equity and inclusion within our organization, with our partners, and in the work we do.
Dedication to these initiatives is not only an expression of OST as an organization, but also as individuals recognizing an urgent need for action. This is why we have already implemented measures to express values of diversity, equity, and inclusion (DEI) internally and externally, such as:
- Diversifying what a scientist looks like by hiring staff from various backgrounds, and with diverse experiences and expertise – not limiting prospects to traditional paths
- Supporting the use of gender-inclusive pronouns in email signatures, social media accounts, and other identifiers (e.g. name-tags, tent cards)
- Beginning all workshops and meetings with indigenous land acknowledgements
- Ensuring the use of community commitments during workshops and meetings to cultivate inclusion and equity during discussions
- Convening a DEI Advisory Committee within the Ocean Protection Council Science Advisory Team (OPC SAT)
- Reaching out to current partners to share perspectives and learnings on our DEI initiatives
- Providing equitable investments in staff professional development
- Providing staff with work schedule flexibility (e.g. life management leave, work-from-home policy)
- Continuing land acknowledgements in all meetings with an emphasis on expanding virtual meeting land acknowledgements to include all participant locations
- Continuing to foster an inclusive environment for our Board, staff, and external partners
- Share our commitment with our community, including in job postings, internship, and fellowship descriptions
- Promote best practices when convening working groups to elevate marginalized voices and ensure inclusion of traditionally underserved communities
Acknowledging that our current efforts are too few, we have embraced this homegrown initiative and are committed to:
Short Term
- Produce electronic documents consistent with California guidance on digital document accessibility
- Pursue funding to further support our internal DEI initiative, providing staff with formalized workshops, trainings, and education opportunities
- Update our recruitment process by researching, designing, and implementing an equitable selection strategy that reflects our DEI values and commitments
- Strengthen partnerships with minority-serving institutions to build capacity and increase diversity in the coastal and ocean resources policy sphere and the natural and social sciences workforce of California, with an emphasis on empowering the next generation of scientists and professionals
Long Term
- Convert OST library of products to comply with California guidance on digital document accessibility
- Partner with organizations that put DEI at the forefront of their mission. As we acknowledge that we are not experts, we commit to supporting and amplifying their voices
As an organization, we commit to continuously assess our progress on this initiative and recognize where we need significant improvements, more awareness, and deeper engagement, using these opportunities to reflect and to advance our learning and dedication to DEI.
Additional Resources and Reading
Societal change is a communal effort; for that, we thank the following groups for their continued labor and educational efforts, and invite readers to look at some of the resources available on their websites. This list is a work in progress, please contact [email protected] with your suggestions and input. | https://www.oceansciencetrust.org/about-us/dei/ |
A Wells Fargo study shows embracing certain attitudes and behaviors could help you reach your investing and savings goals—including those for retirement.
The financial and economic impacts of the coronavirus pandemic have been far-reaching. Among people’s worries is what could come next, as well as how they might position themselves to get back on track to work toward their financial goals. Research in the past few years shows that getting and staying on track is strongly connected to mindset—in particular, a planning mindset.
Workers with a planning mindset are nearly 2 times more satisfied with their overall financial life, 2.5 times more likely to have a strong sense of personal control over their current debt situation, and 5 times more likely to have a long-term plan with overarching goals.
In 2019, the Wells Fargo Retirement study outlined some of the key statements that connect to the attitudes and behaviors that underscore what having a planning mindset means. These include:
- Setting and achieving a goal or set of goals during the past six months to support their financial lives (a financial advisor could help you set—and stick to—a plan for achieving financial goals).
- Working diligently toward a long-term goal.
- Feeling better about having finances planned out over the next one to two years.
- Preferring to save for retirement now (vs. later) to ensure they have a better life in retirement.
Below, Tim Sturr, Managing Director of Planning at Wells Fargo Advisors, shares three actions that can help you create—or strengthen—a planning mindset and potentially improve your financial well-being.
1. Make a detailed budget
Sturr says the first thing to do as part of developing a planning mindset is to create a budget—and then track what you spend. “Input from a financial specialist could help here,” he says, to make sure your budget is specific and detailed. The more detailed, the better.
A well-thought-out budget includes actions to take to reach short-term and long-term savings goals. If possible, schedule actions so that your savings can accumulate automatically, making it easier to follow through.
2. Create your retirement vision
This relates to a key behavior tied to a planning mindset: preferring to save for retirement now to help ensure you have a better life in retirement.
Creating a complete picture of what you’ll need in retirement involves evaluating the standard of living you want in retirement and being aware of your current wants and needs.
Consider factors such as costs associated with hobbies and activities you might take up in retirement, as well as necessities such as health care costs.
3. Review and update your goals regularly—including now
Sturr suggests you can take action to respond to the economic impacts of the coronavirus pandemic on your assets and income by meeting with your financial advisor. Use the meeting, he says, to understand the market movements now to help ensure your investment plan is still aligned with your near- and long-term goals. This is true even for those who already have a planning mindset.
Also at the meeting, review how much you’re spending, how much you’re saving or investing, and your risk profile to help make sure all components of your plan are aligned with your current circumstances. | https://lifescapesdirect.wellsfargoadvisors.com/retirement-plans/ |
Arjuna said, “O Janārdana, if it be Your opinion that Wisdom, that is, the Yoga of Wisdom, which is equanimity practised along with the Yoga of Knowledge with the single-pointed conviction of the Supreme Goal, is superior to action, then why, O Kesava, do You urge me to do this horrible and cruel action that will cause injury to another?
O Lord, though Your speech and utterance is eloquent, still, to me who is of dull understanding, it appears conflicting. You bewilder my understanding, as it were, with this conflicting statement. You have surely undertaken this speech to dispel my ignorance, dullness, and the confusion arising from this grave situation, and yet I am confused. Tell me for certain through which of the two, namely Knowledge or action, I may attain the Highest Good.”
The Blessed Lord said, “O unblemished One, O sinless One, there are two kinds of steadfastness and persistence in what is undertaken in this world, for the people of the three castes who are qualified for following the scriptures and were spoken of by Me, the Omniscient God, who had revealed for them the traditional teachings of the Vedas which are the means of securing prosperity and the highest Goal in the days of yore and at the beginning of creation. Jnāna–yogena, through the Yoga of Knowledge where Knowledge itself being the Yoga for the men of realization whose single-pointed conviction of the Supreme Goal namely Me remains steadfast and undeterred throughout their existence wherein even after realizing the Supreme Goal that is Me in its entirety, they continue to live till the physical frame drops off as a Jeevanmukta. The other being karma-yogena, through Yoga of action as stated for the yogis where every action and fruits of the action is surrendered to Me as they have only the right to action and not to the fruits of the action or the cause for the production of the actions. Here, they adore Me in every action, inaction, word and thought surrendering to Me practising righteousness.
A person does not attain freedom from action, that is to be steadfast in the Yoga of Knowledge by abiding in one’s own Self at all times by abstaining from actions and nor does he attain fulfilment that is steadfastness in the Yoga of Knowledge characterized by freedom from action merely through renunciation.
Because, no one ever remains without doing some work or action even for a moment. For, all the creatures born in this Nature are made to work under the compulsion of the gunas of sattva, rajas and tamas.
One who after withdrawing the organs of action sits mentally recollecting and thinking about the objects of the senses, that one of deluded mind is called a hypocrite and a sinful person.
But, on the other hand, O Arjuna, one who is unenlightened and who is eligible for action and right to perform his ordained duty, engages in Karma Yoga. He engages in Karma Yoga with the organs of action controlling the sense-organs with the mind and becomes unattached, excelling the other one as he is ever-aware and mindful of the Supreme Goal.
You, O Arjuna, perform the obligatory duties as ordained and those that you are competent to do, for action and performance of duty is superior to inaction and non-performance of ordained and customary duties. Why, through inaction and non-performance of customary actions, even the maintenance of your body will not be possible.
This man, the one who is eligible for action, becomes bound by actions, that is, by actions other than that action meant for God. Therefore, without being attached and being free from the attachment to the results of actions, O son of Kunti, you perform karma for Him.
In the days of yore, in the beginning of creation, having created the beings, together with the sacrifices, Prajāpati said, “By this sacrifice, you multiply. Let this sacrifice be your yielder of coveted objects of desire and particular results as desired.
You nourish the gods, Indra and others with this sacrifice. Let those gods nourish you. By nourishing one another, you shall attain the Supreme Good.”
Being nourished by sacrifices, the gods will indeed bestow upon you the coveted enjoyments. He is certainly a thief and a stealer of the wealth of gods and others who gratifies only his own body and organs with what enjoyable things have been bestowed by the gods without offering any to them thereby not repaying the debt to them.
Those who are partakers of the remnants of sacrifices, who, after making offering to the gods, eat the remnants of those offerings, called nectar, become freed from all sins. But the unholy selfish persons, who cook only for themselves, incur sin.
It is a matter of understanding that from the food that is eaten, creatures are born. The origin of food is from rainfall and rainfall originates from sacrifice and sacrifice has action as its origin.
Know that karma has Brahma, the Veda, as its origin, and further that Brahma (aksara-samudbhavam) has aksara, the Immutable Brahman who is the Supreme Self, as its source. Since the Veda has its origin in the Immutable Supreme Self, it becomes the revealer of everything and all-pervading. Even though all-pervading, the Veda is ever established in sacrifice, because the injunctions about sacrifices predominate in it.
O Pārtha, he lives in vain, who, though competent for action and executing the ordained and customary duties, does not follow here, the wheel of the world that is set in motion and whose life is sinful and who indulges in the senses and who only wants to enjoy the objects through the senses.
But that man has no duty to perform, who is a sannyāsin (the man of Knowledge and steadfast in the knowledge of the Self), who rejoices only in the Self and who is satisfied only with the Self and is contented only in the Self.
Moreover, for him, who rejoices in the Supreme Self, there is neither any concern or interest at all in performing action nor is there any concern or interest in non-performance of action. Moreover, for him, there is no dependence on any object, from Brahmā to an unmoving thing that will serve any purpose.
Therefore, remaining unattached, always perform the obligatory, customary and ordained duties, for by performing one’s duty without attachment (doing the assigned work as dedication to God), the person attains the Supreme or the Highest Liberation.
For in the olden days, Janaka and other learned beings, strove to attain Liberation through action itself. You should perform your ordained and assigned duties keeping also in view, the prevention of mankind from going astray and failing to do their assigned work and duties.
Whatever action a superior person or a leader does, another person follows him and does that very action. Further, whatever the superior person upholds as authority, an ordinary person follows that very thing.
na me parthasti kartavyam
trisu lokesu kincana
nanavaptam avaptavyam
varta eva ca karmani – O Pārtha, there is no duty whatsoever for Me to fulfil in all the three worlds; there is nothing that remains unachieved nor to be achieved and still I continue to do action.
yadi hy aham na varteyam
jatu karmany atandritah
mama vartmanuvartante
manusyah partha sarvasah – For O Pārtha, if at any time, I do not continue in action tirelessly, then men will follow My path in every way.
utsideyur ime loka
na kuryam karma ced aham
sankarasya ca karta syam
upahanyam imah prajah – If I do not perform karma, then all these worlds will be ruined owing to the absence of assigned and required work that is customary and obligatory for the maintenance of the worlds. Further, I shall become an agent of intermingling and confusion among beings and consequently I shall be destroying these beings.
O scion of the Bharata dynasty, as some unenlightened people act with attachment to work, so should the enlightened person, knower of the Self act without attachment being aware and conscious of preventing people from going astray.
The enlightened man should not create disturbance in the beliefs of the ignorant and the non-discriminating ones and those who are attached to work but instead, should continue performing the customary, obligatory and assigned work and duties of the ignorant while encouraging them and making the ignorant do all those duties.
While customary and obligatory actions as ordained are being done in every way by the gunas of Nature, one who is deluded by egoism thinks that “I am the doer”.
But, O mighty-armed one, the one who is a knower of facts about the qualities of the gunas and the according diversity of actions does not become attached, thinking thus: “The gunas act on the objects of the gunas namely the organs and senses.”
Those who are wholly deluded by the gunas of Nature become attached to the activities of the gunas. The knower of All and one who is the knower of the Self, should not disturb those who are attached to actions who are of dull intellect, who do not know the Supreme All.
mayi sarvani karmani
sannyasyadhyatma-cetasa
nirasir nirmamo bhutva
yudhyasva vigata-jvarah – Devoid of the fever of conscience that is being free from repentance and without remorse, engage in battle, by dedicating all actions to Me, who am the Omniscient Supreme Lord, the Supreme Self, with your mind intent on the Self becoming free from expectations and results and egoism.
Those men who eternally follow this teaching of Mine performing their ordained and obligatory duties dedicating it to Me with faith and without cavil, will also become freed from actions.
But those, who ignore and criticize this instruction of Mine and do not follow My teaching are to be known as deluded in various ways with respect to all knowledge and are devoid of discrimination and have gone to ruin.
Even a jnānavān (a man of wisdom) behaves according to his own nature. Therefore, all the created beings follow their prescribed nature. What can restraint do?
Attraction and repulsion are accorded to the objects of all organs. Therefore, one should not come under the sway of these two with extreme reactions of love and hate or like and dislike because they are his adversaries.
To do one’s own duty however blemished and deficient it might be is more commendable than doing another’s duty skillfully. Even death is better while engaged in one’s own duty as compared to being alive while engaged in somebody else’s duty which is fraught with fear.”
Arjuna said, “O scion of the Vrisni dynasty, being impelled by what that acts as a cause, makes a man commit a sin even against his wish, being constrained by a force as it were?”
The Blessed Lord said, “kama esa krodha esa
rajo-guna-samudbhavah
mahasano maha-papma
viddhy enam iha vairinam – This desire, this anger that is born of the quality of rajas is the great devourer and a great sinner. Know this desire to be the enemy here in this world.
As fire which is naturally bright is enveloped by smoke, which is born concomitantly with fire and is naturally dark, or as a mirror is covered by dirt, and, as a foetus remains enclosed in the womb, so is this shrouded by that.
O son of Kunti, Knowledge is covered by this constant enemy of the wise in the form of desire which is an insatiable fire.
The organs, the mind and the intellect are said to be its abode. This one desire diversely deludes the embodied being by veiling Knowledge with the help of these.
So therefore, O scion of the Bharata dynasty, after first controlling the organs being ever-aware and intent on the Supreme Goal, renounce this one enemy under consideration which is sinful and a destroyer of learning and wisdom.
The learned ones say that the organs are superior to the external gross body; the mind is superior to the organs; the intellect is superior to the mind. However, the one who is innermost and superior to the intellect is He, the Supreme Self who is a witness of the intellect.
evam buddheh param buddhva
samstabhyatmanam atmana
jahi satrum maha-baho
kama-rupam durasadam – Understanding the Self as thus superior to the intellect and completely establishing the mind in the Self in absolute immersion and absorption, O mighty-armed one, vanquish this enemy in the form of desire that is difficult to subdue. | https://sandeepa.in/2019/03/04/karma-yoga-chapter-3-slokas-3-01-3-43/ |
WIPO Re:Search: Advancing Product Development for Neglected Infectious Diseases through Global Public-Private Partnerships
WIPO Re:Search Consortium unites public and private market forces to address neglected tropical diseases (NTDs), malaria, and tuberculosis (TB) through sharing of intellectual property across sectors and geographies. To date, WIPO Re:Search has catalyzed over 150 R&D collaborations and managed capacity-building fellowships for scientists across sub-Saharan Africa and other low- and middle-income regions. This publication highlights seven exciting collaborations that are advancing solutions to help over one billion people who suffer from NTDs, malaria, and TB.
Year of publication: 2019
WIPO Re:Search Partnership Stories 2016-2019: Driving R&D for Neglected Infectious Diseases Through Global Cross-Sector Collaborations
WIPO Re:Search is a global public-private consortium that accelerates drug, vaccine, and diagnostic research and development (R&D) to address unmet medical needs for neglected infectious diseases and drive progress toward the United Nations Sustainable Development Goals. Established in 2011, WIPO Re:Search catalyzes royalty-free sharing of intellectual property—including compounds, data, clinical samples, technology, and expertise—among Consortium Members in targeted, mutually beneficial R&D collaborations. This publication contains stories of collaborations established through WIPO Re:Search from 2016 to 2019.
WIPO Collection of Leading Judgments on Intellectual Property Rights
People's Republic of China (2011–2018)
This casebook of judgments by the Supreme People's Court of the People's Republic of China is the first volume in the WIPO Collection of Leading Judgments on Intellectual Property Rights. The WIPO Collection gives the global intellectual property community access to landmark judgments from some of the most dynamic litigation jurisdictions of the world, through a succession of volumes that illustrate intellectual property adjudication approaches and trends by jurisdiction or by theme.
A Guide to the Main WIPO Services
The World Intellectual Property Organization offers a wide range of global IP services. They provide a highly efficient, fast and cost-effective means of helping innovators and creators - both corporate and individual - protect their inventions, trademarks and designs in multiple countries, and also resolve their IP disputes, including those involving domain names. This brochure offers a brief overview of these global services.
Annual financial report and financial statements
WIPO financial statements are submitted to its Assemblies of Member States in accordance with the Financial Regulations and Rules.
Measuring Innovation in the Autonomous Vehicle Technology
Economic Research Working Paper No. 60
The automotive industry is going through a technological shock. Multiple intertwined technological advances (autonomous vehicles, connected vehicles and mobility-as-a-service) are creating new rules for an industry that had not changed its way of doing business for almost a century. Key players from the tech and traditional automobile sectors – although with different incentives – are pooling resources to realize the goal of self-driving cars. AV innovation by auto and tech companies is still largely home-based; however, there is some shifting geography at the margin. AV and other related technologies are broadening the automotive innovation landscape, with several IT-focused hotspots – which traditionally were not at the center of automotive innovation – gaining prominence.
Global Roots of Innovation in Plant Biotechnology
Economic Research Working Paper No. 59
Innovation in agricultural biotechnology has the potential to increase agricultural productivity and quality, ultimately raising incomes for farmers across the world. Advances in the field have produced crops that are resistant to certain diseases, that result in higher yields than before, that can grow in extreme soil conditions, such as in arid and salty environments, and even crops that are infused with nutrients. Moreover, the technology has been hailed as a potential solution to addressing global issues of hunger and poverty. It therefore follows that innovation in this field finds strong support from the public sector as well as the private sector. This paper traces the evolution of the global innovation landscape of plant biotechnology over the past couple of decades. Drawing on information contained in patent documents and scientific publications, it identifies the sources of innovation in the field, locates them geographically, and demonstrates how these innovative centers connect to one another. There are three important findings. First, the global innovation network of agricultural biotechnology is a prime example of how innovation activities spread to many parts of the world. Second, while more countries are participating in the innovation network, most of these innovation centers are concentrated in urban areas, away from the rural areas where most of the transgenic crops are harvested. Third, the increasing need for collaboration between the private and public sectors to bring inventions to market may affect how the returns to innovation are appropriated.
Guidelines to using evidence from research to support policymaking
This Guide elaborates on the best practices in conducting empirical studies in the intellectual property (IP) field. In so doing, it seeks to improve the credibility of studies, enhance transparency about what conclusions can and cannot be drawn from such studies, and encourage responsible use of studies by IP stakeholders.
Urgent Innovation – Policies and Practices for Effective Response to Public Health Crises
Public health crises require urgent innovation, not only in research and development (R&D) but also in the delivery of therapies and diagnostics. What constitutes “urgency” and “innovation” in these contexts? How are priorities and targets determined? Who is best placed to deliver results? This edition of the Global Challenges in Focus series explores themes discussed at a recent Global Challenges Seminar on the policies and practices that facilitate effective responses to global health crises.
Guide to WIPO's services for country code top-level domain registries
This guide presents country code top-level domain (ccTLD) registry operators and national authorities with information on how to resolve third-party domain name disputes in a cost- and time-saving manner. It explains the main policy design features of a successful Alternative Dispute Resolution (ADR) system, and provides information on the WIPO-created Uniform Domain Name Dispute Resolution Policy (UDRP), including the possibility to tailor the UDRP for specific ccTLD requirements. | https://www.wipo.int/publications/fr/search.jsp?set6=cc&lang=EN&cat2=1&sort=pubDate&pubDate=2019 |
Despite decades of research that have significantly enhanced our understanding of the etiology and treatment of eating disorders, many individuals with these life-threatening illnesses continue to struggle without help or even recognition of their disorder. The eating disorder community (i.e., patients, caregivers, advocates, providers) is well-versed in the physical and psychological dangers conferred by these illnesses and continues to work tirelessly to help ensure that those affected receive the necessary treatment. However, many individuals with eating disorders report feeling unprepared or afraid of reaching out for help. With the many myths and stigma that exist about eating disorders, it is little wonder that those with these disorders feel reluctant to seek help. In fact, a recent systematic review conducted by Ali and colleagues (2017) sought to shed light on the perceived barriers to treatment and, importantly, to identify key factors that help facilitate help-seeking.
The authors of this systematic review searched databases for studies that examined barriers and/or facilitators of help-seeking for eating disorders. Their search revealed 3,493 abstracts, of which 3,349 were dually reviewed to determine which met the predetermined inclusion/exclusion criteria (144 were removed from the initial search as they were duplicates). Only 47 abstracts were found to meet the criteria for inclusion, and these full manuscripts were dually reviewed for a more detailed examination of their data. Thirty-five studies were excluded after full review, most commonly because the study did not examine perceived barriers or facilitators for help-seeking in eating disorders. After identifying an additional study in the review process, a total of 13 studies were included in the systematic review (see PRISMA diagram here). Risk of bias ratings (an evaluation of the methodological quality of each of the included studies) were dually conducted according to established criteria and any disagreements about these ratings were resolved by discussion.
Thus, the results of this systematic review were based on eight qualitative, three quantitative, and two mixed-methods studies (N=13) that were included in the review. Most of the studies were conducted in the United States and primarily included adult women with eating disorder diagnoses across the spectrum (e.g., anorexia nervosa, bulimia nervosa, binge-eating disorder, and eating disorders not otherwise specified). Barriers and facilitators reported in each study were extracted and coded into subthemes such that the authors identified 15 key barriers and 10 key facilitators to help-seeking. Among the barriers identified were: stigma and shame, denial/lack of awareness, cost/transportation/inconvenience factors, and a lack of encouragement from others. Facilitators of help-seeking included: other mental health problems, health problems, and encouragement from friends and family (for an exhaustive list of the prominent barriers and facilitators, please refer to Ali et al., 2017).
Results from this systematic review highlight areas that prevention and education programs can target to help reduce help-seeking barriers. In fact, the authors suggest that awareness and prevention programs should focus on reducing shame and stigma around eating disorders, and educating people about these disorders, their impact, and available resources. Thankfully, there are some great existing resources like those from the National Eating Disorders Association and Project Heal that aim to do just that. Similarly, those local to the Chapel Hill area are encouraged to attend an Embody Carolina training – a peer-to-peer training designed to help train individuals to become allies for those with eating disorders. Finally, you can learn more about the truths about eating disorders (rather than the myths) to help reduce stigma and educate yourself about these illnesses. Collectively, these evidence-based efforts have the potential to help make a meaningful impact on those struggling with eating disorders who might be otherwise hesitant to reach out for crucial treatment. | https://uncexchanges.org/2017/01/13/barriers-and-facilitators-of-help-seeking-for-eating-disorders-results-from-a-recent-systematic-review/?shared=email&msg=fail |
Integrative primary health care (IPHC) is an approach to health care that combines conventional medicine with evidence-based complementary and alternative medicine (CAM) therapies.
IPHC is often used interchangeably with the term "complementary and integrative medicine" (CIM). However, CIM is a broader term that includes any health care approach that augments or complements conventional medicine, while IPHC specifically refers to the delivery of health care services that integrate both conventional and CAM therapies.
The goal of IPHC is to provide patients with comprehensive, individualized care that addresses the mind, body, and spirit. This approach recognizes that each person is unique and, as such, may require a different combination of treatments to achieve optimal health and well-being.
Why Is It Important?
It Can Help Address the Root Cause of Disease
Integrative and functional medicine practitioners take a comprehensive approach to health care, which means they look beyond the symptoms of a disease to identify and treat its underlying causes. This is in contrast to conventional medicine, which often focuses on treating the symptoms of disease with medication. By addressing the root cause of disease, integrative and functional medicine can help to prevent the need for medication in the first place.
It Emphasizes Prevention
Conventional medicine has traditionally focused on treating diseases and conditions after they have developed. While this approach is often effective, it can also be costly and may not always address the underlying cause of the problem. In contrast, practitioners of integrative primary health care (IPHC) take a proactive approach to health and wellness. Rather than wait for problems to arise, IPHC practitioners work with patients to prevent disease and promote overall health and wellbeing. This may involve lifestyle counseling, such as guidance on diet and exercise, as well as stress management techniques. By taking a preventative approach, IPHC practitioners can help their patients avoid many of the health problems that often lead to doctor’s visits in the first place.
It Integrates the Best of Conventional and CAM Therapies
The integration of conventional and CAM therapies is one of the key features of the IPHC approach to care. By using a combination of therapies, IPHC practitioners can tailor treatment plans to the individual needs of each patient. This customized approach often leads to better outcomes for patients, as it takes into account the unique nature of each person’s condition. The integration of conventional and CAM therapies also allows IPHC practitioners to provide a more comprehensive level of care. By offering both types of therapies, patients can receive the best possible care for their specific needs. This commitment to providing high-quality, individualized care is one of the many things that sets IPHC apart from other healthcare providers.
It Treats the Whole Person
IPHC practitioners view each patient as a unique individual, rather than simply a collection of symptoms. This holistic approach to care recognizes that the mind, body, and spirit are interconnected and that each plays an important role in overall health and wellbeing. By taking the whole person into account, IPHC practitioners can develop treatment plans that address the physical, mental, and emotional needs of their patients. This comprehensive approach often leads to better outcomes, as it helps to address the underlying causes of disease.
It Is Personalized
One of the key features of IPHC is that it is personalized. This means that practitioners take into account the unique needs of each patient when developing a treatment plan. This individualized approach often leads to better outcomes, as it allows practitioners to tailor care to the specific needs of each person. In addition, this personalized approach can also help to build trust and rapport between practitioner and patient. By taking the time to get to know their patients, IPHC practitioners can provide the best possible care.
It Is Evidence-Based
IPHC is an evidence-based approach to health care. This means that practitioners use the latest research to guide their decision-making. By staying up-to-date on the latest developments in the field, IPHC practitioners can ensure that their patients receive the best possible care. In addition, this evidence-based approach also helps to ensure that treatments are safe and effective. By basing their decisions on the latest research, IPHC practitioners can provide their patients with the highest quality of care possible.
The integrative health care approach is holistic, meaning that it takes into account all aspects of a person’s life when diagnosing and treating an illness or disease. This approach is becoming more popular as people are looking for ways to improve their overall health and well-being. If you’re interested in learning more about integrative primary health care or would like to schedule an appointment, call us today. We would be happy to answer any questions you have and help you get started on your journey to better health.
Disclaimer: The materials available on this website are for informational and entertainment purposes only and not for the purpose of providing health advice. You should contact your physician to obtain advice with respect to any particular issue or problem. You should not act or refrain from acting on the basis of any content included in this site without seeking medical, legal or other professional advice. The information presented on this website may not reflect the most current medical developments. No action should be taken in reliance on the information contained on this website and we disclaim all liability in respect to actions taken or not taken based on any or all of the contents of this site to the fullest extent permitted by law.
Do You Need a Functional Medical Clinic You Can Trust?
As you know Functional Medicine asks how and why illness occurs and restores health by addressing the root causes of disease for each individual. Our goal for all of our patients at Hope for Healing is to optimize whole health, wellness, immunity, and longevity and find and fix the root problems permanently. All of our licensed medical providers have been trained by the Institute for Functional Medicine (ifm.org) and work collaboratively as part of the provider team under the leadership and direction of Paula Kruppstadt MD DABP FAAP IFMCP. We are trained to listen to our patients and take the time to do an appropriate root-cause analysis to find and implement permanent solutions together. If this sounds like we’d be the right fit for you, contact us today at (281) 725-6767! | https://get2theroot.com/integrative-primary-health-care-what-is-it-and-why-its-important/ |
At MountainView Risk & Analytics we provide guidance to clients around Bank Secrecy Act (BSA) and Anti-Money Laundering (AML) models. The scope of our work includes model documentation, model performance assessments, and model conceptual evaluations. Our quantitative expertise, risk management, and compliance background allow us to advise on how SR 11-7 impacts the practices surrounding BSA/AML and risk management. In this blog, Peter Caya breaks down the White Paper, which discusses how the principles of risk management guidance apply to BSA/AML models. Peter is Assistant Vice President at MountainView Risk & Analytics and an anti-money laundering and fraud validation specialist with extensive experience. He is a Certified Anti Money Laundering Specialist (CAMS).
The White Paper was published in response to a joint statement issued on April 9, 2021 by the Board of Governors of the Federal Reserve System, the FDIC, and the Office of the Comptroller of the Currency, in consultation with the Financial Crimes Enforcement Network and the National Credit Union Administration. The statement, referred to as the SR 11-7 mandate, describes the principles of how risk management applies to BSA and AML models. International organizations and the US government continue to push for governance principles to be applied to the fight against money laundering, which has accelerated in part due to COVID-19. We explain why and how to investigate these AML models in the White Paper.
Typically, the institutions and employees that are working with these models know how they are applied and have direct practical experience related to AML. The missing component is that they often do not understand how model risk management principles apply to these models.
This is as expected since the history of applying risk management principles to AML models is short. Additionally, the BSA/AML area poses its own unique problems. Criminals are always trying to launder cash from illegal activities and innovating their methods. They use banks, casinos, and art dealers as a few examples to clean money. They are constantly evolving and therefore these models need to be tested often for accountability and accuracy.
We offer guidelines for governing these models, establishing what counts as a model and why that distinction is important. SR 11-7 describes a model as a system within the bank that contains three components: an input component (which takes data in), a transformation component (which processes that data), and a reporting component that returns results to the business user. In the White Paper, we provide a succinct outline and flow chart to help determine and rule out anything that may not stand up to the requirements of being a model.
After determining what counts as a model, it’s important to ensure that the model is working correctly. Often this takes the form of three different kinds of analysis: Classification Accuracy, Historic Performance and Sensitivity Testing. These methods verify that banks are testing their models and taking responsibility for them. Our White Paper describes each of these methods in detail.
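To make the first of these concrete, the sketch below shows one way a validator might compute classification-accuracy statistics for an alert model against back-tested investigation outcomes. It is an illustrative Python sketch only: the data layout and field names are our own assumptions, not part of SR 11-7 or any vendor's software.

```python
def classification_metrics(alerts):
    """alerts: iterable of (flagged, confirmed) pairs, where `flagged` is the
    model's alert decision and `confirmed` is the back-tested investigation outcome."""
    tp = fp = fn = tn = 0
    for flagged, confirmed in alerts:
        if flagged and confirmed:
            tp += 1        # alert that was warranted
        elif flagged and not confirmed:
            fp += 1        # false alarm: wasted investigation effort
        elif confirmed:
            fn += 1        # suspicious activity the model missed
        else:
            tn += 1
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return {"tp": tp, "fp": fp, "fn": fn, "tn": tn,
            "precision": precision, "recall": recall}

# Example: four back-tested transactions (model decision, investigator finding)
print(classification_metrics([(True, True), (True, False),
                              (False, True), (False, False)]))
```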
The White Paper also addresses Handling Third Party Models. It is not uncommon that an outside software vendor has sold a bank software that they can use internally. Software companies might give the institution some idea of what their software does, but often they don’t share the inner workings of the model. How can we help institutions choose a better model to work with? The governance aspect applies to how the bank is managing its relationship with the vendor. Is the vendor providing information on the model as it updates and changes? Is the vendor communicating with the bank? Sometimes vendors might update a model, but not inform the bank of what changes they made or why they made them. This exchange of information is very important to model validation.
Banks and credit unions still retain responsibility for their BSA/AML models despite bringing in an outside company to provide the model software. The institution is still responsible for performing its own analysis, monitoring its third-party models and conducting its own independent performance testing. They are responsible for understanding the relationship with the model vendor and for making sure the model is working effectively and that its results can be justified. SR 11-7 describes accountability and governs everything in terms of risk management. It clearly puts the responsibility on the institutions.
The information presented in the White Paper is based on firsthand experience and the most up-to-date details and regulations of model risk management as it applies to money laundering. At MountainView Risk & Analytics, we advise based on a wide breadth of experience in model risk management. This experience extends to a specialty in advising compliance professionals and business line users on the management and improvement of software used for fraud and BSA/AML. This is combined with a deep understanding of the quantitative background that these models are built upon that allows greater scrutiny on their performance and mechanics. We invite you to gain more guidance on the SR 11-7 framework by reading our White Paper:How Do the Principles of Model Risk Management Guidance Apply to BSA/AML Models? or contacting our team today.
Written by Peter Caya, CAMS
About the Author
Peter advises financial institutions on the statistical and machine learning models they use to estimate loan losses, or systems used to identify fraud and money laundering. In this role, Peter utilizes his mathematical knowledge, model risk management experience to inform business line users of the risks and strengths of the processes they have in place. | https://www.mountainviewra.com/2021/08/17/making-sense-of-sr-11-7-guidelines-for-bsa-aml/ |
Creativity to cut carbon: Technological, financial and business model innovation in the United States
Innovation is a core part of the US energy sector. Over the last decade, its effects have reverberated across the entire energy value chain, from technological advances in oil and gas to financial and process innovations in clean energy.
In fossil fuels, for instance, natural gas production per rig has increased nearly 9-fold since 2010 in the northeastern US. As for low-carbon technologies, costs for onshore wind power have fallen 40% between 2008 and 2014, while those for solar PV (solar photovoltaic panels) technology were 59% lower in 2014 than they were in 2008. Financial innovations, such as solar leasing, have also helped spur the rollout of distributed solar PV in the US market.
Though encouraging, these gains due to innovation are slow moving. Of particular concern is public spending on clean energy research, development and deployment (RD&D), which is languishing. As a share of total public RD&D spend, public spending on energy RD&D has fallen considerably in rich countries – from 11% in the 1980s to 3-4% in 2015. And while the private sector has invested substantially in deployment, the level of research and development (R&D) within the energy sector is relatively low, particularly compared to other technologically advanced industries, such as information technology or pharmaceuticals.
In that context, and as part of an effort involving 20 countries, the US announced at the 2015 Paris Climate Change Conference its intention to double its public investment in clean energy R&D over the next five years. This would bring US investment levels in clean energy to nearly $10bn per year by 2020. Expanding basic research today is absolutely vital to developing tomorrow’s innovations.
Managing carbon
The need to meet environmental goals will require the US to focus on further decoupling carbon emissions from economic growth. Although energy-related CO2 emissions have grown more slowly than GDP since 2014 in the US (0.9% in 2015 vs. 2.4% in 2014) and have dropped in absolute terms by 15% since last year (due largely to the switch from coal to gas for power generation, to the deployment of renewables, and to improvements in energy efficiency), overall emissions from the sector are still higher than they were in 2012.[1] In particular, some two-thirds of America’s electricity is still generated with fossil fuels. Even with the Obama administration’s Clean Power Plan, this is only forecast to decline to about 53% by 2040.[2]
With such a high share of fossil fuels in the 2030 electricity mix, some still hope that carbon capture and storage (CCS) will play a significant role in future emissions reductions. Indeed, North America accounts for more than half of CO2 capture projects in development or operation globally, according to the International Energy Agency (IEA). And in 2014, Canada became the first country with an operational, large-scale coal power plant with full CCS capabilities. But while this is an important milestone, technical problems and cost overruns suggest the widespread use of CCS technology is hardly imminent.
In fact, under current trends, projected CCS annual storage capacity for 2025 from the entire world’s existing and planned CCS projects is only 63MtCO2e—a tiny fraction of today’s US emissions of more than 5,400MtCO2e. This discrepancy is critical, as it is often assumed that CCS will play a major role in keeping the increase in global temperature to less than 2 degrees centigrade.
Financial and business model innovation
Meanwhile, financial innovation has helped deliver renewable energy projects that would not have been feasible before. These include green bonds—which direct or earmark funds for projects delivering both returns and clear emissions reductions— and Yieldcos, which offer steady dividends from operating renewable energy projects. While Yieldcos are currently struggling, they have succeeded in bringing new sources of capital into energy infrastructure. Going forward, more financial innovation is needed, from solar leasing to securitisation. Even crowdfunding has found its way into financing distributed energy generation.
Other innovations address business model shifts. In Texas, for example, some utilities have been offering residential customers free electricity from 9pm-6am, coupled with slightly higher daytime rates, as a way to shift electricity consumption patterns. The state, which accounts for 24% of the total installed wind capacity in the US,[3] saw record wind-power generation in 2015; on some days, wind supplied over a third of the total load. These sorts of demand management activities can be further supported through the integration of smart meters or smart appliances. Connectivity, automation and data management can also make energy consumption more flexible.
Renewable energy
In recent years, the majority of new generation capacity additions have been in wind and solar, although the variability of their power generation leaves each plant producing fewer kilowatt hours for a given megawatt of installed capacity than fossil, hydro or nuclear resources. Indeed, the rapid growth of wind and solar can obscure the fact that together they accounted for a mere 5.3% of the US’s electricity production in 2015.[4]
It is clear that more is needed. According to the IEA, keeping temperature increases below 2 degrees will require the US to cut its share of electricity produced with fossil fuels from two-thirds in 2013 down to a third by 2040. But even as the cost of renewable energy falls, integrating intermittent energy sources into the electricity system, at a large scale, will require system adaptations and new investments.
“The early trials of high penetration of renewables are debunking the myth of [renewables] being bad for the grid because we’re seeing very concrete ways to tackle it—like smart inverters—and, ultimately, as storages prices come down, that will solve any problems,” says Nancy Pfund, managing partner at DBL Partners, a venture capital and impact investment firm that has been an early investor in companies such as Tesla and SolarCity.
Energy storage
The US, like the world more generally, is far from deploying energy storage at scale, however. For instance, to stay beneath a 2 degree temperature increase, the US would need to quadruple its storage capacity from 20GW in 2011 to 80GW in 2040. Pumped hydropower, which currently accounts for nearly all of US energy storage capacity, will not be able to fill this gap on its own due to geographical constraints that limit the number of places where large-scale dams can be built. This means that a range of new technologies will be needed.
There have been positive developments. In 2015, for instance, the US installed some 221MW of storage capacity—a 340% increase over 2014 levels, according to McKinsey & Company.[5] Tesla’s Giga-factory is also now open, and could produce up to 35GWh worth of storage capacity for electric vehicles as early as 2018. Building on these developments will require significant R&D investments and policy efforts. “Until the right policies are lined up so you can sell all the value of the storage, the market pull isn’t that big. Adequacy and grid balancing are not being monetised at the moment, for instance,” notes John Coequyt, director of Federal and International Climate Campaigns at the Sierra Club, an environmental advocacy group.
Some states have started to recognise this need for action. California, in particular, aims to procure 1.3GW of new energy storage capacity by 2020, with installations required no later than 2024. Most storage technologies, however, have a levelised cost of storage (a metric used for cost comparisons between different storage technologies) that is still higher than the approximately $175/MWh considered to be competitive with a gas-peaker plant.[6]
Innovation is thus central to decarbonisation efforts in the US and elsewhere. At the moment, however, US public investment in research, development and demonstration of battery technology remains limited.
Ramping up innovation efforts in clean energy will be key if the US is to meet its energy and environmental needs. For as David Turk, deputy assistant secretary for International Climate and Technology at the US Department of Energy notes, “If you don’t have robust innovation, if you don’t have new technologies, and if you don’t have cost reductions, countries are just not going to be able to be as ambitious as needed in their climate change efforts going forward".
Carolyn is a senior editor for The Economist Intelligence Unit's thought leadership division in the Americas. She manages research programs for foundations and corporations on topics ranging from urbanization and jobs to sustainability and youth economic prospects. She has over 20 years’ experience in journalism. Until 2013 Carolyn contributed articles to Fortune, Newsweek, the IHT and SciAm.com about urbanization, infrastructure, trade, technology and transportation, among other topics. She has also written materials for Ernst & Young, Columbia Business School and the United Nations. Earlier Carolyn covered the technology and healthcare beats for Barron’s Online and Dow Jones Newswires in Paris, respectively. She broke into journalism covering the 1992 Earth Summit and subsequently worked for the World Wildlife Fund in Switzerland. Ms. Whelan holds a B.A. in Communications from the University of Virginia and is a 2006 Columbia Business School Knight-Bagehot Fellow. She is Swiss and American, and speaks fluent French and Spanish.
Our friendly neighborhood photog, Kevin, has created a 2017 calendar of images from performances and workshops at Suspend and, by happy coincidence, Denis and I are collectively Mr. February (I’m a February baby).
The image he used is one of my all-time favorites, captured during our dress rehearsal/tech run of “Duelo Trapecio.”
In a lot of ways, this image speaks to the best gift that Denis has given me: specifically, a stable foundation from which to fly.
Literally, in this picture, I’ve just mounted the trapeze from a candlestick:
…and I’m lifting my body out of Denis’ hands so he can roll to the side and I can beat up to a pike balance. (Technically, in this choreography, that’s all one move for me: I use the muscles of my back to pull up into an arc, release my back à la Martha Graham at the top, then allow momentum to carry me around the bar and the act of straightening my legs to pull me into the pike balance.)
As these things go, it’s a fairly basic acro-to-trapeze transition, but it’s not without risk.
In this sequence, timing is crucial — if he releases before my knees catch the bar, I have a split-second to react so I don’t pile-drive into his face and potentially break my own neck. If I enter the swinging phase of my beat too soon, I’ll whack him in the head with my hands or head at high velocity.
Likewise, if (as happened on the night of our first performance!) something goes wrong(1) and the trapeze isn’t where it should be, it’s up to me to gracefully exit the candlestick without making both of us look like idiots (hello, walk-over), and up to him to proceed smoothly with his portion of the choreography.
- What happened, in practice, was that D somehow got blinded by the stage lights during his transition from the previous sequence (in which I cartwheel and he catches my legs) and whacked his head on the trapeze! It’s on a rotating point, so it turned 90 degrees and wasn’t there when I reached the apex of the candlestick. Thank G-d for the billion years of training and preparation that made me steam right on through with a walk-over followed by a straddle mount.
Metaphorically, he has grounded himself so I can reach my goal (the trapeze) and soar. He has lifted me up without hanging on. I have trusted him to support me, and he has trusted me to take care of myself and of him.
As a model for relationships, there’s much to be said in favor of partnering. Each party must do his or her share of the work, each party is accountable to the other, and when both parties do what they need to do, the result is a beautiful harmony of movement; poetry in motion indeed.
When things go wrong, as they sometimes do, the dancers or aerialists in a good partnering relationship are able to respond accordingly — and while nothing can prevent all harmful outcomes, the care and attention that go into this kind of work allow for damage control through rapid-fire adjustments (and the kind of trust that can think, “I get that you’re presently holding me up by my unmentionables so I won’t fall and break my neck so later we can laugh at this trainwreck instead of crying about it…”).
Perhaps most importantly, though, a good partnering relationship allows us to accomplish things we cannot do alone — like a pas-de-chat that floats two meters above the ground, or (as in our example above) mounting a dance trapeze from a handstand(2).
- In an unassisted handstand, this trap hits too high for that. I could manage an ankle hang, or I could maybe mount from a front handspring, but a regular handstand won’t get me to the position depicted.
A good relationship of any kind, really, allows us to accomplish things we couldn’t on our own.
I am able to pursue my dreams because I have a strong and stable partner helping to lift me up towards them. I hope that I am, at least to some degree, doing the same for D. But it’s not only romantic partners and spouses who can do those things — good friends, loving parents and siblings, and even our peers in the dance studio lift us towards our dreams.
Just as ballet partnering depends not on romantic attachment(3), but on consistency and trust, so with the relationships in our lives that allow us to fly.
- Not that I would deny a certain kind of romantic sensibility that can evolve even in the most platonic of these relationships — but that’s a topic for another time.
I am, of course, planning on buying a copy of Kevin’s calendar for our house (and for my Mom and Mother-in-Law, as Christmas presents). It will help keep him in photographic equipment so he can continue to grow as an artist and to take amazing pictures of all of us that sometimes manage to say a great deal about important things.
Posted on 2016/11/23, in ballet and tagged a picture is worth a thousand words, aerials, art demonstrates life, calendar boys, fotoewizzard photography, Kevin Spalding, partnering. | https://danseurignoble.com/2016/11/23/calendar-boys/
What Is a Moving Average?
A moving average (MA) is a widely used indicator in technical analysis that helps smooth out price action by filtering out the “noise” from random short-term price fluctuations. It is a trend-following, or lagging, indicator because it is based on past prices.
The two basic and commonly used moving averages are the simple moving average (SMA), which is the simple average of a security over a defined number of time periods, and the exponential moving average (EMA), which gives greater weight to more recent prices. The most common applications of moving averages are to identify the trend direction, and to determine support and resistance levels. While moving averages are useful enough on their own, they also form the basis for other technical indicators such as the Moving Average Convergence Divergence (MACD).
Because we have extensive definitions and articles around specific types of moving averages, we will only define the term "moving average" generally here.
The Formulas For Moving Averages Are
Simple Moving Average:
SMA = (A1 + A2 + … + An) ÷ n
where:
An = the price of the asset in period n
n = the number of time periods
The simple moving average calculates the arithmetic mean of a security over a number (n) of time periods, A.
Exponential Moving Average:
EMAt = [Vt × (s ÷ (1 + d))] + EMAy × [1 − (s ÷ (1 + d))]
where:
EMAt = EMA today
Vt = value today
EMAy = EMA yesterday
s = smoothing
d = number of days
To calculate an EMA, you must first compute the simple moving average (SMA) over a particular time period. Next, you must calculate the multiplier for weighting the EMA (the smoothing), which typically follows the formula: [2 ÷ (selected time period + 1)]. So, for a 20-day moving average, the multiplier would be [2/(20+1)]= 0.0952. Then you use the smoothing factor combined with the previous EMA to arrive at the current value. The EMA therefore gives a higher weighting to recent prices, while the SMA assigns equal weighting to all values.
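As a concrete illustration, here is a minimal Python sketch of both calculations. The function names and sample prices are ours, chosen only for illustration; in practice, charting packages compute these values for you.

```python
def simple_moving_average(prices, period):
    """Arithmetic mean of the most recent `period` prices."""
    window = prices[-period:]
    return sum(window) / period

def exponential_moving_average(prices, period):
    """EMA seeded with the SMA of the first `period` prices, then updated
    with the standard smoothing multiplier 2 / (period + 1)."""
    multiplier = 2 / (period + 1)            # e.g. 2 / (20 + 1) = 0.0952 for 20 days
    ema = sum(prices[:period]) / period      # seed value: the simple average
    for price in prices[period:]:
        ema = (price - ema) * multiplier + ema   # recent prices carry more weight
    return ema

closes = [20, 22, 24, 25, 23, 26, 28, 26, 29, 27]
print(simple_moving_average(closes, 5))
print(exponential_moving_average(closes, 5))
```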
What Do Moving Averages Tell You?
Moving averages lag current price action because they are based on past prices; the longer the time period for the moving average, the greater the lag. Thus, a 200-day MA will have a much greater degree of lag than a 20-day MA because it contains prices for the past 200 days. The length of the moving average to use depends on the trading objectives, with shorter moving averages used for short-term trading and longer-term moving averages more suited for long-term investors. The 50-day and 200-day MAs are widely followed by investors and traders, with breaks above and below this moving average considered to be important trading signals.
Moving averages also impart important trading signals on their own, or when two averages cross over. A rising moving average indicates that the security is in an uptrend, while a declining moving average indicates that it is in a downtrend. Similarly, upward momentum is confirmed with a bullish crossover, which occurs when a short-term moving average crosses above a longer-term moving average. Downward momentum is confirmed with a bearish crossover, which occurs when a short-term moving average crosses below a longer-term moving average.
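The crossover logic described above is straightforward to express programmatically. The sketch below is our own illustrative Python, with arbitrarily chosen 5- and 10-period windows; it flags bullish and bearish crossovers between a short-term and a long-term simple moving average.

```python
def sma_series(prices, period):
    """Rolling simple moving average; None until enough prices have accumulated."""
    return [None if i + 1 < period
            else sum(prices[i + 1 - period:i + 1]) / period
            for i in range(len(prices))]

def crossover_signals(prices, short=5, long=10):
    """Return (index, 'bullish' or 'bearish') wherever the short SMA crosses the long SMA."""
    s, l = sma_series(prices, short), sma_series(prices, long)
    signals = []
    for i in range(1, len(prices)):
        if None in (s[i - 1], l[i - 1], s[i], l[i]):
            continue
        if s[i - 1] <= l[i - 1] and s[i] > l[i]:
            signals.append((i, "bullish"))    # short-term average moves above long-term
        elif s[i - 1] >= l[i - 1] and s[i] < l[i]:
            signals.append((i, "bearish"))    # short-term average drops below long-term
    return signals
```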
Predicting trends in the stock market is no simple process. While you can not predict what will happen exactly, you can give yourself better odds using technical analysis and research. Putting your research and technical analysis to the test in the market would require a brokerage account. Picking a broker can be frustrating due to the variety among them, but you can pick one of the best online stockbrokers to find the right platform for your needs.
Moving averages are a totally customizable indicator, which means that the user can freely choose whatever time frame they want when creating the average. The most common time periods used in moving averages are 15, 20, 30, 50, 100 and 200 days. The shorter the time span used to create the average, the more sensitive it will be to price changes. The longer the time span, the less sensitive, or more smoothed out, the average will be. There is no "right" time frame to use when setting up your moving averages. The best way to figure out which one works best for you is to experiment with a number of different time periods until you find one that fits your strategy.
Key Takeaways
- A moving average is a technique often used in technical analysis that smooths price histories by averaging daily prices over some period of time.
- A simple moving average (SMA) takes the arithmetic mean of a given set of prices over a past number of days, for example over the previous 15, 30, 100, or 200 days.
- An exponential moving average (EMA) uses a weighted average that gives greater weight to more recent days to make it more responsive to new information.
- When asset prices cross their moving averages, it may generate a trading signal for technical traders.
Simple vs. Exponential Moving Average
The simplest form of a moving average, appropriately known as a simple moving average (SMA), is calculated by taking the arithmetic mean of a given set of values. In other words, a set of numbers, or prices in the case of financial instruments, are added together and then divided by the number of prices in the set. The exponential moving average is a type of moving average that gives more weight to recent prices in an attempt to make it more responsive to new information. Learning the somewhat complicated equation for calculating an EMA may be unnecessary for many traders, since nearly all charting packages do the calculations for you.
Now that you have a better understanding of how the SMA and the EMA are calculated, let's take a look at how these averages differ. By looking at the calculation of the EMA, you will notice that more emphasis is placed on the recent data points, making it a type of weighted average. In the figure below, the number of time periods used in each average is identical (15), but the EMA responds more quickly to the changing prices. Notice how the EMA has a higher value when the price is rising, and falls faster than the SMA when the price is declining. This responsiveness is the main reason why many traders prefer to use the EMA over the SMA. (For further reading, see: Simple Vs. Exponential Moving Averages)
Example Of Calculating A Moving Average
A moving average (MA) is calculated in different ways depending on its type.
Let's look at a simple moving average (SMA) of a security with the following closing prices over 15 days:
Week 1 (5 days) – 20, 22, 24, 25, 23
Week 2 (5 days) – 26, 28, 26, 29, 27
Week 3 (5 days) – 28, 30, 27, 29, 28
A 10-day moving average would average out the closing prices for the first 10 days as the first data point. The next data point would drop the earliest price, add the price on day 11 and take the average, and so on as shown below. | https://www.investopedia.com/terms/m/movingaverage.asp |
Call for Applications: U.S.-German Forum Future Agriculture
Agriculture plays a major role in the economy, society, the environment, and for the future. The sector faces significant challenges due to rising population growth, declining biodiversity, and the need to adapt to the impacts of climate change, such as more frequent extreme weather events and exacerbated water scarcity. The war in Ukraine and the COVID-19 pandemic have added stress to supply chains and led to rising investment costs, price volatility, and global trade conflicts. These factors have increased pressure on farms and farmers as the industry already struggles to recruit younger generations and as a rapidly changing field places new demands on professional training. Identifying new ways to face these challenges is essential; an environmentally, economically, and socially sustainable agriculture can play a significant role in ensuring the future prosperity of rural areas, protecting natural resources, preserving biodiversity, and mitigating climate change.
Both Germany and the United States are central to shaping a more resilient agricultural future. Both countries face similar challenges, but a better mutual understanding of different agricultural practices and standards is needed to provide joint global leadership in shaping the future of agriculture. The U.S.-German Forum Future Agriculture, led by the Aspen Institute Germany, together with implementing partner, the University of Illinois Urbana-Champaign, addresses precisely this need by bringing together German and U.S. farmers and key agricultural stakeholders from research and business. Through the exchange of experiences, the opportunity to visit best practices on-site, and the establishment of new transatlantic networks, this project will promote innovative approaches for the future of agriculture and rural areas.
Farmers and other key agricultural stakeholders have until December 2, 2022 to apply to be a part of this unique transatlantic dialogue during the first program year in 2023. The focus will be on eastern Germany (Mecklenburg-Vorpommern, Brandenburg, Saxony-Anhalt, Saxony, Thuringia) and the Corn Belt of the U.S. Midwest (Iowa, Indiana, Illinois, Missouri, Ohio). In addition to the fundamental social, economic, and political importance of agriculture for rural regions, the first cohort will focus in particular on the core issue of climate. Farmers are on the frontlines of climate change. Drought, temperature increases, and extreme weather such as storms, heavy rain, hail, and floods pose new challenges for agriculture. Crop failures, scarce water resources, changes in crop planting patterns, and new invasive species are just some of the consequences that farmers have to deal with. The current situation necessitates an agricultural strategy which involves both adapting to the climate change impacts we already experience and those that are yet to come, as well as mitigating climate change by reducing emissions in agricultural production. Farmers’ involvement in developing economically viable climate solutions for the well-being of rural communities and the environment is critical. As a result, this program will focus on the following questions:
- What measures are needed to make agricultural systems more resilient to climate change, thereby securing the livelihood of farmers and their contribution to rural regions in the face of climate-related risks?
- How can agriculture be made more climate-smart and sustainable to reduce greenhouse gas emissions, more efficiently use water resources, and mitigate negative environmental impacts in a way that simultaneously benefits farmers?
- How can agriculture strengthen social and political cohesion in rural regions and in our societies?
- How can policy shape the framework for more socially and economically sustainable agriculture?
- How can investments in research and development of new technologies be best supported and effectively connected with agricultural practitioners, to bridge the gap between research and practice?
- How can transatlantic cooperation in the agricultural sector be promoted and how can transatlantic partners better learn from each other?
- How can we overcome longstanding transatlantic conflicts and differing policy approaches around food production to assume a joint global leadership role under the umbrella of the transatlantic partnership?
Program outline and benefits of participation
To explore these questions, the project will give participants the opportunity to engage with international peers and other leaders in the field and shape the transatlantic dialogue on agriculture during in-person and virtual programming, to include:
- interactive virtual sessions between March and July 2023 (approximately 11 hours total),
- a 5-day in-person meeting (travel time to and from the meeting included) in Champaign-Urbana, Illinois in June/July 2023,
- collaboration on a joint publication to be released in September 2023, and
- a virtual closing event in September 2023.
During the program, participants will learn more about agricultural practices and policies in each other’s countries, conduct site visits to see best practices and innovative solutions on the ground, and explore opportunities for transatlantic collaboration. Participants are expected to actively contribute to the development of concrete recommendations for decision-makers in politics, business, and the field itself, which will be published and presented in a final closing event. All costs for participation will be covered by the project.
Eligibility
Applicants for this exchange program must:
- be active farmers and/or work in agriculture-related business, politics, research, etc.,
- have at least 3 years of experience in the field of agriculture,
- be able to actively participate in all components of the virtual and in-person exchange,
- be able to participate in the development and publication of recommendations in the form of a publication and a final event,
- have an interest in transatlantic exchange and the topic of climate and agriculture,
- have sufficient knowledge of English.
- German applicants should live and work in the eastern part of Germany (Mecklenburg-Vorpommern, Brandenburg, Sachsen-Anhalt, Sachsen, Thüringen).
- U.S. applicants should live and work in the Corn Belt (Iowa, Indiana, Illinois, Missouri, Ohio).
Application Process
To apply, you may either:
- Send your CV or resume in English by email to Katja Greeson at [email protected], OR
- Submit an application via the online form here.
Applications should be submitted by December 2, 2022.
COVID-19 Disclaimer: Program components and schedule are subject to change due to changes in travel restrictions.
The project is supported by the Transatlantic Program of the Federal Republic of Germany, funded by the European Recovery Program (ERP) of the Federal Ministry of Economics and Climate Protection (BMWK). | https://www.aspeninstitute.de/transatlantic-program/future-agriculture-call-for-participants/ |
Fatigue-induced ACL injury risk stems from a degradation in central control.
Fatigue contributes directly to anterior cruciate ligament (ACL) injury via promotion of high risk biomechanics. The potential for central fatigue to dominate this process, however, remains unclear. With centrally mediated movement behaviors being trainable, establishing this link seems critical for improved injury prevention. We thus determined whether fatigue-induced landing biomechanics were governed by a centrally fatiguing mechanism. Twenty female NCAA athletes had initial contact (IC) and peak stance (PS) three-dimensional hip and knee biomechanics quantified during anticipated and unanticipated single-leg landings, before and during unilateral fatigue accumulation. To induce fatigue, subjects performed repetitive (n = 3) single-leg squats and randomly ordered landings, until squats were no longer possible. Subject-based dependent factors were calculated across prefatigue trials and for those denoting 100%, 75%, 50%, and 25% fatigue and were submitted to three-way mixed-design analyses of covariance to test for decision, fatigue time, and limb effects. Fatigue produced significant (P < 0.01) decreases in IC knee flexion angle and PS knee flexion moment and increases in PS hip internal rotation and knee abduction angles and moments, with differences maintained from 50% fatigue through to maximum. Fatigue-induced increases in PS hip internal rotation angles and PS knee abduction angles and loads were also significantly (P < 0.01) greater during unanticipated landings. Apart from PS hip moments, significant limb differences in fatigued landing biomechanics were not observed. Unilateral fatigue induces a fatigue crossover to the contralateral limb during single-leg landings. Central fatigue thus seems to be a critical component of fatigue-induced sports landing strategies. Hence, targeted training of central control processes may be necessary to counter successfully the debilitative impact of fatigue on ACL injury risk.
Prema Entha Madhuram 10 September 2021 Written Update: Padma is accused of stealing the necklace
As Subbu asks Padma to return the necklace, Anu reveals that it’s missing. Later, the necklace is found in the kitchen.
In the previous episode of Prema Entha Madhuram, Padma is panic-stricken as the necklace goes missing. Arya puts bangles on Anu’s hands. In the press meet, Arya announces that Anu will be the new vice-chairman of Vardhan Group of Industries.
In the next episode of Prema Entha Madhuram, Jalandhar is frustrated as Arya manages to escape from his trap yet again. He envies Arya and Anu’s togetherness and vows revenge on Arya. As the ‘mehendi’ function gets over, Arya’s family gets ready to leave Anu’s house. Sharada Devi discusses the particulars of the ‘mangala snanam’ (ritual) with Padma. Subbu asks Padma to return the necklace to Mansi, leaving Padma panic-stricken. Sharada Devi and Mansi refuse to take the necklace back, but Subbu insists on returning it. Later, Anu tells Subbu that the necklace is missing, which shocks everyone.
Upon being questioned, a tense Padma explains to Subbu that the necklace went missing during the power cut. Padma cries while Subbu scolds her. Sharada Devi thinks that Padma may have misplaced it, but Anu tells her that it's nowhere to be found. Mansi questions Padma for being careless. Meera asks Padma if she suspects anyone. When Padma mentions Sampat calling her, Raghupati confronts her for accusing Sampat. Raghupati then hits Sampat.
Jende and Meera begin searching through Subbu's house. Raghupati suggests checking the kitchen because that's where women hide things. Meera finds the necklace in the kitchen, leaving everyone stunned. Padma pleads innocence and says that she doesn't know how the necklace ended up in the kitchen. Sharada Devi says that Padma may have left it in the kitchen. Raghupati suggests taking the case to the police. Arya says that the culprit must be punished. He decides to take the help of police dogs to find out the truth, which scares Raghupati.
Read more stories on Prema Entha Madhuram here. | https://www.zee5.com/zee5news/prema-entha-madhuram-10-september-2021-written-update-padma-is-accused-of-stealing-the-necklace |
A mere 83 days before the November election, and the ground below us has shifted. Even with an ongoing pandemic, young people across the nation have risen to the occasion.
More than half of the marches for the Black Lives Matter movement were organized by young advocates. Various students have taken this opportunity to start creative initiatives, such as sewing masks for the needy, volunteering for social justice causes, starting community organizations, and so much more.
Of course, youth voices have always existed, but the pandemic has served as a catalyst that amplified them.
Through my school community, I could sense the apparent change in ethos. Although there were always peers who spoke passionately about social justice issues, the months of quarantine at home worked to amplify that energy.
Since the beginning of quarantine in March until now, early August, stories range from local issues to international affairs: supporting the #BLM protests, speaking against Anti-Asian xenophobia, elections, advocating #PayUp for Bangladeshi workers, etc.
At its start, I thought many people were simply jumping on the hype with their social media activism. After all, some of those individuals who avidly spoke for the Black Lives Matter movement were the same people who stayed silent during the waves of xenophobic attacks towards Asian-Americans. But I don't think that was the case.
Rather, I think the overwhelming support and voices for the #BLM Movement proved the importance of our collective effort. Peers who were once hesitant to speak out found reason and motivation to do so.
"My voice matters. There's so many young leaders out there, why can't I be one of them?" said an anonymous high school activist.
She had believed that social media was only a platform for attention seekers and unhealthy content. But now, she also realized the importance of social media campaigning.
"It's the single best way to reach students. There's good and bad content out there; it depends on how you utilize it." This student has worked with her group of peers to demand greater change through petitions and community service. It was social media campaigning that gave her the opportunity to expand her outreach and connect with a vast group of like-minded activists.
Social media aside, I think the greatest change that has occurred since quarantine is a mental shift. The belief that we, as young members of society, have the ability to advocate for better policies and a fairer system.
Millennials and Generation Z will constitute 37% of the voting population for the next election, making this election an important one to echo our opinions. Compared to previous generations, this young electorate is more diverse ethnically, racially and in many other respects.
We are not afraid to explore that diversity! | https://bkreader.com/2020/08/11/young-voices-amplified-during-the-pandemic/
The most recent version of the International Committee of the Red Cross (ICRC) “People on War” poll has shown an alarming shift in attitudes, particularly among the populations of the five permanent UN Security Council members, toward accepting war crimes as just “part of war.” No nation saw a bigger shift, however, than the United States.
Deliberate military attacks conducted knowing civilian casualties will result were supported by all five permanent members, and each of them was disturbingly okay with the idea of attacking hospitals, even though that is illegal under international law.
Tolerance for torture, however, is increasingly a uniquely American phenomenon, with roughly half of the Americans polled seeing torture as an acceptable way to try to gather information. None of the other permanent Security Council members were anywhere near this, and the only other countries on the planet with comparable figures were Nigeria and Israel.
The broadest opposition to war crimes, unsurprisingly, came from respondents to the poll in countries ravaged by war, with people in Yemen uniformly against torture and against attacking hospitals. The Afghan public, similarly, was overwhelmingly opposed to both.
What are the 5 methods of waste disposal?
5 Types of Waste Disposal Methods
- Recycling. Recycling is one of the best methods of disposal simply because it goes a long way to preserve the environment. …
- Animal Feed. Your pet can be quite an effective waste disposal entity. …
- Biological Reprocessing. …
- Incineration. …
- Landfill.
What are the two methods used to fill up a landfill?
Trench and area methods, along with combinations of both, are used in the operation of landfills. Both methods operate on the principle of a “cell,” which in landfills comprises the compacted waste and soil covering for each day.
What are landfill methods?
There are three general methods of landfills, which are: (1) area method, (2) trench method, and (3) ramp, or slope method. The area method is best suited for flat or gently sloping areas where some land depressions may exist.
How many types of landfills are there in India?
There are 59 constructed landfill sites, and 376 are at the planning stage in India, as reported by CPCB (2013). Properly engineered landfills are seldom found in emerging economies like India.
How many landfills are in the EPA's LMOP database?
State-Level Project and Landfill Totals from the LMOP Database
| State | Operational Projects | All Landfills |
| --- | --- | --- |
| California (September 2021) (xlsx) | 55 | 300 |
| Colorado (September 2021) (xlsx) | 2 | 38 |
| Connecticut (September 2021) (xlsx) | 2 | 24 |
| Delaware (September 2021) (xlsx) | 4 | 4 |
How many landfills are in the US in 2021?
There are around 1,250 landfills.
What are the various methods of garbage disposal? Describe the landfill method.
Those groups include source reduction and reuse, animal feeding, recycling, composting, fermentation, landfills, incineration and land application. You can start using many techniques right at home, like reduction and reuse, which works to reduce the amount of disposable material used.
What are the best methods of waste disposal?
“For textiles, there’s not very many statistics, but what there is shows reuse is clearly optimal, followed by recycling and then energy recovery [incineration]. “For food and garden waste, anaerobic digestion looks preferable; then composting and incineration with energy recovery come out very similar. | https://enrichedearth.org/dump/how-many-landfill-methods-are-there.html |
Exploring Economics, an open-access e-learning platform, giving you the opportunity to discover & study a variety of economic theories, topics, and methods.
The current Great Recession, the worst crisis that capitalism has faced since the Great Depression, has failed, at least so far, to generate a change in the teaching and practice of Macroeconomics. This seems bizarre as if nothing has happened and the economists are just going about doing business as usual. In light of this, the current paper attempts to address how Macroeconomics ought to be taught to students at the advanced intermediate level, which gives them an overall perspective on the subject.
This paper surveys the development of the concept of socialism from the French Revolution to the socialist calculation debate. Karl Marx’s politics of revolutionary socialism led by an empowered proletariat nurtured by capital accumulation envisions socialism as a “top-down” system resting on political institutions, despite Marx’s keen appreciation of the long-period analysis of the organization of social production in the classical political economists. Collectivist thinking in the work of Enrico Barone and Wilfredo Pareto paved the way for the discussion of socialism purely in terms of the allocation of resources. The Soviet experiment abandoned the mixed economy model of the New Economic Policy for a political-bureaucratic administration of production only loosely connected to theoretical concepts of socialism. The socialist calculation debate reductively recast the problem of socialism as a problem of allocation of resources, leading to general equilibrium theory. Friedrich Hayek responded to the socialist calculation debate by shifting the ground of discussion from class relations to information revelation
After a brief illustration of sovereign green bonds’ features, this paper describes the market evolution and identifies the main benefits and costs for sovereign issuers. The financial performance of these securities is then analysed.
Karl Marx was the greatest champion of the labor theory of value. The logical problems of this theory have, however, split scholars of Marx into two factions: those who regard it as an indivisible component of Marxism, and those who wish to continue the spirit of analysis begun by Marx without the labor theory of value.
In 18th century Europe figures such as Adam Smith, David Ricardo, Friedrich List and Jean Baptiste Colbert developed theories regarding international trade, which either embraced free trade seeing it as a positive sum game or recommended more cautious and strategic approaches to trade seeing it as a potential danger and a rivalry and often as a zero-sum game. What about today?
A central question in development economics literature is, “Why do countries stay poor?” The key disagreements are whether the lack of economic growth stems from institutions or from geography (Nunn 2009). From an institutional perspective, hostile tariff regimes and commodity price dependencies form a barrier to a sectoral shift that would otherwise lead to economic development in developing countries (Blink and Dorton 2011) (Stiglitz 2006).[i]
The core of Georgism is a policy known as the Land Value Tax (LVT), a policy which Georgists claim will solve many of society and the economy’s ills. Georgism is an interesting school of thought because it has the twin properties that (1) despite a cult following, few people in either mainstream or (non-Georgist) heterodox economics pay it much heed; (2) despite not paying it much heed, both mainstream and heterodox economists largely tend to agree with Georgists. I will focus on the potential benefits Georgists argue an LVT will bring and see if they are borne out empirically. But I will begin by giving a nod to the compelling theoretical and ethical dimensions of George’s analysis, which are impossible to ignore.
This syllabus provides an overview of the contents of the course "Understanding Economic Models" at the University of Helsinki.
The Nobel laureate Amartya Sen´s text analyzes three main figures in social sciences and the relation between them: the Italian economist Piero Sraffa, the Austrian philosopher Ludwig Wittgenstein, and the Italian politician and philosopher Antonio Gramsci.
This film looks at the role economic growth has had in bringing about this crisis, and explores alternatives to it, offering a vision of hope for the future and a better life for all within planetary boundaries.
In this article, Hannah Ritchie presents the data we need to understand the scale of their contribution, and which countries are most reliant on Ukraine for their food supplies.
In this course you will study the different facets of human development in topics such as education, health, gender, the family, land relations, risk, informal and formal norms, public policy, and institutions. While studying each of these topics we will delve into the following questions: What determines the decisions of …
This MOOC (Massive Open Online Course) discusses Global Workers’ Rights and shows instruments and strategies which can be used to implement them.
Experimental economists are leaving the reservation. They are recruiting subjects in the field rather than in the classroom, using field goods rather than induced valuations, and using field context rather than abstract terminology in instructions.
Adam Smith and Karl Marx recognized that the best way to understand the economy is to study the most advanced practice of production. Today that practice is no longer conventional manufacturing: it is the radically innovative vanguard known as the knowledge economy.
As opposed to the conventional over-simplified assumption of self-interested individuals, strong evidence points towards the presence of heterogeneous other-regarding preferences in agents. Incorporating social preferences – specifically, trust and reciprocity - and recognizing the non-constancy of these preferences across individuals can help models better represent the reality.
An essay of the writing workshop on contemporary issues in the field of Nigerian economics: In Nigeria, it appears that there is nothing in the constitution, which excludes the participation of women in politics. Yet, when it comes to actual practice, there is extensive discrimination. The under-representation of women in political participation gained root due to the patriarchal practice inherent in our society, much of which were obvious from pre-colonial era till date.
The notion that the demand and supply side are independent is a key feature of textbook undergraduate economics and of modern macroeconomic models. Economic output is thought to be constrained by the productive capabilities of the economy - the ‘supply-side' - through technology, demographics and capital investment. In the short run a boost in demand may increase GDP and employment due to frictions such as sticky wages, but over the long-term successive rises in demand without corresponding improvements on the supply side can only create inflation as the economy reaches capacity. In this post I will explore the alternative idea of demand-led growth, where an increase in demand can translate into long-run supply side gains. This theory is most commonly associated with post-Keynesian economics, though it has been increasingly recognised in the mainstream literature.
‘We cannot afford their peace & We cannot bear their wars’: Value, Exploitation, Profitability Crises & ‘Rectification’
Exploring Economics Dossier on the economic fallout of the COVID-19 pandemic and the structural crisis of globalization. COVID-19 encounters a structural crisis of globalization and the economic system that drives it, with an uncertain outcome. We asked economists worldwide to share with us their analysis of current events, long-term perspectives and political responses. The dossier will be continuously expanded.
How long the COVID-19 crisis will last, and what its immediate economic costs will be, is anyone's guess. But even if the pandemic's economic impact is contained, it may have already set the stage for a debt meltdown long in the making, starting in many of the Asian emerging and developing economies on the front lines of the outbreak.
This Perspective argues that ergodicity — a foundational concept in equilibrium statistical physics — is wrongly assumed in much of the quantitative economics literature. By evaluating the extent to which dynamical problems can be replaced by probabilistic ones, many economics puzzles become resolvable in a natural and empirically testable fashion.
There was a time when the world still seemed a good and, above all, simple place for monetary authorities. Every few weeks they had to decide whether, in view of the latest price developments, it would be better to raise the key interest rates by a quarter point or not …
This course introduces students to political economy and the history of economic thought. We will cover the core ideas in various schools of economic thought, positioning them in the historical and institutional context in which they were developed. In particular, we will cover some economic ideas from the ancient world and the middle ages; the enlightenment; the emergence of and main ideas in classical political economy (Adam Smith, David Ricardo, Thomas Malthus, and others); Marx, Mill, and Keynes; European versus American economic thought through history; the rise of mathematical economics; economic theories around state-managed economies versus socialism; Austrian economics; behavioral economics; and the future of economics.
Post-Colonialisms Today researchers Kareem Megahed and Omar Ghannam explain how early post-independence Egypt sought economic independence via industrialization.
The goal of this course is to explore these differences in economic outcomes observed among women and men, measured by such things as earnings, income, hours of work, poverty, and the allocation of resources within the household. It will evaluate women’s perspectives and experiences in the United States and around the world, emphasizing feminist economics.
By the end of this course, students should understand the basic economic theories of the gender division of labor in the home and at the workplace, and theories of gender differences in compensation and workforce segregation. | https://www.exploring-economics.org/en/search/?q=working+time+reductions&page=6 |
Digital inclusion: Activating skills for the next billion jobs
The COVID-19 crisis has put millions of people out of work and exacerbated economic inequality around the world. It has also squeezed years of digital transformation of the economy into just a few months — opening up new possibilities and challenges. Many workers will likely spend the next year or two in a "hybrid economy," with work continuing at least partially remotely. That means it will be more important for people to have the tech skills to succeed in a totally new workplace.
Connecting the more than 3 billion people who today lack reliable internet access to the communications tools and essential services they need to participate in the modern economy is an essential first step. Broadband is the electricity of the 21st century. Without universal access to broadband, the economic recovery from COVID-19 will be neither comprehensive nor inclusive. The pandemic underscores the risks of a digital divide — increasing the reliance of households, small businesses, and entire economies on internet access, while leaving those without it further and further behind.
In addition to eliminating or transforming current jobs, the pandemic may also generate many new ones. If industries maximize digital transformation, the 2020 lockdown could generate as many as 150 million new tech jobs by 2025 in software development, cyber security, data analysis, and other fields. To take advantage of this opportunity, governments, the private sector, and international organizations will need to invest in teaching workers new skills and reverse a two-decade decline in training on the job.
In order to make sure that the post-pandemic economic recovery is inclusive, we need to ensure that all people — especially those unreached or displaced by technology — have access to the skills needed for jobs and livelihoods as well as the connectivity to enable the development of the skills needed in this more digital economy.
What's the UN doing about it?
The International Telecommunication Union (ITU) - the UN's specialized agency for information and communication technologies - is tackling digital inclusion through its Connect 2030 Agenda. The Agenda is working across five goals:
- Growth - enabling and fostering access to the digital economy,
- Inclusiveness - bridging the digital divide and providing broadband access for all,
- Sustainability - managing emerging risks, challenges, and opportunities as society digitizes,
- Innovation - driving technological improvement, and
- Partnerships - broadening the coalitions working to expand access to digital opportunities.
The UN and ITU play a key role in collecting data — more than 100 indicators across 200 economies — to help better understand connectivity challenges and to benchmark progress toward closing the digital gap and expanding opportunities, including for women, youth, and minority communities.
On the ground, the UN also works extensively with governments and businesses to expand digital access and training in developing countries — from Colombia and Kenya to Thailand and beyond. But the UN can't do this alone, particularly when so much important data and insights are being generated by private tech platforms — from videoconferencing to networking to content creation — that underpin the 21st century economy.
How are others trying to help?
Industry, civil society, and government have to help prepare people with the skills they need for a 21st century economy. A key bottleneck is access, especially in rural or low-income areas. Technology firms including Microsoft have launched programs to connect people with fast, safe, and reliable internet, and to ensure that once they get online they can take advantage of educational resources to build up skills. Microsoft's Airband program to expand internet access in rural communities is operating in 20 countries and 25 US states, serving 16 million people through programs and partnerships. It, in turn, is helping to support Microsoft's commitment to help 25 million people worldwide acquire the digital skills needed in a post-pandemic economy.
What's needed next?
Everyone has a role to play, from government to industry to nonprofits. Employers can play a bigger role than they have in recent years to help employees develop these new skills. Governments can provide funding for citizens to access the relevant skills training or provide incentives to employers to do so. They can also make some of their data sets available for public use to enable job seekers and employers to identify in-demand skills and growth areas.
How can I get involved?
- Read the UN Secretary General's Roadmap for Digital Cooperation.
- Check out some case studies of what digital inclusion strategies can look like for creators, coaches, researchers, and refugees around the world. | https://www.gzeromedia.com/global-stage/digital-equity/digital-inclusion-activating-skills-for-the-next-billion-jobs |
The actor, 36, was initially cast in a starring role but wound up being replaced by Harry Styles shortly after production began in 2020.
Director Olivia Wilde claimed in an interview published this week that Shia was fired because he had “a combative energy” that she felt she needed to “protect” co-star Florence Pugh and others from. However, Shia has a different version of events.
Responding to the claims in emails to Variety, Shia said he actually “quit the film” in August of 2020, citing a “lack of rehearsal time.”
Variety also shared two emails that Shia claimed to have sent Olivia after her statements on his exit, in which he wrote, “You and I both know the reasons for my exit. I quit your film because your actors and I couldn’t find time to rehearse.”
Shia also mentioned that he met with Olivia the night before he officially left the film to discuss his departure, and shared a text she allegedly sent him after where she thanked him for “letting me in on your thought process.”
“I know that isn’t fun,” she reportedly continued. “Doesn’t feel good to say no to someone, and I respect your honesty. I’m honored you were willing to go there with me, for me to tell a story with you. I’m gutted because it could have been something special. I want to make clear how much it means to me that you trust me. That’s a gift I’ll take with me.”
But Shia claims that Olivia continued to reach out to him, even sending a video where she allegedly said she was “not ready to give up on this yet.” “And I too am heartbroken and I want to figure this out,” she continued, seeming to reference issues between Shia and Florence. “You know, I think this might be a bit of a wake-up call for Miss Flo, and I want to know if you’re open to giving this a shot with me, with us. If she really commits, if she really puts her mind and heart into it at this point and if you guys can make peace — and I respect your point of view, I respect hers — but if you guys can do it, what do you think? Is there hope? Will you let me know?”
Olivia hasn’t commented on any of this yet, but in the meantime, you can read the full email and report here. | https://oacoree.com/shia-lebeouf-denies-firing-from-dont-worry-darling/ |
Electromagnetic Theory (EE 301): Home
Electromagnetics is fundamental in electrical and electronic engineering. Electromagnetic theory based on Maxwell's equations establishes the basic principle of electrical and electronic circuits over the entire frequency spectrum from dc to optics.
Course Description
Electromagnetic Theory covers the basic principles of electromagnetism: experimental basis, electrostatics, magnetic fields of steady currents, motional e.m.f. and electromagnetic induction, Maxwell's equations, propagation and radiation of electromagnetic waves, electric and magnetic properties of matter, and conservation laws. This is a graduate level subject which uses appropriate mathematics but whose emphasis is on physical phenomena and principles. | https://libguides.riphah.edu.pk/c.php?g=934887&p=6759223 |
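For reference, the Maxwell equations the course builds on can be written in differential form (SI units, with charge density ρ and current density **J**) as:

$$
\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}, \qquad
\nabla \cdot \mathbf{B} = 0, \qquad
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad
\nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}
$$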
Classical lines and ballet vocabulary are the tools used to help students overcome their preconceived limitations, and cultivate their own language in contemporary ballet.
David J. Amado's Contemporary Ballet is an opportunity to transcend classical dance. It incorporates cultural traditions deeply rooted in the African diaspora, following the structure of a traditional ballet class. Mobility of the spine and "groundedness" are the key components of the technique.
This fusion of the classical with the contemporary, and the European with the African Diasporic, creates an inclusive environment where everyone is welcome regardless of dance experience or background.
CLASSICAL BALLET
Classical dance should be used as a transformational tool to help us build confidence and self esteem by showing us that we all possess strength, grace and beauty.
David J. Amado's class follows the Vaganova (Russian) syllabus which places an emphasis on proper alignment, strength, rotation, aplomb and port de bras. Musicality and phrasing are key elements.
Detailed, individualized attention is given to each and every student resulting not only in improved ballet technique, but also in deeper understanding of the dance form and the acquisition of ballet vocabulary. | https://www.davidjamado.com/classes |
Zero trust architecture (ZTA) is not a new concept, but with the White House Executive Order published earlier this year, many in the networking space have started to ask about how network visibility analytics fits into the equation. To answer that, we first need to look at what’s driving this shift.
Section 3 of the EO states that agencies (and organizations working with them) must adopt security best practices, advance toward ZTA, and accelerate movement to secure cloud services (SaaS, IaaS, and PaaS). It called for government agencies to update existing plans to prioritize resources for the adoption and use of cloud technology, and to develop plans to implement ZTA (using NIST's migration steps). It also called on the Cybersecurity and Infrastructure Security Agency (CISA) to modernize to be fully functional with cloud-computing environments with ZTA, and for FedRAMP to develop security principles governing cloud service providers for incorporation into agency modernization efforts.
With all the vendor confusion around what is and is not ZTA, I’d like to explore its impact on network visibility (and the requirements).
According to NIST, “zero trust is the term for an evolving set of cybersecurity paradigms that moves defenses from static, network-based perimeters to focus on the users, assets and resources. ZTA uses zero trust principles to plan industrial and enterprise infrastructure and workflows.” The basic NIST tenets of this approach include:
- The entire enterprise private network is not considered an implicit trust zone
- Devices on the network may not be owned or configurable by the enterprise
- No resource is inherently trusted
- Not all enterprise resources are on enterprise-owned infrastructure
- Remote enterprise subjects and assets cannot fully trust their local network connection
- Assets and workflows moving between enterprise and non-enterprise infrastructure should have consistent security policies and postures.
But how is ZTA relevant to network visibility or network performance monitoring (NPM)?
There are three NIST architecture approaches for ZTA that have network visibility implications. The first is enhanced identity governance, which (for example) means using the identity of users to allow access to specific resources only once they are verified. The second is micro-segmentation, e.g., dividing cloud or data center assets or workloads and segmenting that traffic from others to contain threats and prevent lateral movement. The third is network infrastructure and software defined perimeters, such as zero trust network access (ZTNA), which for example allows remote workers to connect only to specific resources.
NIST also describes monitoring of ZTA deployments, outlining that network performance monitoring will need security capabilities for visibility. This includes inspecting and logging traffic on the network (and analyzing it to identify and react to potential attacks), including asset logs, network traffic, and resource access actions.
Furthermore, NIST expresses concern about the inability to access all relevant and encrypted traffic – which may originate from non-enterprise-owned assets (such as contracted services that use the enterprise infrastructure to access the internet) or applications and/or services that are resistant to passive monitoring. Organizations that cannot perform deep packet inspection or examine encrypted traffic must use other methods to assess a possible attacker on the network.
The DoD breaks down ZTA architectures in seven trust pillars: User, Device, Network/Environment, App and Workload, Data, Visibility and Analytics, and Automation and Orchestration. Not surprisingly, NPM is directly relevant to the Visibility and Analytics pillar. Here are four ways that NPM fits into the ZTA architecture:
- NPM can offer contextual details and an understanding of performance, behavior and activity across other pillars including applications and users. For example, monitoring application performance over various portions of the network and/or pointing to security issues like denial of service or compromised network devices. NPM can also provide analysis of user behavior with respect to the traffic patterns of the applications from various user devices.
- NPM improves detection of anomalous behavior and enables stakeholders to make dynamic changes to security policies and real-time access decisions. For example, network AIOps can look for anomalous behavior within and across sites; if a large volume of traffic indicates some type of data exfiltration, that can be alerted on and security policies adjusted (a minimal sketch of this idea follows this list). Another example is micro-segmented networks using VXLAN, where visibility into the various virtual networks, including traffic within each VXLAN, is important for understanding whether proper security policies are working.
- NPM gives situational awareness of environments and triggers alerts for response. For example, in SD-WAN networks, applications can adjust paths taken, which may have performance and security consequences. NPM products that integrate to SD-WAN can alert on those types of conditions. Another example is in cloud networks where monitoring and alerting on rejected traffic can be used to look for suspicious activity and adjust cloud security policies accordingly.
- NPM captures and inspects traffic. It allows IT to investigate specific packets, accurately discover traffic on the network, and observe threats that are present to orient defenses. For example, using encrypted traffic analysis (ETA), packet analytics can be used to continuously monitor traffic for suspicious behavior and correlate that with threat indicators to narrow down high impact incidents in real time. This is becoming more important as much of the traffic is becoming encrypted and having visibility into that traffic is key for enterprise and cloud environments.
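As a minimal sketch of the volume-anomaly idea in the second point above (illustrative only: the per-interval byte counts are made up and the simple z-score rule stands in for any vendor's actual detection logic):

```c
#include <math.h>
#include <stdio.h>

#define WINDOW 12

/* Returns 1 if the newest sample sits more than 'threshold' standard
   deviations above the mean of the preceding WINDOW samples. */
int is_anomalous(const double bytes[], int n, double threshold) {
    if (n < WINDOW + 1) return 0;
    double mean = 0.0, var = 0.0;
    for (int i = n - 1 - WINDOW; i < n - 1; i++) mean += bytes[i];
    mean /= WINDOW;
    for (int i = n - 1 - WINDOW; i < n - 1; i++)
        var += (bytes[i] - mean) * (bytes[i] - mean);
    double sd = sqrt(var / WINDOW);
    if (sd == 0.0) return bytes[n - 1] > mean;
    return (bytes[n - 1] - mean) / sd > threshold;
}

int main(void) {
    /* Hypothetical per-minute byte counts for one monitored segment. */
    double bytes[] = {1.1e6, 1.0e6, 1.2e6, 0.9e6, 1.1e6, 1.0e6, 1.2e6,
                      1.1e6, 1.0e6, 0.9e6, 1.1e6, 1.0e6, 9.8e6};
    int n = (int)(sizeof bytes / sizeof bytes[0]);
    if (is_anomalous(bytes, n, 3.0))
        printf("possible exfiltration: latest interval far above baseline\n");
    return 0;
}
```

Production NPM tools feed flow and packet records into far richer analytics, but the underlying principle of comparing current behavior against a learned baseline is the same.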
As organizations look to align with ZTA, visibility plays an important role in meeting new requirements. | https://www.helpnetsecurity.com/2021/11/10/visibility-analytics-zero-trust/ |
Following the methods of Triphasic Training and the goal of applying maximal stress on specific adaptations from training, the three phases that occur in every dynamic contraction, eccentric, isometric, and concentric, are trained on an individual basis. A previous article “Supramaximal Slow Eccentrics and the Safety Bar Split Squat” explained the training ideologies, reasoning, and implementation of supramaximal training in the eccentric phase. As laid out in the previous article, the stretch-shortening cycle (SSC), which is utilized in every dynamic movement and consists of the eccentric, isometric, and concentric phases, is one of the most important aspects of training and is the focal point of Triphasic Training. Once the eccentric phase of contraction has been improved through specific, supramaximal training, the isometric training phase is implemented to continue the optimization of the power and efficiency of the SSC.
The isometric phase of movement is absolutely critical for the optimal transition between the lengthening and shortening of a muscle in contraction. Without the training of this phase specific to movement requirements, every athlete will lose potential free-energy from the SSC in each completed action in training and competition. This reduced free-energy leads to inefficient movement and reduced power production.
If the “V” shape, used in the previous article and shown again below in Figure 1, is readdressed, the eccentric and concentric muscle actions of dynamic movement become clear, while the isometric action is commonly overlooked as it is not clearly portrayed within the “V”. The isometric phase occurs in the brief amount of time as the muscle shifts from the eccentric to the concentric action. Amortization is another term that has been used for the isometric phase of dynamic movement.
A knowledgeable coach, one who has utilized the supramaximal eccentric training outlined in the previous article, understands the eccentric portion of the “V” now has the ability to occur at a steeper slope due to the increased ability of the body to absorb high forces. This enhanced capability of the muscle and tendon components of the body to absorb force leads to a greater potential ability to store free-energy within the elastic components of the musculo-tendon structure. However, unless the isometric phase of contraction is trained appropriately, this enhanced ability of these elastic components to produce force in the concentric manner will be diminished. As the isometric phase becomes trained, the amount of free-energy that can be transferred from the eccentric to the isometric and finally through the concentric muscle action is improved. It is only through the specific training of each of these dynamic muscular contraction components that the efficiency of the body and power production can be truly optimized.
Now that the importance of the isometric phase of dynamic muscular contraction is better understood, it is necessary to address the specifics of training this phase to the greatest extent. Figure 2 below depicts the force-velocity curve of a muscle depending on the specific phase the muscle is experiencing. In the previous article discussing supramaximal training it became clear that in order to truly improve the eccentric, force absorbing capabilities of the musclo-tendon structure, loads greater than the concentric 1RM of that movement must be used. If lower loads are used the adaptations seen in this phase of movement will not occur to the greatest extent. The same principle applies to the training of the isometric phase. Although the isometric phase is weaker than the eccentric phase, it is still able to produce higher force values than concentric muscle actions. For this reason, supramaximal loads are continued to be used throughout the isometric training block in Triphasic Training. This training is completed specifically to maximize the ability of this brief, yet vital portion of dynamic movement and the SSC. Once again it should be noted this supramaximal eccentric training should be used only with highly advanced athletes and a spotter should be used on both sides of the bar at all times.
The “V” graphic in Figure 1 has already been referred to in the previous section, but its importance to performance enhancement cannot be overemphasized. If the isometric phase, which can be easily overlooked in training, is not maximally trained, an athlete will do a poor job of transferring the energy stored from the eccentric phase into power; especially if the eccentric phase has been maximized as explained in the previous article. When the isometric phase is overlooked, the steep “V”, which is the goal, as increased force is produced in less time, may end up looking more like “\_/”. Clearly this second shape is less than ideal as it takes a greater amount of time to reach the concentric phase, thus less free-energy from the SSC is transferred and is lost.
Supramaximal isometric training continues to lead to high quality tissue adaptations to the muscle fibers being trained. As there is no muscle length change in isometric movements, velocity of contraction is zero and is not considered in training. This means the amount of time spent in the desired position and load used are the determining factors of adaptation in this training method. By training isometrically in a supramaximal fashion, greater tissue adaptation is realized within the muscle. This adaptation occurs as muscle fibers are fired and re-fired throughout the duration of the repetition to the greatest extent with supramaximal loads, even though no movement is occurring. The adaptations occurring within the tissue through this training maximizes the free-energy that is transferred throughout every dynamic muscle contraction and the SSC.
As introduced above, with zero length change in the muscle during isometric muscle actions, the two primary factors determining stress in this training method are the load utilized and the duration of each completed set. With loading parameters already set by the supramaximal approach used in this phase, only the time of each set remains as a training parameter. With the goal of training with high quality at all times, sets are kept below ten seconds in the Triphasic Training Program. By keeping this time low and allowing brief rest between all movements, energy utilization in training is reduced, ultimately enhancing mTOR signaling and the muscle-building response.
Adaptations outside of the musculo-tendon structure also occur within the supramaximal isometric training model. As with the supramaximal eccentric training, supramaximal isometric training leads to the greatest hormone response within the body due to training. This is due to the high stress levels applied in this training protocol. This increased hormonal response continues to enhance the adaptation processes occurring within the athlete. A second, more centralized, adaptation that has been seen due to this training model is improved efficiency within the cardiac output system. This could be due to multiple factors in this training program, but it is believed to be due to the high pressures placed on the cardiac system that lead to this improved efficiency.
Figure 2 below solidifies the training mechanisms used within the Triphasic Training System to specifically train the isometric phase of any muscle action. In order to train the isometric muscle action phase to the fullest extent, which is the main purpose of this method, loads must be high enough to stress the isometric phase. It is with this idea in mind that supramaximal loads are continued through the isometric training block. Supramaximal loads must be used if the isometric muscle action phase is to be stressed and improved to the greatest extent, which must be the purpose of training in this block.
The same exercise is used in supramaximal isometric training as was used in supramaximal eccentric training: the hands assisted, safety bar split squat. As described in the eccentric article, this training exercise has been found, at this time, to be the optimal movement to induce total body stress. By training this exercise in all three muscle action phases, efficiency and power outputs are increased in single leg movements. This leads to greater transfer of training to sport, which is the ultimate goal of all training. The hands assisted aspect gives a better support system and takes pressure off the back, which is usually the weak link in squatting exercises. The support of the arms also allows single leg training to be used without balance becoming an issue. Once again, the goal of the program is to maximize stress on the body; the hands assisted, safety bar split squat allows for this and leads to the necessary adaptations for enhanced performance.
Supramaximal isometric exercises, such as the hands assisted, safety bar split squat described above should only be used in the first training block of the day. If supramaximal loads are used for more than one exercise, or too many sets with these high loads are completed, too much stress will be placed on the body, leading to performance decreases. Supramaximal isometric exercises can be used as potentiation exercises, just as the supramaximal eccentrics were in the previous phase. This potentiation leads to increased rate of force development during plyometric movements, such as the French Contrast Method. Supramaximal isometric training can be applied in any manner that you desire as a coach, you can even keep the exercises you are already using within your program.
The coaching point used for the hands assisted, safety bar split squat will be the same as in the supramaximal eccentric article, other than the exercise is now being completed strictly as an isometric exercise. Focus is still placed on the back, chest, and leg positioning. The back should be kept in a neutral, supported position with the chest up. The front leg position should be around 90-90, with the back leg at an angle slightly extended beyond 90 degrees, be sure the back leg does not become too extended, as it will begin to pull the athlete’s hips out of proper alignment. Another key coaching point in this movement is teaching athletes to push through their back leg and fire that corresponding glute. This leads to increased stabilization of the pelvis while also utilizing the correct kinetic sequencing for athletes. To begin the set, the athlete will move from the starting position into the bottom position in a controlled motion. Once the athlete has reached the bottom position, the timed set will begin and the athlete will hold the bottom position until the set is completed. The athlete is encouraged to begin every isometric lift with a belly breath and then hold their breath for the duration of the rep. This is believed to cause greater adaptations within the circulatory system, as discussed earlier, and will be covered thoroughly in a separate article. Just like the eccentric phase, spotters will be required as the supramaximal loads will be too heavy for the athlete to lift concentrically on their own. We suggest a spotter on each side of the bar for these supramaximal load movements. The athlete will continue to focus on exploding out of the bottom position concentrically when the set is completed, even though the load will be too heavy for the athlete to lift completely on their own.
The three phases of every dynamic muscular contraction must be trained individually based on both their physiological and force producing capability differences. Based on the differences shown in Figure 2 supramaximal training is necessary to train both the eccentric and isometric phases of contraction to the greatest extent. Isometric strength, when maximized, allows for the optimal free-energy stored throughout the eccentric phase of the SSC to be converted into power through the concentric phase, which is the ultimate goal for athletes. The stress imposed on the body using supramaximal loads during isometric contractions allows the maximization of this second phase of the SSC. Supramaximal isometric training will continue to strengthen the tissue, which was also done during the supramaximal eccentric phase of Triphasic Training. To this point the greatest adaptations have been seen using the hands assisted, safety bar split squat which maximizes stress on the body. This controlled stress leads to increased adaptations, and ultimately maximized performance. After the isometric phase of Triphasic Training has been utilized properly, an athlete is now prepared for the reactive phase of Triphasic Training. This final phase of Triphasic Training’s specific muscle action phase training will put all 3 components of the SSC together and the free-energy transfer through the SSC will be maximized to steepen the “V” even further. This steepening of the “V” will continuously lead to a more explosive and efficient athlete. | http://www.vandykestrength.com/pages/supra_iso |
Cybersecurity Alerts for Medical Devices are on the Rise – A Cause for Concern, but what can be done? (June 5, 2018)
The Department of Homeland Security (ICS-CERT) recently issued more warnings about cybersecurity vulnerabilities, something that has become all too common in recent months. In most cases, these vulnerabilities are found by researchers or the manufacturers themselves. It's encouraging that discovery is improving, but it's discouraging that so many of the vulnerabilities are right out of "security 101." For example, the use of hard-coded credentials for external access and code injection from un-sanitized external input.
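To make those "security 101" defect classes concrete, here is a small hypothetical C fragment (the names and constants are invented for illustration, not taken from any actual device); both patterns are exactly what static analyzers are designed to flag:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define SERVICE_PASSWORD "maint1234"   /* hard-coded credential: anyone who
                                          extracts the firmware can read it */

int authenticate(const char *password) {
    return strcmp(password, SERVICE_PASSWORD) == 0;
}

void run_report(const char *patient_id) {
    char cmd[256];
    /* Un-sanitized external input is spliced into a command string, so a
       crafted patient_id such as "1; rm -rf /" changes what gets executed. */
    snprintf(cmd, sizeof cmd, "report_tool --patient %s", patient_id);
    system(cmd);
}
```

The fixes are equally basic: store and verify credentials outside the binary, and validate or escape any input that crosses a trust boundary before it reaches a command, query, or buffer.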
These bulletins follow a recent increase in medical device warnings from the DHS. The FDA has recognized cybersecurity as a critical part of software development and recently released guidance on cybersecurity to make sure manufacturers are taking the matter seriously. However, this doesn't cover products already on the market and may not be enough to impact products already under development.
Given the increased scrutiny of medical devices and the potential for harm to patients, what are the options open to manufacturers? The answer depends on which stage of development they are in. The early stages of development are by far the easiest place to adopt better security practices. In fact, this should already be the norm for medical device developers. However, products that are in later stages of development, or already productized and on the market, are much harder to deal with. Despite this, there are options available, and advanced static analysis has a role in improving quality and security at any stage of development.
Threat Assessment and Security Audits
It's never too late to perform a system-wide threat assessment on a product. In fact, some of the issues disclosed in the CERT advisories could have been discovered during an assessment. In many cases, however, external expertise is required, and companies may shy away from the intellectual property exposure or the potential impact of the assessment on their current products and projects. Regardless, the understanding of the attack surface and potential threats gained from such an assessment is critical to improving security in the long term. Although threat assessments are often manual and without tool support (depending on what they entail), they are a critical first step toward improving overall product security.
A security audit is a natural next step, or an integral part of the system-wide threat assessment, and although it isn't based only on source and binary analysis, static analysis tools play an important part in evaluating the security "health" of the code base. Although it's ideal to start using static analysis as early as possible, it's still very useful even for products that are already on the market. In fact, the FDA used GrammaTech CodeSonar to do an independent evaluation of source code for various commercial infusion pumps following a series of device failures. The outcome of the threat assessment and audit(s) is a "to do" list of improvements to make on the product. Whether these improvements are economically feasible in an already fielded product is a different discussion. Despite this, there are still ways to understand a product's current security status and a path to improvement, rather than depending on issues discovered by external sources.
Applying Static Analysis in Late Stage Development
For products that are either already manufactured or near completion, static analysis serves more as an audit tool than as a day-to-day software development tool; in the long term, day-to-day use should be the goal for any new development. As a security auditing tool, static analysis can detect security vulnerabilities and other quality issues that extensive unit and system testing may have missed.
Applying static analysis tools to an existing code base requires some initial effort to make sure the tool understands the code properly. In most cases, the problems are parse errors that come from missing information about the software build. However, once an initial analysis is complete, it's possible to focus the static analysis reports solely on high-priority, high-risk security warnings. By analyzing these warnings, the development team can decide how likely each vulnerability is to be exploited and what the associated risk of an attack would be. CodeSonar has filtering and priority assignment capabilities that allow teams to quickly build their security must-fix list.
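As a rough illustration of that triage step, the sketch below filters a static-analysis report down to its highest-severity findings. It assumes the results have been exported to the vendor-neutral SARIF JSON format; the file name and the severity ranking are placeholder assumptions, and any particular tool's export options will differ.

    import json

    # Assumed input: a SARIF 2.1.x results file exported from a static analysis run.
    SEVERITY_ORDER = {"error": 0, "warning": 1, "note": 2, "none": 3}  # triage ranking (assumption)

    def security_must_fix(sarif_path):
        with open(sarif_path) as f:
            report = json.load(f)
        findings = []
        for run in report.get("runs", []):
            for result in run.get("results", []):
                level = result.get("level", "warning")
                if SEVERITY_ORDER.get(level, 3) <= SEVERITY_ORDER["warning"]:
                    findings.append((level, result.get("ruleId", "unknown"),
                                     result.get("message", {}).get("text", "")))
        # Highest-severity findings first: the starting point for a must-fix list.
        return sorted(findings, key=lambda f: SEVERITY_ORDER.get(f[0], 3))

    for level, rule, message in security_must_fix("analysis.sarif"):
        print("[%s] %s: %s" % (level, rule, message))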
Digging into SOUP
Risk management of third-party software and other SOUP (software of unknown pedigree/provenance) is already a required activity for FDA pre-market approval of medical devices. However, this scrutiny may have missed important security testing and analysis. Static analysis tools provide quality and security assessments of code without extensive hands-on testing or deep familiarity with the code or its source. Security vulnerabilities and serious bugs can be detected and analyzed for cause and effect. Detailed reports can be sent to software vendors or internal teams for remediation.
GrammaTech's binary analysis technology, built into CodeSonar, can evaluate object and library files for quality and security vulnerabilities, augmenting static source code analysis by detecting tool-chain induced errors and vulnerabilities. It can also be used to evaluate the correct use of library functions from the calling source into a binary object, making the combination of source and binary analysis a very powerful tool indeed.
Although the possibility of investigating and fixing issues found in third-party code is often limited, this analysis does provide a bellwether of the security of the code. Customers of commercial off-the-shelf (COTS) products can go back to the vendor's technical support and ask for confirmation and analysis of the discovered vulnerabilities. The key here is that the impact on risk management is better understood: third-party software with a large number of vulnerabilities found using binary analysis must be dealt with appropriately in the risk management plan.
Applying Static Analysis in Early Stage Development
Using static analysis tools as part of the development process, as early as possible, is the ideal situation. Finding and fixing security vulnerabilities and other bugs before they become a problem later in the lifecycle has a significant impact on overall product quality. This is a topic I've discussed in a previous post, so I won't repeat it here, although the gist is: static analysis finds bugs and security vulnerabilities that "slip through the cracks" of traditional testing, and it proves invaluable when dealing with third-party code ("SOUP") and when proving due diligence in applying for premarket approval. Manufacturers can leverage static analysis tools immediately on existing products and adopt them as standard practice in new products.
Conclusion
A recent increase in DHS ICS-CERT advisories on medical devices certainly raises concerns about the security of medical devices. Although products already in the marketplace are harder and more expensive to fix, proper security practices still have an important role to play. These practices include threat assessments, security audits, and the continuous application of static analysis tools to detect security vulnerabilities. Products like CodeSonar are designed to work with large legacy code bases and still provide significant benefit in tracking down quality and security issues. In the long term, the application of static analysis on an ongoing basis, early in the SDLC, is encouraged for new products.
Evaluate the legal nature and significance
1. Major political writings: Hobbes wrote several versions of his political philosophy, including The Elements of Law, Natural and Politic (also under the titles Human Nature and De Corpore Politico) published in 1650, De Cive (1642) published in English as Philosophical Rudiments Concerning Government and Society in 1651, the English Leviathan published in 1651, and its Latin revision in 1668. Whenever you do research – especially legal research – you must evaluate the information you find before you rely on it. Although it is important to evaluate information published in any format, evaluation is particularly important for information found on the web. Background: the provision of high-quality, affordable health care services is an increasingly difficult challenge; due to the complexities of health care services and systems, investigating and interpreting the use, costs, quality, accessibility, delivery, organization, financing, and outcomes of health care services is key to informing government officials, insurers, providers, and consumers. To judge or determine the significance, worth, or quality of; assess: to evaluate the results of an experiment. Mathematics: to determine or calculate the numerical value of (a formula, function, relation, etc.).
Thus, elucidating the conditions of legal validity and explaining the normativity of law form the two main subjects of any general theory about the nature of law in section 1, we will explain some of the main debates about these two issues. To finalize it, the law of evidence in the major legal systems/ ie, in the common law, civil law or in countries that have a mixed legal system) is the body of legal rules developed or enacted to govern. Nature of the ethical system or belief: problems in the ethical system: eternal law: moral standards are given in an eternal law, which is revealed in scripture or apparent in nature, and then is interpreted by religious leaders or philosophers the belief is that everyone should act in accordance with the interpretation.
Meaning and nature: by stratification we mean that arrangement of any social group or society by which positions are hierarchically divided the positions are unequal with regard to power, property, evaluation and psychic gratification. Emerson depicts moral law as lying at the center of the circle of nature and radiating to the circumference he asserts that man is particularly susceptible to the moral meaning of nature, and returns to the unity of all of nature's particulars. Estimate, appraise, evaluate, value, rate, assess mean to judge something with respect to its worth or significance estimate implies a judgment, considered or casual, that precedes or takes the place of actual measuring or counting or testing out.
Example: right to property, right to reputation, right to etc. e) Wherever there is a legal right bestowed by the law on any person, there are corresponding legal duties mandated on others by the very same law not to violate the rights. Law of nature definition: a natural instinct or a natural relation of human beings or other animals due to native character or condition. Evaluating resources: when using a book, article, report, or web site for your research, it is important to gauge how reliable the source is. Is the material primary or secondary in nature? Primary sources are the raw material of the research process; secondary sources are based on primary sources. When writing a research paper, it is.
Auditors need to obtain an understanding of the methods and assumptions a specialist uses, make appropriate tests of data provided to the specialist, and evaluate whether the specialist's findings support the related financial statement assertions. The law serves many purposes and functions in society; four principal purposes and functions are establishing standards, maintaining order, resolving disputes, and protecting liberties and rights. 3.1 Establishing standards: the law is a guidepost for minimally acceptable behavior in society. The importance of ethics and the application of ethical principles to the legal profession: we need to remind ourselves of the honourable nature of the profession, otherwise there is little point talking about ethics; it is the substance and not the form that matters here.
Evaluate the legal nature and significance
A business plan should be presented in a binder with a cover listing the name of the business, the name(s) of the principal(s), address, phone number, e-mail and website addresses, and the date. What are the roles of the judge in evaluating the evidence and the like? To this effect, Robert Arthur Melin [hereafter referred to as Melin] has made an attempt to define evidence law in a more comprehensive way: the nature of evidence law, and the purpose or significance of evidence law. Evidence is the "key" which a court needs to render a. The business environment is full of agreements between businesses and individuals; while oral agreements can be used, most businesses use formal written contracts when engaging in operations. Natural law: natural law, in philosophy, is a system of right or justice held to be common to all humans and derived from nature rather than from the rules of society, or positive law. There have been several disagreements over the meaning of natural law and its relation to positive law.
The meaning of "liberalism": negative freedom, realistic view of human nature, spontaneous order, natural law including strict defense of property rights. Conservatism: realistic view of human nature, organic change, human order with "extra-human" origins, counter movement; Roger Scruton's The Meaning of Conservatism, Robert Nisbet's. Balance of power: meaning, nature, methods and relevance. Article shared by: "Balance of power is as nearly a fundamental law of politics as it is possible to find" – Martin Wright. The new importance of ideology and other less tangible but, nevertheless, important elements of national power have further created unfavorable.
The primary focus of this chapter is the nature, scope and importance of international organisations, with special reference to their classification and role in the present global scenario. After going through this chapter you should be able to: know the meaning, nature and scope of international organisation; differentiate between the. Evaluate or estimate the nature, quality, ability, extent, or significance of: I will have the family jewels appraised by a professional; assess all the factors when taking a risk; evaluate, pass judgment, judge (verb). "Nature of" might be explained as the set of characteristics which define the person, place, or thing. What was the nature of your proposition, sir? Tell me, if you will, the nature of your request, young man. What, might I ask, was the nature of her inquiry, reverend?
With this free Uniformly Accelerated Motion Calculator tool, you can quickly determine an object's uniform acceleration. Simply enter the moving object's initial velocity, final velocity, and travelled distance in the provided input fields, then press the calculate button to get the acceleration and time right away.
Uniformly Accelerated Motion Calculator: Do you think that solving physics questions involves a lot of patience and concentration? Then don't be concerned; you've arrived at the correct place. Using our helpful calculator tool, you can acquire answers to Uniformly Accelerated Motion questions quickly and easily.
Alongside this user-friendly calculator, you'll also find a step-by-step process for finding uniform acceleration in a fraction of a second. In the next sections of this page, you'll find important information such as the definition of Uniformly Accelerated Motion, formulas, and solved examples.
The motion of an object with a constant acceleration is known as uniformly accelerated motion (UAM). To put it another way, the acceleration remains constant; it is equal to a number that does not change as a function of time. Uniform acceleration therefore refers to a body's acceleration staying constant over time.
The following are the formulas for uniformly accelerated motion:
s = ((u + v) / 2) × t, which can be rearranged to give the time as t = (2s) / (u + v)
a = (v - u) / t gives the acceleration
where u = initial velocity, v = final velocity, a = acceleration of the object, t = time, and s = distance travelled by the object.
The basic and simple steps listed below will assist you in answering uniform acceleration motion questions. Follow these instructions to ensure a successful outcome.
The following is the procedure for using the Uniform Acceleration calculator:
For more concepts check out physicscalculatorpro.com to get quick answers by using this free tool.
Example 1: From rest, a car accelerated at 7.5 m/s^2 for 10 seconds. a) What is the car's position at the end of the 10 seconds? b) What is the car's velocity at the end of the 10 seconds?
Solution:
Since the car starts from rest, the initial speed u = 0. We assume the initial position is 0 because nothing is stated about it. The position x is then given by
x = (1/2) a t^2
where a is the acceleration (= 7.5 m/s^2) and t is the period of time between initial and final positions
x = (1/2) 7.5 (10)^2 = 375 m
b) The car's velocity ‘v’ at the end of 10 seconds is given by
v = a t = 7.5 x 10 = 75 m/s
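As a quick check on this example, here is a minimal Python sketch of the same constant-acceleration relations the calculator uses (the function names are just illustrative):

    def time_and_acceleration(u, v, s):
        # From s = (u + v) / 2 * t and a = (v - u) / t
        t = 2 * s / (u + v)
        a = (v - u) / t
        return t, a

    def position_and_velocity(u, a, t):
        # Position and velocity after time t, starting from x = 0
        x = u * t + 0.5 * a * t ** 2
        v = u + a * t
        return x, v

    # Example 1 above: u = 0, a = 7.5 m/s^2, t = 10 s
    print(position_and_velocity(0.0, 7.5, 10.0))  # (375.0, 75.0)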
1. Is there a difference between constant and uniform acceleration?
Yes, they are identical. The terms constant acceleration and uniform acceleration refer to the fact that the rate of change of velocity with respect to time must be constant at all times.
2. Which correctly describes the case of uniformly accelerated motion?
Non-uniform motion describes the movement of an object that is speeding up or slowing down. Uniformly accelerated motion is the special case in which the object's velocity changes at a constant rate.
3. What is meant by uniformly accelerated motion?
Uniform acceleration means that the acceleration of a body stays constant, regardless of time. Uniformly accelerated motion along the horizontal plane or dimension occurs when an object's velocity along the x-axis changes by equal amounts in equal intervals of time.
4. Is uniform acceleration zero?
A type of motion in which the velocity of an object changes by an identical amount every equal time period is known as uniform or constant acceleration. If the velocity does not change, the velocity is constant, and hence the acceleration is zero.
5. What is the uniform acceleration graph?
For uniform acceleration, the displacement-time graph is a parabola, symmetric about the displacement axis.
The Cool Physics of a Supersonic Baseball
If I select a portion of the data at the beginning of the video, I can use a linear fit to determine the slope of the position vs. time graph, which gives the velocity. From this, I get an initial velocity of 456 m/s at a time of around 0.002 seconds. Near the end of the video, the graph has a slope of 382 m/s at a time of about 0.011 seconds. From this change in velocity over this time interval, I can calculate the horizontal acceleration of the ball.
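Filling in that last step with the numbers quoted above (treated as approximate readings from the two linear fits):

    # Approximate values read from the linear fits
    v1, t1 = 456.0, 0.002  # m/s, s
    v2, t2 = 382.0, 0.011  # m/s, s

    a = (v2 - v1) / (t2 - t1)
    print(a)  # roughly -8200 m/s^2, i.e. more than 800 times the acceleration due to gravity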
But why does the ball slow down? After the baseball leaves the launcher, there are just two interactions that cause it to change its velocity. There is the downward-pulling gravitational force and the backward-pushing air drag force due to collisions between the ball and the molecules in the air.
The gravitational force is usually fairly significant; however, in this case we are looking at such a super short time interval that it doesn't really cause a large change in the velocity of the ball. But what about the air drag? We can build a model for this air drag force that depends on the speed of the ball (v), the density of air (ρ), the cross-sectional area of the ball (A) and a drag coefficient that depends on the shape (C); the usual form is F = (1/2) ρ A C v^2. Most of these values are known, but the drag coefficient at high speeds can sometimes be difficult to determine.
OK, I like to say that you don't really understand something until you can build a model of it, so let's do that. Of course, the motion of this supersonic baseball isn't trivial. The air drag force makes the ball slow down, but the drag force itself changes with the velocity of the ball: as the speed decreases the force decreases, which makes the ball slow down less. This means that there is no analytical solution for the position of this ball as a function of time. Our only hope is to build a numerical model.
The key idea of a numerical model is to start with some initial values for the position and velocity of the ball. With the velocity, I can then calculate the force on that ball at that instant. The next trick is to just find the velocity and position of the ball after some very, very short time interval. During this interval, we can assume that the air drag force is constant—it’s at least approximately constant. Then at the end of the short time step, we can use the new velocity to calculate the new air drag force and repeat the whole thing again. Really, the only problem with this method is that instead of one very complicated mathematical problem you get thousands of simpler problems. | https://techietricks.com/2020/09/23/the-cool-physics-of-a-supersonic-baseball/ |
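Here is a minimal sketch of that numerical recipe. The ball parameters, drag coefficient, and time step below are rough assumptions chosen only to show the structure of the loop; they are not fitted to the video data, and the drag coefficient in particular would need tuning to reproduce the measured slow-down.

    import math

    # Assumed, approximate baseball parameters (not fitted to the video)
    m = 0.145             # mass, kg
    r = 0.0366            # radius, m
    A = math.pi * r ** 2  # cross-sectional area, m^2
    rho = 1.2             # air density, kg/m^3
    C = 0.5               # drag coefficient -- a guess; it varies near the speed of sound
    g = 9.8               # m/s^2

    x, y = 0.0, 0.0
    vx, vy = 456.0, 0.0   # start at the measured launch speed
    dt = 1e-5             # very, very short time step
    t = 0.0

    while t < 0.011:
        v = math.hypot(vx, vy)
        F_drag = 0.5 * rho * A * C * v ** 2   # magnitude of the drag force
        # Drag points opposite the velocity; gravity pulls straight down.
        ax = -(F_drag / m) * (vx / v)
        ay = -g - (F_drag / m) * (vy / v)
        # Treat the force as constant over this short step, then update.
        vx += ax * dt
        vy += ay * dt
        x += vx * dt
        y += vy * dt
        t += dt

    print(vx)  # horizontal speed after about 0.011 s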
Flow variation is a common feature among lotic aquatic systems. The known ecological effects of changing flow regimes vary, depending upon landscape, catchment and habitat characteristics. In upland intermittent streams, the effects of low-flow periods may be exacerbated by water extraction for human needs. The importance of these impacts magnifies when low-flow refugia, threatened species (such as the Purple-spotted Gudgeon, Mogurnda adspersa) and water extraction points are found in the same place.
Water managers need to understand how biogeographical, ecological and physical factors interact to determine the resilience and resistance of wetland pool ecosystems during low-flow and no-flow conditions, in order to sustain ecological condition while meeting extractive needs in variable settings.
This study aims to develop an understanding of the factors underpinning ecosystem structure and function of wetland pools in upland intermittent streams under varying conditions. A further aim is to produce conceptual models of categorised upland refugia demonstrating the major biogeographical, physical and ecological factors driving variation in trophic pathways, food sources and fish population dynamics. Outcomes from the project will lead to principles to guide cease-to-pump management rules in river systems.
The proposed methods will assess medium-term variation in fish and macroinvertebrate community structure, trophic pathways and landscape factors over changing flow and among refugia with varied characteristics at catchment and macro-habitat scales. Experimental in-situ flow manipulation will be utilised to assess short-term responses to water extraction in Tenterfield Creek. | http://asl-2013.p.asnevents.com.au/days/2013-12-03/abstract/9714 |
"In general relativity, an event horizon is a boundary in space-time beyond which events cannot affect an outside observer. In layman's terms, it is defined as "the point of no return", i.e., the point at which the gravitational pull becomes so great as to make escape impossible, even for light. An event horizon is most commonly associated with black holes."
In a previous post I attempted to make the case that not enough information makes it out of the development process turning a normally black box software process into a black hole. This is problematic for us and the both the upstream and downstream processes as detailed information can provide guidance when making strategic decisions. By producing a black hole when developing software we actually limit our ability to positively influence the outcomes of work we own.
The question then is can we send signals upstream, beyond our Event Horizon as it were, to decision makers via a series of metrics, that will allow them to make better decisions?
Several of my colleagues, including my good friend Larry Oesterreich, are engaged in creating and disseminating these metric. Larry in particular has recognized some interesting code smells that need to be cleaned up, and of course he is right. The thing I think is being missed is that these signals alone do not guarantee the right choice will ever be made, it will only make sure all concerns are correctly communicated. Isolated metrics mean little without an accompanying heterogeneous analysis.
This is the part where our development team has to move outside of its comfort zone and understand the motivation for what appear to be arbitrarily bad decisions earlier in our development pipeline. So while you spend time understanding your own metrics you also have to understand what motivates the Support Manager, and the Sales and Account Executives. Their metrics may look nothing like ours (the Event Horizon is complex) but not being able to compare these concerns to entrenched developer concerns is simply a failure of imagination. | https://www.poppastring.com/blog/developing-code-beyond-the-event-horizon |
But this isn't your average aurora.
When the solar wind slams into the Earth's atmosphere, the solar plasma (made up mainly of high-energy protons) hits air molecules, kicking off some light. When you have a lot of these collisions in the upper atmosphere, the sky will light up as an aurora.
However, a black hole doesn't have an atmosphere like ours, so how can an aurora be generated?
Spin It, Feed It
Masaaki Takahashi, from Aichi University of Education, Kariya, and Rohta Takahashi, from the Institute of Physical and Chemical Research, Wako, started out by modeling a rapidly spinning black hole.
As black holes have a massive gravitational pull, they will suck in any dust, gas or even stars that stray too close. Around a spinning black hole, this infalling matter is predicted to form a disk of hot, radiating plasma around its equator. This is called an 'accretion disk.'
As the disk will contain charged particles, it's possible that a magnetic field will be generated -- much like the internal dynamo of the Earth, generating the magnetic field of our magnetosphere.
In their paper, published in The Astrophysical Journal Letters last month, Takahashi and Takahashi present a model predicting that a black hole 'magnetosphere' is generated where the magnetic field lines thread through the accretion disk and get dragged into the poles of the black hole's event horizon.
Now we have a black hole with its own magnetosphere, and like the Earth's magnetosphere, space plasma will be funneled along the magnetic field lines -- like water being pumped through a fire hose.
But this is no ordinary fire hose.
Shocked Plasma
The plasma will be flowing so fast as it is funneled into the event horizon that it will break the plasma 'sound barrier' (exceeding what is known as the Alfvén speed).
This is when the black hole 'aurora' might be generated. In a similar way to a supersonic aircraft breaking the sound barrier in our atmosphere (producing a sonic boom), the supersonic plasma will create a shock. The Japanese researchers found that this shock will form a halo, crowning the black hole's poles, just above the event horizon. As the plasma hits this shock it releases energy, rapidly heating up and generating light.
This all sounds very exciting, but this model is purely theoretical. A black hole 'aurora' could never be observed, right?
Actually, this is why I find this black hole aurora research so cool.
Shadow of the Beast
Using several networked observatories around the globe, a technique known as "very long baseline interferometry" (or "VLBI") could be used to directly image the Milky Way's supermassive black hole (something that isn't currently possible). As this is the largest black hole nearby, its event horizon should be big enough to see, assuming enough observatories are included in a future VLBI campaign.
Assuming this can be achieved, the "shadow" of the black hole's event horizon (a dark circle) might be visible.
In a 2009 study carried out by Vincent Fish and Sheperd Doeleman of the MIT Haystack Observatory, Mass., a VLBI campaign was simulated and the results were striking (the modeled black hole shadow is pictured here).
If we are able to network enough radio telescopes around the globe as an international VLBI campaign, the emissions from the black hole 'aurora' might also be resolved.
However, the researchers are unsure about the type of radiation produced by the shocked plasma; it would depend on the magnetic conditions near the black hole's event horizon. But should the conditions be extreme enough, this model may be used to explain the voracious black holes residing inside active galactic nuclei.

The Fingerprint of Extreme Curvature

"Such a black hole magnetosphere may be considered as a model for the central engine of active galactic nuclei, some compact X-ray sources and gamma-ray bursts." -- Takahashi and Takahashi, 2010
There's another aspect to hunting for a black hole's aurora: the powerful radiation generated so close to the black hole's event horizon would reveal some very useful information about this extreme space-time environment.
The event horizon is the point of no return, the boundary where even light cannot escape from the extreme curvature of space-time caused by the immense gravitational dominance of the black hole.
But as the plasma shocks predicted by the Japanese researchers generates aurorae-like light just above the event horizon, some of that radiation will escape from the clutches of the event horizon, allowing future radio astronomers a peek into the mysterious phenomena that are thought to surround black holes.
One phenomenon that comes to mind is "frame dragging" (also known as the Lense-Thirring effect), when a massive spinning object drags the neighboring space-time with it. If we could find a way to "see" the light from a black hole aurora, perhaps we'll be able to detect the fingerprint of frame dragging too.
Although it's likely to be a long time before VLBI becomes sensitive enough to detect a black hole's aurora-like radiation (if it even exists), it is certainly a very exciting study with the potential of probing within a hair's-breadth from the event horizon of the Milky Way's supermassive black hole.
Reference: The Astrophysical Journal Letters, Vol. 714, No. 1.
Citations of:
Morality and virtue: An assessment of some recent work in virtue ethics
Ethics 114 (3):514-554 (2004)
Philippa Foot’s virtue ethics remains an intriguing but divisive position in normative ethics. For some, the promise of grounding human virtue in natural facts is a useful method of establishing normative content. For others, the natural facts on which the virtues are established appear naively uninformed when it comes to the empirical details of our species. In response to this criticism, a new cohort of neo-Aristotelians like John Hacker-Wright attempt to defend Foot by reminding critics that the facts at stake (...)
In this paper, we propose a conceptual model to improve moral sensitivity in human resource development (HRD) to assist human resource (HR) practitioners in contending with moral challenges in HRD. The literature on the relationship between ethics and HRD suggests that the organizational and employee development discipline deals with ethical issues at three different levels: Individual, organizational and communal, and international levels. In section I, we elaborate on moral challenges facing HRD. In section II, we conceptualize moral sensitive HRD, proposing (...)
The past couple of decades have witnessed a remarkable burst of philosophical energy and talent devoted to virtue ethical approaches to Confucianism, including several books, articles, and even high-profile workshops and conferences that make connections between Confucianism and either virtue ethics as such or moral philosophers widely regarded as virtue ethicists. Those who do not work in the combination of Chinese philosophy and ethics may wonder what all of the fuss is about. Others may be more familiar with the issues (...)
It is commonly assumed that Aristotle's ethical theory shares deep structural similarities with neo-Aristotelian virtue ethics. I argue that this assumption is a mistake, and that Aristotle's ethical theory is both importantly distinct from the theories his work has inspired, and independently compelling. I take neo-Aristotelian virtue ethics to be characterized by two central commitments: (i) virtues of character are defined as traits that reliably promote an agent's own flourishing, and (ii) virtuous actions are defined as the sorts of actions (...)
If good taste is a virtue, then an account of good taste might be modelled on existing accounts of moral or epistemic virtue. One good reason to develop such an account is that it helps solve otherwise intractable problems in aesthetics. This paper proposes an alternative to neo-Aristotelian models of good taste. It then contrasts the neo-Aristotelian models with the proposed model, assessing them for their potential to contend with otherwise intractable problems in aesthetics.
Virtue as Value: A Comparison between Christoph Halbig and Max Scheler The aim of the following contribution is to compare the virtue conceptions of Christoph Halbig and Max Scheler in order to scrutinize their common positions and differences and thus to answer two questions: Firstly, is it true that Scheler's approach is based on the basic assumptions of the recursive theory of virtues, as Halbig asserts this? Secondly, can the virtues be defined as attitudes, or should they be conceived as (...)
In four experiments, we offer evidence that virtues are often judged as uniquely important for some business practices (e.g., hospital management and medical error investigation). Overall, actions done only from virtue (either by organizations or individuals) were judged to feel better, to be more praiseworthy, to be more morally right, and to be associated with more trustworthy leadership and greater personal life satisfaction compared to actions done only to produce the best consequences or to follow the correct moral rule. These (...)
In recent decades, the idea has become common that so-called virtue ethics constitutes a third option in ethics in addition to consequentialism and deontology. This paper argues that, if we understand ethical theories as accounts of right and wrong action, this is not so. Virtue ethics turns out to be a form of deontology . The paper then moves to consider the Aristotelian distinction between right or virtuous action on the one hand, and acting rightly or virtuously on the other. (...)
According to agent-based approaches to virtue ethics, the rightness of an action is a function of the motives which prompted that action. If those motives were morally praiseworthy, then the action was right; if they were morally blameworthy, the action was wrong. Many critics find this approach problematically insensitive to an act’s consequences, and claim that agent-basing fails to preserve the intuitive distinction between agent- and act-evaluation. In this article I show how an agent-based account of right action can be (...)
Over the last few decades, there have been intense debates concerning the effects of markets on the morality of individuals’ behaviour. On the one hand, several authors argue that markets’ ongoing expansion tends to undermine individuals’ intentions for mutual benefit and virtuous character traits and actions. On the other hand, leading economists and philosophers characterize markets as a domain of intentional cooperation for mutual benefit that promotes many of the character traits and actions that traditional virtue ethics accounts classify as (...)
In this paper I present a new objection to the Aristotelian Naturalism defended by Philippa Foot. I describe this objection as a membership objection because it reveals the fact that AN invites counterexamples when pressed to identify the individuals bound by its normative claims. I present three examples of agents for whom the norms generated by AN are not obviously authoritative: mutants, aliens, and the Great Red Dragon. Those who continue to advocate for Foot's view can give compelling replies to (...)
Philippa Foot 's version of ethical naturalism, centered on the idea of “natural goodness,” has received a good deal of critical scrutiny. One pervasive criticism contends that less than virtuous modes of conduct may be described as naturally good or, at least, not naturally defective on her account. If true, this contradicts the most ambitious aspect of Foot 's naturalistic approach to ethics: to show that judgments of moral goodness are a subclass of judgments of natural goodness. But even if (...)
In Morals From Motives, Michael Slote defends an agent-based theory of right action according to which right acts are those that express virtuous motives like benevolence or care. Critics have claimed that Slote’s view— and agent-based views more generally— cannot account for several basic tenets of commonsense morality. In particular, the critics maintain that agent-based theories: (i) violate the deontic axiom that ought implies can , (ii) cannot allow for a person’s doing the right thing for the wrong reason, and (...)
Imagining a future world in which people no longer die provides a helplul tool for understanding our present ethical views. It becomes evident that the cardinal virtues of prudence, temperance, and courage are options for reasonable people rather than rational requirements. On the assumption that the medical means to immortality are not universally available, even justice becomes detached from theories that tie the supposed virtue to the protection of human rights. Several stratagems are available for defending a categorical right to (...)
My question in this paper concerns what eudaimonist virtue ethics (EVE) might have to say about what makes right actions right. This is obviously an important question if we want to know what (if anything) distinguishes EVE from various forms of consequentialism and deontology in ethical theorizing. The answer most commonly given is that according to EVE, an action is right if and only if it is what a virtuous person would do in the circumstances. However, understood as a claim (...)
This paper seeks to assess the claim of Aristotelian naturalism to successfully vindicate the virtues. To this end, I consider two ways to understand the claims of Aristotelian naturalism and, thus, the normative authority of nature. The first is represented by an interpretation of Aristotelian naturalism as defending the claim that practical rationality is species-relative. I argue that the view fails because it cannot accommodate certain forms of moral disagreement. As an alternative, I propose seeing Aristotelian naturalism as the expression (...)
In a series of papers, Stephen Kearns and Daniel Star defend the following general account of reasons: R: Necessarily, a fact F is a reason for an agent A to Φ iff F is evidence that an agent ought to Φ.In this paper, I argue that the reasons as evidence view will run afoul of a motivational constraint on moral reasons, and that this is a powerful reason to reject the reasons as evidence view. The motivational constraint is as follows: (...)
The central claim of Aristotelian naturalism is that moral goodness is a kind of species-specific natural goodness. Aristotelian naturalism has recently enjoyed a resurgence in the work of philosophers such as Philippa Foot, Rosalind Hursthouse, and Michael Thompson. However, any view that takes moral goodness to be a type of natural goodness faces a challenge: Granting that moral goodness is natural goodness for human beings, why should we care about being good human beings? Given that we are rational creatures who (...)
Virtue ethics is often understood as a rival to existing consequentialist, deontological, and contractualist views. But some have disputed the position that virtue ethics is a genuine normative ethical rival. This chapter aims to crystallize the nature of this dispute by providing criteria that determine the degree to which a normative ethical theory is complete, and then investigating virtue ethics through the lens of these criteria. In doing so, it’s argued that no existing account of virtue ethics is a complete (...)
My aims are exegetical rather than critical: I offer a systematic account of Hursthouse's ethical naturalism with an emphasis on the normative authority of the four ends, and try to correct some misconceptions found in the literature. Specifically, I argue that the four ends function akin to Wittgensteinian hinge-propositions for our practice of ethical reasoning and as such form part of a description of the logical grammar of said practice.
Introduction: Recent work in virtue theory has breathed new life into the analogy between virtue and skill. See, for example, Annas; Bloomfield; Stichter; Swartwood. There is good reason to think that this analogy is worth pursuing since it may help us understand the distinctive nexus of reasoning, knowledge, and practical ability that is found in virtue by pointing to a similar nexus found outside moral contexts in skill. In some ways, there is more than an analogy between skill (...)
This thesis comprises a study of the ethical thought of Iris Murdoch with special emphasis, as evidenced by the title, on how morality is intimately connected to self-improvement aiming at perfection and how the study of fiction has an important role to play in our strive towards bettering ourselves within the framework set by Murdoch’s moral philosophy.
Some people claim that some instances of suffering are intrinsically bad in an impersonal way. If it were true, that claim might seem to count against virtue ethics and for consequentialism. Drawing on the works of Jason Kawall, Christine Swanton and Nietzsche, I consider some reasons for thinking that it is, however, false. I argue, moreover, that even if it were true, a virtue ethicist could consistently acknowledge its truth.
In discussions about the ethics of enhancement, it is often claimed that the concept of ‘human nature’ has no helpful role to play. There are two ideas behind this thought. The first is that nature, human nature included, is a mixed bag. Some parts of our nature are good for us and some are bad for us. The ‘mixed bag’ idea leads naturally to the second idea, namely that the fact that something is part of our nature is, by itself, (...)
It is often assumed that neo-Aristotelian virtue ethics postulates an obligation to be a good human being and that it derives further obligations from this idea. The paper argues that this assumption is false, at least for Philippa Foot’s view. Our argument blocks a widespread objection to Foot’s view, and it shows how virtue ethics in general can neutralize such worries.
Can we adequately account for our reasons of mere taste without holding that our desires ground such reasons? Recently, Scanlon and Parfit have argued that we can, pointing to pleasure and pain as the grounds of such reasons. In this paper I take issue with each of their accounts. I conclude that we do not yet have a plausible rival to a desire-based understanding of the grounds of such reasons.
A revised and expanded English version was published as "Good Reasons and Natural Ends: Rosalind Hursthouse's Hermeneutical Naturalism", in M. Hähnel: Aristotelian Naturalism. A Research Companion, Springer 2020. | https://api.philpapers.org/citations/COPMAV |
Drought, population growth, sea level rise, and competitive pricing are all converging to prompt a new era of interest in seawater desalination projects. This approach to generating potable water has long held prominence around the world in areas of extreme water constraints, notably the Middle East. However, in the U.S. the number of seawater desalination plants has always been relatively small. The primary use in the U.S. was typically for private industrial use where the cost of operation could be justified.
In light of the new interest in desalination around the country, this article will briefly examine some of the technical challenges this approach faces and discuss some of the innovative approaches that have been employed. The article also addresses some of the primary permitting hurdles that these facilities are subject to.
Technical and Environmental Challenges
From a purely technical standpoint, the use of desalination to generate significant amounts of potable water has always been challenging. The two dominant methods for desalination are thermal based distillation and reverse osmosis (“R/O”). While thermal methods have been prominent in areas where energy costs are low, R/O has become the dominant process for large scale desalination plants in the U.S. Yet, while this method has improved in efficiency, it still is fraught with challenges. R/O essentially removes salts and impurities by forcing the water through a filter or membrane medium that permits the water molecules to pass through, but blocks the passage of the larger salt compounds. This process requires significant energy inputs to generate sufficient pressure to create a utility-scale volume of water. This high energy requirement in turn drives up the price for R/O generated water.
Beyond the cost, energy, and technical demands, desalination plants have significant environmental issues that must be dealt with. The lower the initial salt content, the greater the efficiency of an R/O facility. However sources for brackish water are limited. In some locations, aquifers near the coast that have experienced saltwater intrusion may be a viable source, but drawing down the water table at these locations may exacerbate the saltwater intrusion into inland fresh water sources. Seawater, while voluminous, often creates its own challenges. Higher salinity rates equate to less efficient plant operation, higher costs for pre-treatment and membrane maintenance and greater energy demands. Additionally, environmental advocacy groups often seek permitting assurances that ocean intakes will not siphon in significant sea-life or otherwise disrupt pelagic communities.
The greater environmental challenge is the adequate disposal of the hyper-saline concentrate that is the byproduct of the R/O process. Concentrate and residuals disposal from R/O desalination are the main areas of regulation. Disposal for less salty concentrate from brackish water R/O may include land application for aquifer recharge or irrigation, or for saltier concentrates, deep well injection, surface water discharge, or blending the concentrate with other wastewater treatment plant discharges from permitted facilities. Volume, location and the chemistry of the concentrate determine the available methods.
Despite all of the foregoing obstacles, desalination continues to garner more interest from utilities. Recent droughts in California have reignited the debate over the need to find new sources of water. Despite significant gains in conservation and industrial efficiency, the need for additional water remains. California has seen droughts before, and rainfall and snowfall in later years have caused hesitation about beginning significant new investment. An R/O plant was built in Santa Barbara during the serious droughts that occurred during the late 80's and early 90's. However, after a brief period of operation, sufficient rain patterns increased regional fresh water supplies. Consequently, the cost of running the Santa Barbara plant with older R/O technology became prohibitive and the site was mothballed. With the current drought cycle showing no sign of letting up, and predictions that climate change may make these cycles longer and hotter, Santa Barbara is now looking to dust off the plant and get it up and running again.
California is now once again betting big on desalination. In San Diego County, the $1 billion Carlsbad Desalination Project is nearing the final stages of construction. Slated to be completed by the end of 2015, the plant will be the largest desalination facility in the Western Hemisphere, with the potential of generating 50 million gallons per day (mgd) of potable water from a total intake of 100 mgd. The resulting 50 mgd of water with elevated salinity will be blended with additional seawater in order to sufficiently dilute the concentration down to a level that can be safely discharged to the ocean.
While only time will tell if the Carlsbad Plant will meet its operational goals, the scale of the permitting challenges it has encountered are not unique. Currently, the largest operational desalination facility in North America is located in Tampa Bay, Florida. An examination of the history surrounding the permitting and construction of the Tampa Bay facility provides a useful illustration of obstacles and innovate solutions that are part and parcel of any desalination project.
The Tampa Bay Example
Florida is a peninsula surrounded by water with a high rate of average annual rainfall, rich groundwater aquifers, and many lakes, estuaries and rivers—hardly a state where water supply would be an expected concern. However, the state has grown in the past one-hundred years to be one of the most populous in the country, primarily through coastal development, with a historically extensive agricultural community that competes with urban areas for water supply.
Over the past 40 years, losses to the State’s springs, wetlands, lakes, river flows and aquifer levels have amply demonstrated that continued reliance on fresh ground and surface water for drinking water and agriculture is not sustainable. Presently, the Florida Department of Environmental Protection (“FDEP”), Office of Water Policy finds that water usage in Florida is approximately 6.4 billion gallons a day, and is expected to increase by the year 2030 to 7.7 billion gallons per day. These volumes of freshwater have not been, and will not be, sustainable for Florida. Diversification and alternative water supplies are needed.
Florida has been a national leader in reverse osmosis desalination for drinking water, primarily through desalination of brackish ground and surface waters. All but one of the potable water desalination plants in Florida rely on R/O technology. Brackish water used for R/O in Florida is typically between 1-10 parts per thousand (ppt) total dissolved solids (“TDS” indicating salinity), with tidally influenced rivers and estuaries fluctuating between 10 and 33 ppt TDS compared to ocean seawater of 35 ppt TDS. Florida presently has over 100 independent brackish desalination water supply facilities.
Tampa Bay Water (“TBW”) is a regional water supply authority in Florida consisting of Hillsborough, Pinellas, and Pasco Counties and the Cities of Tampa, St. Petersburg, and New Port Richey. It has historically supplied drinking water to these locations primarily from well fields and river withdrawals. These sources have become increasingly unsustainable due to population growth, aquifer over pumping causing environmental damage, and regulatory minimum flows and levels for rivers and streams. Based on the need to reduce groundwater pumping and to develop alternative water supplies, TBW looked to seawater desalination.
The Tampa Bay Water Seawater Desalination facility was granted an NPDES permit by FDEP in 2001. Until the Carlsbad desalination plant in California begins operation, the TBW Desalination plant has been the largest seawater desalination facility in North America. Able to produce up to 25 million gallons a day of fresh drinking water, it was permittable due to its unique co-location with the Tampa Electric Company (“TECO”) Big Bend power plant. The location permits the intake of estuarine water with up to 32 ppt TDS, less than ocean seawater, and the estuarine discharge location with extensive flushing and salinity fluctuations.
The TBW Desalination plant was intended to be an all-season water supply source that would be impervious to rainfall and water levels. It was designed as an R/O membrane treatment facility that would produce up to 25 mgd. This presented many permitting challenges: intake water permitting, construction permitting, treatment permitting, assurance of treated drinking water standards, health department permits, and, finally, the discharge of concentrate and residuals. In order to produce 25 mgd of potable water, the plant would need to intake 44 mgd of seawater with a concentrate discharge of 19 mgd.
In its initial search for a plant location, TBW focused on existing area power plants with cooling water intakes for co-location and the use of permitted intake water. The costs and challenges of attempting to address construction and permitting requirements for entrainment and impingement were too great to locate an independent plant. Of the area’s power plants, the TECO plant on Tampa Bay was especially attractive, as it is located in the Tampa Bay Estuary with reduced salinity compared to ocean seawater, which greatly reduced the cost of treatment and provided extensive flushing for the discharge from the TECO’s cooling water, tides, rainfall and river flows. The TECO plant, with all four power units in operation, is permitted to withdraw and discharge up to 1.4 billion gallons a day in cooling water. TECO offered an existing cooling water intake and discharge system that TBW could use to siphon heated cooling water from the plant and return the concentrates to an internal piping system with substantial dilution and mixing prior to discharge. The heated cooling water allows the R/O membrane process to function more efficiently. By the time the facility was permitted by FDEP in 2001, the facility already held twelve permits from other state and local entities that were required for the operation of the plant.
The TBW Desalination plant required NPDES industrial water discharge permits from FDEP for both the plant itself and for TECO’s modified discharge with the mixed concentrate. Both permits were then challenged by an association formed by nearby residents living on a series of canals located not far from TECO’s cooling water discharge canal in Tampa Bay. The cases were consolidated for hearing and a full evidentiary hearing was held before the Florida Division of Administrative Hearings: Save Our Bays, Air and Canals v. Tampa Bay Desalination and Department of Environmental Protection, Case No. 01-1949, and Save Our Bays, Air and Canals v. Tampa Electric Company, Case No. 01-2720. The Administrative Law Judge’s Recommended Order to the agency reviewed the evidence in great detail relating to permitting standards for water quality including dissolved oxygen, nutrients, salinity, toxicity, metals, bioavailability of metals, synergistic effects, and ph. Additionally, the order reviewed effects of the discharge on seagrasses, fish, manatees and phytoplankton.
A primary feature of the facility is the dilution of up to 19 mgd of concentrate in the maximum cooling water discharge from TECO of 1.4 bgd. At these rates, the maximum concentrate discharged would be diluted under best case scenarios at 70:1 prior to entering the Bay. The judge imposed permit limits on the minimum amount and timing of dilution based on the number of power units operating at TECO at any time. The Order further noted that fish and wildlife species living in the estuary are accustomed to wide fluctuations in salinity based on tides, river flows and rainfall, and have a wide range of salinity tolerance. Manatees, which frequent TECO’s cooling water discharge canal in the cold winter months, tolerate seawater salinity levels up to 40 ppt.
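As a rough back-of-the-envelope check on those figures (a sketch only; the actual permit analysis relied on the far more detailed hydrographic models described in the next paragraph):

    # Reported figures from the article
    intake_mgd = 44.0           # seawater drawn from TECO cooling water
    product_mgd = 25.0          # potable water produced
    concentrate_mgd = 19.0      # hyper-saline byproduct (44 - 25)
    cooling_water_mgd = 1400.0  # TECO cooling water discharge (1.4 bgd), all units running

    recovery = product_mgd / intake_mgd
    dilution = cooling_water_mgd / concentrate_mgd

    print("recovery ratio of about %.0f%%" % (recovery * 100))  # ~57% of intake becomes drinking water
    print("dilution of about %.0f:1" % dilution)                # ~74:1, in line with the ~70:1 best case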
A key analysis related to the flushing in Tampa Bay, supported by three-dimensional far field and near field hydrographic models provided by TECO and TBW. The petitioner had alleged that salinity levels in their nearby canals would accumulate and become elevated, reducing dissolved oxygen and affecting algal growth and water quality. The judge found that the three-dimensional modelling efforts supported the conclusion that this buildup of salinity would not occur.
TBW began its quest for the desalination plant in 1997 with an RFP for consulting firms. It was meant to be a design, build, own and operate facility. The plant was built and tested in 2003, but was unable to properly operate at capacity due to membrane fouling and expense issues. Over the next several years, other contractors took over the plant, resolved design and construction deficiencies, modified the plant for effective pre-treatment and membrane treatment, and the plant finally became fully operational in 2007, meeting its capacity goals of 25 mgd drinking water for months at a time in 2010. When the plant is operating, the drinking water produced is blended with other sources of treated water at an offsite plant for distribution. By utilizing TBW’s various local surface and ground water supply sources and blending these lower cost systems, the cost to consumers for the treated seawater is greatly reduced. Due to the higher cost of producing water from the facility, it is only used by TBW when other sources are insufficient.
Regulatory Tangle
As discussed above, the Clean Water Act ("CWA") provides for National Pollutant Discharge Elimination System ("NPDES") permitting. NPDES permits are typically required for the disposal of the R/O concentrate waste products. The NPDES requirements are administered by the U.S. Environmental Protection Agency, which in turn may delegate the program to state agencies with compliant CWA programs. The EPA retains ultimate oversight responsibility for the program. Similarly, the CWA provides for the U.S. Army Corps of Engineers to permit the disposal of dredge or fill materials to navigable waters, which may be implicated by the construction of a desalination facility.
As discussed above in Tampa Bay, co-location with a power plant relying on cooling water is critical for permitted intake of large quantities of water without facing federal and state permitting concerns for entrainment and impingement for biota, fish and wildlife. The California Carlsbad project is similarly co-located.
Other federal regulations that may be implicated in the construction or operational plan of a desalination plant include: Safe Drinking Water Act, Resource Conservation and Recovery Act, Superfund Amendments and Reauthorization Act, and the Endangered Species Act.
State permitting requirements may vary widely. Zoning and land use restrictions are often controlling factors in locating major utilities, complicating the already challenging task of finding a site that will provide a desalination facility with the resources it needs for both intake and disposal. Furthermore, states are free to set their own environmental regulations that exceed Federal requirements.
Conclusion
As water supplies become increasingly constrained, interest in seawater desalination facilities will continue to grow. However, the technical and environmental challenges these facilities face make their adoption a difficult choice for governments and public water utilities. Innovative siting, funding, and careful planning of concentrate disposal will be needed for future implementation. | http://www.llw-law.com/articles/looking-sea-resurging-interest-seawater-desalination-projects-face-twin-specters-technical-challenges-permitting-complexity/
Mediation is useful in all relationships, whether in a marriage, at work, in sports, between businesses, children/parents, nations, union/management, developer/environmentalists, employer/employee, two employees, two businesses, or wherever conflict may arise. Sometimes the conflict is resolved by a simple compromise; at other times it is resolved by a win/lose outcome, and on some occasions it is resolved by agreement. In times past, most disputes were resolved by physical combat. As society became more "civilized", disputes were resolved by the monarch, king or chancellor. Today, many disputes end with someone saying, "I'll see you in court."
The legal system has been a dispute resolution medium in our society for much of recorded history. Resolving disputes through the courts can take a long time, cost a lot of money, attract a lot of public attention, and does little to maintain goodwill between the parties. There is a losing party, and more often than not the winning party is not satisfied and feels as if they lost as well. There is an alternative: mediation.
What is Mediation?
Mediation is an intervention into a dispute by an acceptable, impartial and neutral party, known as a mediator, who assists the parties to reach their own mutually acceptable agreement on the matter in dispute.
Mediation is Communication
For mediation to occur, the parties must begin to communicate. Labour and management must be willing to talk; governments and public interest groups must have a forum for dialogue; families must be willing to begin face-to-face dialogue. Often, once a dispute has arisen, communication between people has broken down, or the disputants feel that the existing communication is not effective. The mediator is specially trained to facilitate and improve communication between parties to a dispute.
Acceptable Intervenor
A mediator is an acceptable intervenor. Acceptability refers to the willingness of the parties in dispute to allow an intervenor to assist them in reaching a resolution. The thinking behind involving an intervenor in a conflict is that the intervenor will be able to alter the power balance and/or dynamics of the conflict relationship. This is done by influencing the legal beliefs or behavior of the individual parties, by providing knowledge or information, or by providing a more effective way to communicate, thereby helping the parties to settle the issues in dispute. Even the mere presence of a party who is independent of the parties in dispute has been found to be significant in resolving the dispute.
Impartiality and Neutrality
Impartiality means the mediator has an unbiased opinion and is not influenced by any preconceived preference in favor of, or prejudice against, one of the parties. Neutrality refers to the behavior of, or relationship between, the mediator and the parties in dispute. The mediator does not personally expect to gain any benefit or special payment from one or the other party. Compensation for the mediator does not depend on the result. Being impartial and neutral does not mean that the mediator has no personal opinion regarding the outcome of what is in dispute; rather, the mediator is trained to put aside his or her personal opinion and to focus on ways to help the parties make their own decision.
No Binding Power
A mediator is different from a judge or an arbitrator. A judge or an arbitrator, like an umpire in a ball game, "calls them the way he/she sees them", and the decision is final. The mediator, on the other hand, works to reconcile people's different and competing interests. The mediator's goal is to assist the parties to look at their concerns, examine their needs, and examine the needs and concerns of the other party, so that an exchange of promises results that is mutually satisfactory.
A Voluntary Process
Parties are not forced to meet with the mediator. They must choose to meet of their own free will. They are free to leave mediation at any time and are not forced to agree to anything unless they want to. Attempting mediation does not mean that one is forced to settle.
Without Prejudice
A key feature of mediation is that everything that is said or happens with the mediator is confidential. If the parties agree, the settlement that results need not be confidential. If the parties do not reach a settlement, then they are free to resolve the dispute through the Courts without the threat of what was said or done in the mediation being used against them. In other words, what happens in mediation is without prejudice to the rights of the parties.
Economical
Mediation is generally less expensive in dollar terms when compared to court action.
Rapid Settlements
It may take a long time to get to court and a long time to receive the decision after the trial. The decision at trial may be appealed. The decision of the Appeal Court may again be appealed to a higher court. A dispute resolved in mediation allows the parties to get on with the business of their lives.
Mutually Satisfactory Outcomes
People are generally more satisfied with solutions that they work on and create themselves and which they mutually agree to. People are less satisfied with decisions imposed on them by an outsider such as a judge, commissioner, arbitrator, and so on.
High Rate of Compliance
Parties who reach their own agreement are generally more likely to follow through with the promises they made. The statistics show that maintenance payments for child support are made more often through agreements than through decisions imposed by a court.
Comprehensive Brainstorming
Mediation settlements can address legal and non-legal issues. The parties can tailor their agreement to their particular situation. Brainstorming is not part of the judicial decision-making process. In mediation, parties can use problem-solving techniques.
Control and Predictability
In mediation, the parties have control of the process, and the outcome is more predictable. If they are not happy with the settlement, they do not need to agree to it. Neither the other party nor the mediator can impose an agreement.
Empowerment
People who speak for themselves in a controlled setting often feel more powerful than those who use advocates such as lawyers and agents to represent them. Participants use plain language that they are comfortable with rather than legal mumbo jumbo.
Preserving Ongoing Relationships or Terminating Relationships More Amicably
A mediated settlement often preserves a working relationship in ways that would not be possible in a win/lose decision. Mediation can also make the termination of the relationship more amicable.
Mediation is Developing as an Alternate Dispute Resolution Method
In Saskatchewan, the Law Society has a list of approved Mediators.
Per: John Benesh: "mediator approved by the Law Society of Saskatchewan" | http://benesh.com/index.php/services/mediation |
Divorce mediation on Long Island is a process by which both spouses work through their differences and come to a mutually beneficial agreement. It is not a court procedure, and there is minimal risk involved. The parties can speak freely, and their emotions will be respected. In mediation, the issues will be narrowed down so that the process can be completed without any unnecessary tension. This allows both parties to maintain a sense of emotional calm and appropriate connectedness throughout the process.
In a typical divorce mediation session, the couple and the mediator will discuss the issues they want to address and decide on the order in which they want to discuss them. The mediator will gather pertinent financial data and opinions of experts, including accountants and appraisers. In addition, the couple will have the opportunity to discuss their own issues and get their spouse’s input. A divorce mediator will have a good idea of where each party is willing to compromise.
The mediator will not let one party dominate the other. If one party is not able to communicate effectively, the mediator will stop the mediation. However, this doesn’t mean that the mediator is not a reliable source of legal advice. Both parties should obtain independent legal advice during the process. In addition, they should hire a lawyer to review the agreement before signing it. If one party doesn’t understand the legal terms of a proposed settlement, the mediator cannot give them legal advice.
When it comes to splitting property, divorce mediation is a great option for families with children. While it involves its own costs, it may still be the best option for many people. Ultimately, mediation is an effective way to reach a divorce settlement, and it is generally much less stressful and costly than a trial. Many couples choose to resolve their issues in this less stressful manner rather than in a courtroom.
Divorce mediation sessions can last anywhere from one to three hours. The time it takes to complete a divorce is determined by the couple’s schedule. A couple may have several sessions a month, while others can only schedule one or two every other week. The duration of the process depends on the couple and their schedule. For instance, some couples cannot attend more than one session per month. While others can schedule multiple sessions, they should ensure that there’s enough time in between each session to reflect and consult with their attorneys.
During the process of a divorce, both parties will work with a mediator to come to a mutually beneficial agreement. They will learn about the process of divorce and how it can benefit them in the long run. They will be able to decide whether they want to work together or separate. It is important to note that divorce mediation is a complex process, and that a divorce can take up to three months. The process takes between one and five sessions.
Divorce mediation involves a mediator who serves as a third-party. The mediator keeps the discussions focused and asks questions to gather more information. The mediator summarizes the discussions to ensure that everyone understands what is being discussed. The mediator will help the couple identify the areas of agreement and disagreement. If the couple is able to agree on many points, mediation can be an effective tool. In some cases, however, the spouses may be unable to reach a mutually beneficial agreement.
Although divorce is a difficult process, it does not have to be a drawn-out legal battle. Through mediation, the parties can make practical decisions regarding the division of property and children. This reduces conflict and promotes cooperation, which can lead to a successful outcome for both parties. Further, it can also be cheaper than a court-ordered divorce. It also allows both parties to avoid embarrassing situations that may occur in a courtroom.
The process of divorce mediation involves the two parties working out agreements outside of the courtroom. In divorce mediation, the parties work out their differences with the guidance of a neutral mediator rather than through litigation. In this way, both sides can get a fresh start in their lives. Moreover, they can avoid the hassle of going through a trial. Even if the spouses cannot immediately reach an agreement, they will be able to communicate and stay focused. The mediator will facilitate the process of settling the matter and help them reach a mutually beneficial agreement. | https://horizonhumanservices.org/divorce-mediation-what-you-should-know-about-affordable-divorce-mediation-on-long-island/
The Crucial Role of Environmental Coronas in Determining the Biological Effects of Engineered Nanomaterials.
- Medicine, Chemistry
- Small
- 2020
Molecular mechanisms triggered by pristine and environmental corona-coated ENMs are compared, including membrane adhesion, membrane damage, cellular internalization, oxidative stress, immunotoxicity, genotoxicity, and reproductive toxicity.
Nanoplastics affect moulting and faecal pellet sinking in Antarctic krill (Euphausia superba) juveniles.
- Medicine, Chemistry
- Environment international
- 2020
The presence of fluorescent signal in krill faecal pellets (FPs) confirmed the waterborne ingestion and egestion of PS-COOH at 48 h of exposure and changes in FP structure and properties were also associated to the incorporation of PS NPs regardless of their surface charge.
Nanoplastics affect moulting and faecal pellet sinking in Antarctic krill (Euphausia superba) juveniles
- 2020
Plastic debris has been identified as a potential threat to Antarctic marine ecosystems, however, the impact of nanoplastics (< 1 μm) is currently unexplored. Antarctic krill (Euphausia superba) is a…
Impact of nanoplastics on hemolymph immune parameters and microbiota composition in Mytilus galloprovincialis.
- Medicine, Biology
- Marine environmental research
- 2020
The results indicate that exposure to nanoplastics can impact on the microbiome of marine bivalves, and suggest that downregulation of immune defences induced by PS-NH2 may favour potentially pathogenic bacteria.
Environmental dimensions of the protein corona
- Medicine
- Nature Nanotechnology
- 2021
The emerging understanding of the importance of the dynamic and evolving protein corona composition in mediating the fate, transport and biological identity of nanomaterials in the environment is presented.
Toxicity of Carbon, Silicon, and Metal-Based Nanoparticles to the Hemocytes of Three Marine Bivalves
- Medicine, Chemistry
- Animals : an open access journal from MDPI
- 2020
The highest aquatic toxicity was registered for metal-based NPs, which caused cytotoxicity to the hemocytes of all the studied bivalve species, and the results highlighted different sensitivities of the tested mollusc species to specific NPs.
Eco-Interactions of Engineered Nanomaterials in the Marine Environment: Towards an Eco-Design Framework
- Medicine
- Nanomaterials
- 2021
An ecologically based design strategy (eco-design) is proposed to support the development of new ENMs, including those for environmental applications, by balancing their effectiveness with no associated risk for marine organisms and humans.
Behavior and Bio-Interactions of Anthropogenic Particles in Marine Environment for a More Realistic Ecological Risk Assessment
- Environmental Science
- Frontiers in Environmental Science
- 2020
Owing to production, usage, and disposal of nano-enabled products as well as fragmentation of bulk materials, anthropogenic nanoscale particles (NPs) can enter the natural environment and through…
In vivo protein corona on nanoparticles: does the control of all material parameters orient the biological behavior?
- Chemistry
- 2021
Nanomaterials have a huge potential in research fields from nanomedicine to medical devices. However, surface modifications of nanoparticles (NPs) and thus of their physicochemical properties failed…
References
SHOWING 1-10 OF 83 REFERENCES
Cationic polystyrene nanoparticle and the sea urchin immune system: biocorona formation, cell toxicity, and multixenobiotic resistance phenotype
- Chemistry, Medicine
- Nanotoxicology
- 2018
Overall results encourage additional studies on positively charged nanoplastics, since the observed effects on sea urchin coelomocytes as well as the TPP corona formation might represent a first step for addressing their impact on sensitive marine species.
Secreted protein eco-corona mediates uptake and impacts of polystyrene nanoparticles on Daphnia magna.
- Biology, Medicine
- Journal of proteomics
- 2016
It is demonstrated for the first time that proteins released by Daphnia magna create an eco-corona around polystyrene NPs which causes heightened uptake of the NPs and consequently increases toxicity.
Interactions of cationic polystyrene nanoparticles with marine bivalve hemocytes in a physiological environment: Role of soluble hemolymph proteins.
- Biology, Medicine
- Environmental research
- 2016
The influence of hemolymph serum (HS) on the in vitro effects of amino modified polystyrene NPs (PS-NH2) on Mytilus hemocytes was investigated and represents the first evidence for the formation of a NP bio-corona in aquatic organisms.
Amino-modified polystyrene nanoparticles affect signalling pathways of the sea urchin (Paracentrotus lividus) embryos
- Biology, Medicine
- Nanotoxicology
- 2017
It is highlighted that PS-NH2 are able to disrupt sea urchin embryos development by modulating protein and gene profile providing new understandings into the signalling pathways involved.
Species differences take shape at nanoparticles: protein corona made of the native repertoire assists cellular interaction.
- Biology, Medicine
- Environmental science & technology
- 2013
It is found that over time silver nanoparticles can competitively acquire a biological identity native to the cells in situ even in non-native media, and significantly greater cellular accumulation of the nanoparticles was observed with corona complexes preformed of EfCP (p < 0.05).
Comparative ecotoxicity of polystyrene nanoparticles in natural seawater and reconstituted seawater using the rotifer Brachionus plicatilis.
- Biology, Medicine
- Ecotoxicology and environmental safety
- 2017
The findings confirm the role of surface charges in nanoplastic behaviour in salt water media and provide a first evidence of a different toxicity in rotifers using artificial media (RSW) compared to natural one (NSW).
Accumulation and embryotoxicity of polystyrene nanoparticles at early stage of development of sea urchin embryos Paracentrotus lividus.
- Biology, Medicine
- Environmental science & technology
- 2014
In line with the results obtained with the same PS NPs in several human cell lines, also in sea urchin embryos, differences in surface charges and aggregation in seawater strongly affect their embryotoxicity.
Toxicity of metal oxide nanoparticles in immune cells of the sea urchin. | https://www.semanticscholar.org/paper/Proteomic-profile-of-the-hard-corona-of-charged-to-Grassi-Landi/67d8a2109a5ec671f159a64e991d1e4ce241a1ce?p2df |
INCLUDES GUIDED MEDITATION TO HELP DEVELOP COMPASSION
Compassion is what connects us as human beings. It’s what enables us to learn from situations and takes us on a journey to a place of hope for the future.
The other day I was talking to a beautiful lady who’d do anything for you. She mentioned that she had the hardest time doing any type of work that involved showing love and compassion to herself.
Looking at the old adage “As above, so below. As within, so without” – how can we create a society, or a culture based on love and compassion, when we are incapable of showing love and compassion to ourselves?
Firstly, let’s get really clear on what compassion is and then we’ll delve into the why and the how.
According to the Cambridge dictionary, compassion can be defined as: “A strong feeling of sympathy and sadness for the suffering or bad luck of others and a wish to help them.”
But compassion is not necessarily an automatic response to seeing someone in pain or suffering in some way. It is a response that occurs mainly when the situation is perceived as serious or unjust and, most importantly, when it is relatable to us on an individual level.
I have been through the experience of having lost an immediate member of my family. My dad passed away and left a gaping hole in my life. So, I find it easier to relate to other people who are going through, or have just gone through a similar situation.
That is not to say you don't feel sadness for people who are going through a different situation. I just don't think you feel it as strongly as when you can relate to it through your own experience.
Why be compassionate?
It’s simple: people need people. It is through relationships that we learn more about ourselves, about the human race in general and, most importantly, about happiness. We cannot do everything alone, and it’s through feeling compassion for others that we become inspired to help them. It is through a compassionate heart that we connect with others and develop lasting, meaningful relationships.
“If you want others to be happy, practice compassion. If you want to be happy, practice compassion.” -Dalai Lama
All of us want to feel happy; we all try, on a subconscious level, to stay away from and avoid pain or suffering. It is a natural instinct we are all born with. Every action we take is based on the thought of how much pain we will avoid and how much happiness we will experience. This can lead to a selfish attitude based around "I" and "me", but as we grow emotionally, mentally and spiritually, we find the need to look more at the "we".
Here are 7 powerful benefits of compassion:
- It opens your middle hara which helps you to connect to your true intention
- It helps you to become clearer on who you are as you discover the things you have in common with others. You realise that, just like you, they experience suffering. You also become more aware of your strengths and weaknesses with acceptance and love, not judgement.
- Compassion increases your happiness, fulfilment, and wellbeing. With your vibration set to a compassionate level, there is less space for negativity and the “I” factor.
- It enables you to become better connected to both yourself and others, improving your social, ecological, and spiritual relationships.
- Studies show that compassion improves your health by strengthening your immune system. It also helps to normalise your blood pressure, lowers your stress and depression, and helps improve your physical recovery from illness. As you build a compassion-based vibration within your energy fields, your healthy (life-giving) energy increases within your body. Within the Japanese Reiki system, this energy is called Genki. And it is Genki that nourishes our organs, tissues and cells and keeps us alive and full of energy. When we have a lot of Genki, our physical health is high, we feel good about ourselves and others, we are confident and able to accomplish things easily, and we are less likely to get sick. When you move out of a state of compassion, into one of selfishness, you deplete your Genki and fall out of balance, allowing the formation of Byôki (unbalanced, stagnated energy) which can lead to dis-ease (illness).
- Compassion increases the possibilities for peace and reconciliation where there is conflict. Anger only exists in the body for 2 seconds as a chemical reaction. Any anger after this is a choice you have made to keep feeling angry. Anger held within the body only ends up creating imbalance and sickness within yourself. It is better to let the anger go and focus on compassion in order to move forward.
- It is contagious and spreads outwards, inspiring others to greater acts of gentleness and kindness. Imagine you are a lighthouse, firm and steady, shining out your light of compassion. Other “ships” or people in need of some love and compassion can see your steady light and be guided by it, ultimately becoming lighthouses themselves and shining their light out into the world.
HOW TO DEVELOP COMPASSION
To be compassionate we need to remove anger and hatred from our thoughts and our vocabulary. Life happens, but we get so focused on what is happening around us, that we become distracted. The distraction of our own emotions causes a kind of veiling that prevents us from observing ourselves carefully. We do not give ourselves enough time and space to use our innate wisdom to observe ourselves before we act.
According to the Dalai Lama: “The more we care for the happiness of others, the greater our own sense of well-being becomes. Cultivating a close, warm-hearted feeling for others automatically puts the mind at ease. This helps remove whatever fears or insecurities we may have and gives us the strength to cope with any obstacles we encounter. It is the ultimate source of success in life.”
Practise time…
Start your day with intent and your day will flow in the way you desire it to.
My Perfect morning routine:
- Drink water – nourish your cells, flush out toxins accumulated overnight, balance your emotions.
- Journal – write down your dreams, but for me most importantly, list 10 NEW things you are grateful for.
- Practise compassion meditation (see the guided meditation included with this post)
- Exercise – I prefer to do Qigong in the morning to get my circulation moving, keep limber, stay healthy, and I also get to keep my meditative mindset going.
- Eat a healthy breakfast – you are breaking the fast from the last meal (dinner), you need food that will nourish and feed you, it is the most important meal of the day.
- Write a to-do list – not only does this avoid overwhelm, but you also get to tick things off the list which makes you feel super productive and inspired.
Remember:
Try to treat whoever you meet as an old friend. This gives you a genuine feeling of happiness. It is the practice of compassion.
The one thing I always try and adhere to is:
“Before you speak, let the words pass through 3 gates:
- Is it true?
- Is it necessary?
- Is it kind? | https://meditationsinmotion.com/7-powerful-benefits-of-compassion/ |
All relevant data are included within the paper.
Introduction {#sec001}
============
The disciplines of animal behaviour and behavioural ecology are essential tools for the implementation of reliable conservation measures \[[@pone.0147404.ref001], [@pone.0147404.ref002]\]. Currently, many species are put at risk by many threats that are often linked to human activity and the lack of knowledge concerning their biology makes it difficult to protect them \[[@pone.0147404.ref003]\]. The development of road and rail networks has increasingly fragmented and degraded natural habitats \[[@pone.0147404.ref004]\]. Populations are thus isolated from each other, impairing the genetic variability and as a consequence their survival \[[@pone.0147404.ref001]\]. We therefore need to understand the different social behaviours, movement patterns and habitat use of species if we wish to effectively predict the impact of environmental changes on populations and limit their effects on them \[[@pone.0147404.ref001]\].
Although large herbivores can have negative effects on the richness and availability of plant species, especially when they overcrowd their environment \[[@pone.0147404.ref005], [@pone.0147404.ref006], [@pone.0147404.ref007]\], they are also known to have a positive influence at both local and regional scales via local disturbances, selective grazing or even effective seed dispersion \[[@pone.0147404.ref008]\]. Despite this major role in the ecosystem, several large herbivores are endangered and registered on the IUCN Red List: in 2002, 84 of an estimated total of 175 ungulate species were critically endangered, endangered or vulnerable \[[@pone.0147404.ref008]\]. The European bison (*Bison bonasus*), the largest herbivorous mammal on the European continent \[[@pone.0147404.ref009]\], is one such species. Historically, it was distributed throughout western, central and south-eastern Europe, as well as the Caucasus \[[@pone.0147404.ref010]\]. Just two wild populations remained at the end of the 19^th^ Century: one in the Bialowieza forest (located on the border between Poland and Belarus) and the other in the western Caucasus Mountains \[[@pone.0147404.ref011]\]. Shortly after World War I, the species was considered extinct in the wild and the captive population consisted of just 54 specimens (29 males and 25 females). Thirteen of these surviving individuals were used for breeding purposes, hence creating a new basic genetic reservoir from which all the current populations of European bison were reconstituted \[[@pone.0147404.ref011], [@pone.0147404.ref012]\]. Despite increasingly numerous conservation efforts, European bison populations remain vulnerable, and are still listed as a protected species in the Berne Convention and the Habitat Directive \[[@pone.0147404.ref011]\].
As underlined by the European Bison Conservation Center, it is essential to gain a better understanding of the ecology of European bison through the study of their daily and seasonal activity, their movement patterns and their preferred habitats \[[@pone.0147404.ref011]\]. Although great progress has been made in this domain, the majority of studies carried out to date concerns bison populations in Białowieża Forest \[[@pone.0147404.ref013], [@pone.0147404.ref014]\], in regions of Eastern Europe \[[@pone.0147404.ref015], [@pone.0147404.ref016], [@pone.0147404.ref017], [@pone.0147404.ref018]\] or in the Carpathian Mountains \[[@pone.0147404.ref019], [@pone.0147404.ref020], [@pone.0147404.ref021]\]. It therefore appears essential to conduct systematic studies on populations located in areas further south in order to collect as much information as possible about bison ecology and life habits, in different habitats with different environmental conditions. This will not only improve the effective conservation and management of bison, but also their coexistence with humans, in particular by avoiding damage to agricultural and forestry plots after reintroduction through a better selection of the most suitable habitats for their biological needs \[[@pone.0147404.ref003], [@pone.0147404.ref022]\].
The impact animals can have on their environment is particularly dependent on their patterns of distribution and their abundance \[[@pone.0147404.ref023], [@pone.0147404.ref024], [@pone.0147404.ref025]\]. For example, the group size can influence their capacity to explore the environment and thus lead to group cohesion constraints \[[@pone.0147404.ref026]\]. Furthermore, differences between males and females often result in sexual segregation and fission-fusion dynamics in species showing high sexual dimorphism for nutrient requirements, activity budgets or exposure to predation \[[@pone.0147404.ref027], [@pone.0147404.ref028], [@pone.0147404.ref029], [@pone.0147404.ref030]\]. This fission-fusion dynamics, which can also be influenced by food availability, has been described in European bison \[[@pone.0147404.ref031]\], American bison (*Bison bison*) \[[@pone.0147404.ref025], [@pone.0147404.ref032]\] and African buffalo (*Syncerus caffer*) \[[@pone.0147404.ref033]\], but further studies are required to better understand its coevolution alongside environmental conditions (e.g. weather, habitat types, management choice, etc).
In this study, we focused on the daily activity of a semi-free-ranging herd of European bison in order to understand how its life habits and movement patterns change according to environmental and management conditions. European bison lives in herds composed of up to thirty individuals, and the average size and structure of these herds are environment dependent \[[@pone.0147404.ref011]\]. Groups can merge or split according to the season, the availability and distribution of resources or even the presence of predators \[[@pone.0147404.ref031], [@pone.0147404.ref034]\]. Mixed groups are composed of cows and calves with the temporary presence of adult males, while male groups contain about two adult males \[[@pone.0147404.ref031], [@pone.0147404.ref034]\]. However, more than half of the males are solitary and only join the main group during the rut period. Our main aim was to evaluate how the fission-fusion dynamics, habitat use and daily activity of our bison herd would be influenced by the provision of hay and by environmental conditions. When natural food resources become scarce during winter, the reserve places artificial feeding patches, dispersed across the territory. We would therefore expect bison to form separate groups of variable size around the artificial feeding sites during this season \[[@pone.0147404.ref011], [@pone.0147404.ref034], [@pone.0147404.ref035]\]. With the arrival of spring, the melting of snow gives better access to the grass and consequently increases the abundance of natural food resources. We would therefore expect to observe the bison regrouping in the meadow \[[@pone.0147404.ref034]\]. The animal movement patterns, namely Brownian or Lévy walks \[[@pone.0147404.ref036], [@pone.0147404.ref037], [@pone.0147404.ref038]\], would be expected to be mainly connected with feeding activity, playing an essential role in ensuring the optimum use of food resources. For many researchers, the Lévy walk is an optimal food search strategy used in heterogeneous environments with a low density of food patches, whereas the Brownian walk is mainly associated with the presence of abundant food resources \[[@pone.0147404.ref036], [@pone.0147404.ref038], [@pone.0147404.ref039], [@pone.0147404.ref040]\]. Our second objective was therefore to better understand the food search strategy of bison by studying their movement pattern. We hypothesize that bison use a Lévy walk movement given the relatively heterogeneous distribution of food patches, especially during the winter period.
Materials and Methods {#sec002}
=====================
Ethics Statement {#sec003}
----------------
The reserve has an approval to possess and breed European bison (certificate number: FR 00004165). This study was carried out by directly observing the animals, which were not subjected to any handling or invasive experiments. Our study was carried out in full accordance with the ethical guidelines of our institution (*Institut Pluridisciplinaire Hubert Curien*) and of the reserve, and complied with European animal welfare legislation. Every effort was made to ensure the welfare of the animals and minimize disturbance by the researchers.
Study area {#sec004}
----------
The study was carried out from 16 February to 26 April 2013 at the *Reserve Biologique* des *Monts-d'Azur*, located at the *Domaine du Haut-Thorenc* (43°48'20"N, 6°50'43"W), in the *Alpes-Maritime* region of France. This fenced reserve of about 700 hectares, located at an altitude of around 1200m, comprises seven different habitats: pine forest (51%, predominantly composed of Scots pine trees), meadow (33%, composed mainly of herbaceous vegetation, predominantly grasses), boxwood (12%), wet meadow (2%, grassed area flooded for part of the year), limestone grassland (2%, characterized by herbaceous perennial plants growing on a calcareous soil) and two water sites including an artificial lake. The *Reserve Biologique des Monts-d\'Azur* is part of the original range of the European bison \[[@pone.0147404.ref011]\], and its climate and plant species make it suitable for the reintroduction of these animals.
Study subjects {#sec005}
--------------
The semi-free-ranging herd of bison living on the reserve consists of 43 individuals: 3 adult males aged from 6 to 9 years, 13 sub-adult males aged from 1 to 3 years (one of which died in March 2013), 10 adult females aged from 7 to 13 years, and 8 sub-adult females aged from 1 to 3 years. The herd also includes 9 juveniles approximately one year old (3 males and 6 females). These bison arrived at the reserve in 2005 and 2006 following selection by the coordinator for the European Breeding Program for European Bison in order to ensure a natural sex ratio.
As the fenced area of the reserve is not large enough to provide sufficient resources during winter and given the possibility of deep snow covering the ground during this period, bison were supplemented with hay twice a week, from November 2012 to April 2013, to decrease their mortality rate and ensure their survival in the cold season. In particular, 10 racks were set up as sources of extra food (2 hay bales per rack). They were located along the edge of the pine forest from east to west.
Data collection {#sec006}
---------------
We observed the bison herd for an average of 4 hours per day from 10 a.m. to 12 p.m. and from 4 p.m. to 6 p.m., for a total observation time of approximately 153h, with a comparable number of sessions in the morning (n = 42) and the afternoon (n = 41) for the entire study period. The observer (A.R.) was located approximately 20--25 m from the animals. This distance ensured that the animals, which were habituated to human presence, did not express stress behaviours \[[@pone.0147404.ref041]\]. Before each observation session, the outdoor temperature (°C) was evaluated using the thermometer of a DC360BL digital compass (Fotronic Corporation, Melrose, Massachusetts), and when snow was present, snow depth was measured in the meadow (average = 28.2 ± 18.3 cm).
The daily activity of bison was recorded using the instantaneous sampling method \[[@pone.0147404.ref042]\] to collect the following parameters every 10 minutes: (1) the number of moving individuals, (2) the number of foraging individuals, (3) the number of standing individuals (with no specific activity), (4) the number of lying down individuals, (5) the number of isolated individuals (located more than 5 m from any congener), and (6) the number of individuals in social interaction. The social interactions that were taken into account included battling (usually males), suckling, social play and sexual behaviours. To better understand the social behaviour of bison, we also determined the dispersion state of the herd every 10 minutes. The bison were considered to be grouped when individuals were less than 5 m apart and dispersed when two thirds of the individuals were more than 5 m apart. When the minimum distance between individuals belonging to sub-groups was over 5 m (but individuals inside sub-groups were less than 5 m from each other) the state was considered as \"sub-grouped.\" If the dispersion state did not correspond to one of the three states described above, the herd dispersion was considered as non-defined. This criterion of 5 m was estimated based on bison body length. A similar criterion was used for studies in other social ungulates species, especially cattle \[[@pone.0147404.ref043]\]. We noted the type of habitat (including hay rack sites) occupied by the majority of bison (i.e., the main group), and its GPS position (altitude, longitude and latitude). Finally, we used GPS points to study step length (distance over a period of 10 minutes, in km), the time between two successive movements (sec) and the angles between successive movements (degrees). Step lengths were calculated according to the interval between the coordinates of two consecutive points. Thus, the step length value was zero when there was no movement. The time between two movements (waiting time) was the time during which individuals had not moved (sum of same GPS points performed between two movements). The angle between two consecutive movements (turning angle) corresponded to the angle formed between the two vectors of successive movements, the latter being determined using three successive GPS points.
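For illustration, a minimal sketch of how such movement metrics can be derived from successive (lat, lon) fixes is given below in Python; the coordinates are hypothetical and the code is not the software pipeline used in the study.

```python
import math

def step_length_km(p1, p2):
    """Haversine (great-circle) distance between two (lat, lon) fixes, in km."""
    R = 6371.0  # mean Earth radius, km
    lat1, lon1, lat2, lon2 = map(math.radians, (*p1, *p2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def turning_angle_deg(p1, p2, p3):
    """Angle (degrees) between the movement vectors p1->p2 and p2->p3,
    using a local flat-earth approximation (adequate over a ~700 ha reserve)."""
    scale = math.cos(math.radians(p2[0]))              # shrink longitude differences with latitude
    v1 = ((p2[1] - p1[1]) * scale, p2[0] - p1[0])
    v2 = ((p3[1] - p2[1]) * scale, p3[0] - p2[0])
    n1, n2 = math.hypot(*v1), math.hypot(*v2)
    if n1 == 0 or n2 == 0:
        return float("nan")                             # no movement: angle undefined
    cos_a = max(-1.0, min(1.0, (v1[0] * v2[0] + v1[1] * v2[1]) / (n1 * n2)))
    return math.degrees(math.acos(cos_a))

# Hypothetical 10-minute fixes (lat, lon) of the main group
fixes = [(43.805, 6.845), (43.805, 6.845), (43.806, 6.846), (43.807, 6.845)]
steps = [step_length_km(a, b) for a, b in zip(fixes, fixes[1:])]          # zero step = no movement
angles = [turning_angle_deg(*fixes[i:i + 3]) for i in range(len(fixes) - 2)]
```

Waiting times can then be obtained by counting how many consecutive identical fixes separate two successive non-zero steps.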
All the previous cited data were collected using Cyber Tracker 3.0 software (Cyber Tracker Conservation, Bellville, South Africa) in conjunction with a Personal Digital Assistant Trimble Juno^®^ 3B (Trimble Navigation Limited, Westminster, United-States).
Statistical analysis {#sec007}
--------------------
In order to meet our primary objective, i.e. to understand how additional feeding and environmental conditions influence the daily activity, habitat use and sociality of bison, we used Fisher's exact tests to highlight any differences in the percentage of time spent in the different activities, the different frequented habitats, and the different states of dispersion of the herd during the morning and the afternoon and between weather conditions (i.e. the presence and absence of snow). For each type of activity, each type of frequented areas, and each state of dispersion, we ran pairwise comparisons between the two groups of conditions (i.e., morning vs. afternoon and "snow" vs. "no snow") with Bonferroni corrections for multiple testing \[[@pone.0147404.ref044]\]. For each condition, Kruskal-Wallis tests were conducted to determine whether the time spent (% time budget) in each activity, habitat and state of dispersion was homogeneously distributed or not. If the test results were significant, Dunn multiple comparisons tests were run to determine which activities, habitats and states of dispersion differed significantly. These analyses were carried out on 993 scans.
To evaluate habitat use, all GPS points were used to estimate the distribution of the herd in the reserve throughout the study period using the Kernel Density Estimation method (R software, package adehabitatHR) \[[@pone.0147404.ref045]\].
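A minimal stand-in for this analysis, assuming Python with scipy rather than the adehabitatHR package actually used, might look as follows; bandwidth selection and output will differ from kernelUD.

```python
import numpy as np
from scipy.stats import gaussian_kde

def utilization_density(lons, lats, grid_size=200):
    """Kernel density estimate of a utilization distribution from GPS fixes.

    Bandwidth is scipy's default (Scott's rule), which will differ from the
    smoothing parameters chosen by adehabitatHR's kernelUD."""
    kde = gaussian_kde(np.vstack([lons, lats]))
    xi = np.linspace(min(lons), max(lons), grid_size)
    yi = np.linspace(min(lats), max(lats), grid_size)
    xx, yy = np.meshgrid(xi, yi)
    dens = kde(np.vstack([xx.ravel(), yy.ravel()])).reshape(xx.shape)
    return xx, yy, dens / dens.sum()  # normalised so grid cells sum to 1

# Ranking cells by density and accumulating them until 50% of the distribution
# is covered gives the kind of core-area contour reported in the Results.
```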
We then analysed the sociodemographic factors of the herd. The Typical Group Size (TGS), which quantifies group size as experienced by the individual, was calculated as the sum of the squares of the number of individuals in each group, divided by the total number of animals sampled \[[@pone.0147404.ref046]\]. The TGS emphasises how the members of a population associate; this information is not revealed by the arithmetic mean of the size of the groups. The sex ratio of groups was calculated by dividing the number of males (adult and sub-adult) by the total number of adult and sub-adult individuals.
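As a worked illustration of these two indices, the short sketch below computes TGS and per-group sex ratios from made-up group compositions (the numbers are hypothetical, not the observed herd data).

```python
# Hypothetical group compositions: (adult/sub-adult males, adult/sub-adult females)
groups = [(2, 0), (0, 14), (10, 12), (1, 17)]
sizes = [m + f for m, f in groups]

# Typical Group Size: sum of squared group sizes divided by total animals sampled
tgs = sum(n ** 2 for n in sizes) / sum(sizes)

# Sex ratio of each group: males / (males + females), adults and sub-adults only
sex_ratios = [m / (m + f) for m, f in groups]

print(round(tgs, 1), [round(r, 2) for r in sex_ratios])
```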
In order to assess how animals move in their territory and identify their foraging strategy, Kolmogorov-Smirnov testing was used to test the uniformity of angle distribution between two movements after correcting to allow for the use of this test on angles, i.e. absolute frequencies. A curve estimation test (linear, polynomial and exponential) was then performed to assess the movement distribution. We investigated results for functions that best explained the data distribution. Two hypotheses, the Lévy walk and the Brownian walk, were tested for step length distribution and time between two movements. The Lévy walk hypothesis is characterized by a power curve and distributions that show many short step lengths and some long step lengths; this pattern indicates some optimization of movements or food searching behaviour \[[@pone.0147404.ref036], [@pone.0147404.ref038]\]. The Brownian walk hypothesis is characterized by an exponential curve and by random and non-optimized movements. We tested for the predominance of either a Lévy walk (power distribution, equation 1: y = a.x^μ^) or a Brownian walk (exponential distribution, equation 2: y = a.exp^l.x^) in European bison. We checked the form of the distributions via the maximum likelihood method (MLE) \[[@pone.0147404.ref047], [@pone.0147404.ref048]\], which involves calculating the exponent of the distribution (i.e., power or exponential in the case of the current study) to calculate the log likelihood of the distribution. Log likelihoods for the exponential or power distribution can then be compared for different step lengths using the Akaike Information Criterion (AIC). One AIC value was calculated for each hypothesis (exponential or power), and we retained the hypothesis with the lowest AIC. A detailed method for calculating MLE and AIC are described in Sueur et al. \[[@pone.0147404.ref038]\]. AIC and the different estimates for parameters associated with power (exponent μ) and exponential (exponent *l*) were obtained using the fitdistr() and mle() functions respectively for the MASS and stats4 packages of the statistical software program R.
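The logic of this model comparison can be sketched as follows, assuming Python with closed-form maximum-likelihood estimators for a truncated power law and a shifted exponential (one free parameter each) instead of the fitdistr()/mle() calls used in the study; the estimates will therefore differ, but the decision rule of retaining the hypothesis with the lower AIC is the same.

```python
import numpy as np

def compare_levy_vs_brownian(steps_km):
    """Fit power-law (Levy-like) and exponential (Brownian-like) models to
    step lengths by maximum likelihood and compare them with AIC."""
    x = np.asarray([s for s in steps_km if s > 0], dtype=float)
    xmin = x.min()
    n = len(x)

    # Power law: f(x) = ((mu - 1) / xmin) * (x / xmin) ** (-mu), for x >= xmin
    mu = 1.0 + n / np.sum(np.log(x / xmin))
    ll_pow = n * np.log((mu - 1.0) / xmin) - mu * np.sum(np.log(x / xmin))

    # Exponential: f(x) = lam * exp(-lam * (x - xmin)), for x >= xmin
    lam = 1.0 / (x.mean() - xmin)
    ll_exp = n * np.log(lam) - lam * np.sum(x - xmin)

    aic = {"power (Levy walk)": 2 - 2 * ll_pow,
           "exponential (Brownian walk)": 2 - 2 * ll_exp}
    best = min(aic, key=aic.get)   # retain the hypothesis with the lowest AIC
    return mu, lam, aic, best
```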
All statistical analyses were performed using R 3.0.1 software (R foundation for Statistical Computing, 2013, Vienna, Austria). For daily activity, habitat use and dispersion state, the histograms represent the mean ± standard error. The results of pairwise comparisons tests (between two conditions) are only shown in the figures when the Fisher's exact tests are significant.
Results {#sec008}
=======
Daily activity {#sec009}
--------------
No significant difference in bison activity was observed between the morning and the afternoon (Fisher's exact test: *P* = 0.731). The time budget, however, is not homogeneously distributed among the different activity categories (Kruskal-Wallis tests: H~morning~ = 1326, df = 3, *P* \< 0.001; H~afternoon~ = 1364, df = 3, *P* \< 0.001, [Fig 1A](#pone.0147404.g001){ref-type="fig"}). In the morning, bison spent significantly more time in resting activity than foraging, moving and being involved in social interactions (Dunn\'s nonparametric multiple comparisons test, *P* \< 0.05). We obtained the same tendency for the afternoon, but with an equal amount of time spent foraging and resting.
{#pone.0147404.g001}
Although the presence or absence of snow was not seen to make any significant difference to bison activities (Fisher's exact test: *P* = 0.957), the time budget allocated to each activity is not uniformly distributed (Kruskal-Wallis tests: H~snow~ = 910, df = 3, *P* \< 0.001; H~no\ snow~ = 1756, df = 3, *P* \< 0.001, [Fig 1B](#pone.0147404.g001){ref-type="fig"}). Indeed, when there was snow, individuals spent more time resting than foraging, moving, and displaying social interactions (Dunn\'s nonparametric multiple comparisons test, *P* \< 0.05). In the absence of snow, we observed that the time spent foraging was equivalent to the time spent resting.
Habitat use {#sec010}
-----------
The overall distribution of animals throughout the study period as determined by kernel density estimation shows that bison spent more than 50% of their time at the feeding racks (core area) and in the plots of meadow located nearby ([Fig 2](#pone.0147404.g002){ref-type="fig"}).
{#pone.0147404.g002}
One significant difference in the habitats frequented by bison between the morning and the afternoon (Fisher's exact test: *P* \< 0.001, [Fig 3](#pone.0147404.g003){ref-type="fig"}) is that individuals visited the feeding racks significantly more often during the afternoon than in the morning (Mann-Whitney test: U~hay~ = 66.50, n~morning~ = 42, n~afternoon~ = 41, *P* ≤ 0.006). No significant difference in habitat use was observed for the other habitats (Mann-Whitney tests: U \< 861, n~morning~ = 42, n~afternoon~ = 41, *P* \> 0.083). In addition, the time spent in each habitat is not homogeneously distributed, either in the morning or the afternoon (Kruskal-Wallis tests: H~morning~ = 79, df = 4, *P* \< 0.001; H~afternoon~ = 83, df = 4, *P* \< 0.001). In the morning, the animals spent significantly more time in the meadow than in the pine forest, at the feeding racks, in the wet meadow or at the water sites (Dunn\'s nonparametric multiple comparisons test, *P* \< 0.05). During the afternoon, individuals were mainly observed close to the racks and in the meadow rather than in the pine forest, the wet meadow, or at the water sites (Dunn\'s nonparametric multiple comparisons test, *P* \< 0.05).
{#pone.0147404.g003}
Bison used different habitats according to "snow" or "no snow" conditions (Fisher's exact test: *P* \< 0.001). They were more frequently observed near the feeding racks in the presence of snow (Mann-Whitney test: U~hay~ = 152, n~snow~ = 16, n~no\ snow~ = 32, *P* = 0.023). Concerning the other habitats, no significant difference was found between the "snow" and "no snow" conditions (Mann-Whitney tests: U \< 248, n~snow~ = 16, n~no\ snow~ = 32, *P* \> 0.05).
Dispersion state {#sec011}
----------------
The time spent in each state is not distributed homogeneously during the day (Kruskal-Wallis tests: H~morning~ = 57, df = 2, *P* \< 0.001; H~afternoon~ = 82, df = 2, *P* \< 0.001): bison spent more time in sub-groups (70 to 80% of their time) than in "dispersed" and "grouped" states ([Fig 4A](#pone.0147404.g004){ref-type="fig"}). We found no difference between the morning and the afternoon (Fisher's exact test: *P* ≤ 0.153). However, the states of dispersion differ between the "snow" and "no snow" conditions (Fisher's exact test: *P* \< 0.001, [Fig 4B](#pone.0147404.g004){ref-type="fig"}). The animals were more often grouped when there was no snow than in the presence of snow (Mann-Whitney test: U~grouped~ = 169, n~snow~ = 16; n~no\ snow~ = 32, *P* = 0.035) but no significant difference was observed between these two conditions for the "sub-grouped" and "dispersed" states (Mann-Whitney tests: U \< 248, n~snow~ = 16; n~no\ snow~ = 32, *P* \< 0.110).
{#pone.0147404.g004}
Typical group size and sex ratio {#sec012}
--------------------------------
Bison formed groups of highly variable sizes over time ([Fig 5A](#pone.0147404.g005){ref-type="fig"}) with an average of 24.9 ± 8.8 individuals. The distribution of group sizes follows a parabolic function (R² = 0.7, F~3,42~ = 81.8, *P* \< 0.001, y = -0.084x^2^ + 4.119x - 14.214) and does not follow a normal distribution (t = 9.042, *P* \< 0.001). Groups composed of low and high numbers of individuals were rarely observed compared to groups of intermediate size. The TGS was 28 individuals. High sexual segregation was revealed, since the distribution best follows a polynomial function of degree 4 (R² = 0.9, F~1,9~ = 139, *P* \< 0.001, y = 0.0012x^4^--0.028x^3^ + 0.221x^2^--0.693x + 0.759) showing three peaks ([Fig 5B](#pone.0147404.g005){ref-type="fig"}). We observed higher frequencies for exclusively male (sex ratio = 1) and female (sex ratio = 0) groups. Mixed groups were also observed, but with intermediate frequencies (average relative frequency: 0.05 ± 0.04), indicating a fission-fusion phenomenon.
{#pone.0147404.g005}
Patterns of movements {#sec013}
---------------------
The distribution of step lengths (absolute frequencies, km) follows a power function better than an exponential function (R²~power~ = 0.9, R²~exponential~ = 0.6, AIC~power~ = 5482.16 \< AIC~exponential~ = 9208.33, y = 0.546x^-1.396^, y = 18.864e^-4.429x^, [Fig 6A](#pone.0147404.g006){ref-type="fig"}). Individuals seem to follow a Lévy walk movement rather than a Brownian walk movement type. Similarly, the distribution of stationary time between two movements (sec) follows a power function (R²~power~ = 0.9, R²~exponential~ = 0.8, AIC~power~ = 727.95 \< AIC~exponential~ = 992.78, y = 11414x^-2.032^, y = 61.217e^-0.046x^, [Fig 6B](#pone.0147404.g006){ref-type="fig"}).
{#pone.0147404.g006}
The distribution of the angles between two movements (measured in degrees) is not uniform (Kolmogorov-Smirnov test: Z = 6.27, *P* \< 0.001). Indeed, it follows a parabolic function (R² = 0.53, F~1.180~ = 65.21, *P* \< 0.001, y = 0.0007x² - 0.098x + 6.310, [Fig 7](#pone.0147404.g007){ref-type="fig"}), with a higher frequency for angles close to 0° and 180°.
{#pone.0147404.g007}
Discussion {#sec014}
==========
The main goal of this study was to acquire a better understanding of the movement patterns and space use of European bison according to the provision of extra food resources and environmental conditions. We therefore focused on the daily activity of individuals, their preferential use of habitat, the dispersion state of the herd and the pattern of their group movements. These data provide a database that is essential to improving the reintroduction and the long-term management of European bison in European reserves.
We first studied the daily activity of bison by comparing the time budget for various activities between the morning and the afternoon and between \"snow\" and \"no snow\" conditions. We found that individuals spent most of their time resting and foraging, while they invested less time moving or interacting. These results are confirmed by literature showing that the daily activity of European bison is mainly characterized by the alternation of resting and foraging phases \[[@pone.0147404.ref035], [@pone.0147404.ref049]\]. Similar results have also been found for American bison where these two activities represented on average 87.9% of the time budget of a group \[[@pone.0147404.ref050]\]. In domestic cattle, domestic goat (*Capra aegagrus hircus*) and the chamois (*Rupicapra rupicapra*), foraging and resting phases represent 89%, 80% and 92% respectively of their daily time budget \[[@pone.0147404.ref051]\]. This pattern of activity seems to be observed in the majority of ruminants \[[@pone.0147404.ref035], [@pone.0147404.ref052]\] where the resting and foraging phases are interspersed by some movements allowing individuals to change sites to forage and rest or to avoid predators \[[@pone.0147404.ref046], [@pone.0147404.ref051]\].
The activity budget we observed was maintained both in the presence and the absence of snow cover, with a greater time spent resting than spent foraging in the presence of snow. Our results are somewhat different to findings by other studies \[[@pone.0147404.ref011]\]. Indeed, Caboń-Raczyńska et al. found that European bison allocated approximately 60% of their time budget foraging and 30% resting during winter grazing, while they observed the contrary during summer grazing \[[@pone.0147404.ref035], [@pone.0147404.ref049]\]. A similar seasonal influence on these two behaviours has also been described for American bison, for which the time spent foraging increased from summer to winter while resting time decreased \[[@pone.0147404.ref053]\]. These findings conflict with our results. This, and the lack of notable difference between the presence and the absence of snow in our study, could be explained by the observation period, limited to winter and early spring, and also by the food supplement that bison received during a large part of the study. Indeed, the supplementary fodder provision may affect some natural behaviours, especially those linked to foraging. This is confirmed by Rutley et al., who showed that the foraging and resting cycles of American bison were less distinct with food supplement \[[@pone.0147404.ref053]\]. Caboń-Raczyńska et al. have also shown that winter supplementary fodder provision caused an increase in resting time and a decrease in foraging \[[@pone.0147404.ref035], [@pone.0147404.ref049]\]. Finally, snow cover and food supplementation can also be responsible for the low mobility of bison during winter \[[@pone.0147404.ref034]\]. Indeed, our results are similar to those of Caboń-Raczyńska et al., for which bison spent on average 10% of their winter time roaming when their food was supplemented \[[@pone.0147404.ref035], [@pone.0147404.ref049]\].
Bison frequented different habitats in the morning and the afternoon. Individuals were observed more often at the hay racks in the afternoon than in the morning. It is interesting to note that the meadow was one of the busiest habitats, while individuals were rarely observed at water sites whatever the period of day. The general distribution map of animals shows that they predominantly attended the supplied hay sites and surrounding areas of meadow. Our results can mainly be explained by the provision of hay, which was more frequent in the afternoon than in the morning. The influence of food provision seems to be confirmed when the snow melts and the food supply decreases and stops; animals then spent significantly less time near the racks and were mainly observed in the meadow. This is unusual because the European bison is often described as a forest species showing a preference for deciduous or mixed forests \[[@pone.0147404.ref022], [@pone.0147404.ref054]\]. However, recent studies suggest that the European bison originally lived in relatively open areas and that its survival in our contemporary forests would therefore be an adaptation to environmental changes and human pressure \[[@pone.0147404.ref003], [@pone.0147404.ref055], [@pone.0147404.ref056]\]. This would define the European bison primarily as a grazer living in a suboptimal habitat \[[@pone.0147404.ref003], [@pone.0147404.ref056]\].
The majority of our data was collected over the winter season and revealed the use of snow by bison for their water needs, hence explaining the low attendance of water sites. This type of behaviour has already been observed in this species \[[@pone.0147404.ref011], [@pone.0147404.ref035]\] and in American bison \[[@pone.0147404.ref041]\]. Additionally, the fact that melting snow saturated the meadow with water during spring probably had a negative influence on the presence of bison at the lake.
By studying the state of dispersion of the herd, we showed that bison spent the majority of their time in sub-groups regardless of the conditions. The results could be explained by the distribution of the racks, which were distributed throughout the reserve and formed separate feeding sites. However, previous studies of the European bison led to similar results. Individuals have been found to form mixed groups of variable size (according to the period of year) and small peripheral groups of males \[[@pone.0147404.ref031], [@pone.0147404.ref034]\]. The presence of this organization in the American bison \[[@pone.0147404.ref041], [@pone.0147404.ref057]\] and in many species of cattle and deer \[[@pone.0147404.ref058]\] also supports our results. Furthermore, previous studies described how these groups frequently meet and split, with some individuals changing groups according to the season \[[@pone.0147404.ref058]\]. This fission-fusion dynamics is observed in many social species; when food resources are limited or unpredictable, groups often divide and decrease competition. The fusion or fission of groups can also be a response to predation pressure or individual nutrient requirements, which can lead to sexual segregation \[[@pone.0147404.ref059]\]. This phenomenon is particularly pronounced in some ungulate species, in which males are considerably larger than females and are therefore less vulnerable to predation and have higher nutritional needs \[[@pone.0147404.ref030]\]. Thus, males and females often move in separate groups, except during the breeding season, when the nutritional requirements does not differ so much between males and females due to the gestation and feeding of calves \[[@pone.0147404.ref030]\]. Sexual dimorphism is not the only hypothesis proposed in the literature to explain sexual segregation, which can be also explained by innate preference for same-sex peers \[[@pone.0147404.ref060]\] or sexual differences in activity budgets \[[@pone.0147404.ref030]\]. For some species, the grouping of congeners with similar needs remains a way to minimize the possible costs of synchrony \[[@pone.0147404.ref061]\].
We finally analysed group movement patterns to investigate whether bison optimize their use of food resources. The distributions of step lengths (km) and of the time between two movements followed a power function, indicating that bison used a Lévy-walk movement pattern, which is characterized by a large number of short movements and a few long ones. Indeed, bison made many short movements around the principal food sites (the racks) and longer movements from one rack to another, because the racks were dispersed throughout the reserve. The distribution of turning angles between two successive movements followed a parabolic curve (many values close to 0° and 180°), indicating that bison performed linear movements with many U-turns. This linearity of movement is probably also related to the location of the hay-provisioning sites, with animals moving mainly from one rack to another and making very few small movements in the meadow or the forest.
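To make the movement analysis concrete, here is a minimal sketch of how a step-length distribution could be fitted with a maximum-likelihood power-law exponent and compared against an exponential alternative. It is written in Python with synthetic placeholder data; the `xmin` threshold and the simulated steps are assumptions for illustration only, not the study's actual pipeline.

```python
# Sketch only: power-law (Lévy-walk) vs. exponential fit for step lengths.
import numpy as np

def powerlaw_mle(steps, xmin):
    """ML exponent for a continuous power law P(x) ~ x^-alpha with x >= xmin."""
    x = steps[steps >= xmin]
    n = x.size
    alpha = 1.0 + n / np.sum(np.log(x / xmin))
    return alpha, n

def exponential_mle(steps, xmin):
    """ML rate for an exponential tail shifted to xmin, used as the alternative model."""
    x = steps[steps >= xmin]
    return 1.0 / np.mean(x - xmin), x.size

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Placeholder: replace with observed step lengths (km) between successive fixes.
    steps = 0.05 + 0.05 * rng.pareto(a=1.0, size=500)

    xmin = 0.05  # smallest reliably measured step (assumed)
    alpha, n = powerlaw_mle(steps, xmin)
    lam, _ = exponential_mle(steps, xmin)

    x = steps[steps >= xmin]
    ll_pl = n * np.log((alpha - 1.0) / xmin) - alpha * np.sum(np.log(x / xmin))
    ll_exp = n * np.log(lam) - lam * np.sum(x - xmin)
    print(f"alpha = {alpha:.2f} (Lévy walks typically have 1 < alpha <= 3)")
    print(f"log-likelihood  power law: {ll_pl:.1f}   exponential: {ll_exp:.1f}")
```

An exponent between 1 and 3, together with a better power-law than exponential log-likelihood, would be consistent with the Lévy-walk pattern described above.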
Studying how bison use their habitat, through observation of herd distribution and evaluation of the impact the species has on its environment, could provide key elements for designing protection measures and improving cohabitation with humans. Conflicts with human populations represent one of the greatest threats to the persistence and survival of many animal species in the wild \[[@pone.0147404.ref022], [@pone.0147404.ref062]\]. Human pressure on natural environments pushes animals out of their territories, often resulting in damage to industrial and agricultural lands \[[@pone.0147404.ref022], [@pone.0147404.ref056]\]. The development of artificial food patches in natural habitats to keep animals away from private lands is a possible solution to reduce conflicts between human activities and wildlife \[[@pone.0147404.ref022]\]. This method has been tested in South Africa to keep chacma baboons (*Papio ursinus*) away from urban areas \[[@pone.0147404.ref063]\]. Artificial food patches influence the ranging behaviour of species \[[@pone.0147404.ref064], [@pone.0147404.ref065]\] and are therefore attractive management tools to help prevent animal populations from leaving their natural range and dispersing \[[@pone.0147404.ref022]\]. Our study clearly shows that bison remain close to the racks when supplied with hay. However, supplementary feeding alone cannot be a long-term solution, because it causes bison to aggregate, which increases parasite transmission and negatively affects body condition \[[@pone.0147404.ref066], [@pone.0147404.ref067]\]. This study also reveals new elements that contribute to our understanding of space use and movement patterns in bison. Kernel estimation makes it possible to estimate the area needed by the population when food is supplied. Combined with other studies, these results can help to evaluate the area necessary for a herd to live and survive without human intervention. However, as the majority of European reserves are completely fenced, alternative solutions to limit population growth should be found. Our ultimate goal would be to predict all these elements in order to reintroduce this emblematic species efficiently into the most favourable habitats, which are now mostly anthropogenic.
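To illustrate the kernel estimation mentioned above, the following sketch computes a kernel utilization distribution from relocation coordinates and the area of its 95% isopleth. The Gaussian kernel, the 10 m grid and the synthetic relocations around two hypothetical racks are assumptions made for the example; the published analysis may have relied on dedicated home-range software and different bandwidth choices.

```python
# Sketch only: kernel utilization distribution and 95% isopleth area (hectares).
import numpy as np
from scipy.stats import gaussian_kde

def kde_area_ha(x_m, y_m, isopleth=0.95, cell=10.0):
    """Area (ha) of the smallest set of grid cells holding `isopleth` of the kernel density."""
    kde = gaussian_kde(np.vstack([x_m, y_m]))
    gx = np.arange(x_m.min() - 200, x_m.max() + 200, cell)
    gy = np.arange(y_m.min() - 200, y_m.max() + 200, cell)
    X, Y = np.meshgrid(gx, gy)
    dens = kde(np.vstack([X.ravel(), Y.ravel()]))
    order = np.argsort(dens)[::-1]                 # cells from densest to sparsest
    cum = np.cumsum(dens[order]) / dens.sum()      # cumulative share of density
    thresh = dens[order][np.searchsorted(cum, isopleth)]
    return int((dens >= thresh).sum()) * cell * cell / 10_000.0

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Placeholder relocations clustered around two hypothetical hay racks (metres).
    racks = np.array([[0.0, 0.0], [400.0, 250.0]])
    pts = racks[rng.integers(0, 2, size=300)] + rng.normal(0.0, 60.0, size=(300, 2))
    print(f"95% kernel area: {kde_area_ha(pts[:, 0], pts[:, 1]):.1f} ha")
```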
The authors thank Baptiste Vivinus for his help during bison observations. C.S. gratefully acknowledges the support of the Fyssen Foundation.
[^1]: **Competing Interests:**The authors have declared that no competing interests exist.
[^2]: Conceived and designed the experiments: AR CS OP. Performed the experiments: AR. Analyzed the data: AR CS CP OP. Contributed reagents/materials/analysis tools: AR CS CP. Wrote the paper: AR CS OP. Delivered permission for bison observation: PL.
| |
The running of SchoolKit Clinics is guided by a set of six foundation principles. These principles address the expectations and obligations of the many different parties involved. It is essential to the success of SchoolKit Clinics that all professional personnel involved fully understand the foundation principles and are committed to them.
Foundation Principle #5 – Evaluation
Evaluation and feedback structures can be put in place from the moment SchoolKit Clinics first begin, and can be used to continuously improve the clinics for the benefit of all participants.
In the health services context, evaluation of a project, and how well it meets its stated aims and objectives, is typically accomplished as a summative evaluation at the project’s conclusion. But a more valuable use of evaluation is to make it a key component of the project right from the initial, or formative, phases.
This serves to reinforce the things that are working. It also creates the opportunity to pre-empt and address any problems that could compromise the success of the project and may otherwise not be identified until much later.
A developmental evaluation approach offers ongoing interaction between the person responsible for evaluation and the rest of the project team. It enables co-design of some of the activities associated with the work (e.g. carers and parents are actively involved in deciding the best way of proceeding). As part of the overall project team, the evaluator is involved in team meetings and discussions, incorporating evaluation and feedback (such as surveys) as an essential part of the project.
Early identification of what is working and what should be improved is central to this method of evaluation. This enables the evaluation to be tailored to specifically meet the needs of the project. The key benefit of this process is that refinements and corrections can be made in an ongoing manner throughout the life of the project. Also, evaluation can be used to support team learning about evaluation processes and the translation of research into practice.
Ongoing Evaluation
Evaluation structures incorporated into the development of SchoolKit Clinics provide valuable insight into the strengths and weaknesses in the clinic process.
Feedback gathered informally can lead to changes to the clinic process that have an immediate impact and improve family satisfaction.
Surveys are also used to obtain both quantitative and qualitative information to evaluate the experience of the parents, carers and school staff participating in the clinics. The surveys capture feedback identifying what is working well and what adjustments need to be made to improve future clinics.
For more about informal and formal evaluation processes go to Ongoing Evaluation. | http://schoolkit.org.au/foundation-principles/evaluation/ |
Having explored the human brain for his fall collection, Christopher Kane turned his attention away from the natural sciences to the technological ones in his rich and varied resort lineup. Prints, laces, broderie anglaise and patterns were inspired by the long, intersecting lines of computer graphics from the Seventies and Eighties — the kind used to create 3-D images. Kane wove those lines into curving, lace flower embellishments on sleeveless shifts and sweatshirts, and worked lacy images of the human body into white cotton blouses. He also dipped into his fall 2010 archive for a trove of diamond buttons, large and small, used to decorate denim jackets and cocktail dresses in black or Pop Art colors. Also back from the archive was a body-con dress in black with thick bright red, blue and yellow stripes — like duct tape — wrapped around it. | https://wwd.com/runway/resort-2014/london/christopher-kane/review/ |
Decreased liver and pancreas functioning can lead to immune system problems. Insulin removes sugar from the blood to use as energy. B lymphocytes are like the body's military intelligence system — they find their targets and send defenses to lock onto them. People may have chronic gum disease (gingivitis) and frequent ear and skin infections. Inflammation is part of your body's immune response. Primary central nervous system lymphoma (PCNSL, 14.). Immune deficiency disorders result from defects that occur in immune mechanisms. But if you’re getting enough sleep and still suffering from exhaustion, it’s worth considering if your immune system is trying to tell you something.
Get a disease that weakens your immune system. Some conditions caused by an overactive immune system are: These cells trigger the B lymphocytes to produce antibodies, which are specialized proteins that lock onto specific antigens. Lymphocytes are found in your blood and also in specialised lymph tissue such as lymph nodes, your spleen and your thymus. We organize the discussions of immune system disorders in three categories:
How do we document and evaluate immune deficiency disorders, excluding HIV infection?
An allergen causes an itchy rash known as atopic dermatitis. Because primary immune disorders are caused by genetic defects, there's no way to prevent them. AIDS dementia complex, HIV dementia, HIV encephalopathy, and major neurocognitive disorder due to HIV infection.
Hereditary And Congenital Deficiencies
Our immune system acts as a guardian for our body, protecting it from damaging forces from the environment and is critical for our survival. SCID is a serious immune system disorder that occurs because of a lack of both B and T lymphocytes, which makes it almost impossible to fight infections. It’s not a coincidence that you tend to get sick after a big project at work or following an emotional situation at home. Frontiers, ” The weighted Kappa was 0. You're currently using an older browser and your experience may not be optimal. Repeated manifestations of an immune deficiency disorder, with at least two of the constitutional symptoms or signs (severe fatigue, fever, malaise, or involuntary weight loss) and one of the following at the marked level:
Many of these severe innate T cell deficiencies can be cured today using stem cell transplants. CGD is caused by a decreased production of nicotinamide adenine dinucleotide phosphate (NADPH) oxidase by neutrophils. The two basic types of leukocytes are: ” The answer is that in almost every case, having a PI makes only part of the immune system weak. Refrigerate promptly. Respiratory (pleuritis, pneumonitis), cardiovascular (endocarditis, myocarditis, pericarditis, vasculitis), renal (glomerulonephritis), hematologic (anemia, leukopenia, thrombocytopenia), skin (photosensitivity), neurologic (seizures), mental (anxiety, fluctuating cognition (“lupus fog”), mood disorders, organic brain syndrome, psychosis), or immune system disorders (inflammatory arthritis).
Pursuant to §§ 404.
Diagnosis And Treatment Of Immune System Diseases
Many people have fevers and chills and lose their appetite and/or weight. Systemic sclerosis (scleroderma) (14.). In addition, there will be two satellite symposia on innate immunodeficiencies, one on Sunday afternoon, September 13, funded by CSL-Behring; the other on Tuesday afternoon, September 15, funded by Baxter. If your bone marrow isn’t producing enough lymphocytes, your doctor might order a bone marrow (stem cell) transplant. You may also have limitations because of your treatment and its side effects (see 14.). Sometimes, symptoms of PIDD do not arise until adulthood, and knowing the family medical history may indicate that your child should be tested, even if symptoms are not apparent. This will help keep your immune system in fighting shape, and also protect you from other health issues including heart disease, diabetes, and obesity.
Lymph nodes are small, bean-shaped clumps of immune system cells that are connected by lymphatic vessels. Eight simple ways to keep your immune system in top shape. The immune system is an integrated network that’s hard-wired into your central nervous system, Dr. One of the easiest ways for a person with a weak immune system to stay healthy is by practicing good hygiene, which includes washing the hands frequently. Generally, but not always, the medical evidence will show that your SLE satisfies the criteria in the current “Criteria for the Classification of Systemic Lupus Erythematosus” by the American College of Rheumatology found in the most recent edition of the Primer on the Rheumatic Diseases published by the Arthritis Foundation.
HIV-associated dementia (HAD). 11F, we require one measurement of your absolute CD4 count (also known as CD4 count or CD4+ T-helper lymphocyte count). If you have an immune system disorder, learn as much as you can about it. Resistant to treatment means that a condition did not respond adequately to an appropriate course of treatment. Along with healthy eating habits and regular exercise, getting a good night’s sleep keeps us alert, active, and in good health during cold and flu season and all year long. Marked limitation means that the signs and symptoms of your immune system disorder interfere seriously with your ability to function. Stress is a normal response to threatening situations, but chronic stress that doesn’t go away can weaken your immune system and make you more susceptible to illness and disease. On the opposite end of the spectrum, autoimmunity results from a hyperactive immune system attacking normal tissues as if they were foreign bodies, according to the University of Rochester Medical Center.
What are T cells?
THE Human Immunodeficiency Virus (HIV) is a virus that cripples the immune system by destroying white blood cells called CD4 helper lymphocyte cells (CD4 cells). You might go from one illness to another, without ever recovering in between. Even if you do come down with a case of seasonal sniffles, you’ll be able to bounce back faster if your body is well rested. Medical procedures such as organ transplant can even require actively suppressing your immune system to prevent an organ rejection. Cancer treatments and the disease process of cancer make you more susceptible to many types of infections. Reduced inhibitions associated with alcohol consumption can also lead to sexually transmitted infections, especially HIV or HSV-2, which can suppress the immune system and cause the body to become more susceptible to other infections. For instance, X-linked severe combined immunodeficiency (SCID) is caused by a mutation in a signaling receptor gene, rendering immune cells insensitive to multiple cytokines.
But sometimes it fails: The immune system is the body's defense against infections. What turns the volume of your immune system up or down? Usually, respiratory infections (such as sinus and lung infections) develop first and recur often. Eczema is an itchy rash also known as atopic dermatitis.
The yellow tissue in the center of the bones produces white blood cells. We knew that they have brain inflammation. If you’re not spending time in the sun, consider taking 2,000 IU as a maintenance dose. According to the Mayo Clinic, adults need about eight hours of sleep per night. We may evaluate these impairments under the affected body system. Many of these manifestations (for example, vulvovaginal candidiasis or pelvic inflammatory disease) occur in women with or without HIV infection, but can be more severe or resistant to treatment, or occur more frequently in a woman whose immune system is suppressed. Espinosa and his colleagues next plan to study how these drugs could treat or alleviate risks and impairments associated with Down syndrome—not only those related to cancer but many other aspects of the condition as well.
- During winter, take 3,000-5,000 IU of vitamin D3 to help reduce your chances of contracting a cold.
- Allergies are tricky enough as it is, with different definitions of what an allergy is, to different types of allergies, triggers and responses.
- Proper handwashing significantly reduces illnesses.
- Medicines called antihistamines can relieve most symptoms.
- Inflammatory arthritis involving the axial spine (spondyloarthropathy).
- Most of the time, immune deficiencies are diagnosed with blood tests that either measure the level of immune elements or their functional activity, Lau said.
Health Encyclopedia
When someone might have bacterial infection, doctors can order a blood test to see if it caused the body to have lots of neutrophils. Your immune system is made up of special cells, tissues, and organs that work together to protect you. It is thought that some forms of autoimmune disease (where the body attacks itself) may be due to problems with this process. 7 ways to improve your immune system that are better than coronavirus face masks. Generally, but not always, the diagnosis of inflammatory arthritis is based on the clinical features and serologic findings described in the most recent edition of the Primer on the Rheumatic Diseases published by the Arthritis Foundation. Other types of phagocytes do their own jobs to make sure that the body responds to invaders.
The precise meaning, such as the extent of response or remission and the time periods involved, will depend on the specific disease or condition you have, the body system affected, the usual course of the disorder and its treatment, and the other facts of your particular case. 3 vitamins that are best for boosting your immunity – health essentials from cleveland clinic. Only for some genetic causes, the exact genes are known. Many people with primary immunodeficiency are born missing some of the body's immune defenses or with the immune system not working properly, which leaves them more susceptible to germs that can cause infections.
Diseases Of The Immune System
There are skin tests and blood tests which help make the diagnosis, but they are only guides and never should be used on a fishing expedition to look for some magic allergen that is the root of all the problems—that rarely ever happens. They even did more damage than parasites that had evolved in vaccinated mice. They can be used to regulate parts of the immune response that are causing inflammation, Lau said. The gastric marginal zone b-cell lymphoma of malt type, thus, it is possible that with the development of antibodies to detect CD11a, teleost TRMs may be described in the future. “That's because your body is trying to conserve energy to fuel your immune system so it can fight off germs,” explains Dr. Impaired respiration due to intercostal and diaphragmatic muscle weakness. Complications arise when the immune system does not function properly. However, having many colds does not necessarily suggest an immunodeficiency disorder.
It can also happen to people following organ transplants who take medicine to prevent organ rejection. How this works. Here's how it works: Bacteria that cause food poisoning multiply quickest between 40°F and 140°F. Such an integrated research and treatment Center for Chronic Immunodeficiency (CCI) is currently being established with the support of the Federal Ministry for Education and Research (BMBF) at Freiburg University Hospitals. Which cells should you boost, and to what number? Chronic sleep loss even makes the flu vaccine less effective by reducing your body’s ability to respond. The search textbox has an autosuggest feature.
Signs of a Problem
However, in combination with extra-articular features, including constitutional symptoms or signs (severe fatigue, fever, malaise, involuntary weight loss), inflammatory arthritis may result in an extreme limitation. If you're ill often, severely, or for long periods of time, it's time to make a doctor's appointment. Primary immunodeficiency disorders (PIDDs) are a group of inherited conditions affecting the immune system, due to a lack of, or dysfunction of white blood cells, which have important roles in fighting infections. When you enter three or more characters, a list of up to 10 suggestions will popup under the textbox. In order for us to consider your symptoms, you must have medical signs or laboratory findings showing the existence of a medically determinable impairment(s) that could reasonably be expected to produce the symptoms.
Treatments that make it more difficult for the body to fight off illness, such as steroids and chemotherapy, also can increase the chance of Listeria infection. 15 foods that boost the immune system, increase B6 by eating a sweet potato yogurt almond butter breakfast parfait. But would the opposite hold true? Most of these defences are present in your blood, either as specialised white blood cells or as chemicals released by your cells and tissues.
References:
Small, bean-shaped structures that produce and store cells that fight infection and disease and are part of the lymphatic system — which consists of bone marrow, spleen, thymus and lymph nodes, according to "A Practical Guide To Clinical Medicine" from the University of California San Diego (UCSD). 11H, we require three hospitalizations within a 12-month period that are at least 30 days apart and that result from a complication(s) of HIV infection. Vaccines that boost your immunity, use fresh ingredients and check that your spices aren’t stale. However, we will not purchase electromyography or muscle biopsy.
Click on NEXT to find out the top 11 things that can weaken your immune system. For example, we will accept a diagnosis of HIV infection without definitive laboratory evidence of the HIV infection if you have an opportunistic disease that is predictive of a defect in cell-mediated immunity (for example, toxoplasmosis of the brain or Pneumocystis pneumonia (PCP)), and there is no other known cause of diminished resistance to that disease (for example, long-term steroid treatment or lymphoma). Top 10 immune system boosting foods for kids (with ideas and recipes!). Severe fatigue means a frequent sense of exhaustion that results in significantly reduced physical activity or mental function.
Too Much Exercise
Some people are born with intrinsic defects in their immune system, or primary immunodeficiency. Chemotherapy and other cancer drugs are prime examples, along with immunosuppressant drugs used to treat autoimmune diseases. If you undergo stem cell transplantation for your immune deficiency disorder, we will consider you disabled until at least 12 months from the date of the transplant. In general, an overactive immune system leads to many autoimmune disorders — because of hyperactive immune responses your body can’t tell the difference between your healthy, normal cells and invaders. When this happens, the immune system can work against us, causing allergic reactions or at its worst, autoimmune disorders, such as lupus and multiple sclerosis. 06B, for undifferentiated and mixed connective tissue disease; 14. Stress increases the body’s levels of cortisol—a stress hormone that impairs the functioning of cells that fight infection. Optimize your sleep and commit to at least 8 hours a night.
Other underlying medical conditions can compromise your body's 'security system' as well. Without the growth and activation signals delivered by cytokines, immune cell subsets, particularly T and natural killer cells, fail to develop normally. If this outer defensive wall is broken (as through a cut), the skin attempts to heal the break quickly and special immune cells on the skin attack invading germs. Most people are well again after a week, but if it takes longer, your body might be having trouble fighting infections. Killer T-cells are a subgroup of T-cells that kill cells that are infected with viruses and other pathogens or are otherwise damaged. Taking probiotics helps our bodies usher out toxins we encounter from our food and water. Thus, it’s critical to detect, diagnose and then treat the PID before it becomes a serious problem. The thymus is an important lymphatic organ.
Steps you can take to protect your immune system include: Persistent inflammation or persistent deformity of: Join the conversation on Children’s Vector blog. There are over 150 PIDDs, and almost all are considered rare (affecting fewer than 200,000 people in the United States). 09A, the criterion is satisfied with persistent inflammation or deformity in one major peripheral weight-bearing joint resulting in the inability to ambulate effectively (as defined in 14. )Presence of multiple uncleared viral infections due to lack of perforin are thought to be responsible. | https://designedrealestate.com/poor-immune-system-news-category |
Soil contamination is a worldwide problem, whether it is from heavy hydrocarbon spills, industrial activity or agricultural chemicals, and there are many different strategies for treatment. One thing of utmost importance, no matter what treatment strategy is utilized, is the prevention of downward and outward migration of pollutants at treatment sites. Whereas successful engineered solutions to horizontal contaminant transport exist, this is not the case with vertical containment of contaminants, which is poorly understood. This results in limitations to complete isolation of contaminants and reduces the effectiveness of soil and groundwater remediation.
Researchers at Arizona State University in conjunction with collaborators at Chevron have developed a method and system for temporal isolation of defined volumes of shallow soil. This system can be used for the in situ containment of soil for contaminant removal, remediation, resource extraction, emergency responses to surface spills and more. A major advantage to this technology is its ability to shield deep soil strata, groundwater and potentially drinking water resources from further contamination during manipulation and treatment of shallow soils. The need for costly soil displacement is eliminated by creating a continuous impermeable horizontal and vertical containment barrier around soil that is both easy to install and easy to remove.
This system overcomes a fundamental challenge in soil containment and requires less manpower than existing technologies while remaining cost-effective and less damaging to the surrounding environment. It has the potential to be an essential supporting technology for novel soil remediation treatments of near-surface contamination.
Potential Applications: | https://skysonginnovations.com/technology/methods-and-systems-for-in-situ-temporary-containment-of-soils-for-remediation-or-other-treatments/ |
DETROIT — Hamilton Anderson Associates, a Detroit architecture and design firm, announced that one of its architectural designers, Nicole Gerou, has been accepted into the inaugural class of the Christopher Kelley Leadership Development Program.
The American Institute of Architects Detroit chapter selected 16 young professionals for its first year of the leadership program for emerging architecture and design professionals in the Detroit area.
Gerou will participate in the scholar program with other young architecture and design professionals to explore how architects can collaborate with each other and to discuss the best approaches to the design and architecture challenges facing the Detroit area, while also engaging the community. The program will guide scholars through an intensive, team-driven series of workshops on topics selected to develop and refine a broad set of leadership skills essential to early career advancement and broader community impact.
Nicole Gerou (Hamilton Anderson Architects photo)
“We are excited for this acclaimed program to make its way to Detroit and have one of our very own included in the inaugural class,” said HAA founder and CEO Rainy Hamilton Jr. “Programs created for young professionals to lead the growth of design and architecture in Detroit is how we invest in our city and lay the groundwork for a bright future.”
Founded in 2013, the leadership program was developed by AIA Washington D.C.’s Emerging Architects in memory of the late Christopher Kelley, a recipient of the Young Architects Award in 2010. The program honors his legacy and supports the next generation of leading architects.
Since joining HAA in April, Gerou has worked extensively on construction administration and project architect support for the firm’s work in Detroit’s Greektown neighborhood. Her involvement with AIA began when she interned with the Detroit chapter in the summer of 2012. Gerou also served as the Midwest director of the American Institute of Architecture Students (AIAS), one of the five collaterals that make up the profession of architecture. Additionally, she served as the AIAS representative on the 2015 honor awards jury — made up of design professionals from all over the world — to select the year’s top architecture projects for the Institute Honor Awards.
Gerou holds bachelor’s and master’s degrees in architecture from Lawrence Technological University. She is also certified in building information modeling and computer visualization, as well as technical and professional communications.
Hamilton Anderson Associates, a minority-owned business, was founded in Detroit in 1994. It provides services in architecture, landscape architecture, planning, urban design, strategic planning, interior design, and graphic design. | https://www.techcentury.com/2017/09/11/ltu-alumna-named-to-architecture-leadership-program/ |
California’s Democratic Governor Gavin Newsom released his proposed budget this week, asking for upwards of $260 million to expand Medicaid to undocumented workers and illegal immigrants throughout the state.
“A Democratic proposal to extend government-funded medical coverage to low-income people who reside in California illegally would cost the state $260 million, according to the governor’s budget released Thursday,” reports the Washington Examiner.
The revised healthcare program would take effect July 2019 and is expected to expand Medicaid and other government subsidies to an additional 138,000 low-income individuals.
Healthcare should be a basic human right. Republicans in DC are already attacking our efforts to provide quality, affordable healthcare to everyone who calls CA home. We cannot accept the status quo. We must keep demanding better care for ALL Californians. https://t.co/C6d32ffNrV
— Gavin Newsom (@GavinNewsom) January 10, 2019
“Newsom has argued that his plan should be enacted because California residents are already paying for healthcare at the back end through fees to emergency departments,” adds the author.
Read the full report at the Washington Examiner.
CALIFORNIA CHAOS: Gavin Newsom Calls for Free Healthcare for Region's ILLEGAL IMMIGRANTS
Democratic candidate for Governor of California Gavin Newsom raised eyebrows across the state this week, calling for a universal healthcare program to cover millions of illegal immigrants residing in the region.
Newsom was speaking on the left-leaning “Pod Save America” podcast when he weighed in on his fellow Democrats’ proposal for a “Medicare for All” system, saying a universal healthcare scheme should apply to every person living in California, regardless of immigration status.
“I did universal health care when I was mayor– fully implemented, regardless of pre-existing condition, ability to pay, and regardless of your immigration status. San Francisco is the only universal health care plan for all undocumented residents in America. Very proud of that,” Newsom said. “I’d like to see that extended to the rest of the state.”
Listen to Newsom’s comments above.
GAVIN’S GOALS: Newsom Releases His List of ‘BASIC HUMAN RIGHTS’ for All Americans
Former Mayor of San Francisco and Democratic candidate for Governor of California Gavin Newsom published his official list of “basic human rights” this week, saying all Americans deserve an “affordable” home and “healthcare.”
Newsom posted his manifesto on social media, saying everyone in the United States should have access to clean water, food, healthcare, education, and housing.
Clean air.
Clean water.
Food to feed your family.
Healthcare that doesn’t bankrupt you.
An education that prepares you for the future.
An affordable, safe place to call home.
These should not be outlandish ideas.
These should be basic human rights.
— Gavin Newsom (@GavinNewsom) August 15, 2018
Newsom’s list comes months before Democrats make their push to retake Congress this November, with liberal candidates calling for more federal programs, including expanded healthcare, housing, and free college. | https://hannity.com/media-room/the-cost-gov-newsom-demanding-260-million-for-medicaid-expansion-to-illegal-immigrants/