3rd Dingan Duanwu Food Cultural Festival to kick off in June
The 2nd Dingan Duanwu Food Cultural Festival was held successfully last June.
This year’s Duanwu Food Cultural Festival will take place from June 21 to June 23 in Ding’an County, Hainan Province.
According to the organizers, visitors can enjoy many artistic and cultural activities, such as the Most Beautiful Dingan photography competition, Qiong Opera performances, the selection of the most popular restaurants, and a dragon boat race.
The festival’s highlights are food-preparation presentations at booths, which will give diners and tourists a chance to experience the diverse dining styles of different local regions, along with the entertainment activities.
The festival has been hosted annually since 2009 to promote the dining culture of the county. Every year the event attracts over 150,000 visitors, who come to enjoy the popular local cuisine as well as the attractive scenery. | https://www.whatsonsanya.com/post/9713/3rd-dingan-duanwu-food-cultural-festival-to-kick-off-in-june/ |
Evolution: A Course for Educators
About this Course
How are all of the species living on Earth today related? How does understanding evolutionary science contribute to our well-being? In this course, participants will learn about evolutionary relationships, population genetics, and natural and artificial selection. Participants will explore evolutionary science and learn how to integrate it into their classrooms.
Shareable Certificate
100% online
Flexible deadlines
Approx. 13 hours to complete
English
Offered by
American Museum of Natural History
The American Museum of Natural History is one of the world’s preeminent scientific, educational and cultural institutions. Since its founding in 1869, the Museum has advanced its global mission to discover, interpret, and disseminate information about human cultures, the natural world, and the universe through a wide-ranging program of scientific research, education, and exhibition.
Syllabus - What you will learn from this course
Course Introduction
Introduction and Darwin's First Great Idea - The Tree of Life
The first module of the course introduces Charles Darwin’s revolutionary concept of a “tree of life” depicting the evolution of all life from a common ancestor; how evolutionary trees depict relationships among organisms; and how new species are formed. You will explore resources for discovering and addressing student misconceptions about evolution.
Darwin's Second Great Idea - Adaptation via Natural Selection
You will learn about Darwin’s second breakthrough: that adaptation via natural selection is the basic mechanism of evolution. You’ll go behind the scenes with Dr. Cracraft to see how evolutionary biologists use the Museum’s collections. Lastly, you’ll choose a topic from the course and explain how to use it as empirical evidence that supports common ancestry and biological evolution.
The History of Life
You will learn about the role of extinction in evolution, and find out what the relatedness of major groups of living things reveals about the history of life. You’ll also watch videos of scientists at work and learn how to use them in your classroom.
Human Evolution
This module explores the rich variety of hominids on the tree of life, along with how and when different human species - including Homo sapiens - migrated around the world. You’ll also learn strategies for teaching evolution in culturally diverse classrooms.
Course Conclusion
Reviews
4.7
TOP REVIEWS FROM EVOLUTION: A COURSE FOR EDUCATORS
A very interesting and comprehensive course. I would recommend you write down your answers to the quizzes, because there were some slight glitches. Overall a very good course.
Very informative course to understand evolution. Normally evolution terms and subject matter cause confusion, but through this course we definitely came to understand them in an easy way.
Very interesting. Loved the detail and course work; can’t wait to do another one from the AMNH. Thank you for a well put together course.
It is a very interesting course and is presented by great teachers. Would definitely recommend to any one interested in evolution.
Excellent course with great insights on pedagogic methods and ways to handle these difficult scientific issues in the classroom.
Loved the variety of content, instructors & format. I also loved checking in regarding our own responses to the information.
Excellent! A great primer on the importance of evolution and how to teach this theory to others. Great links and resources!
Very informative course, and the topics were explained with interesting examples. The course should be made longer.
Frequently Asked Questions
When will I have access to the lectures and assignments?
Once you enroll for a Certificate, you’ll have access to all videos, quizzes, and programming assignments (if applicable). Peer review assignments can only be submitted and reviewed once your session has begun. If you choose to explore the course without purchasing, you may not be able to access certain assignments.
What will I get if I purchase the Certificate?
When you purchase a Certificate you get access to all course materials, including graded assignments. Upon completing the course, your electronic Certificate will be added to your Accomplishments page - from there, you can print your Certificate or add it to your LinkedIn profile. If you only want to read and view the course content, you can audit the course for free.
What is the refund policy?
Is financial aid available?
More questions? Visit the Learner Help Center. | https://www.coursera.org/learn/teaching-evolution |
---
abstract: 'Dark soliton solutions of the one-dimensional classical nonlinear Schrödinger equation have been considered to be related to the yrast states corresponding to the type-II excitations in the Lieb-Liniger model. However, the relation is nontrivial and remains unclear because a dark soliton localized in space breaks the translation symmetry, while yrast states are translationally invariant. In this work, we construct a symmetry-broken quantum soliton state and investigate its relation to the yrast states. By interpreting a quantum dark soliton as a Bose-Einstein condensation into the wave function of a classical dark soliton, we find that the quantum soliton state has a large weight only on the yrast states, which is analytically proved in the free-boson limit and numerically verified in the weak-coupling regime. By extending these results, we derive a parameter-free expression for a quantum soliton state that is written as a superposition of yrast states with Gaussian weights. The density profile of this quantum soliton state agrees excellently with that of the classical dark soliton. The dynamics of a quantum dark soliton is also studied, and it turns out that the density profile of a dark soliton decays, but the decay time increases as the inverse of the coupling constant in the weak-coupling limit.'
author:
- Eriko Kaminishi
- Takashi Mori
- Seiji Miyashita
bibliography:
- 'darksoliton.bib'
title: 'Construction of quantum dark soliton in one-dimensional Bose gas'
---
*Introduction*.—
Ultracold bosonic atoms form a Bose-Einstein condensate (BEC). All the particles in the condensate can be well described by a single wave function, which is called a “macroscopic wave function” [@London_text; @Ruprecht1995]. The macroscopic wave function obeys the classical nonlinear Schrödinger (Gross-Pitaevskii) equation [@Pitaevskii1961; @Gross1961]. Such a nonlinear wave equation may possess solitary wave solutions whose shape does not change during time evolution. Indeed, in one dimension, the nonlinear Schrödinger equation possesses bright [@Shabat1972] and dark [@Tsuzuki1971] soliton solutions for attractive and repulsive interactions, respectively.
The one-dimensional classical nonlinear Schrödinger equation corresponds to the classical field approximation of the Lieb-Liniger model [@Lieb1963], which is a representative of quantum integrable models. Thus, it has been desired to understand the relation between many-body energy eigenstates of the Lieb-Liniger Hamiltonian and the classical soliton solutions of the nonlinear Schrödinger equation.
The problem is that, under periodic boundary conditions, an individual energy eigenstate has a flat single-particle probability density due to the translation invariance, and hence we must consider some nontrivial superposition of energy eigenstates to construct a symmetry-broken state with a solitonic density profile. For attractive interactions, such a quantum-classical correspondence has been established. The attractive Lieb-Liniger model has bound states with momenta $\{P\}$, and it has been analytically shown that a quantum state corresponding to a classical bright soliton at position $X$ can be constructed by performing the Fourier transform of those bound states [@Wadati-Sakagami1984].
On the other hand, this problem has not been settled in a satisfactory manner for repulsive interactions. It has been argued that the set of yrast states of the Lieb-Liniger model (energy eigenstates corresponding to Lieb’s type-II excitations, each of which is the lowest-energy eigenstate at a given momentum) is related to the family of classical dark solitons with momenta $P\in[-\pi\rho_0,\pi\rho_0]$, where $\rho_0$ is the particle number density [@Ishikawa-Takayama1980; @Kanamoto2009; @Kanamoto2010]. Indeed, the dispersion relation of the yrast states is similar to that of the classical dark solitons in the weak-coupling regime [@Ishikawa-Takayama1980]. As pointed out above, however, a single energy eigenstate cannot represent a dark soliton at a fixed position. It is not at all obvious how one can construct a many-body symmetry-broken state that corresponds to a classical dark soliton.
Sato et al. [@Sato2012; @Sato2016] tried to construct such a quantum many-body state guided by an analogy with the case of attractive interactions. They considered the Fourier transform of yrast states $\ket{N,P}_\mathrm{yr}$ as an $N$-particle quantum dark soliton state at position $X$, i.e., $\ket{N,X}=\sum_Pe^{iP(X-L/2)}\ket{N,P}_\mathrm{yr}/\sqrt{N}$, where the sum is taken over $P=2\pi M/L$ with integer $M\in[0,N-1]$, and numerically found that the density profile in this state is similar to that in a classical dark soliton solution $\varphi_{P_0}(x-X)$ with a certain momentum $P_0$ at position $X$, $$\bra{N,X}\hat{\psi}^\dagger(x)\hat{\psi}(x)\ket{N,X}\approx|\varphi_{P_0}(x-X)|^2.$$ However, their construction is heuristic and there is no theoretical justification to consider the Fourier transform of $\ket{N,P}_\mathrm{yr}$. More seriously, this method only reproduces a classical dark soliton at a certain momentum $P_0$. It remains unclear how one can construct a quantum dark soliton state with a different momentum $P\neq P_0$, e.g., a completely “black” soliton with $P=\pi\rho_0$.
In this work, we study the quantum-classical correspondence of dark solitons and construct a many-body quantum dark soliton state. We start from the idea that a quantum soliton state should be interpreted as a Bose-Einstein condensation into the “single-particle dark soliton state”. It turns out that this BEC-like state is approximately expressed as a superposition of yrast states, a conclusion that is reached analytically in the free-boson limit and checked numerically at a weak but finite coupling constant. Based on this result, we propose a construction of the quantum dark soliton state by superposing yrast states. It turns out that the quantum dark soliton is obtained not by the Fourier transform but by a Gaussian superposition of type-II excitations. The center of the Gaussian distribution determines the velocity of the dark soliton, and the width is found to be proportional to $c^{1/4}$ (see Eq. (\[eq:P\_variance\]) below). The density profile of the quantum dark soliton state constructed in this way shows an excellent agreement with that of the classical dark soliton solution. Moreover, it turns out that this state has a lifetime longer than that of the soliton state constructed previously [@Sato2012; @Sato2016].
*Setup*.—
We consider interacting bosons in a one-dimensional ring (i.e., the periodic boundary condition is imposed). Such a system is described by the Lieb-Liniger model, $$\hat{H}=\int_{-L/2}^{L/2}dx\left[-\hat{\psi}^\dagger(x)\partial_x^2\hat{\psi}(x)+c\hat{\psi}^\dagger(x)\hat{\psi}^\dagger(x)\hat{\psi}(x)\hat{\psi}(x)\right]$$ with $c>0$ (i.e., repulsive interactions), where $\hat{\psi}(x)$ and $\hat{\psi}^\dagger(x)$ are annihilation and creation operators of a boson at $x$, which satisfies the commutation relations $[\hat{\psi}(x),\hat{\psi}^\dagger(y)]=\delta(x-y)$. The number of particles is given by $N=\int_{-L/2}^{L/2}dx\,\hat{\psi}^\dagger(x)\hat{\psi}(x)$, and the average number density is denoted by $\rho_0=N/L$.
In the Lieb-Liniger model, the Bethe ansatz offers exact energy eigenstates and eigenvalues [@Lieb1963]. Each energy eigenstate $\ket{\{I_j\}_N}$ is characterized by a set of quantum numbers $I_1<I_2<\dots<I_N$, where $I_j$ is an integer for odd $N$ and a half-odd integer for even $N$. For a given $\{I_j\}$, a set of quasi-momenta $k_1<k_2<\dots<k_N$ is obtained from the following Bethe ansatz equations: $$k_j=\frac{2\pi}{L}I_j-\frac{2}{L}\sum_{l(\neq j)}^N\arctan\left(\frac{k_j-k_l}{c}\right).
\label{eq:BAE}$$ The total momentum $P$ and the energy eigenvalue $E$ are obtained by $P=\sum_{j=1}^Nk_j$ and $E=\sum_{j=1}^Nk_j^2$.
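For concreteness, here is a minimal numerical sketch (not from the paper) of how Eq. (\[eq:BAE\]) could be solved by damped fixed-point iteration; the parameters $N=8$, $L=8$, $c=0.1$ match the overlap calculation below, and the damping factor is an arbitrary choice made only for stability.

```python
# Minimal sketch (not from the paper): solve the Bethe ansatz equations
# k_j = (2*pi/L)*I_j - (2/L) * sum_{l != j} arctan((k_j - k_l)/c)
# by damped fixed-point iteration, starting from the free-particle guess.
import numpy as np

def solve_bethe(I, L, c, tol=1e-12, mix=0.5, max_iter=100_000):
    I = np.asarray(I, dtype=float)
    k = 2.0 * np.pi * I / L                        # c -> 0 (free) initial guess
    for _ in range(max_iter):
        scatt = np.arctan((k[:, None] - k[None, :]) / c)
        np.fill_diagonal(scatt, 0.0)
        k_new = 2.0 * np.pi * I / L - (2.0 / L) * scatt.sum(axis=1)
        if np.max(np.abs(k_new - k)) < tol:
            return k_new
        k = (1.0 - mix) * k + mix * k_new          # damping (mix is a heuristic)
    raise RuntimeError("Bethe ansatz equations did not converge")

N, L, c = 8, 8.0, 0.1
I_ground = np.arange(N) - (N - 1) / 2              # half-odd integers for even N
# Black-soliton yrast state at P = pi*rho0: remove I = 1/2, add (N-1)/2 + 1.
I_yrast = np.append(np.delete(I_ground, N // 2), (N - 1) / 2 + 1)
for I in (I_ground, I_yrast):
    k = solve_bethe(I, L, c)
    print("P =", k.sum(), " E =", (k**2).sum())    # P = (2*pi/L)*sum(I) exactly
```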
The ground state corresponds to the quantum numbers $\{I_j\}=\{-(N-1)/2,-(N-1)/2+1,\dots,(N-1)/2\}$. Yrast states are obtained by removing one of the $I_j$ and adding $(N-1)/2+1$ (or $-(N-1)/2-1$); their energy spectrum in a large finite system has been obtained [@Kaminishi2011]. It has been argued that yrast states correspond to the dark solitons since the dispersion relations coincide with each other in the weak-coupling limit taken after the thermodynamic limit [@Ishikawa-Takayama1980].
The classical nonlinear Schrödinger equation is obtained as the classical limit of the Heisenberg equation for $\hat{\psi}(x)$; we replace the quantum field operator $\hat{\psi}(x)$ and $\hat{\psi}^\dagger(x)$ by the classical field $\psi(x)$ and $\psi^*(x)$, respectively, where $\psi^*$ is the complex conjugate of $\psi$: $$i\partial_t\psi(x)=-\partial_x^2\psi(x)+2c\psi^*(x)\psi(x)\psi(x)-\mu\psi(x),
\label{eq:NLS}$$ where $\mu$ is the chemical potential, which is given by $2\rho_0c$ in the thermodynamic limit, but there is a correction in a finite system [@Sato2016]. Equation (\[eq:NLS\]) has dark soliton solutions $\varphi_P(x-vt)$, where $P$ is the total momentum of a soliton and $v$ is the velocity that depends on $P$ through $v=dE/dP$. It is noted that $|P|\leq\pi\rho_0$ and $|v|\leq |v_c|$, where $v_c$ is called the critical velocity. In the thermodynamic limit, $v_c=2\sqrt{\rho_0c}$, but there is a correction in a finite-size system [@Sato2016]. The absolute square of $\varphi_P(x-x_0)$ corresponds to the particle number density of a dark soliton localized at $x_0$, and hence the normalization is given by $\int_{-L/2}^{L/2}dx\,|\varphi_P(x)|^2=N$.
An explicit expression of $\varphi_P(x)$ in a finite system is complicated but given in Ref. [@Sato2016]. In the thermodynamic limit, $\varphi_P^\infty(x)$ is given by $$\varphi_P^\infty(x)=\sqrt{\rho_0}\left[\gamma\tanh\left(\gamma\sqrt{c\rho_0}x\right)+i\frac{v}{v_c}\right],$$ where $\gamma=\sqrt{1-v^2/v_c^2}$ and $v$ is related to $P$ by the equation $$P(v)=2\rho_0\left\{\frac{\pi}{2}-\left[\frac{v}{v_c}\gamma+\arcsin\left(\frac{v}{v_c}\right)\right]\right\}
\label{eq:Pv}$$ for $v\geq 0$, and $P(-v)=-P(v)$ [@Ishikawa-Takayama1980]. In particular, the dark soliton at rest ($v=0$) corresponds to $P=\pi\rho_0$ and its wave function is given by $$\varphi_{\pi\rho_0}^\infty(x)=\sqrt{\rho_0}\tanh(\sqrt{c\rho_0}x),$$ which is completely black, i.e., $\varphi_{\pi\rho_0}^\infty(0)=0$.
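The thermodynamic-limit expressions above are easy to evaluate numerically. The following sketch (not from the paper, and assuming $\rho_0=1$ together with the units $\hbar=2m=1$ implicit in the Hamiltonian) inverts Eq. (\[eq:Pv\]) for the velocity and evaluates the soliton density $|\varphi_P^\infty(x)|^2$ for the black and gray solitons used later in the comparison.

```python
# Minimal sketch (not from the paper): thermodynamic-limit dark-soliton density
# |phi_P^inf(x)|^2 = rho0 * (gamma^2 * tanh^2(gamma*sqrt(c*rho0)*x) + v^2/v_c^2)
# together with the momentum-velocity relation P(v) of Eq. (eq:Pv), inverted numerically.
import numpy as np
from scipy.optimize import brentq

rho0, c = 1.0, 0.01
v_c = 2.0 * np.sqrt(rho0 * c)                    # critical velocity (thermodynamic limit)

def P_of_v(v):                                   # Eq. (eq:Pv), valid for 0 <= v <= v_c
    s = v / v_c
    return 2.0 * rho0 * (np.pi / 2.0 - (s * np.sqrt(1.0 - s**2) + np.arcsin(s)))

def v_of_P(P):                                   # invert P(v) on [0, v_c]
    return brentq(lambda v: P_of_v(v) - P, 0.0, v_c)

def density(x, P):
    v = v_of_P(P)
    gamma = np.sqrt(1.0 - (v / v_c) ** 2)
    return rho0 * (gamma**2 * np.tanh(gamma * np.sqrt(c * rho0) * x) ** 2
                   + (v / v_c) ** 2)

x = np.linspace(-40.0, 40.0, 5)
print(density(x, np.pi * rho0))                  # black soliton: density 0 at x = 0
print(density(x, (np.pi / 2 - 1) * rho0))        # gray soliton: shallower notch
```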
*Quantum soliton state*.—
The success of the classical field approximation in BECs stems from the picture that a macroscopically large number of particles occupy the same single-particle state with a wave function $\varphi(x)$. If all the particles are in this state, the corresponding $N$-particle state is given by $$\frac{1}{\sqrt{N!}}\left(\int_{-L/2}^{L/2}dx\,\varphi(x)\hat{\psi}^\dagger(x)\right)^N\ket{\Omega},$$ where $\ket{\Omega}$ denotes the vacuum. It is then natural to guess that an $N$-particle quantum soliton state corresponding to a classical dark soliton solution $\varphi_P(x-X)$ is given by the following BEC state: $$\ket{N,X;P}=\frac{1}{\sqrt{N!}}\left(\int_{-L/2}^{L/2}dx\,\frac{\varphi_P(x-X)}{\sqrt{N}}\hat{\psi}^\dagger(x)\right)^N\ket{\Omega}.
\label{eq:Q_soliton}$$ This quantum state has the desired properties: it exactly reproduces the classical dark soliton density profile, $$\bra{N,X;P}\hat{\psi}^\dagger(x)\hat{\psi}(x)\ket{N,X;P}=|\varphi_P(x-X)|^2,$$ as well as the wave function itself, as $$\hat{\psi}(x)\ket{N,X;P}=\varphi_P(x-X)\ket{N-1,X;P}.
\label{eq:soliton_wave}$$
An important problem is to figure out the relation between $\ket{N,X;P}$ and the energy eigenstates $\ket{\{I_j\}_N}$ of the Lieb-Liniger model; we can always write $$\ket{N,X;P}=\sum_{\{I_j\}}C_{\{I_j\}}\ket{\{I_j\}_N},$$ but we want to understand the structure of expansion coefficients $C_{\{I_j\}}=\braket{\{I_j\}_N|N,X;P}$.
First, we numerically calculate overlaps by using the determinant formulas for form factors [@Gaudin1983; @Korepin1982; @Slavnov1989; @Kojima1997; @Caux2007]. For simplicity, we focus on the quantum soliton state with $P=\pi\rho_0$. Overlaps for $N=L=8$ ($\rho_0=1$) and $c=0.1$ are presented in Fig. \[fig:overlap\]. Since the yrast state $\ket{N,P}_\mathrm{yr}$ with the momentum $P$ corresponds to the eigenstate with the lowest energy eigenvalue for the fixed momentum $P$, Fig. \[fig:overlap\] shows that the quantum soliton state has weights concentrated on yrast states.
![The overlap $|C_{\{I_j\}}|^2=|\braket{\{I_j\}_N|N,X;P}|^2$ for $\rho_0=1$, $c=0.1$, and $N=8$. Each circle represents an eigenstate $\{I_j\}$ with the momentum $P=\sum_{j=1}^Nk_j$ and the energy $E=\sum_{i=1}^Nk_j^2$. The eigenstate with the lowest energy for a given momentum $P$ corresponds to an yrast state $\ket{N,P}_\mathrm{yr}$.[]{data-label="fig:overlap"}](soliton_state.eps){width="8cm"}
Next, we analytically calculate the expansion coefficients in the free-boson limit, i.e., $c\to 0$ at a fixed $L$ [^1]. In this limit, classical dark soliton solutions $\{\varphi_P(x)\}$ with $0\leq P\leq\pi\rho_0$ become $$\varphi_P^\mathrm{free}(x)=\sqrt{\rho_0}\left(\sqrt{1-\frac{P}{2\pi\rho_0}}-\sqrt{\frac{P}{2\pi\rho_0}}e^{i\frac{2\pi}{L}x}\right),
\label{eq:soliton_free}$$ which can be checked by taking the free-boson limit of the explicit expression of $\varphi_P(x)$ given in Ref. [@Sato2016]. On the other hand, the free-boson limit of yrast states $\ket{N,P}_\mathrm{yr}$ yields $$\ket{N,P}_\mathrm{yr}^\mathrm{free}=\ket{n_0=N-M,n_{2\pi/L}=M},
\label{eq:II_free}$$ where $M$ is an integer given by $P=2\pi M/L$, and the right-hand side means the state in which $N-M$ particles occupy the mode with the wave number $k=0$ and $M$ particles occupy the mode with $k=2\pi/L$ [^2]. By using Eqs. (\[eq:Q\_soliton\]), (\[eq:soliton\_free\]), and (\[eq:II\_free\]), we can analytically calculate the expansion coefficients. The result is that expansion coefficients are nonzero only for yrast eigenstates $\{\ket{N,P}_\mathrm{yr}^\mathrm{free}\}$, and the quantum soliton state is expanded solely by yrast states as $$\ket{N,X;P}^\mathrm{free}=\sum_{P'}e^{iP'(X-L/2)}e^{-\frac{N}{2}\varg_P^\mathrm{free}(P')}\ket{N,P'}_\mathrm{yr}^\mathrm{free},
\label{eq:Q_soliton_free}$$ where the sum over $P'$ is taken for $P'=2\pi M/L$ with $M=0,1,\dots,N$. The real function $\varg_P^\mathrm{free}(P')$ is given for large $N$ by $$\varg_P^\mathrm{free}(P')\approx\left(1-\frac{P'}{2\pi\rho_0}\right)\ln\frac{2\pi\rho_0-P'}{2\pi\rho_0-P}+\frac{P'}{2\pi\rho_0}\ln\frac{P'}{P}.$$ It takes the minimum value at $P'=P$ and is expanded around this minimum as $$\varg_P^\mathrm{free}(P')\approx\frac{(P'-P)^2}{2\sigma_\mathrm{free}^2}$$ with $$\sigma_\mathrm{free}^2=\frac{1}{N}\frac{2\pi\rho_0}{(2\pi\rho_0)^{-1}+P^{-1}}\propto\frac{1}{N}.$$ The soliton BEC state $\ket{N,X;P}$ is therefore expressed as a superposition of yrast states $\ket{N,P'}_\mathrm{yr}^\mathrm{free}$ with a Gaussian weight of mean $P$ and variance $\sigma_P^2$. The quantum dark soliton constructed in the previous work [@Sato2012] corresponds to the uniform superposition, i.e., $\varg_P(P')=\mathrm{const.}$, but this simple calculation in the free-boson limit indicates the importance of the Gaussian weight for constructing a quantum soliton state.
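The free-boson weights above are straightforward to tabulate. The following sketch (not from the paper, with $\rho_0=N/L=1$) evaluates $e^{-\frac{N}{2}\varg_P^\mathrm{free}(P')}$ on the grid $P'=2\pi M/L$ for the black soliton $P=\pi\rho_0$ and confirms that the weight is sharply peaked at $P'=P$.

```python
# Minimal sketch (not from the paper): free-boson weights exp(-N*g/2) of
# Eq. (eq:Q_soliton_free) for the black soliton P = pi*rho0, with rho0 = N/L = 1.
import numpy as np

N, L = 100, 100.0
rho0 = N / L
P = np.pi * rho0

M = np.arange(1, N)                    # skip M = 0, N where the logarithms diverge
Pp = 2.0 * np.pi * M / L
q, qbar = Pp / (2.0 * np.pi * rho0), P / (2.0 * np.pi * rho0)
g = (1.0 - q) * np.log((1.0 - q) / (1.0 - qbar)) + q * np.log(q / qbar)

w = np.exp(-0.5 * N * g)               # unnormalized amplitudes on the yrast states
w /= np.sqrt(np.sum(w**2))
print(Pp[np.argmax(w)], P)             # the weight peaks at P' = P
print(np.sum(w[np.abs(Pp - P) > 1.0] ** 2))   # only a tiny weight lies far from P
```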
*Gaussian superposition of yrast states*.—
From the numerical result for $c>0$ and the analytical result for the free-boson limit, it is expected that a quantum soliton state $\ket{N,X;P}$ is generically expressed as a superposition of yrast states. By *assuming* it, we can find how yrast states should be superposed to construct a quantum soliton state. In the thermodynamic limit with a fixed $c$, the mean and the variance of the total momentum operator $\hat{P}=\int_{-L/2}^{L/2}dx\,\hat{\psi}^\dagger(x)(-i\partial_x)\hat{\psi}(x)$ are given by $$\lim_{N\to\infty}\braket{N,X;P|\hat{P}|N,X;P}=P$$ and $$\lim_{N\to\infty}\braket{N,X;P|(\hat{P}-P)^2|N,X;P}=\frac{4}{3}\gamma^3\rho_0\sqrt{\rho_0c}\equiv\sigma_P^2,
\label{eq:P_variance}$$ respectively. In the BEC state $\ket{N,X;P}$, the momenta $\{p_j\}_{j=1}^N$ of $N$ particles can be considered to be independent random variables, and thus the central limit theorem implies that the total momentum $\sum_{j=1}^Np_j$ has a Gaussian distribution. Therefore, if the quantum soliton state consists of the yrast states, the former is given by a Gaussian superposition of the latter for large system sizes: $$\begin{aligned}
\ket{N,X;P}&\approx\mathcal{N}^{-1/2}\sum_{P'}e^{iP'(X-L/2)}e^{-(P'-P)^2/(2\sigma_P^2)}\ket{N,P'}_\mathrm{yr}
\nonumber \\
&\equiv\ket{N,X;P}_\mathrm{yr},
\label{eq:Q_soliton_yrast}\end{aligned}$$ where $\mathcal{N}$ is a normalization constant, and the sum is taken over $P'=2\pi M/L$ with $M=0,1,\dots,N$.
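For the black soliton ($P=\pi\rho_0$, so $v=0$ and $\gamma=1$), the Gaussian weights of Eq. (\[eq:Q\_soliton\_yrast\]) take a particularly simple parameter-free form. The following sketch (not from the paper) builds them for $N=L=100$ and $c=0.01$, the parameters used in Fig. \[fig:density\] below; the yrast states themselves would come from a Bethe-ansatz computation such as the one sketched earlier.

```python
# Minimal sketch (not from the paper): Gaussian coefficients of Eq. (eq:Q_soliton_yrast)
# for the black soliton, P = pi*rho0 (v = 0, gamma = 1), with X = 0 and rho0 = N/L = 1.
import numpy as np

N, L, c = 100, 100.0, 0.01
rho0 = N / L
P, X, gamma = np.pi * rho0, 0.0, 1.0
sigma2 = (4.0 / 3.0) * gamma**3 * rho0 * np.sqrt(rho0 * c)   # Eq. (eq:P_variance)

M = np.arange(0, N + 1)
Pp = 2.0 * np.pi * M / L
coeff = np.exp(1j * Pp * (X - L / 2.0)) * np.exp(-(Pp - P) ** 2 / (2.0 * sigma2))
coeff /= np.linalg.norm(coeff)         # the normalization constant N^(-1/2)

# |N, X; P>_yr = sum_M coeff[M] |N, P' = 2*pi*M/L>_yr
print(sigma2)                                    # ~0.13 for these parameters
print(np.sum(np.abs(coeff) ** 2 * Pp))           # mean momentum ~ P = pi
```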
Equation (\[eq:Q\_soliton\_yrast\]) is greatly simplified compared to Eq. (\[eq:Q\_soliton\]) since the former is restricted to the yrast states. It is therefore possible to calculate some observables using the Bethe ansatz method for large system sizes. In Fig. \[fig:density\], we compare the density profile in the quantum soliton state of Eq. (\[eq:Q\_soliton\_yrast\]), $\braket{N,0;P|\hat{\psi}^\dagger(x)\hat{\psi}(x)|N,0;P}$, with its classical counterpart $|\varphi_P(x)|^2$ for $P=\pi\rho_0$ (black soliton) and $P=(\pi/2-1)\rho_0$ (gray soliton) for $N=L=100$ and $c=0.01$. The quantum and classical solitons agree excellently with each other.
![Comparison of the density profiles of quantum soliton states $\ket{N,0;P}_\mathrm{yr}$ given by Eq. (\[eq:Q\_soliton\_yrast\]) and those of classical dark solitons for $P=\pi\rho_0$ (top) and $P=(\pi/2-1)\rho_0$ (bottom). The system size is set as $N=L=100$.[]{data-label="fig:density"}](black.eps "fig:"){width="7cm"}\
![Comparison of the density profiles of quantum soliton states $\ket{N,0;P}_\mathrm{yr}$ given by Eq. (\[eq:Q\_soliton\_yrast\]) and those of classical dark solitons for $P=\pi\rho_0$ (top) and $P=(\pi/2-1)\rho_0$ (bottom). The system size is set as $N=L=100$.[]{data-label="fig:density"}](gray.eps "fig:"){width="7cm"}
![Dynamics of the density profile starting from a quantum soliton state $\ket{N,0;\pi\rho_0}_\mathrm{yr}$ of Eq. (\[eq:Q\_soliton\_yrast\]) for $N=L=100$ and $c=0.01$.[]{data-label="fig:dynamics"}](dynamics.eps){width="8cm"}
*Time evolution*.—
Since energy eigenvalues are obtained by solving Eq. (\[eq:BAE\]) and using $E=\sum_{j=1}^Nk_j^2$, we can compute the dynamics of the density profile in a numerically exact manner. Figure \[fig:dynamics\] shows the time evolution of the expectation value of $\hat{\psi}^\dagger(x-vt)\hat{\psi}(x-vt)$, i.e., the density in the moving frame at the soliton velocity, starting from a quantum soliton state $\ket{N,0;\pi\rho_0}_\mathrm{yr}$, whose velocity is given by $v=2\pi/L$. Since the quantum soliton state is not an eigenstate of the Hamiltonian, the solitonic density profile collapses. The timescale $\tau$ of the collapse is evaluated by the Mandelstam-Tamm quantum speed limit [@Mandelstam1945] as $\tau\geq\pi/(2\Delta E)$, where $\Delta E^2$ is the variance of the energy in an initial state. In the quantum soliton state of Eq. (\[eq:Q\_soliton\_yrast\]), $\Delta E$ is given by $$\Delta E\approx\left|\frac{\partial E}{\partial P}\Delta P+\frac{1}{2}\frac{\partial^2 E}{\partial P^2}\Delta P^2\right|
=\left|v\Delta P+\frac{1}{2}\frac{\partial v}{\partial P}\Delta P^2\right|.$$ In the moving frame at the soliton velocity $v$, the first term $v\Delta P$ vanishes, so we have $\Delta E\approx|(\Delta P^2/2)\partial v/\partial P|$. By using Eqs. (\[eq:Pv\]) and (\[eq:P\_variance\]) we obtain $\Delta E\approx\gamma^2\rho_0c/3$, and hence $\tau\approx 3\pi/(2\gamma^2\rho_0c)$. The decay time of a quantum dark soliton is inversely proportional to $c$ in the weak-coupling regime, $\tau\sim 1/c$, which is confirmed numerically and consistent with Ref. [@Sato2016].
It is noted that the dependence of $\tau\sim 1/c$ has been reported by Sato et al. [@Sato2016], but quantitatively, the decay time of our quantum soliton state is much longer than that of the dark soliton state constructed in Ref. [@Sato2016]. Following Ref. [@Sato2016], if the decay time is defined by the time when the smallest value of the density notch reaches the value of 0.5, the decay time for $c=0.01$ is about 600 in our dark soliton state, while it is about 100 in the dark soliton state proposed in Ref. [@Sato2016].
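As a quick numerical check of the estimate above (not from the paper), plugging the parameters of Fig. \[fig:dynamics\] ($\rho_0=1$, $c=0.01$, $\gamma=1$ for the black soliton) into $\tau\approx 3\pi/(2\gamma^2\rho_0c)$ gives $\tau\approx 470$, which lies below, and is therefore consistent with, the observed decay time of about 600 quoted above.

```python
# Quick check of the Mandelstam-Tamm estimate for the black soliton at
# rho0 = 1, c = 0.01, gamma = 1 (the parameters of Fig. fig:dynamics).
import numpy as np

rho0, c, gamma = 1.0, 0.01, 1.0
dE = gamma**2 * rho0 * c / 3.0            # Delta E ~ gamma^2 * rho0 * c / 3
tau = 3.0 * np.pi / (2.0 * gamma**2 * rho0 * c)
print(dE, np.pi / (2.0 * dE), tau)        # ~3.3e-3, ~471, ~471
```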
*Conclusion and Discussion*.—
In this work, we have discussed the properties of a quantum soliton state given by Eq. (\[eq:Q\_soliton\]), which is interpreted as a Bose-Einstein condensation into a single-particle wave function $\varphi_P(x-x_0)/\sqrt{N}$, where $\varphi_P(x)$ is the dark soliton solution of the classical nonlinear Schrödinger equation. It has turned out that this quantum soliton state consists almost entirely of yrast states, which is confirmed numerically for small $c>0$ and analytically for $c=0$. This result offers a direct confirmation that a classical dark soliton with a localized position corresponds to a superposition of yrast states of the Lieb-Liniger model. In addition, we have revealed that a quantum soliton state $\ket{N,X;P}$ is well approximated by the state $\ket{N,X;P}_\mathrm{yr}$ that has a Gaussian weight on each yrast state $\ket{N,P'}_\mathrm{yr}$ with mean $\braket{P'}=P$ and variance $\sigma_P^2=4\gamma^3\rho_0\sqrt{\rho_0c}/3$ in the weak-coupling regime. Numerical calculations show excellent agreement between the density profile of the Gaussian superposition of the yrast states and that of the classical dark soliton. We have also discussed the dynamics of a quantum dark soliton, and it has been shown that the decay time is proportional to $c^{-1}$.
A remaining open problem is to understand the relation between the quantum soliton state constructed in this work, i.e., Eq. (\[eq:Q\_soliton\]) or Eq. (\[eq:Q\_soliton\_yrast\]), and the recent theoretical observation by Syrwid and Sacha [@Syrwid2015] that a dark soliton emerges in successive measurements of particle positions starting from a single yrast state. Successive measurements of particle positions were originally considered in the context of interference of two independent BECs to mimic a simultaneous measurement of particle positions [@Javanainen1996]. The result by Syrwid and Sacha indicates that an yrast state should be interpreted as a state in which dark solitons are present but their positions are uncertain. Although the expectation value of $\hat{\psi}^\dagger(x)\hat{\psi}(x)$ in a single yrast state is uniform, a single simultaneous measurement of all the positions yields a soliton density profile.
By using Eq. (\[eq:soliton\_wave\]), it can be shown that if the quantum soliton state $\ket{N,X;P}$ is chosen as an initial state, the state after $N-N'$ measurements is identical to $\ket{N',X;P}$ with probability one. In other words, the sequence of $\{\ket{N',X;P}\}_{N'=1}^N$ is an exact solution of the measurement dynamics.
In the free-boson limit, $c=0$, the relation is clearer; it can be analytically shown that a quantum state obtained after $N-N'$ measurements of particle positions on an yrast state $\ket{N,P}_\mathrm{yr}$ is given by a quantum soliton state $\ket{N',X;P}$ with probability very close to one when $N\gg N'$, where the position of a dark soliton $X$ is random and strongly depends on the realization of measurement outcomes [^3]. However, it is still open to prove the corresponding result for a small but finite coupling $c>0$.
The present work was supported by JSPS KAKENHI Grants No. JP16J03140 and Grants-in-Aid for Scientific Research C (No. 17K05508, No. 18K03444) from MEXT of Japan.
[^1]: It should be noted that the free-boson limit is different from the weak-coupling limit $c\to 0$ *after* the thermodynamic limit.
[^2]: This state is indeed the eigenstate with the lowest energy for a given momentum $P$, and is therefore interpreted as an yrast state.
[^3]: E. Kaminishi, T. Mori, and S. Miyashita, in preparation
| |
The Guardians of the Breath were the ancient Kashi mystics who were able to control the Force. However, as the Kashi believed it was blasphemy to keep records and journals of these skills in written form, the skills were instead preserved in oral traditions: in the "Breath."
Organisation
While many tens of thousands of Kashi sought to join the order each year, only a handful were selected for training. Increasing levels of ability and training granted the Guardians higher and higher "Levels of Attainment."
Offworlders who petitioned for membership were almost always imprisoned for life in the dark prisons of the Kashi Mer Dynasty.
The Guardians specialized in uses of the Force that promoted the cell growth of crops and healing, and were considered to be blessed with a strong connection to the Living Force. They also used the "Breath" to grant them precognitive visions.
History
From time to time, the Guardians exiled members of their order from Kashi. Some of these Guardians fell to the dark side. Xendor, the first Dark Jedi, may have been among these, one of the survivors of the supernova which wiped out the Kashi Mer culture circa 25,000 BBY. The Guardians still existed by the time of the Ruusan Reformation, though they were scattered across the galaxy.
Behind the scenes
The Tales of the Jedi Companion describes the Guardians as still extant, along with their world, in 4000 BBY, a fact that is contradicted in later sources. It is possible, however, that these could be descendants of those same exiled members. | https://starwars.fandom.com/wiki/Guardians_of_Breath |
Medicine men or healers are usually called arbularyo, albularyo or hilot in the Philippines. They may prescribe herbs, perform treatments or massages, certain protective prayers for curses, or even employ magic. Highly sophisticated arbularyos would correspond to Filipino shamans or witch doctors.
Arbulario comes from the Spanish word for herbalist (herbolario). They are experts on folk medicine. Many, for example, know how to reset dislocated shoulders, treat sprained ankles, and so on. Thousands of plants and herbs in the Philippines are undocumented and need to be researched for their medicinal properties. Many arbularios have learned and handed down medical applications of these herbs.
Medical doctors are very expensive for indigent folk like farmers and fishermen. Arbularyos are a very cheap alternative and many times the only option available. They are typically found in barrios or small barangays with people lined up outside. Arbularyos do not necessarily subscribe to or believe in magic, but they are the poor man's alternative for medical relief in deeper rural areas. Diagnosis and treatments are made not only on the physical level but on an emotional or spiritual level as well. | https://psychology.fandom.com/wiki/Albularyo |
1
Chapter 3.2: How Government Promotes Economic Strength
Ch 3 Essential Question: What role should government play in a free market economy? Redistribute income?
2
Objectives Explain why the government tracks and seeks to influence business cycles. Describe how the government promotes economic strength.
3
GDP and the Business Cycle
One measure of the nation’s economic well-being is gross domestic product (GDP). GDP doesn’t say anything about the spread of income (level of inequality). Answer: Approximately 4 trillion dollars
4
GDP and the Business Cycle
GDP goes up (expansion) and down (contraction). Extreme contraction is a recession. This pattern of a period of expansion followed by a period of contraction is called a business cycle. Because of business cycles, people get hired and fired, spend more or less, and invest more or less. This is one reason we need safety nets.
5
GDP and the Business Cycle
Why do we track GDP and other economic data? To monitor total output, prices, and employment data. Government policy will be different depending on whether we are in expansion or in contraction. Expansion: raise taxes, less government spending. Contraction: lower taxes, more government spending. Economists and politicians can use data to forecast the future and aid expansion or counteract contraction. Will Congress expect more or less unemployment in the future? Does the government expect we will have more or less taxes in the future?
6
How Government Promotes Economic Strength?
- Employment. Goal: more jobs, low unemployment. Action: job training, unemployment compensation, hiring incentives.
- Growth. Goal: more growth and higher standard of living. Action: more innovation (patents, research funds), college financial aid, less taxes/more spending.
- Stability. Goal: stable prices and banking sector. Action: manage inflation (Federal Reserve), regulate banks for economic security.
7
How Government Promotes High Employment
Employment (more on this in Ch 9): The government strives to make sure there are enough jobs for everyone who is able to work. Government provides job training and unemployment compensation to achieve this goal. Unemployment Rate Today: 8% (4-6% Goal).
8
How Government Promotes High Growth
Economic Growth: More growth leads to a higher standard of living. To promote growth, government targets innovation, technology, and education: funding research and development projects at universities; establishing its own research institutions, like NASA and the Department of Defense (DARPA); and granting patents and copyrights, which are an incentive to innovation.
9
How Government Promotes Economic Stability
One indicator of economic stability is the general level of prices. Inflation is the change in the level of prices. High inflation means prices are rising rapidly. Low inflation means prices are stable. Deflation means prices are falling (rare). Hyperinflation is inflation so extreme that it is almost too hard to count (extremely rare).
10
How Government Promotes Economic Stability
The government wants stable prices for the economy. The US has stable and low inflation. Many volatile countries with unstable governments or war have really high inflation (Zimbabwe, Iran, Pakistan, Brazil, and Iraq, to name a few). Households and firms want to be able to expect stable prices, so they can plan their budgets and expenses. If I can’t expect prices for my budget items to stay stable, I can’t plan for today or the future. When there is uncertainty, firms do not make investments to grow their businesses.
11
How Government Promotes Economic Stability
Other indicators of stability are financial institutions such as banks and the stock market. Regulations try to keep these institutions stable (SEC, FDIC, Federal Reserve). Government also protects these institutions as "too big to fail" (mega-controversial): the bailouts of the banks and financial institutions in 2008-2009. Many taxpayers don't view government as spending equally on its voters (Occupy Wall Street/Chicago).
12
Objectives: Explain why the government tracks and seeks to influence business cycles (to monitor important parts of the economy, and to make forecasts of the future and try to influence the business cycle toward more expansion and less contraction). Describe how the government promotes economic strength (by promoting high employment, high growth, and stable prices/banks).
13
Key Terms
- Expansion: periods where GDP is increasing.
- Contraction: periods where GDP is decreasing.
- Recession: 2 (or more) straight quarters of GDP falling; at least 6 months.
- Gross domestic product (GDP): the total income (value) of all final goods and services produced in a country in a given year. GDP per capita measures income per person.
- Business cycle: a period of macroeconomic expansion, or growth, followed by one of contraction, or decline.
- Patent: a government license that gives the inventor of a new product the exclusive right to produce and sell it.
- Copyright: a government license that grants an author exclusive rights to publish and sell creative works.
| https://slideplayer.com/slide/7028542/ |
Photographers expend an awful lot of time and anxious energy fretting over how to light a subject; specifically, we all tend to fear shadows. Harsh shadows can ruin nearly any photograph. Whether it’s a portrait, a macro, or a product shot, the aim is typically soft, even lighting.
The fact is, however, we all need a break from the norm; we need to take advantage of opportunities to do things a different way. Those beautifully lit portraits that you’re obsessed with are important; they may even be the staple of your photographic repertoire, but there is another way to think about lighting — a way that is, perhaps, counter-intuitive but one that can be equally as evocative as, if not more evocative than, any “perfectly” lit shot.
air time by sean, on Flickr
Portrait photographers know of Rembrandt lighting, split lighting, and butterfly lighting, all styles of lighting designed to light the front of the subject. But what about backlighting?
The Beauty of a Silhouette
A silhouette (that dark, featureless subject outlined against a bright background) is a photographic phenomenon created by backlighting, one that is capable of conveying mood, drama, mystery, and emotion in ways unmatched by more conventional styles of portraiture. Silhouettes are incredibly simple in form, yet they possess a great deal of aesthetic and atmospheric power, and they impart that power to the viewer by giving us the freedom to further use our imagination as we interpret those images.
Fortunately, silhouettes are also relatively easy to create. The tips that follow will help you get started.
Progress by kevin dooley, on Flickr
How to Photograph Silhouettes
1. Choose a Suitable Subject – “Suitable” is admittedly a rather open-ended term; indeed, anything can be made into a silhouette. But you will find that some things work better than others. Bold, distinctly shaped forms separated from their surroundings tend to work best.
2. Set Up Your Lighting – If you’re using natural light — the sun — then there’s actually nothing to set up here. Your flash is useless in this capacity; in fact, all that stuff you learned previously in your photography education about how to light a subject is off the table. The goal is to put the light completely behind your subject. You don’t want any light falling on their frontside. The very best way to go about this is to position your subject in front of a rising or setting sun. Using the sun isn’t an absolute must, however; any light source that is large enough and bright enough will do…which means you can use a flash or two. Just set them up behind your subject.
3. Pose Your Subject and Frame the Shot – If you’re working with a person, be sure to pose them in an open area or in such a way that doesn’t cause them to blend in with surrounding objects. Furthermore, try to avoid photographing people straight-on; instead, photograph their profile. This will make their features more distinct, more recognizable.
We will stay forever by Kamal Zharif, on Flickr
4. Prepare Your Camera for the Shot – Shooting in manual mode is the ideal method of creating a silhouette. Again, you will probably find that the process goes against the grain of what you’re used to as you expose for the background and not the subject. Quite simply, this is how you get a dark foreground and a bright background — a silhouetted image. If you find that light is spilling over onto the front of your subject, just underexpose the background a bit and your subject will become increasingly darker.
You can accomplish this in auto mode as well. The trick is to keep the camera from doing what it tends to do so well: metering a scene for even lighting. To prevent this, point your camera at the brightest portion of the sky and press the shutter button halfway to initiate the metering process. Then, with your finger still on the shutter button, move the camera back to your subject and take the shot. This method works the majority of the time, but if you want to be a bit more precise, you should use "spot metering" if your camera possesses that feature. Spot metering will allow you to meter for a very specific part of the frame (the background in this instance), thereby increasing the accuracy of your desired exposure.
Either approach can lead to successful silhouettes. Use the one that you’re most comfortable with or whichever works out best for you. Given that all you’ve got is a memory card to erase as opposed to rolls of film to buy, you can experiment as much as you want or need to.
5. Focus – The subject needs to be in sharp focus in order for your silhouette to have definition. How to get the subject in focus may not be so straightforward considering the metering steps discussed above. The camera will have to focus on the background in order to take a meter reading, which means that focus won’t be accurate when you move back to your subject. There are a couple of ways to tackle this problem:
- Use manual focus to re-focus on your subject after you’ve metered the shot, or pre-focus before you meter.
- Set a small aperture (larger f-number, f/8 or f/11, for example) to increase depth of field, which will increase the odds that your subject is clearly defined.
6. Post Processing – As always, what you do in this phase is entirely up to you. It’s likely, however, that you will want to boost shadows, blacks, contrast, and clarity; these adjustments will help define the subject more clearly. A small bump to vibrance and saturation might be useful as well, just to add some pop to the background, especially if the background is a colorful sunrise or sunset.
Hut Sunrise – Hello from Kish Island by Hamed Saber, on Flickr
7. Optional Tools – Making a silhouette really only requires your camera, your subject, and a bright background. But if you happen to be working in a particularly low light situation where a slow shutter speed is necessary, then a tripod will certainly come in handy. And, if you like, you can use a circular polarizer to create more contrast and saturation for the background.
portal zone, another dimension by Paulo Brandão, on Flickr
Conclusion
While the basic process will remain the same, your specific camera settings will probably be different for each silhouette you shoot because subject and lighting conditions won’t be identical on each occasion; so again, we see the luxury that digital photography has afforded us — we can adapt and experiment with no limits.
As you begin to think about and prepare for shooting silhouettes of your own, I’ll leave you with some images to inspire you. | https://www.lightstalking.com/capture-striking-silhouettes-in-7-easy-steps/ |
This is the first of a series of articles that will explore the history, principles, and implications of behavioral economics.
This article will examine the importance of behavioral economists, their role in economics, and how they have impacted economic research.
In this article, I will begin by looking at what behavioral economics is and what it can do for us.
Behavioral economics is a branch of economics that deals with the behavior of people.
In short, it is a field that analyzes how people interact with each other, how they act, and what they are thinking about.
This field is an important one, but the field is also highly misunderstood.
For example, many people mistakenly think behavioral economics only looks at the “real world.”
They are wrong.
Behavioral economists look at the entire human condition.
This includes our relationships with others, our beliefs and behaviors, and our behavior as well.
Behavioral Economics in Action (BAA) This is a popular way to understand behavioral economics: BAA is a very general term for behavioral economics that includes everything from behavior and self-efficacy to social interaction and personal preferences.
BAA can be applied to many different areas of economics.
For instance, BAA applies to behavioral economics in education, health care, and the workplace.
For many years, BAFT (Behavioral Finance Theory and Practice) applied behavioral economics to finance.
BAFTs focus on behavioral economics because it is easier to explain and apply.
The BAFT framework was the foundation for much other behavioral economics research.
BAG (Behavioural Asset Pricing) BAG is a subset of behavioral finance called “behavioral asset pricing.”
BAGs focus on how individuals allocate resources.
BGA (Behavagative Finance Association) BGA is a behavioral finance association that focuses on how behavioral finance can help financial institutions.
Behavioral finance has been very important in the development of behavioral and asset pricing models.
Behavioral Finance Modeling (BFM) BFM is a type of behavioral model that focuses more on behavioral analysis.
BFM modelers focus on the interactions between individuals and their assets, such as stocks, bonds, and other financial instruments.
BFX (Behaving Finance) BFX is a group of behavioral researchers that focus on “behavior”.
Behavioral economists focus on what people do in the real world.
Behavioral analysis focuses on the processes that lead people to make choices and decisions.
The most important part of behavioral analysis is called “bias”.
Bias is the difference between an observer’s judgment and the behavior that people are acting out.
For behavioral economists to understand how we behave, we need to understand bias.
This is where behavioral economics can help.
Bias can be very complex.
It is often difficult to define and understand a lot of things in behavioral economics, but it is important to understand what is happening.
Behavioral science has helped us to better understand how people make decisions, as well as what is important in their decisions.
BIF (Behevisual Influencing) BIF is a discipline that focuses mostly on the relationship between economics and psychology.
BIF is a way of studying how people think, feel, and act.
BIF focuses on two main topics: what is the psychology of decision making, and where does the economics come into play?
Behavioral economics and the behavioral sciences are constantly evolving and learning from each other.
BICE (Behinomics) BICE is a broad umbrella term that encompasses a lot more than just behavioral economics but includes many other fields.
Behavioral genetics and epigenetics are examples of fields that incorporate behavior and genetics.
Behavioral genetic methods allow researchers to analyze the genetics of individuals and to understand their choices and behavior.
BFI (Behinsight for Finance) The BFI is a large academic group that focuses mainly on behavioral finance and behavioral economics.
BFI focuses on understanding how finance affects people, how behavior can influence financial markets, and whether it can have a positive impact on our financial well-being.
The group is also a big advocate for behavioral finance.
It recently released a new book, BFI: A Modern Approach to Financial Management.
BIFF (Behindbio) BIFF is a psychology and behavioral research organization that focuses primarily on behavioral science.
Biff focuses on both psychology and behavior, and focuses on behavioral scientists.
BIFA (Behirafisia) BIFA is a division of BIF that focuses a lot on behavioral genetics. | https://all4usearch.com/2021/09/23/what-is-behavioral-economics-and-why-is-it-important/ |
Neuroticism is one of the ‘Big Five’ factors in the study of personality in psychology. It is measured on a continuum, ranging from emotional stability (low neuroticism) to emotional instability (high neuroticism).
Big Five Personality Traits
A neurotic personality is characterised by persistent, often disproportionate, worrying and anxiety. A person may strive to be a perfectionist during their everyday activities, and experience stress as a result of events that are beyond their control.
Neuroticism can lead an individual to focus on, and to dwell on, the negative aspects of a situation, rather than the positives. They may experience jealousy and become envious of other people when they feel that those people are in an advantaged position over them. They may be prone to becoming frustrated, irate, or angry as they struggle to cope with life stressors.
Personality psychologists Robert McCrae and Paul Costa describe how people with high neuroticism levels cope with such stress:
“They may more frequently use inappropriate coping responses like hostile reactions and wishful thinking because they must deal more often with disruptive emotions. They may adopt irrational beliefs like self-blame because these beliefs are cognitively consistent with the negative feelings they experience. Neuroticism appears to include not only negative affect, but also the disturbed thoughts and behaviors that accompany emotional distress.” (McCrae and Costa, 1987).
In contrast, people with low levels of neuroticism find it easier to remain calm and are less affected by stressful events. They are able to maintain a more proportionate perspective on events, which results in them often worrying less and experiencing lower levels of stress.
Research has found that the behavioral tendencies associated with neuroticism can have wide-ranging effects. For example, in a survey of German couples, partners of people with high neuroticism scores were found to be less happy than those whose partners with low levels of the trait (Headey et al, 2010).
Background
Neuroticism as an aspect of human personality in various forms has been the subject of study for thousands of years.
In Ancient Greece, personality types were categorized into four basic temperaments: choleric, melancholic, phlegmatic and sanguine, with the ‘melancholic’ personality matching many traits associated with neuroticism.
Greek doctor Hippocrates (460-370 B.C.E.) suggested a biological basis for personality, claiming that melancholy was caused by excessive amounts of black bile in the body.
More recently, neuroticism has formed the basis of numerous models of personality.
German-born psychologist Hans Eysenck (1916-1997) also believed that personality traits were attributable to biological factors. He developed a model which quantified personality according to two dimensions: extraversion and neuroticism.
A third dimension, psychoticism, was later added to form the PEN model of personality (Eysenck, 1967). In conjunction with his wife, Sybil Eysenck - also a personality psychologist - he developed the Eysenck Personality Questionnaire as a means of assessing levels of these personality traits (Eysenck and Eysenck, 1976).
‘Big Five’ Personality Factor
Neuroticism is considered by many to be one of the most significant dimensions of personality. Alongside four other traits - openness, conscientiousness, extraversion and agreeableness - it is cited as one of the ‘Big Five’ personality factors.
Psychologists Robert McCrae and Paul Costa included neuroticism in their five-factor model of personality, whilst Lewis Goldberg recognized the significance of such factors, describing them as the ‘Big Five’. Further personality theories also recognize traits relating to neuroticism, such as emotionality, which features in Michael Ashton and Kibeom Lee’s HEXACO model (Ashton and Lee, 2004).
Self-report measures such as questionnaires remain a favored method of assessing personality traits such as neuroticism.
Using question inventories such as the Revised NEO Personality Inventory (NEO PI-R), psychologists invite a subject to assess how accurately a series of adjectives or statements describe his or her own personality (Costa and McCrae, 1987).
Individual Differences
Why are some individuals’ personalities more neurotic than those of others? A number of factors, including age and gender, have been found to mediate neuroticism levels.
Neuroticism scores have been found to gradually decrease as a person ages, as they become more comfortable with their life circumstances (Scollon and Diener, 2007).
These scores are influenced further depending on an individual’s gender.
A study published in the journal Frontiers in Psychology, confirming the findings of previous surveys, found that the neuroticism levels of women are generally higher than those of men. However, as we grow older, this disparity between genders decreases (Weisberg et al, 2011).
Gray’s Biopsychological Explanation
A British psychologist, Jeffrey Alan Gray, proposed a ‘biopsychological theory’ to explain differences in personality. Gray described two ‘systems’, each supported by activity in different regions of the brain.
The behavioral inhibition system (BIS) influences a person’s behavior with a view to avoiding potential punishments.
For example, a person will stop walking as they approach a closed door in order to avoid injuring themselves by walking into it.
A second system, the behavioral activation system (BAS) drives behavior to seek reward. Pro-social behavior, for instance, may be motivated by a need for positive self-presentation and a desire to be liked by one’s peers.
According to Gray, neuroticism is linked to a more responsive behavioral inhibition system, and this correlation was reversed with respect to the behavioral activation system.
The behavior of a person with a high level of neuroticism may therefore be motivated by a need to avert negative outcomes, more than a drive to achieve positive rewards (Gray, 1970). | https://www.psychologistworld.com/personality/neuroticism-personality-trait |
The head coach for Men's and Women's Cross Country will also serve as the Associate Men's and Women's Track and Field Coach, with a strong focus on coaching all middle distance and distance runners. This coach will provide guidance and direction to student-athletes, organize practices and competitions, and maintain positive interactions with student-athletes, alumni, other department coaches and staff, university colleagues, and benefactors. This is a full-time, 10-month position classified as Administrative Personnel. The annual appointment is from August 1 through May 30.
Reports To: This position reports directly to the Director of Athletics.
Responsibilities:
- Responsible for organizing and overseeing all aspects of intercollegiate athletics team.
- Recruit top quality student-athletes to Viterbo University.
- Plan practices, prepare team for competition, coordinate off-season training program.
- Schedule competition, coordinate travel, and manage budget.
- Monitor and foster academic progress of student-athletes.
- Coordinate team involvement with community outreach or service projects.
- Generate booster club support and assist with departmental fund-raisers.
- Hire, train, supervise, and evaluate assistant coaches.
- Support and adhere to all institutional policies outlined in the employee handbook.
- Teach and embrace the five core values of the NAIA's "Champions of Character" initiative.
- Ensure that all program operations are in compliance with Viterbo, MCC, and NAIA rules.
- Attend all required departmental, university, and conference meetings.
- Carry out other responsibilities as delegated or assigned by the Director of Athletics.
Qualifications: Bachelor's degree required; Master's preferred. Background in education, coaching, or athletics administration desired. Experience as a successful high school or college coach preferred. Demonstrated knowledge of the sport and ability to teach fundamentals, skill development, and competition strategies; commitment to the academic achievement of student-athletes; good written, oral, and interpersonal skills; and proficiency in computer use, including Microsoft Office. Understanding and appreciation of Catholic and Franciscan values; willingness to work flexible hours, nights, and weekends.
About the University: Viterbo University is a Catholic, Franciscan, liberal arts institution with an enrollment of nearly 3000 students. Viterbo is located in scenic La Crosse, Wisconsin, which has been rated as one of the top places to live in the US. The region features an attractive cost of living, beautiful bluffs and coulees, three major rivers including the Mississippi River, world class health care and education systems, and easy access to major cities in Wisconsin, Minnesota and Illinois.
To Apply: Complete on-line employment application and send letter of interest, resume, and a list of 4 professional references.
Contact Information: Barry Fried, Athletic Director, Viterbo University, 900 Viterbo Drive, La Crosse, WI 54601.
| https://viterbo.applicantpro.com/jobs/604637.html
Many people think that college life is all about parties and having fun, thanks to pop culture. However, it is not always the case.
Mostly, college years are about struggling with deadlines and trying to fit them all into one schedule.
Academic overload is huge today, and many students don’t have the necessary skills to get things done on time. It is especially complicated for those who have a job and want to enjoy a vibrant social life.
Yet, there is a way to meet all the deadlines and have a life. It is all about time management, and this guide will help to get better at it.
Prioritization Is Key
The first thing about time management is the smart prioritizing of tasks. After all, it is impossible to do everything at once. That’s why it is essential to figure out what you have to do.
It starts with the academic load – choosing classes and extracurricular activities for a semester. Remember that it should be manageable.
Consider what you can accomplish without draining yourself. If it is your first semester in college, it is better to go light on yourself as you don’t know yet how you will be coping.
The second part is prioritizing tasks when you have them on the plate. Write down all assignments with their deadlines as well as other things you need to do.
Simply put everything that is to be done and categorize by priority. It can be urgent, necessary, postponed, etc.
Start with the urgent and necessary tasks. If there is still too much to handle, note that you can always rely on essaypro and their professional help when it comes to academic assignments.
Make a Schedule (And Stick to It)
Now that you have priorities and deadlines, it’s time to make a schedule. The secret is that it has to include everything, not only academic assignments.
Write in doctor appointments, chores, or social gatherings. All of these things take time, so they need to be scheduled as well.
Put all the deadlines on the calendar. The best idea is to use an online or digital planner that has a pending feature. It can be Google Calendar, Trello, or any other app you like.
Set reminders for all crucial tasks – not only on the day of the deadline. Leave some spare time beforehand. This way, even if you forget something, you’ll get a reminder and an opportunity to complete it in time.
Start Early
It might sound dull and not fun, but the earlier you start, the better. Do not postpone assignments – especially if you don’t know how long they will take to complete.
As soon as you’ve got the assignment, put it in the schedule, analyze the approximate execution time, and determine when you will start working on it.
Set Goals and Rewards
Another vital thing in college life is motivation; it is extremely hard to get through without it. One of the best ways to keep inspiration and motivation up is the feeling of accomplishment, so set a reward for each achievement, whether it is big or small.
Rewards directly influence students’ motivation. Thus, plan the goals for a month, semester, and a year. They don’t have to be very ambitious; getting everything done in time is a great one in itself.
Clearly identified goals will help you focus on what you are working for and future benefits that come from education.
Set rewards for each task completed on time. Depending on the complexity of the task, it can be something small, like going for a walk or playing a video game.
Big ones can get rewards like going out, movies, buying a new item or anything else that speaks to you.
Combat Procrastination
This might be the hardest thing on the list, but it is essential for success. Try not to get distracted when you study in order not to lose concentration.
Several things can help one achieve this aim:
- Have a designated study area where you only concentrate on learning. Do not do homework on your bed; it reduces concentration;
- Turn off the sound on the smartphone and do not check notifications while you are busy;
- Do not put music or the TV on in the background if it is not helping;
- Take a 10-minute break every hour – relaxation matters.
The best rule is to do only academic-related tasks when you are at your working table. All other things, like scrolling Instagram, cannot take place there.
In Summary
The secret to successful time management is in figuring out necessary tasks, setting deadlines, and starting early. This way, you have buffer time in case something goes wrong.
You can also use lots of helping software, from Pomodoro to Grammarly. Remember to keep yourself motivated and focused when studying. | https://www.pensacolavoice.com/how-to-beat-all-the-deadlines-have-vibrant-life-at-college/ |
Purpose: The purpose of this paper is to provide an empirical investigation of the idea that imperfectly informed consumers use simple signals to identify the characteristics of wine, for example, the geographical denomination. The reputation of a denomination will thus be an important guide for consumers when assessing individual wines. Design/methodology/approach: The price–quality relationship is studied in a fairly homogenous geographical area where a large number of wine types is present. This is done by using a simple ordinary least squares (OLS) analysis on a database of more than 2,000 different red wines produced in a period of just four years in only one Italian region. Findings: The results show that some denominations have a lower average quality score and that price differentials between denominations are linked to differences in average quality, although consumers tend to exaggerate the quality gap between prestigious denominations and others. Research limitations/implications: A producer in a prestigious denomination benefits from a substantial mark-up relative to an equally good producer from another denomination. Furthermore, denomination neutral wines have a stronger price–quality relationship than denomination specific wines. Practical implications: Consumers should not be misled by what is on the bottle, but should rather consult wine guides to become better informed before purchasing. Social implications: The fact that quality and sensory characteristics often play a minor role in determining the price of a commodity is not immediately compatible with the postulate that consumers are well informed. Originality/value: Unlike previous work, this paper investigates a limited area (Tuscany) and only red wines, thus making it possible implicitly to control for many other factors which might otherwise confound the price–quality relationship.
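To make the design concrete, the kind of hedonic price–quality regression the abstract describes can be sketched in a few lines of ordinary least squares. The sketch below is illustrative only: the file name and column names (tuscan_reds.csv, price, guide_score, denomination) are invented, not the authors’ actual data or specification.

```python
# Hypothetical sketch of a hedonic price-quality regression of the kind described above.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

wines = pd.read_csv("tuscan_reds.csv")          # invented file: one row per wine
wines["log_price"] = np.log(wines["price"])     # prices are usually modelled in logs

# Price explained by a guide quality score plus a dummy (fixed effect) for each
# denomination; C() expands the categorical denomination column into dummies.
model = smf.ols("log_price ~ guide_score + C(denomination)", data=wines).fit()
print(model.summary())                          # denomination coefficients = reputational mark-ups
```

Under a specification of this kind, the denomination coefficients are read as the reputational mark-up a wine earns over the base denomination after controlling for its quality score.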
Original language: English
Journal: International Journal of Wine Business Research
Volume: 34
Issue number: 2
Pages (from-to): 257-277
ISSN: 1751-1062
Publication status: Published - 5 May 2022
Bibliographical note: Publisher Copyright © 2021, Emerald Publishing Limited. | https://portal.findresearcher.sdu.dk/en/publications/the-cost-of-ignorance-reputational-mark-up-in-the-market-for-tusc
effect was obtained at longer measurement delays. Pratkanis, Greenwald, Leippe, and Baumgardner (1988) used this principle of “differential decay of impact” to specify the conditions under which a sleeper effect (delayed increase in the impact of a message) would obtain. Second, Haugtvedt and Wegener (1994) have an attitude strength model of the primacy effect; primacy effects will obtain when the first message produces a strong attitude. In two experiments, Haugtvedt and Wegener found that primacy effects occurred under conditions when the message was of high personal relevance (inducing message elaboration resulting in a strong attitude on the first message) and that recency effects occurred when message relevance was low.
Forewarning of persuasive intent
In this chapter, I have presented a number of ways to increase compliance and influence. This tactic and the next three tactics are designed to decrease the impact of an influence attempt. Forewarning of persuasive intent means providing the target of an influence attempt with a warning that what will happen next is designed to persuade. Such an appeal can be effective in increasing resistance to influence (but not always; for reviews see Cialdini & Petty, 1979; Sagarin & Wood, this volume; Wood & Quinn, 2003). For example, Milgram and Sabini (1978) asked subway riders in New York City for their seat and found that over 68% of the riders would yield their seat. However, if these subway riders overheard a conversation warning them that they were about to be asked to give up their seats, only 36.5% of the riders gave up their seat. There are two factors that limit the effectiveness of forewarning for increasing influence resistance. First, the forewarning should induce the target to prepare to counterargue the persuasive appeal; without such preparation, forewarning does not increase resistance. As such, forewarning is most effective in increasing resistance when the topic is involving, the target has time to marshal defenses and is motivated to counterargue a discrepant message. Second, a forewarning can result in attitudinal politics – moderating one’s position to a neutral stance or mid-point of a scale when faced with having to discuss the issue with another person (Cialdini, Levy, Herman, & Evenbeck, 1973).
Inoculation
Another tactic for preventing persuasion is inoculation - a target receives a brief, opposing message that can be easily refuted and thus immunizes against a subsequent attack. This technique was pioneered by McGuire (1964) in a series of research investigations. In these experiments, McGuire
created effective messages capable of changing attitudes about various cultural truisms (e.g., one should brush after every meal and get a routine chest x-ray). He then developed effective inoculation messages in which he taught possible responses (counterarguments) to these attack messages, with the result that the target of the communication could resist a latter, stronger influence attempt (see An & Pfau, 2004 for a recent application to political communications).
Stealing thunder
Another tactic for mitigating or reducing the impact of an opponent’s persuasive message is the technique of stealing thunder or revealing potentially damaging information before it can be stated by an opponent. The effectiveness of this tactic was demonstrated in two experiments by Williams,
Bourgeois, and Croyle (1993). In these experiments, mock jurors received trial transcripts in which negative information was presented by the opposing side about the defendant (Experiment 1) or a witness (Experiment 2). This information had strong, negative effects on the target. However, for some of the mock jurors the “thunder was stolen” by having the negative information presented by the defendant’s attorney or the witness himself (before it was given by the opposing side). In such cases, the negative effects of the information were mitigated (Experiment 1) and eliminated (Experiment 2; for a summary of stealing thunder research see Williams & Dolnik, 2001).
How to Detect and Prevent Malnutrition in Seniors
The older we get, the more important good nutrition is to our health. Unfortunately, many older adults are at risk of (or are already suffering from) inadequate nutrition. Thus, it is important to know what causes nutrition problems, what their symptoms are, and what steps you can take to prevent them.
Problems caused by malnutrition
Malnutrition in older adults may lead to a number of medical conditions including:
- Fatigue
- Depression
- Weak immune system, which increases the risk of infections
- Low red blood cell count (anemia)
- Muscle weakness, which can lead to falls and fractures
- Digestive, lung and heart problems
- Poor skin integrity
It is particularly important for older adults who are seriously ill, those who have dementia or have lost weight to have good nutrition. These older adults are more likely to be admitted to a hospital, and are also vulnerable to post-surgical complications and other problems related to poor nutrition.
How malnutrition happens
When you think of malnutrition, you think: not enough to eat, a diet lacking in nutrients, digestion problems that come with aging. But in fact, malnutrition is caused by a combination of physical, psychological, and social factors. The Mayo Clinic cites the following examples:
- Health problems. Older adults usually have health problems such as chronic illness, use of certain medications, trouble chewing due to dental problems, trouble swallowing or difficulty absorbing nutrients that may lead to loss of appetite or trouble eating. Being hospitalized may come with loss of appetite or other nutrition problems. A weakened sense of taste and smell also decreases appetite.
- Limited income and reduced social contact. Because they are retired, some older adults may have difficulty affording groceries, especially if they’re taking expensive medications. Those who eat alone, like widows/widowers for instance, may not enjoy meals, causing them to lose interest in cooking and eating.
- Depression. Grief, loneliness, failing health, lack of mobility and other factors may contribute to depression – causing loss of appetite among older adults.
- Alcoholism. Alcoholism is a leading contributor to malnutrition – decreasing appetite and vital nutrients and frequently serving as a substitute for meals.
- Restricted diets. Older adults often have dietary restrictions, including limits on salt, fat, protein and sugar. Although such diets can help manage many medical conditions, they can also be bland and unappealing. | http://ushealth.com/2009/09/25/how-to-detect-prevent-malnutrition-in-seniors/ |
Canadian author Geoff Ryman has won 15 awards for his stories and ten books, many of which are science fiction. His novel Air (2005) won a John W. Campbell Memorial Award, the Arthur C. Clarke Award, the James Tiptree, Jr. Award, the Canadian Sunburst Award and the British Science Fiction Association Award. It was also listed in The Guardian’s series ‘1000 Novels You Must Read’. In 2012 his novelette ‘What We Found’ won the Nebula Award in its category and his volume of short stories Paradise Tales won the Canadian Sunburst Award. Much of his work is based on travels to Cambodia such as ‘The Unconquered Country’ (1986), winner of the World Fantasy Award and British Science Fiction Association Award. His novel The King’s Last Song (2006) was set both in the Angkor Wat era and the time after Pol Pot and the Khmer Rouge. His other mainstream fiction includes Was (1992), a novel about the American West viewed through the history of The Wizard of Oz. His hypertext web novel 253: a novel for the Internet in Seven Cars and a Crash, in which 253 people sit on a London tube and are each described in 253 words, won the Philip K. Dick Memorial Award for best novel not published in hardback. The published Print Remix of the same novel (1998) is his most popular book. In 2011, Geoff Ryman won the Faculty Students’ Teaching Award for the School of Arts, History and Culture.
Alma Katsu
Alma Katsu is the author of seven novels. She has been twice nominated for both the Stoker and Locus Awards for best horror novel, nominated numerous times for Goodreads’ Reader Choice awards, and appeared on the Observer, Apple Books, Amazon, Barnes & Noble and other outlets’ best book lists. The Hunger, which the NYT called “supernatural suspense at its finest”, was named one of NPR’s 100 favorite horror stories, and won Spain’s Kelvin 505 award for best scifi/fantasy/horror novel and the Western Heritage Award for best novel. Red Widow, her first espionage novel, was a NYT Editor’s Choice pick, has been nominated for the International Thriller Writers’ award for best novel, and is currently in development for a TV series. Her debut novel, The Taker, was one of Booklist’s top ten debut novels of 2011.
Anjali Sachdeva
Anjali Sachdeva’s short story collection, All the Names They Used for God, was named a Best Book of 2018 by NPR, Refinery 29, and BookRiot, longlisted for the Story Prize, and chosen as the 2018 Fiction Book of the Year by the Reading Women podcast. The New York Times Book Review called the collection “strange and wonderful,” and Roxane Gay called it, “One of the best collections I’ve ever read. Every single story is a stand out.” Sachdeva is a graduate of the Iowa Writers’ Workshop and has taught writing at the University of Iowa, Augustana College, and Carnegie Mellon University. She also worked for six years at the Creative Nonfiction Foundation, where she was Director of Educational Programs. She currently teaches at the University of Pittsburgh and in the MFA program at Randolph College. She has hiked through the backcountry of Canada, Iceland, Kenya, Mexico, and the United States, and spent much of her childhood reading fantasy novels and waiting to be whisked away to an alternate universe. Instead, she lives in Pittsburgh, which is pretty wonderful as far as places in this universe go.
Sam. J. Miller
Sam J. Miller is the Nebula-Award-winning author of The Art of Starving (an NPR best of the year) and Blackfish City (a best book of the year for Vulture, The Washington Post, Barnes & Noble, and more – and a “Must Read” in Entertainment Weekly and O: The Oprah Winfrey Magazine). A recipient of the Shirley Jackson Award and the soon-to-be-renamed John W. Campbell Award, and a graduate of the Clarion Writers’ Workshop, Sam’s short stories have been nominated for the World Fantasy, Theodore Sturgeon, and Locus Awards, and reprinted in dozens of anthologies. He lives in New York City, and at samjmiller.com.
Christopher Rowe
Christopher Rowe is the author of the acclaimed story collection, Telling the Map (Small Beer Press), as well as a middle grade series, the Supernormal Sleuthing Service, co-written with his wife, author Gwenda Bond. He has also published a couple of dozen stories, and been a finalist for the Hugo, Nebula, World Fantasy and Theodore Sturgeon Awards. His work has been frequently reprinted, translated into a half-dozen languages around the world, and praised by the New York Times Book Review. His story “Another World For Map is Faith” made the long list in the 2007 Best American Short Stories volume, and his early fiction was collected in a chapbook, Bittersweet Creek and Other Stories, also by Small Beer Press. His most recent stories are “Jack of Coins” and “Knowledgeable Creatures” at Tor.com, selected by editor Ellen Datlow, and “Nowhere Fast” in Candlewick’s young adult anthology, Steampunk!, edited by Kelly Link and Gavin Grant.
He has an MFA from the Bluegrass Writers Workshop and lives in a hundred-year-old house in Lexington, Kentucky, with his wife and their many pets, among them Izzy the Dog and Puck the Dog.
Gwenda Bond
Gwenda Bond is the New York Times bestselling author of many novels. Among others, they include the Lois Lane and Cirque American trilogies. She wrote the first official Stranger Things novel, Suspicious Minds. She and her husband author Christopher Rowe co-write a middle grade series, the Supernormal Sleuthing Service. She also created Dead Air, a serialized mystery and scripted podcast written with Carrie Ryan and Rachel Caine, and is a co-host of Cult Faves, a podcast about the weird world of cults and extreme belief.
Her nonfiction writing has appeared in Publishers Weekly, Locus Magazine, Salon, the Los Angeles Times, and many other publications. She has an MFA in writing from the Vermont College of Fine Arts. She lives in a hundred-year-old house in Lexington, Kentucky, with her husband and their unruly pets. There are rumors she escaped from a screwball comedy, and she might have a journalism degree because of her childhood love of Lois Lane. She writes a monthlyish letter you can sign up for at www.tinyletter.com/gwenda.
Shelley Streeby (Faculty Director)
Shelley Streeby, Professor of Literature and Ethnic Studies at UC San Diego, has been the Faculty Director of the Clarion Workshop since 2010. Professor Streeby received her Ph.D. in English from the University of California, Berkeley and her B.A. in English from Harvard University. Her books include Imagining the Future of Climate Change: World-Making through Science Fiction and Activism (UC Press), Radical Sensations: World Movements, Violence, and Visual Culture (Duke University Press), and American Sensations: Class, Empire, and the Production of Popular Culture (University of California Press, American Crossroads Series, 2002), which received the American Studies Association’s 2003 Lora Romero First Book Publication Prize. She is also co-editor (with Jesse Alemán) of Empire and the Literature of Sensation: An Anthology of Nineteenth-Century Popular Fiction (Rutgers University Press, Multi-Ethnic Literatures of the Americas Series, 2007). | http://clarion.ucsd.edu/2022-faculty/ |
There had been speculation that many ethnic groups used to inhabit the islands during the Jomon period, roughly ten thousand years B.C. to about 300 B.C. (ending earlier depending on the region). What has been found confirms a much older hypothesis that there was basically one ethnic group, the Jomon, who resided on all the islands. The skeletal remains that date back before the Yayoi period, when the Japanese speakers began their expansion, have been of one dominant population group whether in Western Japan, the Tohoku or Hokkaido. The one cultural trait they shared was the type of pottery they produced marked by distinctive rope-like patterns called jomon doki that gave them their name.
In the latest research it is found that the transformation of the Jomon population occurred gradually as the Yayoi population identified with the Japanese speakers spread from northern Kyushu eastwards. Intermixing of the populations was widespread as intermediate skeletal types emerge in areas where the two populations came in contact with each other. However, over time, where these contacts occurred the Jomon population gradually changed to become more Yayoi in character indicating that the Yayoi began to outnumber the local Jomon population. Thus, the intermixing was heavily weighted towards the Yayoi population during the historical period particularly in western Japan since the Jomon people there were not as numerous.
One exception is in the southern Kyushu areas dominated by the Kumaso during ancient times. To this day the area around present-day Kagoshima has a population that is relatively unchanged since the Jomon, with characteristics that are more closely related to the Ainu and Okinawans than to modern Japanese. This underlines the peril of making generalizations about the Japanese population as a whole, since there were local population histories that seem to defy them. Pockets where the Jomon were the majority remained into the historical period.
The modern Japanese population is thus mainly Yayoi in character with visible traces of Jomon mostly among recently absorbed populations such as the Ainu, and to a lesser extent, the people of the Tohoku. In the early years of Yayoi cultural diffusion there was not as sharp a division between the Yayoi and Jomon populations. This condition would only be possible if there was a cultural and economic change not a sudden ethnic one, and contradicts the notion that the incipient Yayoi culture was carried over by an immigrant group.
The physical evidence seems to point to cultural diffusion rather than an invasion, otherwise, there would be a replacement of one population over the other rather than a gradual change. Through time, as more East Asian immigrants were either absorbed or took over, these ethnic differences became more pronounced, and by the time of the Yamato kings ethnic conflict became endemic particularly with the Jomon in western Japan, and the Tohoku (see The Treatment of Natives in the Nihon shoki: the case of western Japan, above).
By the end of the Yayoi period and the beginning of the Kofun period in the third century AD, the divide between the Yayoi and the Jomon had widened, particularly between the Kinai and the regions from eastern Japan northwards. The Yayoi population, represented by the formative Japanese state known as Yamato, had become mainly modern East Asian in appearance and ethnicity, similar to Chinese, Koreans and other northeast Asians, while the Jomon population stayed the same, conserving much older ethnic traits. Why this happened is still inconclusive, but it points to a much larger existing Jomon population in eastern and northern Japan, one that only increased in northern Japan through migration. In the east, the Kofun states of the Kanto had people midway between the Yayoi and Jomon populations. This shows that the settled population here, based on agriculture, was most likely a Jomon majority that accepted Yayoi settlers in its midst and over time absorbed them. This was also the case in the agriculturally based Tohoku Kofun states. In western Japan the Yayoi population became dominant because the Jomon had become exhausted both culturally and in numbers.
This trend in the Tohoku was most likely accentuated by a migration from the north that took place at about the same time that the Kofun culture began to penetrate the Tohoku from the south (Kumagai 2004). This migration of Jomon peoples occurred, according to Kumagai, between the third and fifth centuries AD, and its origin was most likely southern Hokkaido. In the midst of the gradual change in the population that was taking place in the Kanto and Tohoku, a fresh injection of Jomon peoples spread the Epi-Jomon culture into the Tohoku. In the complex picture that emerges, then, we have the gradual waning of the Final Jomon culture due to the growing influence of rice cultivation and the Kofun culture carried by Japanese speakers--clans headed by the great families (gozoku) in the fourth and fifth centuries--and then the waning of the power of these incipient states in the face of a fresh migration of Jomon peoples from the north, and the Epi-Jomon culture (also known as Latter Jomon) that spread back into the areas that had previously seen the retreat of the Jomon culture.
This latter culture was different from the earlier form, but its spread was due to the resurgence of the Jomon people. The Epi-Jomon culture in the Tohoku and Hokkaido is the culture that archaeologists have identified with what historians have called the Emishi, Ebisu and Ezo, and it is ancestral to the Satsumon culture. The Epi-Jomon saw the spread of a type of pottery that was unique to southern Hokkaido and the Tohoku region. These people also made hunting implements that indicate a diet consisting of salmon, venison and berries. The Hokkaido region relied more on salmon runs while the Tohoku region relied on hunting deer.
This culture differs from the earlier Final Jomon culture. The Final Jomon was, by the material evidence, quite rich and saw the spread of lacquer ware as well as the creation of unusual clay figurines for religious purposes. The Final Jomon culture was centered on the area of present-day Aomori, and is also known as "Kamegaoka," named for the region where many of these artifacts were first found. This culture dates to a time right before the advent of the Yayoi culture, while the Epi-Jomon is thought to have occurred during the Tohoku Yayoi. The Epi-Jomon culture is unusual in that, instead of retreating north in the face of the Yayoi culture, it moves back towards the south during the Kofun period.
Kidder, Jr., J. Edward, "The earliest societies in Japan", in Brown, Delmer M., ed., The Cambridge History of Japan, volume1: Ancient Japan. Cambridge: Cambridge University Press, 1993.
Kumagai, Kimio. Emishi no Chi to Kodai Kokka. Tokyo: Yamakawa, 2004. | http://emishi-ezo.net/jomon.html |
This is the first in a short series on black holes and we start with the falling in.
To set up the scene we need two characters (or observers, as is the language of theoretical physics). I’ve always fantasised about interstellar travel and this site has two authors, so I’ll indulge myself. Joe and Mekhi are orbiting a black hole at a safe distance in their spaceship. By safe distance I mean their spaceship is at a stable radius, far enough away from the center of the black hole that it does not spiral inwards as a result of the hole’s immense gravitational pull. Mekhi has decided she wants to die in the most spectacular way possible, being engulfed by a black hole, and Joe, for science’s sake, is very keen to see what happens to Mekhi as she falls in. So, she puts her spacesuit on to stop herself exploding prematurely, waves goodbye to her best friend and exits the spaceship.
Right, let’s flash back to special relativity for a moment (get it?). Remember one of the consequences of moving extremely fast? Time dilation. (You can read more about it here). This means an observer who watches another observer travel at a high proportion of the speed of light measures the time between their events to be longer than the travelling observer does. For example, Mekhi is about to run a race at a high proportion of light speed and she has a stopwatch in her pocket. She starts the stopwatch when she starts running and stops it when she finishes the lap. Joe does the same: he starts the stopwatch when Mekhi starts running and stops it when she finishes the lap, all the while sitting still watching. The time measured on Mekhi’s stopwatch will be less than the time measured on Joe’s. Remarkable but true, this is time dilation – a consequence of moving very fast. Now let’s get back to general relativity, which is the big theory in the case of black holes. Here’s the relevance – time dilation is also a consequence of gravitational fields. Time runs slower in stronger gravitational fields. If you’re close to a large gravitational mass, the field will be correspondingly stronger and your clock will run slower than that of somebody who is not. For example, if you spent all your life at the top of a skyscraper, your clock would run slightly (almost negligibly, but slightly) faster than that of someone on the ground (closer to the Earth’s center of mass, i.e. a stronger gravitational field) – general relativity decides your payoff for enjoying such a wonderful view would be having a slightly shorter life.
As Mekhi falls she gets closer and closer to the black hole, an object with a colossal mass and, as such, a colossal gravitational field. Think of each tick on Mekhi’s clock (which she carries with her) as an event. When Mekhi and Joe were together in the spaceship, the time between Mekhi’s ticks would coincide with the time between Joe’s ticks. But as she gets closer and closer to the black hole, the time between Mekhi’s ticks/events, as measured by Joe, gets further and further apart – time dilation. Now there is this special radius that every black hole has, called the event horizon. It is the radius (or distance) from the center of the black hole at which the gravitational pull is so strong that not even light can escape. The event horizon is determined purely by the mass of the black hole and is given by the equation r = 2GM/c^2, where r is the radius (distance), G is the gravitational constant (a fundamental universal constant), c is the speed of light and M is the mass of the black hole. So remember, the ticks on Mekhi’s clock coincide with events in Mekhi’s experience, for example the blinking of her eyelids or the kicking of her feet as she realises what she has done. As she gets closer and closer to the black hole, the time Joe measures between these events gets longer and longer; it is as though Joe sees all Mekhi’s movements go into hyper slow motion. Now how is Joe receiving this information? He is seeing her, and how is he seeing her – through the transmission of photons which are carrying light from her towards him. Now here is the link with the event horizon – when Mekhi crosses this radius during her fall, the photons can no longer escape the gravitational pull and make it back to Joe. Joe cannot see the Mekhi who crosses the event horizon. Now, counterintuitively, it’s not that Mekhi disappears suddenly. The time as measured by Joe between her events just before she crossed the horizon becomes so very dilated that he essentially sees her frozen image just before she crossed the point of no return. The light waves stretch to lower and redder frequencies and the image of Mekhi slowly dims and fades, over and out.
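For readers who like plugging numbers in, here is a small back-of-the-envelope sketch (in Python, with rounded physical constants) of that formula, together with the standard factor sqrt(1 - r_s/r) by which a clock held static at radius r runs slow relative to a far-away clock. The static-clock factor is a simplification of Mekhi’s free-fall story, and the supermassive mass used below is only a rough Sagittarius A*-like figure.

```python
# Back-of-the-envelope numbers for the event horizon r_s = 2GM/c^2.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # one solar mass, kg

def schwarzschild_radius(mass_kg):
    return 2 * G * mass_kg / c**2

def static_time_dilation(r, r_s):
    """Proper time per unit far-away time for a clock held static at radius r > r_s."""
    return (1 - r_s / r) ** 0.5

r_s_sun = schwarzschild_radius(M_SUN)             # ~2.95 km for one solar mass
r_s_big = schwarzschild_radius(4.3e6 * M_SUN)     # ~1.3e10 m for a ~4.3 million solar mass hole
print(r_s_sun, r_s_big)
print(static_time_dilation(10 * r_s_sun, r_s_sun))  # ~0.95: hovering at 10 r_s, clocks tick ~5% slow
```

The numbers make the point nicely: squeeze a solar mass inside about three kilometres and you have a black hole, and the closer a hovering clock sits to r_s, the closer that dilation factor gets to zero.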
Now what about Mekhi’s experience of this whole thing, after all she’s the one doing the travelling. Well, Mekhi sails through the event horizon without experiencing anything different at all; she probably couldn’t even tell it was happening. In fact, as she crosses she can still look back and see the spaceship and the region outside the black hole horizon as normal, and she can probably also just about make out Joe’s horrified face through the spaceship window. Now if the black hole is a small one, tidal forces can come into play here and the force on her feet (which are closer to the center and hence experience a stronger gravitational pull) may be significantly stronger than the force on her head, and thus she would experience spaghettification – one of my favourite words in theoretical physics – where she gets stretched so much that she becomes elongated like a piece of spaghetti until eventually she gets torn in two. If the hole is small and this effect is quite large, she will also see a lot of warping of the light around her as she undergoes this gruesome process. However, if the black hole is a larger one these tidal forces will be much weaker and she will go on her merry way, sailing down to the center of the black hole without noticing much difference. Moral of the story: if you’re going to go off gallivanting in a black hole, choose a large one! So on she goes down to the center, where most likely she will be torn apart before she reaches the singularity. The singularity is the point at the center of the black hole where spacetime has been warped so immensely that density and gravity have become infinite and the laws of physics as we know them have disintegrated. If Mekhi remarkably reaches the singularity without being spaghettified, her only reward will be being crushed to an infinite density… probably. The truth is we don’t have a clue what actually happens at the singularity because our laws of physics completely break down. There are theories in modern day theoretical physics which suggest the gravitational field does increase as you get closer to the black hole’s core but then eventually reduces, as if you’re coming out the other end of the black hole into what could be a new universe; this hypothetical exit region has imaginatively been named a white hole. Though it’s very likely a meek human being would not survive the crushing forces of gravity before this point occurred.
That is incredible. Is there somewhere I can read a bit more about this project?
That email address doesn’t seem to work, Mekhi. It just bounces back, saying it’s undeliverable.
Hi Rhodri, sorry we are having total email nightmare with our RTU mappings! Can you give [email protected] a go and let me know if you have any luck with that?
Thank you – sorry for the hassle!
@RhEvans | I wrote about the EHT in 2013 [ http://wp.me/p1GWFb-jN ]. How is the project progressing, in 2017? I’m curious.
Oh, I’ll check out your post. How is it progressing? I’m just joining the project, and about to join the lecturing staff at the University of Namibia, with the EHT being my main research responsibility. So, I hope that it’s about to progress very well. I can probably update you better in May, when UNAM plans to host an EHT workshop.
It must be very exciting to be on a project like this. Congratulations! I wish you and your team all the best.
I’m glad you enjoyed the article, thank you. And thanks for filling me in on those details. I will keep an eye on your blog for updates. Best of luck.
As you say, as you approach a black hole you will speed in orbital velocity to the extent of time dilation. If you somehow have enough fuel to increase your momentum to gain a larger orbit and return to normal space, you will arrive sometime in the future you would not have traveled to in normal space. Therefore can a black hole be used as a time machine, and is that trip far enough into the future to make the experience worthwhile? Or would the centrifugal force in the orbital spin be fatal?
Theoretically speaking this idea is possible yes – which is why time dilation was one of three theoretically consistent possibilities in my time travel post. To get rid of the complicated nature that comes with a black hole (where physics can get rather speculative) this same idea can be simulated with a very very heavy planet with a large gravitational pull. If two people sit in a spaceship orbiting the planet and one goes down to the surface (i.e a region where the gravitational field is much stronger) for what is a month as measured on his watch, when he returns back up to the spaceship he will find his companion has aged many more months or even years depending on the strength of the gravitational field! In a sense travelling into the future with respect to another observer… spooky stuff.
The reason this interested me was that some things today might be useful to send with speed into the future if the far future becomes available. People with incurable diseases freeze themselves cryogenically in the hope that the future may provide cures. Stores of seeds or DNA are now preserved in the hope that a future culture will find them useful. Scientific hopes and speculations are put aside today for lack of information, but if a time capsule could be managed to ship these hopes more rapidly into the far future, secure from the devastating loss through time, this stab into the darkness of time may have some kind of use for preserving knowledge.
The concept of access to the future even on a one way trip does have its fascinations. Way back in 1980 when I was back in New York a guy visited a new Apple users club when computers were just starting and offered to sell members Apple stock at a very low price. Nobody knew what would happen to the new computer market so nobody bought any but I often thought of how I might have become a millionaire if I had spent a couple of hundred dollars on the new stock. Just suppose a good business man made some chancy investments and then took a quick trip near a black hole to return quickly a couple of hundred years later to become astoundingly rich as his investment matured.
Mekhi, you obviously absorbed the film Interstellar’s plot, ‘hook, line and sinker’ 🙂 Shame about the lead actor’s awful mumbling; spoiled it for my wife.
Thanks for an excellent summation.
Thank you very much indeed Richard! Yes mumbling is a rather annoying quality especially in cinema.
I am OBSESSED with space travel and the universe and could read/talk/watch tv about this ALL DAY. It’s so interesting to think about these theories and concepts, I will be looking forward to reading the next post of this!
Very glad it enthuses you as much as me! Thanks for your support – I hope to get the next one up as soon as possible. Keep up the obsession – it’s a good one!
The fact that you were so humourous throughout the post made it even more enjoyable for me! I doubt I’ll forget all this in a hurry.
Why did YOU have to go, Mekhi? No, I think you should both stay safe inside that orbiting space-ship.
I love that word ‘spaghettification’. Was that a Hawking “invention”?
Could not agree more bookheathen! I think we have come to the mutual decision that we will only mess with singularities from a theoretical standpoint. We shall leave the experiments to others!
So neither of you will be going into that chamber with the radioactive isotope and the half-dead cat?
Great word isn’t it?! I believe it is, from Hawking’s phrase when he describes an astronaut being stretched like spaghetti. Another term is the ‘noodle effect’ which I like too but not quite as much.
As an artist I tend to think visually and the standard way to think of the effect of gravity visually is to use the rubber sheet model where a mass attraction is indicated, as in your diagram, as a weight resting on the rubber sheet and causing a depression in the sheet so that anything moving on that surface moves in curves restricted to the surface of the sheet. No doubt this conveys the proper feeling of what happens by reducing our three dimensional space to two dimensions which can be graphically represented. But what bugs me is that space as we know it is three dimensional and my mind refuses to think four dimensionally. That dent caused by gravity has a direction vertical to the surface of that sheet and I am curious, thinking four dimensionally, what is that vertical direction in actuality? Is it time or is it another direction altogether? Does a black hole point in a time direction and is the depth of the dent towards the future or the past or is it another dimension altogether?
Yes I have always found that visualisation extremely helpful when thinking about spacetime and gravity and have reached the same troubles you have. Don’t beat yourself up about your mind refusing to think four dimensionally, the human mind resides in the three-dimensional world and to that world it is trapped. These thoughts however are excellent and I agree in thinking that time is the key here. In the mathematics of black holes the four coordinates are the three of space and the fourth of time, now the maths gets very weird once we go past the event horizon, in fact in some coordinate set-ups the spatial coordinates and the time coordinates actually swap roles. The mathematics tells us that the singularity at the centre of the black hole is actually not a place but a time, all particles will end up reaching it because they are moving ever forward to this point ‘the singularity’ in the future – so perhaps the depth of the dent can represent the direction towards the future! I confess I need to do some more reading about this – thank you for getting me thinking!
When I was a teenager back in 1940 I read a book by Sir James Jeans called “The Mysterious Universe” about much of Einstein’s theories and I was much frustrated by my high school science courses at Stuyvesant in Manhattan which mentioned none of the fascinating ideas. My best references were through some of the science fiction stories in Astounding SF by guys like Heinlein and Asimov. I began to assemble some sort of model from the birth of the universe towards the future as a kind of four dimensional onion where each skin layer was an instant in time and as one moved in normal time one more or less moved on one layer but as one sped up towards the speed of light a kind of centrifugal force threw one towards the future. Mass had a kind of fourth dimensional inertia so it slowed time. On that basis black holes were deep pockets into the past that moved very slowly into the future. The arrow of time is a kind of fourth dimensional wind that blows things into the future. Whether any of this has any relation to reality I cannot say. On the 4D block viewpoint our movement into the future is an illusion so my construction probably is way off course.
On the 4D block viewpoint events can still be categorised as past & future given the reference point of the observer so the construction is not necessarily way off at all – in fact the block viewpoint of time may not even be correct as it is only the view of the universe as given by Special Relativity! Your model is very interesting – I would love to see a diagram of this four-dimensional onion idea to help me visualise it (though depicting a four-dimensional idea in two-dimensions isn’t an easy task I know!) Could you explain to me a little more the principles behind your idea?
and maybe……some big enough mass……someday would fall in the gravitational web of the black hole……and while the black hole devours the delicious matter……so of mekhi particles will be emitted…….and maybe form a mekhi planet in thousands of years…….
if someone desires to be a planet someday……this could be tried…..
and pardon my ignorance and courage……i obviously do not know what i am talkking about…….
you post is excellent and takes me back 50 years to my physics degree at Uni!! | https://rationalisingtheuniverse.org/2016/11/24/black-holes-1-falling-in/ |
At the heart of game theory, one of the foundations of modern economics, lies the foundational concept of “Nash equilibrium,” named after the late mathematician and Nobel laureate John Nash. Nash showed that for any competitive situation or “game,” there exists a set of strategies such that no player can further improve his or her winnings by unilaterally deviating from it. His work continues to appear in important new research today, as described in Erica Klarreich’s recent article “In Game Theory, No Clear Path to Equilibrium” and Emily Singer’s 2015 article “Game Theory Calls Cooperation Into Question.” Anyone interested in gaining simple insights about the world — the very purpose of this column — will want to get acquainted with the fundamental principles of game theory and Nash equilibrium. This month we explore these concepts by playing a variant of an ancient game called Morra, fighting over cake, and reexamining the role of cooperation in natural selection.
In Odds and Evens one of the players is designated Odd, and the other Even. In our variant both players may choose to show either a single outstretched index finger, or the entire palm with a folded thumb, thus showing four fingers. The sum of the number of fingers shown by both players decides who wins and the number of points the winner gets. Thus, if the first player shows one finger and the other shows four, the sum is five, so Odd wins and gets five points, and so on.
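Since this is a two-by-two zero-sum game, its mixed-strategy Nash equilibrium can be worked out directly. The sketch below (in Python, not part of the original column) uses the standard closed-form solution for 2x2 zero-sum games under the payoff reading just described: an odd total gives Odd that many points, an even total gives Even that many points.

```python
# Payoff matrix *to Odd*: rows = Odd shows (1, 4) fingers, columns = Even shows (1, 4).
a, b = -2, 5      # Odd shows 1: sum 2 (Even wins 2), sum 5 (Odd wins 5)
c, d = 5, -8      # Odd shows 4: sum 5 (Odd wins 5),  sum 8 (Even wins 8)

# Closed form for a 2x2 zero-sum game with no saddle point
# (true here: maximin = -2, minimax = 5, so both players must mix).
D = a - b - c + d                   # = -20
p_odd_shows_one = (d - c) / D       # 0.65
q_even_shows_one = (d - b) / D      # 0.65
value_to_odd = (a * d - b * c) / D  # +0.45 points per round

print(p_odd_shows_one, q_even_shows_one, value_to_odd)
```

Under this reading, both players should show a single finger about 65 percent of the time, and the game is worth roughly 0.45 points per round to Odd, so the game is not a fair one.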
Odds and Evens is a competitive, zero-sum game, beloved by the kinds of people who see the world as divided into winners and losers. But many real-world scenarios invite cooperation and have the possibility of win-win outcomes. Next, let’s explore a win-win when it comes to sharing cake.
D) As an aside, the mother’s behavior in this example is interesting. How would you quantify the value she places on various factors like fostering trust, reward and punishment and her own fondness for cake?
As readers familiar with game theory will recognize, this is just a version of one of the staples of elementary game theory, the famous “Prisoner’s Dilemma,” albeit one that is framed in the language of reward and cake rather than punishment and jail sentences. The answer to Problem 1 and part A of Problem 2 is the Nash equilibrium, and the answer to part B of Problem 2 is a “Pareto optimal” solution that also maximizes equality and the common good — it results in the twins’ consuming the most cake. Unlike the Nash equilibrium, which only ensures that the other player cannot do anything to decrease your payoff, a Pareto optimal solution is one where neither player can improve his or her situation without the other player’s getting a worse deal. Pareto optimality is named after the Italian engineer and economist Vilfredo Pareto, considered one of the founders of modern mathematical economics. Pareto was also responsible for the well-known “Pareto principle” (or 80/20 rule) which states that roughly 80 percent of effects come from 20 percent of causes. It is important to note that although Pareto optimality is a minimal notion of efficiency and cooperation, it does not guarantee equality or the maximization of the common good: In our example above, the two situations in which just one twin rats out the other are also, technically, Pareto-optimal.
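To see the two notions side by side, here is a small sketch that checks every outcome of a 2x2 game for the Nash and Pareto properties. The cake payoffs are hypothetical stand-ins rather than the column’s actual numbers: “C” means staying quiet (cooperating with your twin) and “D” means ratting the other out.

```python
# Hypothetical payoffs in pieces of cake for (twin 1, twin 2).
payoffs = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}
actions = ["C", "D"]

def is_nash(a1, a2):
    """Neither twin can get more cake by unilaterally switching actions."""
    u1, u2 = payoffs[(a1, a2)]
    return (all(payoffs[(alt, a2)][0] <= u1 for alt in actions) and
            all(payoffs[(a1, alt)][1] <= u2 for alt in actions))

def is_pareto_optimal(a1, a2):
    """No other outcome gives one twin more cake without giving the other less."""
    u1, u2 = payoffs[(a1, a2)]
    return not any(v1 >= u1 and v2 >= u2 and (v1 > u1 or v2 > u2)
                   for v1, v2 in payoffs.values())

for a1 in actions:
    for a2 in actions:
        print((a1, a2), payoffs[(a1, a2)],
              "Nash" if is_nash(a1, a2) else "",
              "Pareto optimal" if is_pareto_optimal(a1, a2) else "")
```

With these stand-in numbers, mutual defection is the only Nash equilibrium, while mutual cooperation and the two one-sided betrayals are the Pareto-optimal outcomes, exactly the pattern described above.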
Parts A and B of Problem 2 above are just basic game theory, but part C tells us that if we know the history of the payoffs that players actually received, we can determine how much cooperation or betrayal actually took place. Can we apply this to the question of whether genes are selfish or cooperative?
As I described in the solution of “Are Genes Selfish or Cooperative?,” the Hardy-Weinberg principle predicts that if there are two genes A and a (“alleles”) that are vying for the same sites in a genome and their frequency is p and q respectively, then, in the absence of a change in natural selection, the population stabilizes at the ratios p² for AA, 2pq for Aa and q² for aa (see the original column for a more detailed explanation of genes and alleles). The fact that this equilibrium exists implies that this ratio is optimal and has the highest selection value or fitness. This happens under conditions in which both genes have an equal chance (50 percent) of going into the germ cells, as is true for the vast majority of genes — this is like the twins’ both getting half of the cake.
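In code, the Hardy-Weinberg bookkeeping is a one-liner; a minimal sketch:

```python
# Hardy-Weinberg genotype frequencies for allele frequencies p (of A) and q = 1 - p (of a).
def hardy_weinberg(p):
    q = 1.0 - p
    return {"AA": p**2, "Aa": 2 * p * q, "aa": q**2}

print(hardy_weinberg(0.6))   # roughly {'AA': 0.36, 'Aa': 0.48, 'aa': 0.16}; the three always sum to 1
```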
One of our readers, Lee Altenberg, mentioned that a gene can cheat using “meiotic drive,” or “segregation distortion” — a rare phenomenon that could happen in an Aa individual, where one of the alleles, say A, finds a way to get a higher proportion than 50 percent of A’s into the germ cells, and therefore, its offspring. This is analogous to the case where just one twin rats out the other and therefore gets a larger piece of cake. But natural selection does not take this lying down. Unlike the mom, evolution tries to optimize the fitness of the organism, which is analogous to maximizing the total amount of cake eaten by the twins. The new allele ratio will render the population slightly less fit, and the mechanisms that maintain the ratio in nature will try to restore it. Let’s put some numbers on this.
Imagine a pair of alleles A and a that exist in equilibrium at a ratio of 0.6 to 0.4 under normal conditions, in a species that lives for a year and reproduces once a year. The allele A is dominant, so both AA and Aa individuals have similar physical characteristics. A constant allele ratio is generally maintained in the long run by “push-pull” mechanisms in nature. There may be some environmental factors that favor individuals carrying the A allele (AA’s and Aa’s) and would if unchecked, increase its proportion, whereas other factors would tend to favor the a allele and resist A’s increase. For simplicity, let us assume that such factors occur serially. Assume that under normal circumstances, without any segregation distortion, you have three years during which the environment is such that the A allele is favored. Both AA and Aa individuals have a certain survival/reproductive advantage over aa individuals, and this causes the A allele to increase its proportion by 10 percent in the first year, rising to 0.66. The same degree of advantage is present in the second and third years, allowing the proportion of the A allele to rise further. However, in the fourth year the conditions change and the allele ratio falls back again to the equilibrium value. This happens because aa individuals are favored in the fourth year, and extra copies of the a allele survive and find their way to the next generation. The advantage to aa individuals in the fourth year is proportional to the square of the difference in their numbers from the equilibrium value of 0.16. As an example, if the proportion of aa individuals is 0.12 at the start of the fourth year, the advantage they possess will be four times what they would have had if their proportion had been 0.14. Thus the “force” pulling the gene ratio back to equilibrium increases up to a maximum, the more the ratio deviates from it.
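Here is a rough sketch of the bookkeeping in this hypothetical scenario. Only the quantities stated above are modeled; the restoring “force” in the fourth year is specified only up to proportionality, so the constant k below is arbitrary.

```python
def aa_fraction(p_A):
    """Hardy-Weinberg frequency of aa individuals when the A allele has frequency p_A."""
    return (1.0 - p_A) ** 2

EQUILIBRIUM_AA = aa_fraction(0.6)        # 0.16

p_A = 0.6
for year in (1, 2, 3):                   # years in which A-carriers are favored
    p_A *= 1.10                          # +10% per year: 0.66, 0.726, 0.7986
    print(f"year {year}: p(A) = {p_A:.4f}, aa fraction = {aa_fraction(p_A):.4f}")

# Year 4: aa individuals are favored, with an advantage that scales with the
# *square* of how far the aa fraction has been pushed below its equilibrium value.
k = 1.0                                  # arbitrary proportionality constant
deviation = EQUILIBRIUM_AA - aa_fraction(p_A)
print(f"year 4: deviation = {deviation:.4f}, restoring advantage = {k * deviation**2:.6f}")

# The example from the text: aa at 0.12 versus 0.14 gives an advantage ratio of
print((0.16 - 0.12) ** 2 / (0.16 - 0.14) ** 2)   # 4 (up to floating-point rounding)
```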
The moral of the above story is that evolution enforces cooperation among genes quite strongly. But what about cooperation among individuals? Can genes and culture enforce that? As Samuel Bowles and Herbert Gintis state in their 2011 book “A Cooperative Species: Human Reciprocity and Its Evolution”: “Why do humans, uniquely among animals, cooperate in large numbers to advance projects for the common good? Contrary to the conventional wisdom in biology and economics, this generous and civic-minded behavior is widespread and cannot be explained simply by far-sighted self-interest or a desire to help close genealogical kin.” The late anthropologist Robert Sussman found that even primates, some of the most aggressive mammals, spend less than 1 percent of their day fighting or otherwise competing. In his article “Why Humans and Other Primates Cooperate” in the September 2014 issue of Scientific American, the primatologist Frans de Waal, one of the deepest thinkers on this subject, theorizes that the parental genes that enforce caring and empathy (“mother love” genes), which exist because such emotions are required for the survival of children when they are young, are co-opted broadly to produce wider intraspecies cooperation.
The roots of cooperation can be endlessly debated, but some part of it is hinted at in the two game theory articles referred to above. Game theory emerged from the analysis of competitive games in which the natural posture adopted by both players is that of conflict and mistrust — where you do your damnedest to defeat the other player and not lose yourself. In such situations, the Nash equilibrium is important. But most situations in human life are not like that. Trust, Pareto optimal equilibria and win-win situations abound. If we could not trust most of the people around us to a large extent, we would not be able to live peacefully at all. In her article, Klarreich quotes the game theorist Roger Myerson, who described how a different concept called “correlated equilibrium” can give more positive societal outcomes than Nash equilibria, as stating, “If there is intelligent life on other planets, in a majority of them they would have discovered correlated equilibrium before Nash equilibrium.” And this brings me to a qualitative question for readers: Do you think that in applying game theory and its mathematical techniques to biology and human behavior, scientists have focused too much on competition rather than cooperation?
As I’ve mentioned before, I think that we apply mathematics far too glibly to very complicated subjects like biology, human psychology and sociology. Mathematics cannot even solve the problem of three or more gravitating bodies analytically. The real world is far messier than our simple models. We cannot expect to find easy solutions to problems that may have hundreds or more complexly interacting variables. Although we have made considerable progress, we have a long, long way to go, especially in modeling win-win scenarios. I’d love to hear your comments.
That’s all for now. Happy puzzling! | https://www.quantamagazine.org/how-to-triumph-and-cooperate-in-game-theory-and-evolution-20171109/ |
Training effectiveness is the degree to which trainees are able to learn and apply the knowledge and skills acquired in the training programme. It depends on the attitudes, interests, values and expectations of the trainees and on the training environment. A training programme is likely to be more effective when the trainees want to learn, are involved in their jobs, and have career strategies. The contents of a training programme and the ability and motivation of trainers also determine training effectiveness.
According to Hamblin
Evaluating training is any attempt to obtain information (feedback) on the effects of a training programme and to assess the value of the training in the light of that information.
According to Warr
Evaluation is the systematic collection and assessment of information for deciding how best to utilize available training resources in order to achieve organizational goals.
Need for Evaluation
1. To determine whether the specific training objectives are accomplished or not.
2. To ensure that any changes in trainee capability are due to the training programme and not due to any other conditions.
3. To identify which trainees benefit most or least from the programme.
4. To determine the financial benefits and costs of the training programme.
5. The credibility of training and development is greatly enhanced when it is proved that the organization has benefited tangibly from it.
6. Evaluation is critical not only for assessing the quality of training, but also for identifying the changes that should be made to future training plans to make them more effective and to achieve the goals of the organization.
7. To compare the costs and benefits of training versus non- training investments like work redesign etc.
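As a rough illustration of points 4 and 7, the sketch below computes a simple cost-benefit figure for a training programme; the numbers and the ROI formula chosen here are illustrative assumptions, not part of the original text.

```python
# Illustrative sketch: simple cost-benefit / ROI calculation for a training programme.
# All figures are hypothetical; a real evaluation would use measured outcomes.

def training_roi(total_cost, monetary_benefits):
    """Return the net benefit and ROI (%) of a training programme."""
    net_benefit = sum(monetary_benefits) - total_cost
    roi_percent = 100 * net_benefit / total_cost
    return net_benefit, roi_percent

costs = 40_000            # trainers, materials, trainee time
benefits = [25_000,       # productivity improvement
            15_000,       # reduced wastage / rework
            10_000]       # lower absenteeism and turnover

net, roi = training_roi(costs, benefits)
print(f"Net benefit: {net}, ROI: {roi:.0f}%")   # Net benefit: 10000, ROI: 25%
```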
Principles of Evaluation
Evaluation of the training programme must be based on the following principles:
• Training faculty must be clear about the goals and purposes of evaluation.
• Evaluation must be continuous.
• Evaluation must be specific.
• Evaluation must provide the means and focus for trainers to be able to appraise themselves, their practices and their products.
• Evaluation must be based on objective (quantitative) methods and standards.
• Realistic target dates must be set for each phase of the evaluation process.
• Evaluation must be cost effective.
Evaluation Criteria
Evaluation of training effectiveness is the process of obtaining information on the effects of a training programme and assessing the value of the training in the light of that information. Evaluation involves controlling and correcting the programme. The basis and mode of evaluation are determined when the training programme is designed. According to Hamblin, training effectiveness can be measured in terms of the following criteria:
• Reactions
A training programme can be evaluated in terms of the trainees’ reactions to the objectives, contents and methods of training. If the trainees consider the programme worthwhile and like it, the training can be considered effective.
• Learning
Learning measures assess the degree to which trainees have mastered the concepts, knowledge and skills of the training.
• Behaviour
Improvement in the job behavior of the trainees reflects the manner and extent to which the learning has been applied to the job.
• Results
The ultimate results in terms of productivity improvement, quality improvement, cost reduction, accident reduction, and reduction in labour turnover and absenteeism are the best criteria for evaluating training effectiveness.
Evaluation can be framed broadly in terms of types, levels and methods. However, it may not always be possible to employ a comprehensive evaluation system due to organizational constraints, e.g.
1. Lack of clear training policy
2. Inadequate infrastructure
3. Unwillingness of the management to change human resource policies
4. Performance appraisal system
5. Organizational processes on the basis of feedback
Methods of Evaluation
Several methods can be employed to collect data on the outcome of training. Some of these are:
1. The opinions and judgments of trainers, superiors and peers,
2. Asking the trainees to fill up evaluation forms,
3. Using a questionnaire to know the reactions of trainees,
4. Giving oral and written tests to trainees to ascertain how far they have learnt,
5. Arranging structured interviews with the trainees.
6. Comparing trainees’ performance on the job before and after training,
7. Studying profiles and career development charts of trainees.
8. Measuring levels of productivity, wastage, costs, absenteeism and employee turnover after training,
9. Trainees’ comments and reactions during the training period, and
10. Cost benefit analysis of the training programme. | http://www.simplynotes.in/training-and-development/7/ |
A consensus-based process to define standard national data elements for a Canadian emergency department information system.
Canadian hospitals gather few emergency department (ED) data, and most cannot track their case mix, care processes, utilization or outcomes. A standard national ED data set would enhance clinical care, quality improvement and research at a local, regional and national level. The Canadian Association of Emergency Physicians, the National Emergency Nurses Affiliation and l'Association des médecins d'urgence du Québec established a joint working group whose objective was to develop a standard national ED data set that meets the information needs of Canadian EDs. The working group reviewed data elements derived from Australia's Victorian Emergency Minimum Dataset, the US Data Elements for Emergency Department Systems document, the Ontario Hospital Emergency Department Working Group data set and the Canadian Institute for Health Information's National Ambulatory Care Reporting System data set. By consensus, the group defined each element as mandatory, preferred or optional, and modified data definitions to increase their relevance to the ED context. The working group identified 69 mandatory elements, 5 preferred elements and 29 optional elements representing demographic, process, clinical and utilization measures. The Canadian Emergency Department Information System data set is a feasible, relevant ED data set developed by emergency physicians and nurses and tailored to the needs of Canadian EDs. If widely adopted, it represents an important step toward a national ED information system that will enable regional, provincial and national comparisons and enhance clinical care, quality improvement and research applications in both rural and urban settings.
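As a purely hypothetical sketch (the element names, tiers and definitions below are illustrative placeholders, not the actual CEDIS elements), a tiered data set of this kind might be represented as follows:

```python
# Hypothetical sketch of a tiered ED data set definition.
# Element names and tiers are placeholders, not the actual CEDIS specification.

from dataclasses import dataclass

@dataclass
class DataElement:
    name: str
    tier: str          # "mandatory", "preferred", or "optional"
    definition: str

elements = [
    DataElement("patient_identifier", "mandatory", "Unique identifier assigned by the facility"),
    DataElement("triage_level", "mandatory", "Acuity level assigned at triage"),
    DataElement("ed_arrival_datetime", "mandatory", "Date and time the patient arrived in the ED"),
    DataElement("mode_of_arrival", "preferred", "How the patient arrived (e.g., ambulance, walk-in)"),
    DataElement("family_physician", "optional", "Identifier of the patient's family physician"),
]

mandatory = [e.name for e in elements if e.tier == "mandatory"]
print(mandatory)
```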
| |
(1) The percentages noted in the pie charts are calculated using the 12/31/16 NAV for each RMA direct investment.
(2) Various US Markets include Mid-Atlantic, Midwest, Southeast, Hawaii and the Rockies.
(3) Percentage calculated based on number of investments. | http://rmare.com/Portfolio-Analysis.php |
Boris Garlitsky and Dmytro Udovychenko examine style and contrasting character, among other pertinent elements, in this violin masterclass.
Produced by the Saline Royale Academy in October 2021 at Arc-et-Senans.
In this masterclass, Boris Garlitsky delves into the details of Bach’s third violin sonata. The knowledge he imparts is useful not only for this sonata, but for solo works by Bach in general. He demonstrates which notes must be stronger and which weaker in order to achieve the right phrasing for the music. Additionally, the professor discusses how to maintain consistency of style while still bringing out the different characters within the music. In the Fugue, which cycles through similar musical material, he encourages the student to stay present and keep the music fresh. Garlitsky challenges the performer not only to incorporate these fine details, but also to exaggerate them enough that the audience can comprehend them.
Consider that each time you play the Fugue, you travel. You travel to a different place, an unknown place, and you see a different thing, which is Fugue.
Boris Garlitsky
In 1982 he was the winner of the Premio Paganini in Italy.
"Boris Garlitsky is an extremely lively musician of high intelligence and flexibility, with a wonderfully round tone and solid reliable technique... Concert Master of the London Philharmonic Orchestra, Mr. Garlitsky measures up to every Concert Master of the world’s top orchestras, such as New York, Vienna, Berlin etc., and can play an outstanding role in all leading international orchestras.” These are the words of Kurt Masur, one of the greatest conductors of the 20th century, with whom Boris Garlitsky worked together throughout many years. And still, Mr. Masur’s words grasp but a part of Boris Garlitsky’s musical richness.
In 1982, Boris Garlitsky won the Italian Paganini Competition and began his career as a soloist. Since then, he has played, among others, with the London Philharmonic Orchestra, the Vienna Radio Orchestra, the Chamber Orchestra of Philadelphia as well as the Milan based Giuseppe Verdi Orchestra and the British Orchestra of the Age of Enlightenment. His interpretations of Shostakovich’s violin concerto with the Orchestra National de Lyon were praised in the press. “The intensity and irresistible force of persuasion brought to it by all the skill of Boris Garlitsky was worthy of the work’s first interpreter, David Oistrakh”, the Lyon Figaro commented. Mr. Garlitsky is an active participator in several international music festivals. He regularly takes part in the Pablo Casals Festival in France, Mostly Mozart in New York, the London Proms, the Schleswig-Holstein Music Festival and Gidon Kremer’s Chamber Music Festival at Lockenhaus in Austria. Also, Mr Garlitsky performs for the BBC, Radio France as well as a number of radio stations in Italy, Russia and the United States. He has recorded for RCA, Naxos, Chandos and Polymnie. “Boris Garlitsky was a worthy partner of Anne-Sophie Mutter in Bach’s double concerto, performed together with the London Philharmonic… Let us concentrate on the gigantic chaconne from the partita in d minor for violin solo: Mr. Garlitsky’s interpretation as such made this a concert of outstanding class. Highly differentiated and uniquely colourful in play, Mr. Garlitsky’s brilliant intellectual understanding of the piece and expressive characterisation of the individual variations reflected the authenticity and individual depth of the artist’s Bach interpretation” (Dr. Karl Georg Berg).
Garlitsky is an outstanding chamber musician and member of the Hermitage String Trio, praised right and left in critical reviews: “… undoubtedly one of finest of its type, with discipline and musicianship second to none”(www.classicalsource.com); “true brilliance! This ensemble will do much to put more string trio repertoire on the musical map” (Strad); “with virtuosic elegance and, above all, affection” (Hexham and District Music Society); “that gentle exaltation of chamber music which passes by the dramatic gestures of symphonic music but rather expresses intimate and the profound, which goes straight to the heart and transports you to a dream” (Nice Matin). Mr. Garlitsky’s repertoire is amazingly rich. Among his partners are Pinchas Zuckerman, Gidon Kremer, Marta Argerich, Anne-Sophie Mutter, Vadim Repin, Truls Mork, Maria-Joao Pires. Last but not least, Mr. Garlitsky is so popular among his colleagues due to his amiable character. “Garlitsky’s charisma is glaringly obvious. And how! A first violin of such imposing presence is a blessing for any ensemble” (La Montagne).
Born in Russia, Mr. Garlitsky received his first music lessons from his father, the author of the standard textbook for young violinists, “Step by Step”. He studied with Professor Yankelevich at the Moscow Conservatory, and afterwards worked as the Concertmaster for the Moscow Virtuosi and the London Symphony Orchestra, the Covent Garden Opera, the Vienna ORF Orchestra, the Hamburg Philharmonic and many more.
Today, Mr. Garlitsky devotes a large amount of his time to education. He holds a chair at the Folkwang Universität der Künste, Essen (Germany). In addition, Mr. Garlitsky offers master classes on a yearly basis at the most renowned music institutions including the Curtis Institute in Philadelphia, the Peabody Conservatory in Baltimore, Hanns Eisler Musikhochschule in Berlin and Kronberg Academy. “He is also very successful as a teacher and his instruction would be an enrichment for any musical institution, be it orchestra or music academy. His knowledge, his energy, his honesty and his ability to connect with people and create harmony are in my opinion the quintessence of why he can serve as a role model and ‘leading light’ for the young generation.” (Kurt Masur)
Johann Sebastian Bach is undoubtedly one of the most important figures in music history. His incredible creative power, technical mastery, and intellect have made a lasting impression not only on classical music but also on many different modern music genres we know today.
Born in 1685 in Eisenach, Germany, Bach was a member of a very well-known family of musicians. At 18 years old, he began working in Arnstadt, where he accompanied hymns at church. His professional career as a musician would follow in Weimar, where he resided from 1708 to 1717. Here, Bach would deepen his theoretical study of composition and write most of his organ works. Moreover, he composed preludes and fugues that would be part of his collection The Well-Tempered Clavier. After building a considerable reputation in Weimar, Bach moved to Köthen to take a new role as Chapel Master. He wrote fewer religious works and focused more on chamber music, and his compositions from this time would bring Baroque instrumental music to its pinnacle.
From 1723 until his death in 1750, Bach worked in Leipzig, first as Thomaskantor at the Thomasschule and later as a private tutor and director of the Collegium Musicum. During this time, Bach worked on creating a repertoire of cantatas for church and revised many of his previous compositions. From 1726 onward, his keyboard works were published. His death in 1750 came to mark the end of the Baroque period and the beginning of Classicism. For many years after his passing, Johann Sebastian Bach’s works were buried with him, until they resurfaced many years later and were celebrated for their musical ingenuity.
The data suggest that, following local frontal lobe damage, there is a global compensatory recruitment of an adaptive and integrated fronto-parietal network in the well-known executive control or multiple-demand system.
Graph lesion-deficit mapping of fluid intelligence
- Psychology, bioRxiv
- 2022
Fluid intelligence is arguably the defining feature of human cognition. Yet the nature of its relationship with the brain remains a contentious topic. Influential proposals drawing primarily on…
Fluid intelligence is supported by the multiple-demand system not the language system
- Psychology, Nature Human Behaviour
- 2017
A set of frontoparietal brain regions—the multiple-demand (MD) system—has been linked to fluid intelligence in brain imaging and in studies of patients with brain damage. For example, the…
Neural contributions to reduced fluid intelligence across the adult lifespan
- Psychology, bioRxiv
- 2022
Fluid intelligence – the ability to solve novel, complex problems – declines steeply during healthy human aging. Using functional magnetic resonance imaging (fMRI), fluid intelligence has been…
Neuroanatomic overlap between intelligence and cognitive factors: Morphometry methods provide support for the key role of the frontal lobes
- Psychology, Biology, NeuroImage
- 2013
Inter- and intra-individual differences in fluid reasoning show distinct cortical responses
- Psychology, Biology, bioRxiv
- 2016
This work uses task-based fMRI to show that within- and between-subject dimensions show both partial overlap and widespread differences, and that when difficulty is equated across individuals, those with higher ability tend to show more fronto-parietal activity, whereas individuals with lower fluid intelligence tend to show greater activity in higher visual areas.
Architecture of fluid intelligence and working memory revealed by lesion mapping
- Psychology, Biology, Brain Structure and Function
- 2013
The observed latent variable modeling and lesion results support an integrative framework for understanding the architecture of fluid intelligence and working memory and make specific recommendations for the interpretation and application of the WAIS and N-Back task to the study of fluid intelligence in health and disease.
Fluid intelligence and naturalistic task impairments after focal brain lesions
- Psychology, Biology, Cortex
- 2022
Spatio-Temporal Brain Dynamic Differences in Fluid Intelligence
- Psychology, Biology, Frontiers in Human Neuroscience
- 2022
Time-resolved measures suggest that the left parietal cortex specifically impacts early processes of attentional focus on task-critical features within the MD system, which is novel evidence on the neurocognitive correlates of fluid intelligence, suggesting that individual differences are critically linked to an early process of attending to task-relevant information.
Activity in the fronto-parietal multiple-demand network is robustly associated with individual differences in working memory and fluid intelligence
- Psychology, Biology, Cortex
- 2020
This large-sample fMRI investigation found that stronger activity in MD regions was robustly associated with more accurate and faster responses on a spatial working memory task performed in the scanner, as well as with fluid intelligence measured independently.
References
Lesion Mapping of Cognitive Abilities Linked to Intelligence
- Psychology, Neuron
- 2009
Is the Prefrontal Cortex Important For Fluid Intelligence? A Neuropsychological Study Using Matrix Reasoning
- Psychology, The Clinical Neuropsychologist
- 2008
The results failed to support the hypothesis that prefrontal damage would disproportionately impair fluid intelligence; every prefrontal subgroup the authors studied had Matrix Reasoning scores that were indistinguishable from those of the brain-damaged comparison groups.
Fluid intelligence after frontal lobe lesions
- Psychology, Neuropsychologia
- 1995
Distributed neural system for general intelligence revealed by lesion mapping
- Psychology, Biology, Proceedings of the National Academy of Sciences
- 2010
It is suggested that general intelligence draws on connections between regions that integrate verbal, visuospatial, working memory, and executive processes to reflect the combined performance of brain systems involved in cognitive tasks or draws on specialized systems mediating their interactions.
The Parieto-Frontal Integration Theory (P-FIT) of intelligence: Converging neuroimaging evidence
- Psychology, Biology, Behavioral and Brain Sciences
- 2007
It is proposed that the P-FIT provides a parsimonious account for many of the empirical observations to date which relate individual differences in intelligence test scores to variations in brain structure and function.
Common regions of the human frontal lobe recruited by diverse cognitive demands
- Biology, Psychology, Trends in Neurosciences
- 2000
Neural correlates of superior intelligence: Stronger recruitment of posterior parietal cortex
- Psychology, Biology, NeuroImage
- 2006
Common prefrontal activations during working memory, episodic memory, and semantic memory
- Psychology, Biology, Neuropsychologia
- 2003
Severe disturbance of higher cognition after bilateral frontal lobe ablation
- Psychology, Medicine, Neurology
- 1985
After bilateral ablation of orbital and lower mesial frontal cortices, a patient had profound changes of behavior that have remained stable for 8 years. Although he could not meet personal and…
Voxel-based lesion–symptom mapping
- Psychology, Nature Neuroscience
- 2003
VLSM maps for measures of speech fluency and language comprehension in 101 left-hemisphere-damaged aphasic patients confirm the anticipated contrast between anterior and posterior areas and indicate that interacting regions facilitate fluency and auditory comprehension, in agreement with findings from modern brain imaging.
Can agricultural biotechnology mitigate climate change effects and increased demand on food?
A global population of 9.1 billion expected in 2050, together with climate change effects such as increased temperature, reduced precipitation, increased CO2 concentration in the atmosphere, reduced productivity of livestock, and an escalated prevalence of pests and diseases, will challenge the scientific community to come up with appropriate approaches to cope with the increased demand for food under more stressed conditions. Several biotechnological tools are already in use to mitigate the adverse effects of climate change. Genetic transformation to produce drought-, heat-, cold-, pest- and salt-tolerant crops, as well as crops with improved nutritional quality, has been increasingly used over the past few decades. In addition, tissue culture techniques (somaclonal variation, double haploid production, embryo rescue, etc.) are also used to develop new crop varieties with increased yield and tolerance to a variety of biotic and abiotic stresses. Many molecular tools available at present (e.g. marker-assisted selection) have proved extremely useful to traditional breeding, producing improved crop varieties within shorter periods of time. Moreover, molecular tools are permitting a better understanding of existing biodiversity and the deployment of useful traits that exist in wild relatives of crop plants. Furthermore, molecular diagnostic tools now permit more accurate and faster identification of crop pests, which helps in better management of these pests and consequently reduces the yield losses they cause to agricultural crops. It should be kept in mind that there is no single solution to the stresses facing food production. However, agricultural biotechnology techniques will play a role as one of several tools available in the fight against climate change effects.
Presenter bio: K. Makkouk received his B. Sc. in Agriculture from Cairo University (1963), his M. Sc. in plant pathology from Louisiana State University (1971) and his Ph. D. degree from the University of California at Riverside (1974). Following graduation he served as (i) a researcher with CNRS (1974-1985), (ii) professor of plant pathology (part time), Faculty of Agriculture, AUB (1977-1985), (iii) senior scientist with the International Center for Agricultural Research in Dry Areas (ICARDA), Aleppo, Syria (1985-2002), (iv) President of Almanar University, Tripoli, Lebanon (2002-2004), (v) Regional coordinator, Nile Valley and Red Sea Regional Program, ICARDA, Cairo, Egypt (2005-2008). At present, Dr. Makkouk is serving as Advisor for Agriculture and Environment, CNRS, Beirut, Lebanon. Dr. Makkouk has contributed to scientific knowledge through 174 articles in refereed journals, 27 articles in meeting proceedings and 16 chapters in books by international publishers. In addition to his research achievements, Dr. Makkouk has played a leadership role within national, regional and international scientific communities. He is a founding member, and has served as secretary-treasurer, vice-president and president, of the Arab Society of Plant Protection (ASPP). At present he serves as the Editor-in-Chief of the Arab Journal of Plant Protection published by ASPP. He also served as member, board member, vice-president and president of the Mediterranean Phytopathological Union (MPU). In addition, Dr. Makkouk served as an active member in a number of international professional scientific groups, such as (i) the Special Projects Committee of the International Society of Plant Pathology (1983-1988), (ii) Chairman of the Mediterranean Fruit Improvement Council Steering Committee (1983-1999), (iii) Member of the Plant Virus Sub-committee of the International Committee for Taxonomy of Viruses (1988-1999), and (iv) Executive Secretary, International Working Group on Legume Viruses (1994-1996). Dr. Makkouk also guided around 25 graduate students (M. Sc. and Ph. D.) to conduct their thesis research in his laboratory.
Abstract/Summary
Can agricultural biotechnology mitigate climate change effects and increased demand on food? Khaled Makkouk, Advisor for Environment and Agriculture, National Council for Scientific Research, Beirut, Lebanon. | http://biotech.ul.edu.lb/conference/laas20/makkouk.html |
Individual and neighborhood wastewater treatment systems. Rain gardens and green roofs. Water-efficient appliances and landscaping. These are examples of decentralized water technologies in action. These systems can beautify cities and towns, enhance water supply, recover energy and nutrients, provide local reuse opportunities, and improve health and the environment.
The Decentralized Water Resources Collaborative (DWRC) conducts research and provides outreach to improve science, technology, economics, and management to help ensure these systems meet critical environmental and public health challenges.
Decentralized Wastewater Systems are used for collection, treatment, and dispersal/reuse of wastewater from individual homes, clusters of homes, isolated communities, industries, or institutional facilities, at or near the point of waste generation (CIDWT Glossary, 2007). Individual septic systems and neighborhood cluster systems are included among the types of treatment practices utilized.
Decentralized Stormwater Systems are used to treat, store, infiltrate, evapotranspirate, filter, and reuse water at or near the point of runoff generation for management of stormwater quality and quantity. Green roofs, vegetated swales, pocket wetlands, cisterns, rain gardens, and other practices are among the techniques utilized. These techniques are also referred to as low impact development (LID) techniques. | http://decentralizedwater.org/ |
The Brucejack mine is in a remote region located 275 km northwest of Smithers and can only be reached via a 12 km glacier. Equipment and supplies are transported by a special vehicle (Husky 8). The miners are brought in by helicopter.
From 2017, the site will produce 2,700 tonnes of high-grade gold ore per day. Veolia will treat 10,000 m³ of effluent per day from the production processes. To ensure compliance with the very strict environmental discharge standards, numerous tests have been conducted since 2014; in particular, Veolia provided Pretium with a mobile water treatment unit (Actiflo®), which confirmed the discharge rates (metals and suspended solids) in the industrial water.
“Based on Pretium’s investment in several months of test work, it was clear how committed they are to ensuring their environmental stewardship”, stated David Oliphant, Vice President Business Development Heavy Industry for Veolia Water Technologies Canada.
“The Brucejack project is a great example of how Veolia can partner with industrial clients over several years, from rapidly providing a temporary solution to working through months of testing and navigating the steps to secure required permitting, and coming up with the best water treatment solution possible”, explained Klaus Andersen, CEO of Veolia Water Technologies, Inc.
Effective teamwork can increase productivity, produce better results and make work more enjoyable. Regardless of your role on a team, you will have a direct impact on its efficiency. Teamwork is a valuable skill, but it takes practice and development. These tips will help you work with a team to succeed in your next group project.
Create a productive work environment
Each team-based project will thrive in a different work environment depending on its team members, goals and deliverables. Begin your project by tailoring the physical space you occupy to meet your needs. When creating this space, think about tools you may require, access to contributors or stakeholders outside of your team and the time you will need to occupy the space.
Example: If you are working on a creative task that needs input from all members, find a room that will allow your team to maximize communication without affecting the rest of the workplace. Make sure your co-workers have all the tools they need to begin the project. If brainstorming is a key component of the project, make sure you have a physical or digital whiteboard available. For a project where team members meet occasionally to review individual work, a short standup in a common area may be a better fit.
Identify a clear purpose
Effective teamwork needs a defined goal. You can create a clear purpose by forming a mission statement, key performance indicator (KPI) or deliverable. A unified goal allows work to remain focused and helps with prioritization of your team's resources.
Clarifying why you are undertaking the project is just as important as developing the deliverable, goal or mission statement. This helps your team understand their roles and the purpose of the project, which can support long-term motivation.
Example: Once you have your work environment set up, begin your process by defining your purpose and goals as a team. A great way to approach this is to brainstorm a mission statement and write it out next to your team's deliverable. Make the mission statement and goal visible so your team will keep them in mind while they work on the project.
Communicate openly
Open, honest and respectful communication is vital to effective teamwork. Team members need to feel comfortable expressing their ideas and opinions so each individual contributes to their full potential. Clear communication leads to more trust among team members and breaks down barriers that can slow down a team's work.
To set a foundation for strong communication, establish expectations and best practices. This includes when and how you should use different methods of communication such as emails, online messaging, phone calls and in-person meetings.
Example: If you have an ongoing project, schedule short meetings to review progress. Give every team member equal time to relay their wins and losses. Set aside time for the group to help solve problems an individual is facing or give constructive feedback. You might explain that you will be using chats to discuss day-to-day work and a weekly email to communicate important learnings, updates and milestones.
Recognize your team members
Recognition can help your fellow team members remember the importance of their role in the project and increase morale. Both team leaders and individual contributors should give positive feedback by noting exceptional work or praising a valuable idea. Recognizing your coworkers as you work toward a common goal can help motivate the entire team.
Example: During your scheduled meetings, plan time to recognize the successes of team members. Base these affirmations on milestones so team members understand what they have achieved and so other members know what work they should model going forward. The benchmarks can be small—you do not need to limit recognition to major achievements.
Be creative
It is important to exercise your creativity and support the creativity of others during team-based projects. This includes taking and supporting risks. Creative problem solving and experimentation are vital concepts to engage in as a group. Doing so can utilize everyone's unique perspectives to create more thorough processes and solutions.
Example: Set aside time for brainstorming sessions. Brainstorming as a group allows for multiple solutions to one problem. Consider every suggestion a member makes regardless of how unconventional they seem. The more ideas your group explores, the more likely you are to find a solution.
Meet outside the workplace
While you may have tailored your work environment to meet all of your team's needs, sometimes team members feel more comfortable in a setting outside of work. Meetings outside of the workplace also allow coworkers to build rapport which can increase creativity. Taking time out of the office can also lead to the team growing closer as a unit. This will help you work together toward a common goal.
Example: Have a group lunch or meet at a coffee shop. You can start the meeting with a few short team-building exercises to help get everyone more comfortable and open for discussion. Depending on the length of your project, you may want to schedule a recurring event, such as a team lunch every two weeks to discuss new ideas and give recognition to individual team members. | https://www.indeed.com/career-advice/career-development/tips-for-effective-teamwork |
*MDR: the installation distance from the camera module to the camera end of the relay lens. WDV: the distance from the camera module to the simulated imaging position. WDC: the distance from the camera module to the test card.
• Reducing Testing Cost
When the camera module is tested, you need a large test distance and a large target. If a relay lens is used, you can shorten the test distance and use a smaller test chart, thereby reducing the overall test cost.
• Improving Test Credibility
Using a relay lens in testing can ensure high-quality imaging and footage. What's more, thanks to the standardized test environment, the impact of random errors and uncontrollable factors on the test results can be significantly reduced and the credibility of the test can be improved.
• Increasing Production Efficiency
As relay lenses are widely used in the testing of camera modules, the size of the related production and testing equipment has been well controlled, improving productivity and reducing costs. Relay lenses also provide effective and feasible solutions for other applications that require camera testing, such as laboratories.
The relay lens is mainly used for camera resolution testing. With the development of mobile cameras, its applications have been constantly extended to vehicles, security, video-conference equipment, smart homes and other fields. In particular, it plays an irreplaceable role in the production and testing of wide-angle and long-focus modules.
Currently, simulation testing (relay lens or collimator array) is almost the only solution in all mass AA (active alignment) production processes, module focusing, and product terminal testing.
Mobile Camera
On-Board Camera
Security Camera
Video Conference Camera
Smart Home Camera
1: FOV
The design FOV of a relay lens shall be slightly larger than the FOV of the camera module.
2: Installation
The parameter design of a relay lens takes the installation structure into account, so the following three items should be considered when selecting one:
MDR (the distance from the mechanical rear surface of the relay lens to the camera viewpoint) of 6-40 mm is suitable;
WDC (the distance from the mechanical front surface (non-optical surface) of the relay lens to the chart being tested) shall usually be 150-400 mm;
The maximum chart size that the test environment can support.
3: Entrance pupil diameter
The entrance pupil diameter of the product (EFL divided by F.no, i.e. focal length divided by focal number) shall be smaller than the exit pupil diameter of the relay lens.
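A rough worked example of these selection rules is sketched below; every number (module EFL, F.no, FOV, MDR, WDC and the relay lens exit pupil) is an assumption for illustration, not a specification of any particular product.

```python
# Rough worked example of the selection rules above; all numbers are assumed,
# not specifications of any particular camera module or relay lens.

module_efl_mm = 4.0        # camera module focal length (EFL)
module_f_number = 2.0      # camera module F.no

entrance_pupil_mm = module_efl_mm / module_f_number   # = 2.0 mm
relay_exit_pupil_mm = 3.5                              # assumed relay lens exit pupil

module_fov_deg = 78.0
relay_fov_deg = 82.0                                   # relay FOV should be slightly larger

checks = {
    "FOV: relay slightly larger than module": relay_fov_deg > module_fov_deg,
    "Entrance pupil smaller than relay exit pupil": entrance_pupil_mm < relay_exit_pupil_mm,
    "MDR within 6-40 mm": 6 <= 20 <= 40,          # assumed MDR of 20 mm
    "WDC within 150-400 mm": 150 <= 300 <= 400,   # assumed WDC of 300 mm
}
for rule, ok in checks.items():
    print(f"{rule}: {'OK' if ok else 'check design'}")
```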
The luminous area shall be customized according to the parameters of the relay lens; uniformity shall be greater than 90%; color temperature shall be 2,700-7,000 K; various infrared bands are optional.
Film is most commonly used; accuracy shall be ±10 μm; resolution shall not be less than 8,000 DPI; the graphics shall have suitable edge sharpness.
Select a relay lens according to its optical parameters and test environment.
The center of the tested product, relay lens and chart shall be on the same optical axis; Fixed or adjustable height. | https://www.oklab.com/products/relay-lens-high-integration-high-efficiency-series-rl09150b |
Co-founder Patrisse Cullors on the Evolution of Black Lives Matter: From a Hashtag to a Nobel Peace Prize Nominee
The Black Lives Matter (BLM) movement was co-founded in 2013 by Alicia Garza, Patrisse Cullors, and Opal Tometi in response to the acquittal of the man who shot Trayvon Martin, an unarmed 17-year-old Black high school student in Florida. The movement gained wider recognition in 2014 following protests over the deaths of Michael Brown and Eric Garner, both unarmed Black men who were fatally shot by police. Following the deaths of George Floyd and Breonna Taylor in 2020, Black Lives Matter helped mobilize a series of peaceful protests that sparked a national reckoning on social and racial justice and police reform in America.
Today, Black Lives Matter, a Tides partner through the Black Lives Matter Support Fund, is a nominee for the 2021 Nobel Peace Prize.
We speak to Black Lives Matter Co-founder and Executive Director Patrisse Cullors about what this moment means for the movement—and how far we still have to go.
—
How did you find out that BLM was nominated for a Nobel Peace Prize, and what was your reaction?
I was in the middle of a meeting and received a text from a trusted advisor. My mouth dropped!
What does it mean for BLM to be nominated for this prize, given the ways BLM has been portrayed in the media and by individuals and groups opposed to the movement?
We read this nomination as a symbol of our movement’s power. Only seven years ago, the movement started online with a hashtag—one that affirmed a basic right to life. We were shocked, but not surprised, that such a basic call could evoke such anger and pushback from people all over the world. For BLM to now receive a nomination from our world’s highest honor within peace work is a statement that our affirmation of a right to life was needed. That Black lives have not mattered across the globe for too long, but our movement is changing that.
In his nomination letter, Petter Eide, a member of Norway’s parliament, wrote he had nominated Black Lives Matter “for their struggle against racism and racially motivated violence,” adding that “BLM’s call for systemic change has spread around the world, forcing other countries to grapple with racism within their own societies.” In what ways do you think BLM has moved the needle on social and racial justice in America and the world?
BLM did not start something new. Our movement is one of many—movements that are ongoing today and movements that have been built upon in years past. What we have done is unite a multi-racial, transnational group of people under a basic call. While our work is widespread, our mission is simple: we want Black lives to matter. For an entire movement to be shaped around this has illuminated to many all the different ways that our world has devalued Black life. As our movement has grown, our areas of work have spread: we started on social media platforms and on the streets. Today, we continue that and are also stepping into public policy and electoral politics.
All of this comes on the heels of so many challenges, disruptions, and tragedies: COVID-19; the tragic police killings of George Floyd, Breonna Taylor, and so many others; voter suppression; the attacks on the U.S. Capitol—all of which have directly and disproportionately impacted Black communities. How does BLM prioritize and tackle these issues?
We approach every issue through the Black experience: How are Black people being impacted? What do Black people, in this particular moment, need? Then we reflect on our sphere of influence and begin to respond. In some areas, like electoral politics and policy, we are only getting started, so we respond in the ways that we know we have impact: through digital media and public awareness.
2021 is a new year, but the challenges remain. What is BLM’s focus for the new year, and how can people get involved?
While we’ve never needed a reminder that our mission remains important, 2020 and the impact it has had on Black communities has shown more people that our mission is an urgent one. Fundamentally, we will never be able to do all that we want to. It is impossible for us to be in all the places and spaces we know we need to be in. But the devastating economic impact that COVID has had (and will continue) having on Black communities is where we’re focusing our efforts. We all know the national status on unemployment, how these rates skyrocketed last year. But how many people know that about half of all Black-owned small businesses have been wiped out? That Black families have been almost twice as likely to be forced to move than white people? In deciding what we want our focus to be, we look at what’s going on in the world and how it’s impacting Black people. The effects of COVID-19, on all fronts, have been devastating for many. For Black and brown people, the impacts have been exacerbated. We’ll be focusing on economic justice, while continuing our never-ending push towards reimagining public safety.
At the individual level, people can get involved by buying Black. Shop at your local Black-owned small business, support Black creators, and consider donating to Black movements and organizations across the country. Look to your workplace and ask: Are we supporting the movement? Is there more that we can be doing as an institution? #BuyBlack
What would winning the Nobel Peace Prize mean to you, personally?
Our movement doesn’t belong to one person, or one small group of people. As executive director of the organization supporting this movement, I would be reminded that the last seven years have only been the beginning. How exciting.
But what I would also hope for is this: that with the win comes a promise. Far too many times have Black people been promised progress, only to be met with a label of criminalization and threat. Recognition means little if it is not met with action and accountability. | https://www.tides.org/tides-news/featured/evolution-of-blm-from-a-hashtag-to-a-nobel-peace-prize-nominee/ |
Breath analysis has long been recognized as a reliable technique for diagnosing certain medical conditions through the detection of volatile organic compounds (VOCs) in exhaled breath. The diagnosis is usually performed by collecting breath samples into a container, followed by subsequent measurement of specific VOCs using mass spectrometry.
The composition of VOCs in exhaled breath is dependent upon cellular metabolic processes and it includes, inter alia, saturated and unsaturated hydrocarbons, oxygen containing compounds, sulfur containing compounds, and nitrogen containing compounds. In healthy individuals, the composition provides a distinct chemical signature with relatively narrow variability between samples from a single individual and samples from different individuals.
In exhaled breath of patients with cancer, elevated levels of certain VOCs, including volatile C4-C20 alkane compounds, specific monomethylated alkanes and benzene derivatives, were found (Phillips et al., Cancer Biomark., 3(2), 2007, 95). Hence, the composition of VOCs in exhaled breath of patients with cancer differs from that of healthy individuals, and can therefore be used to diagnose cancer. An additional advantage of diagnosing cancer through breath is the non-invasiveness of the technique, which holds the potential for large-scale screening.
Gas-sensing devices for the detection of VOCs in breath samples of cancer patients have recently been applied. Such devices perform odor detection through the use of an array of cross-reactive sensors in conjunction with pattern recognition methods. In contrast to the “lock-and-key” model, each sensor in the electronic nose device is widely responsive to a variety of odorants. In this architecture, each analyte produces a distinct fingerprint from an array of broadly cross-reactive sensors. This configuration may be used to considerably widen the variety of compounds to which a given matrix is sensitive, to increase the degree of component identification and, in specific cases, to perform an analysis of individual components in complex multi-component (bio) chemical media. Pattern recognition algorithms can then be applied to the entire set of signals, obtained simultaneously from all the sensors in the array, in order to glean information on the identity, and concentration of the vapor exposed to the sensor array.
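As a schematic illustration of such an array-plus-pattern-recognition pipeline (using synthetic data and generic tools, not the method of any study cited here), the multi-sensor fingerprint could be reduced and classified as follows:

```python
# Schematic sketch of cross-reactive sensor-array analysis:
# reduce the multi-sensor response pattern and classify the sample.
# Synthetic data only; not the method or data of any cited study.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_sensors = 200, 32                   # e.g., 32 broadly cross-reactive sensors
X = rng.normal(size=(n_samples, n_sensors))      # sensor responses (synthetic)
y = rng.integers(0, 2, size=n_samples)           # 0 = control, 1 = "cancer-like" pattern
X[y == 1, :8] += 1.0                             # shift a few sensors for the positive class

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = make_pipeline(PCA(n_components=5), SVC())
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```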
The hitherto used gas-sensing devices comprise a variety of sensor arrays including conductive polymers, nonconductive polymer/carbon black composites, metal oxide semiconductors, fluorescent dye/polymer systems, quartz microbalance sensors coated with metallo-porphyrins, polymer coated surface acoustic wave devices, and chemoresponsive dyes (Mazzone, J. Thoracic Onc., 3(7), 2008, 774). Di Natale et al. (Biosen. Bioelec., 18, 2003, 1209) disclosed the use of eight quartz microbalance gas sensors coated with different metalloporphyrins for analyzing the composition of breath of patients with lung cancer. Chen et al. (Meas. Sci. Technol., 16, 2005, 1535) used a pair of surface acoustic wave (SAW) sensors, one coated with a thin polyisobutylene (PIB) film, for detecting VOCs as markers for lung cancer. Machado et al. (Am. J. Respir. Crit. Care Med., 171, 2005, 1286) demonstrated the use of a gaseous chemical sensing device comprising a carbon polymer sensor system with 32 separate sensors for diagnosing lung cancer. Mazzone et al. (Thorax, 62, 2007, 565) disclosed a colorimetric sensor array composed of chemically sensitive compounds impregnated on a disposable cartridge for analyzing breath samples of individuals with lung cancer and other lung diseases. The results presented in these disclosures have yet to provide the accuracy or consistency required for clinical use.
Sensors based on films composed of nanoparticles capped with an organic coating (“NPCOCs”) have been applied as chemiresistors, quartz crystal microbalances, electrochemical sensors and the like. The advantages of NPCOCs for sensing applications stem from enhanced sensing signals which can be easily manipulated by varying the nanoparticle and/or aggregate size, inter-particle distance, composition, and periodicity. Enhanced selectivity can further be achieved by modifying the binding characteristics of the capping film as well as the linker molecules. The morphology and thickness of NPCOC networks were shown to have a dramatic influence on sensor response, wherein changes in permittivity induced a decrease in the resistance of thinner NPCOC films (Joseph et al., J. Phys. Chem. C, 112, 2008, 12507). The three-dimensional assembly of NPCOC structures provides an additional framework for signal amplification. Other advantages stem from the coupling of nano-structures to solid-state substrates, which enables easy array integration, rapid responses, and low-power portable apparatuses.
Some examples of the use of NPCOCs for sensing applications are disclosed in U.S. Pat. Nos. 5,571,401, 5,698,089, 6,010,616, 6,537,498, 6,746,960 and 6,773,926; Patent Application Nos. WO 00/00808, FR 2,783,051 and US 2007/0114138; and in Wohltjen et al. (Anal. Chem., 70, 1998, 2856) and Evans et al. (J. Mater. Chem., 8, 2000, 183).
International patent application publication number WO 99/27357 discloses an article of manufacture suitable for use in determining whether or in what amount a chemical species is present in a target environment, which article comprises a multiplicity of particles in close-packed orientation, said particles having a core of conductive metal or conductive metal alloy, in each said particle such core being of 0.8 to 40.0 nm in maximum dimension, and on said core a ligand shell of thickness from 0.4 to 4.0 nm, which is capable of interacting with said species such that a property of said multiplicity of particles is altered.
U.S. Pat. No. 7,052,854 discloses systems and methods for ex-vivo diagnostic analysis using nanostructure-based assemblies comprising a nanoparticle, a means for detecting a target analyte/biomarker, and a surrogate marker. The sensor technology is based on the detection of the surrogate marker which indicates the presence of the target analyte/biomarker in a sample of a bodily fluid.
EP 1,215,485 discloses chemical sensors comprising a nanoparticle film formed on a substrate, the nanoparticle film comprising a nanoparticle network interlinked through linker molecules having at least two linker units. The linker units are capable of binding to the surface of the nanoparticles and at least one selectivity-enhancing unit having a binding site for reversibly binding an analyte molecule. A change of a physical property of the nanoparticle film is detected through a detection means.
WO 2009/066293 to one of the inventors of the present invention discloses a sensing apparatus for detecting volatile and non-volatile compounds; the apparatus comprises sensors of cubic nanoparticles capped with an organic coating. Further disclosed are methods of use thereof in detecting certain biomarkers for diagnosing various diseases and disorders including cancer.
There is an unmet need for a fast responsive sensor array based on a variety of sensors which provide improved sensitivity as well as selectivity for specific VOCs indicative of cancer.
| |
Plants in Action (1983)
We tend to think of plants as being essentially stationary, incapable of movement other than that generated by the wind. But all plants do move as they grow and respond to aspects of their environment.
This film looks at a variety of plants in action. Some movements, like that of Mimosa, the sensitive plant, or that of the Venus flytrap are quite conspicuous.
Much plant activity, however, takes place too slowly for direct human perception; it can be revealed only by time-lapse cinematography.
[Music plays, a grass field with the sun in the horizon appears on screen]
[Image changes to shots of different plants]
Narrator: Plants come in a great variety of colours, shapes and sizes. In the world of living things plants are valued for their beauty and their usefulness. Few people though think of plants as being particularly active.
[Title appears: Plants in action]
Most plants show little or no obvious movement apart from the movement of leaves and branches in the wind. Yet when we look more closely at plants or change the way we look at them we find they show a great deal of activity.
[Camera zooms in on the leaves of the mimosa plant]
These are leaves of mimosa, a sensitive plant which is common in warmer parts of the world. Watch what happens when a mimosa leaf is touched.
[A finger touches the mimosa leaves and the leaves fold close]
The leaves of some plants such as the Venus flytrap can capture and digest insects and other small animals. The leaf responds when sensitive hairs inside it are touched.
[Image changes to show an insect landing on the leaf of the Venus flytrap, and shows the leaves closing to trap the insect]
The sundew plants have sticky hairs that can close around an insect, hold it in place and digest it.
[Image changes to show an ant being trapped in the sticky hairs of the sundew plant]
Flowers of the trigger plant respond to touch in a way that helps pollinate the flower.
[Image changes to show an object being put into the centre of the flower and a trigger springs out]
Not many plants move in such ways yet all plants do move. Their movements are not as obvious because they take place much more slowly. These are leaves of an albizia tree during the day. At night it looks like this. The leaflets have closed.
[Image changes to show the leaves of the albizia tree open and then close]
Slow movements can be made to look faster by a filming technique called time-lapse photography. At 60 times normal speed patterns of movement become obvious. The same technique can be used to film the leaves of the albizia tree as day ends and night begins.
[Image changes to show day turn to night and the albizia tree closing its leaves]
Here’s the action again. Even during the night the plant is not entirely still. With a new day the leaflets open.
[Image changes to show the oxalis plant with its leaves opening and closing]
Some other plants also close their leaves at night. This is oxalis. Its leaves respond to light changes with a display of graceful movements.
[Image changes to show the cassia bush with its leaves opened during the day and closed at night]
These are leaves of a cassia bush in daylight, at night and at dawn.
[Image changes to show bright pig face opening in the sunlight and closing at night]
Some flowers will only open in sunlight. So with time-lapse photography all sorts of plant movements become noticeable.
[Image changes to show the movements of the clovers throughout the day]
Clover leaves in a pasture move continually through the day as the sun moves across the sky. And this is leucaena a tropical plant.
[Image changes to show the leaves of the leucaena plant opening and closing]
[Image changes to shows the yellow daffodils unfolding as the sunlight warms them up]
Daffodils at dawn on a frosty morning, watch the leaves once the sun is up.
Some of the most striking activity in plants is found during growth, a developing bean.
[Image changes to show a green bean sprouting from the ground]
A young fern frond.
[Image changes to show a ferns growth using time-lapse photography]
The subtle movements of a plant can be both complex and beautiful. Climbing plants are remarkable for their growth movements.
[Image changes to show the hibbertia growing and curling around a garden stake]
This is the growing tip of a climbing hibbertia.
[Image changes to show a burnt banksia plant, the camera zooms in on the seed pod as it opens and a seed falls out]
It is even possible to watch the woody fruits of a banksia opening after a fire and releasing their seeds.
With some plants the daily cycle of movement combined with growth movements is very striking. These are water lilies at 10,000 times normal speed.
[Image changes to show pink and white water lilies blooming]
A flower bud opens during the day usually in bright light and closes again later in the afternoon. Morning, afternoon, night and morning again.
At first glance then most plants look inactive but a great deal of movement is in fact taking place as the plants grow and respond to changes in their surroundings. | https://csiropedia.csiro.au/plants-action-1983/ |
There are a number of reasons that might be given as to why the profession might be best suited for your needs, but perhaps the most important factor in whether you should choose an occupational therapy doctor is the type of work you are looking for.
According to occupational therapists, they are the ones who work with patients to address their issues in the workplace and at home.
This involves working with a team of trained specialists to provide individualised therapy that can help a person with a specific problem.
According to the International Occupational Therapists Association (IOTTA), more than 20 million occupations are covered by this type of therapy.
The profession is particularly popular in the UK, where more than 80 per cent of the population is eligible to be an occupational therapist.
However, the IOTTA has highlighted a number of problems with the profession.
They say that a large majority of occupational therapists are not trained in occupational therapy, so that patients cannot rely on the expertise of trained occupational therapists to address them.
As a result, there are a growing number of organisations that are offering occupational therapy services in some of the UK’s most deprived areas.
The latest example of this is a new organisation, A Therapeutic Association for the Homeless, which aims to address the needs of those living in homelessness.
It has partnered with a number of organisations that offer services in different areas of the city centre, such as the A-Line, The People’s Housing Centre and the People’s Homelessness Centre, and is now looking for a qualified therapist to join its team.
The A Therapist’s Career
It is not unusual for people who have suffered trauma to seek help from an occupational therapist.
For example, a person who has been raped might want to talk to a trained counsellor who can help them cope with their traumatic experiences and help them to cope with any subsequent problems.
However, there is one thing that may make an occupational therapist particularly qualified to work with you.
In fact, this is the thing that the A Therapist’s career aims to achieve.
“The key to having an excellent career is to be passionate about what you do and why you do it,” said Dr Emma Roper, who runs a clinical occupational therapy clinic in London and is the first licensed occupational therapist in the country.
“We want to create a service that is very accessible, very compassionate, that can offer an opportunity for a person to get to know a therapist in a way that they can’t do now with the help of a traditional therapist.”
Roper says that in order to provide this service, the occupational therapist must be experienced and highly qualified, and she hopes that the group will help people who are struggling to find a professional.
“It’s important to be well-rounded and to be able to address people with any level of trauma, regardless of the type or severity of the trauma,” she said.
The IOTTA also points out that it is not just people who need an occupational counsellor’s help who should look to an occupational therapy doctor.
Many people need support to cope in a challenging situation, and this is where the association comes in.
The organisation is one of the first accredited occupational therapy providers in the United Kingdom.
It works with individuals and their families, as well as the homeless and those in care.
Its services include individual therapy, group therapy, and a range of support and assistance for people with disabilities.
The profession has been able to work closely with those who are experiencing trauma and to provide the support that comes with it.
“For some people, they may have issues in their family, and it can be difficult to talk with a counsellor,” Roper said.
“Often, people feel overwhelmed by the trauma and what they have to do, and there are things that can be helpful that they don’t know about.”
The service also has a dedicated mental health team that can assist people with psychological issues and assist them to make changes to their lives, and help in their recovery.
The work of an occupational therapist can vary greatly from individual to individual, but Roper says she does not think that it can change the person’s situation overnight.
“I think that in many cases, you have to find that the right occupational therapist is right for you, and you have a right to go about your life,” she explained.
“If you are someone who is having a hard time in the job market, I think it is important that you have the right professional at your disposal to work through those issues.”
It’s vital that you’re looking for the right person
An occupational therapist has a lot to offer the profession, as they can be a source of insight into a person’s needs and problems.
They can also help you understand the nature of the issues that you may be facing.
“People have different needs and it’s very important to work together with the occupational therapy professional to address those needs,” said Roper. | https://durjoysaha.com/2021/09/10/which-occupational-therapy-doctors-do-you-think-would-recommend/ |
BACKGROUND
1. Technical Field
The present disclosure relates to power supply devices, and particularly to a power supply device for server cabinets.
2. Description of Related Art
A server cabinet may contain a plurality of servers and require a special power supply device for simultaneously supplying electrical power to these servers. A power supply device for server cabinets may include a power distribution unit (PDU) and a plurality of power supply units (PSUs). The PSUs are all electrically connected to the PDU. In use, the power supply device is installed in a server cabinet that receives a plurality of servers, and the PSUs are respectively electrically connected to the servers. The PDU receives electrical power from an external power supply (e.g., a wall socket or a battery). Each of the PSUs receives electrical power from the PDU, regulates the received electrical power (e.g., in respect of voltage value, current value, and alternating frequency), and transmits the regulated electrical power to the server corresponding to the PSU.
In an above-described power supply device, the PDU and all of the PSUs are generally independent devices. Thus, the PDU and all of the PSUs may occupy much space in the server cabinet and together generate excessive heat.
Therefore, there is room for improvement within the art.
DETAILED DESCRIPTION
FIG. 1 is a block diagram of a power supply device 100, according to an exemplary embodiment. The power supply device 100 can be positioned in a server cabinet to simultaneously supply electrical power to a plurality of servers received in the server cabinet. The power supply device 100 includes a power distribution unit (PDU) 10 and a plurality of power supply units (PSUs) 30 corresponding to the plurality of servers. The PSUs 30 are all electrically connected to the PDU 10 and structurally integrated with the PDU 10. The PDU 10 can be electrically connected to an external three-phase alternating current (AC) power supply to receive a three-phase AC voltage, and can transform the three-phase AC voltage into three types of single-phase AC voltages having different phases. Each of the PSUs 30 receives one of the three types of single-phase AC voltages from the PDU 10, regulates the received single-phase AC voltage (e.g., in respect of peak value, effective value, and alternating frequency), and transmits the regulated single-phase AC voltage to the server corresponding to the PSU 30.
The PDU 10 includes an input terminal 11, a distribution board 13, a connection board 15, and a base board 17. The input terminal 11, the distribution board 13, and the connection board 15 are electrically connected in series. The PSUs 30 are all electrically connected in parallel between the connection board 15 and the base board 17, and each of the PSUs 30 can be electrically connected to a corresponding server via the base board 17. Furthermore, the PSUs 30 are all structurally held between the connection board 15 and the base board 17, that is, structurally integrated with the PDU 10.
The input terminal 11 is a circuit breaker. The input terminal 11 is configured to be electrically connected to the external three-phase AC power supply to receive the three-phase AC voltage. The distribution board 13 is configured to conduct portions of the three-phase AC voltage having different phases to different circuits, respectively, and thereby transform the three-phase AC voltage into the three types of single-phase AC voltages having different phases.
The connection board 15 includes a plurality of sockets 151. The sockets 151 are all electrically connected to the distribution board 13, and the PSUs 30 are respectively electrically connected to the sockets 151. The PSUs 30 are all electrically connected to the base board 17. Each of the sockets 151 can receive one of the three types of single-phase AC voltage from the distribution board 13, and transmits the received single-phase AC voltage to the PSU 30 electrically connected to the socket 151. Thus, the PSU 30 electrically connected to the socket 151 can supply electrical power to a corresponding server via the base board 17.
Also referring to FIG. 2, the power supply device 100 is used to supply electrical power to a plurality of servers 200. The power supply device 100 and the servers 200 are all received in a server cabinet 300, and the servers 200 are respectively electrically connected to the PSUs 30 via the base board 17. The server cabinet 300 includes a three-phase AC power supply A.
In use, the input terminal 11 is electrically connected to the three-phase AC power supply A and receives a three-phase AC voltage. The distribution board 13 transforms the three-phase AC voltage into three types of single-phase AC voltages having different phases. Each of the sockets 151 receives one of the three types of single-phase AC voltages from the distribution board 13, and transmits the received single-phase AC voltage to the PSU 30 electrically connected to the socket 151. Each of the PSUs 30 receives one of the three types of single-phase AC voltages from the socket 151 corresponding to the PSU 30, regulates the received single-phase AC voltage (e.g., in respect of peak value, effective value, and alternating frequency), and transmits the regulated single-phase AC voltage to the server 200 electrically connected to the PSU 30. In this way, the power supply device 100 can supply all the electrical power required by the individual servers 200 at the same time.
In this embodiment, the power supply device 100 includes six PSUs 30, and the connection board 15 includes six sockets 151. Each type of the single-phase AC voltages generated by the distribution board 13 is transmitted to two of the PSUs 30 via two of the sockets 151, respectively. That is, every two of the PSUs 30 share one of the three types of the single-phase AC voltages.
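Purely as an illustrative aside, and not part of the disclosure, the phase-sharing arrangement of this embodiment, in which each of the three single-phase outputs feeds two of the six PSUs 30, amounts to a simple round-robin mapping. The hypothetical C fragment below, with made-up names and constants, sketches that assignment.

```c
#include <stdio.h>

/* Assumed values taken from the described embodiment: three single-phase
 * outputs derived from the three-phase input, and six PSUs. */
#define NUM_PHASES 3
#define NUM_PSUS   6

int main(void)
{
    /* Round-robin assignment: each phase ends up feeding two PSUs. */
    for (int psu = 0; psu < NUM_PSUS; psu++) {
        int phase = psu % NUM_PHASES;
        printf("PSU %d draws single-phase AC derived from phase %d\n",
               psu + 1, phase + 1);
    }
    return 0;
}
```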
In other embodiments, the total number of the PSUs 30 is variable, and the number of the PSUs 30 sharing one particular type of the three types of single-phase AC voltage can also be changed. The number of the PSUs 30 required is irrelevant, and there is no upper limit to this number. The PSUs 30 are all structurally held between the connection board 15 and the base board 17, that is, structurally integrated with the PDU 10. Thus, space inside the server cabinet 300 is conserved, and the PSUs 30 in this arrangement generate much less heat than PSUs in an independent arrangement.
The power supply device 100 can further include a control board 50. The control board 50 is electrically connected to the base board 17 and includes an interface configured to be electrically connected to an external data processing unit (not shown), such as a personal computer (PC). When the power supply device 100 is used, via the control board 50, the external data processing unit can detect and monitor relative parameters (e.g., voltage values, current values, and alternating frequencies) of each of the PSUs 30.
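As a further hypothetical illustration only, since the disclosure does not specify any data format, the per-PSU parameters that the control board 50 reports to the external data processing unit could be modelled with a record such as the following; all names and sample values are assumptions.

```c
#include <stdio.h>

/* Illustrative record of the monitored parameters for one PSU. */
struct psu_status {
    double voltage_v;     /* regulated output voltage            */
    double current_a;     /* output current                      */
    double frequency_hz;  /* alternating frequency of the output */
};

/* Print one monitoring snapshot for all PSUs. */
static void report(const struct psu_status *s, int count)
{
    for (int i = 0; i < count; i++) {
        printf("PSU %d: %.1f V, %.2f A, %.1f Hz\n",
               i + 1, s[i].voltage_v, s[i].current_a, s[i].frequency_hz);
    }
}

int main(void)
{
    struct psu_status readings[6] = {
        { 12.0, 8.50, 60.0 },  /* sample values for PSU 1; the rest default to zero */
    };
    report(readings, 6);
    return 0;
}
```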
It is to be further understood that even though numerous characteristics and advantages of the present embodiments have been set forth in the foregoing description, together with details of structures and functions of various embodiments, the disclosure is illustrative only, and changes may be made in detail, especially in matters of shape, size, and arrangement of parts within the principles of the present invention to the full extent indicated by the broad general meaning of the terms in which the appended claims are expressed.
BRIEF DESCRIPTION OF THE DRAWINGS
Many aspects of the present disclosure can be better understood with reference to the following drawings. The components in the various drawings are not necessarily drawn to scale, the emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Moreover, in the drawings, like reference numerals designate corresponding parts throughout the figures.
FIG. 1 is a block diagram of a power supply device, according to an exemplary embodiment.
FIG. 2 is a block diagram of the power supply device shown in FIG. 1 supplying electrical power to a plurality of servers contained in a server cabinet. | |
Your business could benefit from the extra protection provided by an excess or umbrella policy. We have options ranging from $1,000,000 up to $25,000,000. These policies are designed to work in conjunction with our general liability policies to offer a comprehensive solution to your risk management needs. | https://outdoorunderwriters.com/umbrella_liability.html |
"The disconnect between educators and parents reveals the need for schools to improve the integration of technology into the learning environment and students’ learning experiences," said Julie Evans, CEO of Project Tomorrow. "Parents do not feel that schools are effectively preparing students for the jobs of the 21st century, and [they] view technology implementation as essential to student success."
Pam Young, a parent from Mission Viejo, Calif., said she would like her son’s school to give its students a "world-class" education and help students develop skills that will carry them through to the post-college career world.
"Using technology in school is key to achieving both of these objectives," Young said. "I think it is essential that our schools provide opportunities for students to use a wide range of new technologies in the classroom, and that the teachers are well trained in how to use technology to increase student achievement."
"Parents recognize that information literacy is crucial to their children’s success in the 21st-century global economy," said Jessie Woolley-Wilson, president of Blackboard’s K-12 division. "Today’s students regularly utilize technology tools and resources in many aspects of their lives, yet [many] do not experience the same technology integration in their academic experience."
Policy makers and educators need to "help bridge the gap between the way students live and the way they learn," she added.
The findings are included in the report "Learning in the 21st Century: Parents’ Perspectives, Parents’ Priorities," which examines parent responses to the aspirations of students for technology-enhanced learning environments.
Schools and districts that participate in the Speak Up survey gain free online access to their own aggregated data for internal planning purposes, with national benchmark data for comparison. National findings from the 2009 survey will be released during two Congressional briefings in the spring.
Evans said participation in the survey is increasing every year.
"Each year, more schools sign up to be part of Speak Up, because it offers them–their students, parents, and staff–a way to express their opinions about the future of learning, local and national policies, how teaching could be improved, and more," she said.
"The information is invaluable to the schools [that] participate, [as well as] the elected leaders at all levels who use Speak Up data as a way to gauge opinions on a range of educational issues," she added.
The 2009 online survey is open through Dec. 18 for all K-12 students, parents, teachers, pre-service teachers, and administrators.
"Learning in the 21st Century: Parents’ Perspectives, Parents’ Priorities" | https://www.ecampusnews.com/2009/10/30/parents-focus-more-on-21st-century-skills/ |
The best fit in counseling men : are there solutions to treating men as the problem?
Hurst, Mark A.
Advisor:
Gerstein, Lawrence H.
Date:
1997
Other Identifiers:
LD2489.Z68 1997 .H87
CardCat URL:
http://liblink.bsu.edu/catkey/1073730
Degree:
Thesis (Ph. D.)
Department:
Department of Counseling Psychology and Guidance Services
Abstract:
Men's reluctance to seek psychological help appears to be related to a discrepancy between values and behaviors of the traditional male role and values and behaviors commonly associated with the counseling setting. The view that men must adopt traditional feminine ways of relating and coping to engage in and receive value from therapy has been challenged recently. Alternative interventions may be more attractive to some men who need help, but are unwilling to enter therapy. This study assessed: (a) the influence of male role socialization on help-seeking and (b) men's preferences for and expectations of different therapeutic orientations. It was proposed that more traditional men would be less likely to seek help for a serious psychological concern, but would be more attracted to interventions that reflect values consistent with traditional male ways of coping if they were to seek help (solution-focused and cognitive-behavioral therapy). Additionally, it was proposed that men expect psychologists to use interventions that require expression of more feminine characteristics and behaviors (psychodynamic and person-centered orientations). Undergraduate males (N = 259) were recruited from intact classrooms at a large midwestern university. Three gender role measures were administered to assess traditional masculinity ideology, and male role stress and conflict. Subjects viewed a video of a male client describing a serious personal problem and were asked about their likelihood to seek help if they were experiencing this problem. They were also asked to report their preference for and expectation of four therapy orientations if they were to seek help. Males who endorsed more traditional ideology and experienced greater role conflict were less likely to seek help for the videotaped problem. Males less likely to seek help preferred that their psychologist employ a solution-focused orientation if they were to seek help. Participants expected their psychologist to employ person-centered and psychodynamic orientations more often than solution-focused or cognitive behavioral orientations. Prior experience in counseling also affects preferences. Conclusions support the idea that some males view the counseling setting as a poor fit and may prefer and access interventions that more closely represent male ways of relating and coping.
This item appears in the following Collection(s)
Doctoral Dissertations
Doctoral dissertations submitted to the Graduate School by Ball State University doctoral candidates in partial fulfillment of degree requirements. | http://cardinalscholar.bsu.edu/handle/handle/176977 |
A detailed biographical study of the members of the army medical service during the Revolution and Napoleonic wars that charts their background and life both in and outside the army. It demonstrates how a group of medical practitioners from relatively humble backgrounds could use social contacts and experience forged in the army to become an established... more...
Courage After Fire. Ulysses Press 2009; US$ 14.99
The bravery displayed by our soldiers at war is commonly recognized. However, often forgotten is the courage required by veterans when they return home and suddenly face reintegration into their families, workplaces, and communities. Authored by three mental health professionals with many years of experience counseling veterans, Courage After Fire... more...
U.S. Army First Aid Manual. Skyhorse Publishing 2009; US$ 14.95
U.S. Army First Aid Manual offers skills and knowledge necessary for many life-threatening situations, with an emphasis on treating oneself and aiding others—of use to soldiers in the field, to outdoorsmen, or to anyone who may find themselves in a dangerous situation without a medical professional on-hand. This is the official manual for treating... more...
Military Unionism In The Post-Cold War Era. Taylor and Francis 2006; US$ 54.95
This unique study of military unionism shows how the changing nature of present day conflicts has made soldier representation more important then ever. This new collection of essays clearly establish the key factors in the military union debate in recent years and highlight the mechanisms different armed forces have created to deal with the aspirations... more...
The Last and Greatest Battle. Oxford University Press 2015; US$ 19.99
In The Last and Greatest Battle-- the first book devoted exclusively to the problem of military suicides--John Bateson brings this neglected crisis into the spotlight. more...
The Gulf War and Mental Health: A Comprehensive Guide. ABC-CLIO 1996; US$ 84.00
The brief, successful Gulf War resulted in few casualties, but there were still recognizable pockets of trauma. This study examines the Mental Health Services available in the theater of operations, the preparations made to train the soldiers for the stress of combat, and details of how they coped with the experience of combat. It assesses the Gulf... more...
The Veterans and Active Duty Military Psychotherapy Progress Notes Planner. Wiley 2010; US$ 65.00 US$ 56.33
The Veterans and Active Duty Military Psychotherapy Progress Notes Planner contains complete prewritten session and patient presentation descriptions for each behavioral problem in The Veterans and Active Duty Military Psychotherapy Treatment Planner. The prewritten progress notes can be easily and quickly adapted to fit a particular client need... more...
The Psychology and Physiology of Stress. Elsevier Science 2012; US$ 72.95
The Psychology and Physiology of Stress investigates the psychological and physiological consequences of stress caused by the Vietnam War. It includes the contributions of the representatives of the US Armed Forces and the Army of the Republic of Vietnam. Furthermore, it summarizes advances both in the clinical and research spheres that have evolved... more...
Understanding Military Workforce Productivity. Springer New York 2014; US$ 156.75
Based on a survey of health-related behaviors conducted for the Department of Defense by RTI International since 1980, this book examines trends in substance abuse, health behavior, and mental health among active duty military personnel over the past 20 years more...
Making Men Moral. NYU Press 1997; US$ 27.00
On May 29, 1917, Mrs. E. M. Craise, citizen of Denver, Colorado, penned a letter to President Woodrow Wilson, which concluded, We have surrendered to your absolute control our hearts' dearest treasures--our sons. If their precious bodies that have cost us so dear should be torn to shreds by German shot and shells we will try to live on in the hope... more... | http://www.ebooks.com/subjects/military-sciences-other-services-ebooks/3687/?sortBy=SortAuthor |
Which One is Better for the Environment: Physical or Digital Music?
A study published yesterday found that streaming digital music led to “unintended” environmental and economic impacts. Despite a reduction in the use of plastics in physical music media, storing and transmitting digital music led to an increase in greenhouse gas emissions (GHGs).
Researchers discovered that the amount of GHGs generated by the streaming and downloading of music online is actually much larger than the amount that was once generated by the production of plastic used to make vinyl records, cassettes and CDs in earlier decades.
The study seems focused on plastic use in physical media vs. storing and streaming digital music from servers. I’d like to see more data, such as how much electricity is used when billions of people play the average CD vs. a digital song. | |
Emerson or Hawthorne?
Abstract
This paper focuses on Ralph Waldo Emerson's influence on prominent American writers. Specifically, the paper examines themes that Emerson emphasizes in Self-Reliance and Nature and how those themes are central to selected works by Nathaniel Hawthorne, Walt Whitman, Henry David Thoreau, and Herman Melville. Motifs include, but are not limited to, identity, independence, individuality, introspection, isolation, and ingenuity.
This paper has been withdrawn. | https://cupola.gettysburg.edu/student_scholarship/118/ |
Flowers attract different types of pollinators. Bees, wasps, butterflies, moths and flies, even hummingbirds and bats, help pollinate flowers.
Honey bees produce honey for us to eat. Pollinators help increase the yield of fruits, vegetables and nuts. Every plant that has a flower benefits from bees and pollinators. They are important in our farming and gardening.
We took some pictures in our local area of pollinators in action. Included are mining bees that are native to our Rocky Mountain area. They are considered solitary bees and dig their nests at trail heads in the forests and mountains. Click on the pollinator link below for a simple science reading lesson. | https://www.montessorimom.com/Pollinators/ |
A blog about all things IoT and business transformation.
Topic: Healthcare
IoT / Healthcare | 4 min | Written By Tatsiana Tsiukhai / May 21, 2021
Connected Health: How to Secure Booming Medical Devices for Remote Patient Monitoring
Medical devices remain the main target for cyberattacks. Learn how to ensure healthcare data security with various approaches and tactics.

IoT / Healthcare / Edge Computing / AI | 5 min | Written By Maryia Aukhimovich / April 6, 2021
Business Opportunities in Health IoT in 2021: Edge Computing and AI for Connected Devices
Health IoT is a crucial element of emerging virtual care. Let’s see how AI and edge computing help augment telehealth services with health data and what the market expects from digital health companies.

Healthcare / Robotics | 6 min | Written By Nadejda Alkhaldi / February 9, 2021
How Robotic Process Automation Helps the Healthcare Sector Ease the Pandemic-Related Stress
Learn how robotic process automation can reduce physician burnout and help the HR department. Also, find out why many hospitals are still hesitant to adopt this technology.

Wearables / Healthcare / Artificial Intelligence | 5 min | Written By Vitaly Gonkov / December 16, 2020
5 Ways AI-Powered Wearable Devices are Rocking the Healthcare Industry
In the times of COVID-19, AI-powered wearable devices can help hospitals serve more patients while keeping healthcare delivery costs down. Here’s how.

Healthcare / Machine Learning / Artificial Intelligence | 9 min | Written By Nadejda Alkhaldi / October 28, 2020
A Rundown of AI-driven Medical Imaging, and How We Can Increase Its Acceptance and Adoption
What does the future of AI-based medical imaging look like? Is it true that smart algorithms now outperform human radiologists? What prevents Artificial Intelligence from going mainstream in healthcare? These are just a few of the questions we cover in our latest blog post.

Internet of Things / Wearables / Healthcare | 7 min | Written By Vera Solovyova / September 1, 2020
IoT for Seniors: How Technology Improves Quality of Life for the Elderly
IoT solutions for senior citizens help the elderly monitor health conditions at home, interact with the outside world, and live independently longer. Here’s how.

IoT / Technology / Healthcare | 9 min | Written By Nadejda Alkhaldi / August 4, 2020
How Has Telehealth Evolved During the Pandemic, and What Comes Next?
Amid the coronavirus pandemic, telehealth has become a reasonable substitute for regular doctor-patient communication. Here’s why.

Healthcare / Machine Learning / Artificial Intelligence | 3 min | Written By Eugen Nekhai / July 21, 2020
Machine Learning in Radiology: How to Achieve an Accurate Diagnosis with Limited Training Data
In medical datasets, the images of healthy people usually outnumber X-rays indicating the presence of a disease. Is it possible to train accurate Machine Learning models for medical image analysis in the absence of a balanced dataset?

IoT / Wearables / Healthcare | 5 min | Written By Alex Makarevich / July 2, 2020
Rethinking Remote Patient Monitoring in 2020
The COVID-19 pandemic has completely altered the way patients receive healthcare services. In our latest article, we zoom in on remote patient monitoring (RPM) solutions, explain how these systems work, and identify RPM trends for 2020. | https://www.softeq.com/blog/tag/healthcare |
What affects the Rate of Reaction when Calcium carbonate is dissolved in nitric acid?
Extracts from this document...
Introduction
What affects the Rate of Reaction when Calcium carbonate is dissolved in nitric acid? Introduction This Sc1 is about looking at what affects the rate of reaction in an experiment and why it affects it. There are several ways to find this out. One way is by adding a catalyst: a catalyst increases the rate without being used up itself. Different reactions need different catalysts. Catalysts can also reduce the activation energy, which is good for big industries because the faster the product is made the less costly it will be. Activation energy basically means the energy needed to break reactant bonds. An example of a catalyst being used is manganese dioxide catalysing the decomposition of hydrogen peroxide. Another way to see what affects the rate of reaction is to increase the temperature. By increasing the temperature, the particles will have more energy and will move faster. This will make them collide more often and with greater energy. Changing the surface area affects the reaction as well. A larger surface area will give a faster reaction. This is because reactions happen on the surface of solid reactants. ...read more.
Middle
Each time we did the experiment we timed the reaction for five minutes. We gradually increased the concentration of the acid in steps of 0.5 molar. The most we increased it to was 2 molar, because this was the most concentrated acid we could get hold of and we never needed any more to see the result. To make it a fair experiment we had to measure the acid and water as precisely as possible each time round. We also had to make sure that when we changed between test tubes we tried not to lose any bubbles. The timer was stopped and started as consistently as possible. We tried as much as possible to get everything on target, but there were some slight mess-ups. Every molar was repeated 3 times for accurate results in case we had accidentally got something wrong in one.

Diagram

Results

These are our results for each different molar of concentrated acid, starting from 0.5 to 2.0 molar. Each one was repeated three times for accurate results.

Time (sec)   1st   2nd   3rd   Average
0            0     0     0     0
30           0     0     0     0
60           0.5   0.5   0.2   0.4
90           0.7   0.8   0.5   0.7
120          1.0   1.2   0.8   1.0

...read more.
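Assuming the Average column above is simply the arithmetic mean of the three repeats rounded to one decimal place, the 60-second row, for example, works out as

$$\frac{0.5 + 0.5 + 0.2}{3} = 0.4$$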
Conclusion
There were one or two occasions where all the results of a run were different from the others; this was the first run of the 1.5 molar concentrated acid. This could have happened because the acid was measured wrong; all the results for that first run were higher than the others. It could also have been a result of too little calcium carbonate being put in, which resulted in a quicker reaction. We knew this column was odd because the other results for the 1.5 molar experiment were similar to each other. Other than that, the majority of the results were quite good. The experiment could have been improved a lot and we could have got much more accurate results. We could have improved it by using accurate scales, which would have given us more precise readings. Another way to improve our experiment would be to use a gas syringe; this would have made it easier to take the reading when the gas was being given off. We could also have used better measuring instruments for the acid. By using these improvements we could have got better results and an overall much clearer difference. Kanwaljit Singh Nagra ...read more.
This student written piece of work is one of many that can be found in our GCSE Patterns of Behaviour section.
Found what you're looking for? | http://www.markedbyteachers.com/gcse/science/what-affects-the-rate-of-reaction-when-calcium-carbonate-is-dissolved-in-nitric-acid.html |
Montreal athletes to cheer for at the Tokyo Olympics
It’s been an unusual journey for the Tokyo 2020 Olympics. As the opening ceremonies are underway, Tokyo is under another state of emergency due to the pandemic, and organizer controversy has gotten more attention than the Games themselves. It’s an Olympics like no other, held a year later with no spectators cheering the athletes on, as countries hope for personal bests and medal finishes. Montrealers can get ready to watch a wide range of Montreal athletes in the Olympics, but bear in mind the 13-hour time difference.
Canada is sending its largest summer team with 370 athletes and the Games feature new sports such as skateboarding, sports climbing, karate and surfing. There are many Montrealers in the mix, some returning and many making their Olympic debuts.
Leylah Fernandez is among the Montreal athletes to cheer for at the Tokyo 2020 Olympics
Canada is experiencing a golden age in tennis with great success for both men and women. Despite missing key players Bianca Andreescu and Denis Shapovalov due to COVID concerns, Montreal is well represented on the court. Montreal’s Felix Auger-Aliassime continues to be the talk of men’s tennis as the world’s 15th ranked player reached a career high in the Wimbledon quarterfinals this past month. He’ll be playing against defending Olympic Champion Great Britain’s Andy Murray in the first round. Laval’s Leylah Fernandez continues to excite and rise up the ranks as the 18-year-old was key to Canada’s success at the Billie Jean King Cup playoffs and won her first WTA title at the Monterrey Open this year. She’ll be playing Dayana Yastremska of Ukraine in the first round. The Olympic Tennis tournament will start on July 24.
The Women’s rugby 7s will look to improve on their Rio Olympic bronze medal performance. Six members of the 2016 Rio team will be returning including Montreal’s Bianca Farella who is fifth on the all time Seven Series scoring list and became the second woman to reach 150 career tries when Canada captured Silver at the 2020 World Rugby Series in New Zealand. Canada will play Brazil on July 29.
Canada’s women’s water polo team returns to the Olympics for the first time since the 2004 Athens games. A silver medal performance at the 2019 Pan Am games in Lima Peru helped the team qualify for the games. Part of the team from Lima includes Montrealers Axelle Crevier (a second generation player, her mother Marie-Claude Deslières was on Canada’s water polo team at the 2000 Sydney games), Elyse Lemay-Lavoie and team veteran and 2010 FINA World Cup All Star Joëlle Békhazi. Canada is up against Australia in the preliminary round on July 24.
Caroline Veyre will be making her Olympic boxing debut in the newly added 57kg featherweight class. She has been competing internationally since 2013 and a gold medalist at the 2015 Pan Am games in Toronto in the 60kg lightweight class. Veyre will be fighting in the preliminaries July 26.
Kristel Ngarlem is among the many Montrealers competing in the Summer Olympic Games in events such as tennis, diving, rugby, basketball and weightlifting
Kristel Ngarlem, who has competed internationally since debuting at the 2014 Commonwealth games in Glasgow, will be making her Olympic debut in weightlifting. Ngarlem won bronze at the Roma Weightlifting World Cup in 2020, lifting a personal best of 233kg. She’ll be lifting in the 76kg on Aug. 1.
Oliver Bone is among the Montreal athletes to cheer for at the Tokyo 2020 Olympics
It’s a sailing comeback for Montreal’s Oliver Bone. He competed at the Beijing games in 2008 then retired in 2011. He coached the duo of Jacob and Graham Saunders in the 470 two-person dinghy at the 2016 Rio games. Now in Tokyo, Bone and Jacob Saunders are two of Canada’s nine sailors competing in the 470 on July 28.
Guillaume Boivin was a recent addition to the Canadian road cycling team and will be the support rider for Canada’s Michael Woods in the Olympic road race on July 24. A pro cyclist since 2009, Boivin was a bronze medalist at the 2015 Pan Am games in Toronto, has raced in the Giro Italia and was recently the support rider for Woods at this year’s Tour de France.
Meaghan Benfeito is among the many Montrealers competing in the Summer Olympic Games in events such as water polo, fencing, boxing, sailing, cycling and judo
Diving is massive in Montreal. The city is sending a 10-member team led by four-time Olympian Jennifer Abel, who is aiming to win an individual medal in the 3m springboard as well as another medal in the synchronized 3m springboard with partner Mélissa Citrini-Beaulieu (making her Olympic debut). Meaghan Benfeito returns for a third games and hopes to medal again after winning bronze in both the 10m platform and synchronized 10m platform in Rio with longtime partner Roseline Filion. Benfeito will be competing in the synchro event with Olympic newcomer Caeli McKay, who’s been her partner since 2017. Greenfield Park’s Pamela Ware returns for a second games and will be competing in the 3m springboard. The new faces of diving are Cédric Fofana in the 3m springboard, Pointe-Claire’s Nathan Zsombor-Murray in both the 10m platform and partnering with second time Olympian Vincent Riendeau in the 10m synchronized platform. The diving competition starts July 25.
Maximilien Van Haaster is among the Montreal athletes to cheer for at the Tokyo 2020 Olympics
Canada is sending its largest fencing team since Beijing 2008 and it features two Montrealers. Maximilien Van Haaster returns for his second Olympics competing in Individual and Team Foil on July 26 and Aug. 1 while Marc-Antoine Blais Bélanger will be making his Olympic debut in the épée individual event on July 25.
Canada’s women’s basketball team are on a mission. After making the quarterfinals at both the London and Rio games, the fourth-ranked FIBA World team is aiming for a medal, featuring star player Kia Nurse and co-flag bearer Miranda Ayim. The women’s squad qualified for the games with an undefeated record at the FIBA Olympic qualifying event in Ostend, Belgium in 2020. The roster includes Montreal guard Nirra Fields at her second Olympics. A member of the national team since 2013, Fields had 34 points and 11 rebounds in Rio and was part of the 2015 Pan Am team that won Gold in Toronto. Canada will start their tournament against Serbia on July 26.
Catherine Beauchemin-Pinard is among the Montreal athletes to cheer for at the Tokyo 2020 Olympics
Japan is the birthplace of Judo and Canada is sending a six-person team to compete. The team includes Montreal’s Arthur Margelidon making his Olympic debut after missing the Rio games with a broken forearm weeks before competition. Margelidon has been on the podium five times at Grand Prix events and will be competing in the 73kg event on July 26. This will be the second Olympics for Catherine Beauchemin-Pinard who holds five Grand Slam medals, was a silver medalist at the 2015 Pan Am games in Toronto and will be competing in the 63kg event on July 27. ■
To see when Montreal athletes will be competing, and for the complete Tokyo 2020 Olympics schedule, please click here.
Cindy Lopez is a Montreal based photographer who is the main photographer at Cult MTL. Cindy has photographed various assignments, from music festivals to news and sporting events and is a main contributor to the Cult MTL Instagram. On occasion, she writes sports related stories ranging from lists to events for the website. | |
Curio GroupWe work with educators, corporates, professional bodies and health services to create learning and development for a changing world.
About Curio Faculty
Curio, a leading education consulting and learning experience company, works with universities, vocational education providers, schools and companies to drive change through learning.
We have led the online learning transformation of many of our clients and partners through a focus on the creation of exceptional learning experiences, through course design and online facilitation.
Curio Faculty provides quality education services to universities, vocational education providers, other higher education providers, and corporate partners and not for profits.
We are passionate educators who believe in the power of expert content development and online facilitation to deliver exceptional learner outcomes.
Course Overview
The overall aim of this course is to examine the link between high-level code written by humans and low-level code executed by systems. An understanding of the ways in which these can differ is essential for developing exploitation skills, and for low-level defensive design and testing. This course requires entering students to already understand higher-level programming languages such as C and low-level machine code for common microprocessors such as x86/x64 or ARM. Students learn software assurance and security analysis as well as basic malware analysis.
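As a purely illustrative sketch, not part of the course materials, the high-level-to-low-level link described above can be seen in a few lines of C: a compiler typically lowers the function below into a short counted loop of loads, adds, compares and conditional branches, and reverse engineering works in the opposite direction, recovering the idiom from that branch structure in a disassembler or decompiler.

```c
#include <stddef.h>

/* Sum an array: a high-level idiom that compilers commonly lower to a
 * counted loop using an index or pointer register, a load, an add, a
 * compare and a conditional branch. */
long sum(const long *values, size_t count)
{
    long total = 0;
    for (size_t i = 0; i < count; i++) {
        total += values[i];   /* one load plus one add per iteration */
    }
    return total;
}
```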
Learning Outcomes
- Explain the structure and key elements of a compiled binary on common target platforms
- Appreciate how idioms in high-level languages are compiled into machine code and how linking works, and be able to apply strategies to convert machine code back into such idioms
- Select and use appropriate tools such as debuggers and decompilers
- Explain characteristic common vulnerabilities and be able to identify potential vulnerabilities in binaries
- Analyse and unpack obfuscated malware
Details:
TOTAL HOURS: 150 hours of work, at $100/hour
DURATION: from late Jan to mid-April 2023
DEADLINE: 31 Jan 2023
Please submit a tailored CV and a 2-page course proposal structure with weekly contents.
Position Summary
A subject-matter expert (SME) is a person who is an authority in an area, topic or discipline.
SMEs are engaged to assist the development or re-development of a course (subject). They may contribute, author, research and provide a high level of assistance as well as expertise in the development or re-development of the course.
SMEs may collaborate with Learning Designers and Course Coordinators during the development or re-development of courses. This may include updating an existing course or the assessments within an existing course, creating a course map for a new course, developing a prototype, and building a full course.
SMEs are often sourced from industry, consultancies, partner organisations, or recruitment organisations.
Key responsibilities and Duties:
- Participate in workshops to map out the overall curriculum and course architecture, the assessment strategy in overview, and the more finely grained structure of individual learning activities, contents, test questions
- Participate, as required, in workshops or conversations with clients, partners and industry to mine their expertise and understand how they will be able to contribute
- Work with Learning Designer to design activities, develop authentic and engaging learning activities, create performance tasks and identify opportunities for automated formative feedback
- Write course content based on a pre-agreed course map, ensuring appropriate referencing
- Deliver course content according to project timelines
- Assist in planning for the development of bespoke videos, writing interview questions/prompts
- Communicating with University librarians around resources to be located / cleared
- Identify resources to be used
- Communicate with graphic designers on images to be designed for the course
- Source contemporary course content material
- Write assessment tasks and rubrics with the learning designer and/or learning and teaching experts from partner/client and creating any assessment materials such as case studies or data sets
- Quality assurance of each learning activity in the final learning design from a SME perspective
- Incorporate feedback and changes as required by the client/partner
Objectives of this role:
This role of Subject Matter Expert (SME) is primarily to provide relevant subject matter content for online courses. The SME works in close collaboration with Learning Experience Designers and Developers and partners/clients to develop meaningful and engaging courses, support resources and learning experiences based on the objectives and learning outcomes of the learning experience. The design and development of learning experiences is typically closely project managed but also dynamic and fast paced, to achieve project deadlines, and entails highly developed communication and collaboration.
Requirements
Skills and Qualifications:
- At least 5 years' industry experience in cyber security, with a Masters or PhD in computer science
- Knowledge of C, assembly language, and operating systems
- Have experience in course development.
- Online teaching experience in a relevant field, tertiary level is highly desirable
- Experience in curriculum design, online course content development is highly desirable
- Contemporary Industry and/or professional practice experience and currency in relevant field
- Contemporary knowledge of e-learning trends and student learning is highly desirable
- Ability to write course content that is responsive to the learner audience
- Demonstrated commitment to high quality teaching in delivery; an ability to lead a workshop and facilitate in an engaging and original way.
- Ability to work as part of a team with Learning Experience Designers and Developers and Clients/Partners
- An interest in online learning pedagogy
- Hold a solutions-oriented approach
- Ability to self-manage time, and to work to deadlines
- Openness and ability to receive feedback, incorporating input from diverse perspectives
- Strong communication and interpersonal skills
Key Selection Criteria
- SMEs must have a relevant area, topic or discipline or related field if the topic area is new or developing
- Must be an industry leader or thought leader in the relevant area, topic or discipline
- Experience running workshops and/or presenting at conferences or industry events
- Will happily collaborate with Learning Designers and Course Coordinators
- Has an understanding/passion for online learning and online pedagogy
- Excellent communication skills
- Can work towards milestones and deadlines - course map, prototype, full course
- Enjoys telling a story - narrative, structure, sequencing
- Can think outside the box to reimagine a course vision, mission and objectives
* Salary range is an estimate based on our salary survey 💰
Tags: C Malware PhD Reverse engineering Security analysis Strategy Teaching Vulnerabilities
Perks/benefits: Career development Conferences Team events
Find open roles in Ethical Hacking, Pen Testing, Security Engineering, Threat Research, Vulnerability Analysis, Cryptography, Digital Forensics and Cyber Security in general, filtered by job title or popular skill, toolset and products used. | https://infosec-jobs.com/job/22173-eoi-for-subject-matter-expert-for-zzen9215-reverse-engineering/ |
23 F.Supp.2d 1091 (1998)
Marilyn OLMER, an Individual, John Kelly, an Individual, Theresa Lane, an Individual, and Michelle Mann, an Individual, Plaintiffs,
v.
CITY OF LINCOLN, a Municipality, William Austin, in his Official Capacity as Lincoln City Attorney, and Thomas Casady, in his Capacity as Chief of the Lincoln Police Department, Defendants.
No. 4:98CV3311.
United States District Court, D. Nebraska.
November 4, 1998.
*1092 *1093 Jefferson Downing, Gary L. Young, Keating, O'Gara Law Firm, Lincoln, NE, Vollis E. Summerlin, Jr., Krista L. Kester, Osborn, Summerlin Law Firm, Lincoln, NE, for Plaintiffs.
Daniel E. Klaus, Rembolt, Ludtke Law Firm, Lincoln, NE, Mark S. Shelton, David Donovan, Wilmer, Cutler Law Firm, Washington, DC, for Defendants.
*1094 MEMORANDUM AND ORDER
KOPF, District Judge.
Plaintiffs, four individuals who have engaged in protests and demonstrations opposing abortion in the vicinity of a Presbyterian church in Lincoln, Nebraska, seek a preliminary injunction prohibiting enforcement of a Lincoln city ordinance that makes it unlawful for any person to intentionally or knowingly engage in "focused picketing" of a "scheduled religious activity" at certain times and within certain boundaries of a religious premises.
The plaintiffs claim that the ordinance violates the Free Speech, Establishment, and Free Exercise Clauses of the First Amendment,[1] as well as the Equal Protection and Due Process Clauses of the Fourteenth Amendment.[2] Further, the plaintiffs claim that the ordinance is unconstitutionally vague, ambiguous, and overbroad. (Filing 1, Compl. for Declaratory & Inj. Relief ¶ 31.) The defendants, the City of Lincoln and its city attorney and chief of police,[3] assert, among other things, that the ordinance is a constitutional time, place, and manner restriction upon expressive activity that does not violate the First Amendment.
After an evidentiary hearing on the plaintiffs' preliminary injunction motion, I conclude that a preliminary injunction should issue because the ordinance is not narrowly tailored to serve the government interest that prompted the ordinance, in violation of the Free Speech Clause of the First Amendment.[4] Given the court's resolution of the plaintiffs' Free Speech Clause argument, and because federal courts are required to avoid unnecessary and broad adjudication of constitutional issues, I do not reach the merits of the plaintiffs' other constitutional claims.[5]See Ashwander v. Tennessee Valley Auth., 297 U.S. 288, 347, 56 S.Ct. 466, 80 L.Ed. 688 (1936) (Brandeis, J., concurring) ("The Court will not pass upon a constitutional question although properly presented by the record, if there is also present some other ground upon which the case may be disposed of."); American Foreign Serv. Ass'n v. Garfinkel, 490 U.S. 153, 161, 109 S.Ct. 1693, 104 L.Ed.2d 139 (1989) (district court should not pass on constitutional question unless it was "imperative" on remand in case involving statute which precluded use of appropriated funds to implement or enforce nondisclosure agreements signed by Executive Branch employees; "courts should be extremely careful not to issue unnecessary constitutional rulings"); Carhart v. Stenberg, 972 F.Supp. 507, 509 (D.Neb.1997) (court declined to reach merits of alternative constitutional argument regarding state's "partial-birth" abortion law because federal courts are to avoid unnecessary and broad adjudication of constitutional issues, and court resolved request for preliminary injunctive relief on another constitutional ground).
I. FINDINGS OF FACT
The Ordinance
1. On September 14, 1998, the Lincoln City Council passed Ordinance No. 17413, which created Lincoln Municipal Code *1095 § 9.20.090. Lincoln Mayor Mike Johanns vetoed the ordinance on September 16, 1998, but the Lincoln City Council voted to override the mayor's veto on September 21, 1998. Section 9.20.090 provides as follows:
9.20.090 Disturbing the Peace by Focused Picketing at Religious Premises
(a) Definitions. As used in this ordinance, the following terms shall have the meanings here set forth:
(1) The term "religious premises" shall mean "the property on which is situated any synagogue, mosque, temple, shrine, church or other structure regularly used for the exercise of religious beliefs, whether or not those religious beliefs include recognition of a God or other supreme being";
(2) The term "scheduled religious activity" shall mean "an assembly of five or more persons for religious worship, wedding, funeral, memorial service, other sacramental ceremony, religious schooling or religious pageant at a religious organization's premises, when the time, duration and place of the assembly is made known to the public, either by a notice published at least once within 30 days but not less than 3 days before the day of the scheduled activity in a legal newspaper of general circulation in the city or in the alternative by posting the information in a reasonably conspicuous manner on the exterior premises for at least 3 days prior to and on the day of the scheduled activity."
(3) The term "focused picketing" shall mean "the act of one or more persons stationing herself, himself or themselves outside religious premises on the exterior grounds, or on the sidewalks, streets or other part of the right of way in the immediate vicinity of religious premises, or moving in a repeated manner past or around religious premises, while displaying a banner, placard, sign or other demonstrative material as a part of their expressive conduct." The term "focused picketing" shall not include distribution of leaflets or literature.
(b) It shall be deemed an unlawful disturbance of the peace for any person intentionally or knowingly to engage in focused picketing of a scheduled religious activity at any time within the period from one-half hour before to one-half hour after the scheduled activity, at any place (1) on the religious organization's exterior premises, including its parking lots; or (2) on the portion of the right of way including any sidewalk on the same side of the street and adjoining the boundary of the religious premises, including its parking lots; or (3) on the portion of the right of way adjoining the boundary of the religious premises which is a street or roadway including any median within such street or roadway; or (4) on any public property within 50 feet of a property boundary line of the religious premises, if an entrance to the religious organization's building or an entrance to its parking lot is located on the side of the property bounded by that property line. Notwithstanding the foregoing description of areas where focused picketing is restricted, it is hereby provided that no restriction in this ordinance shall be deemed to apply to focused picketing on the right of way beyond the curb line completely across the street from any such religious premises.
2. Ordinance No. 17413, which created Lincoln Municipal Code § 9.20.090, contains a severability clause that states: "If any part of this ordinance shall be adjudged to be invalid or unconstitutional, such adjudication shall not affect the validity of this ordinance as a whole, or any part thereof not adjudged invalid or unconstitutional." City of Lincoln Ordinance No. 17413 § 3 (Sept. 21, 1998).
3. Ordinance No. 17413 contains a section entitled "Legislative Intent and Findings," which characterizes the focused-picketing law as a "time, place and manner restriction[ ] ... intended to provide narrowly tailored, reasonable, common[-]sense limitations on focused picketing." City of Lincoln Ordinance No. 17413 § 1(f) (Sept. 21, 1998). The statement of legislative intent further provides:
It is the intent of this ordinance to preserve the peace at religious premises in order to protect and secure several significant and compelling interests of this city. Those interests include the health, safety and welfare of all the citizens and especially of children, all citizens' freedoms of expression, *1096 assembly, association and religion, and the ordinary, good public order of the community.
City of Lincoln Ordinance No. 17413 § 1(a) (Sept. 21, 1998). The statement of intent contains a finding that the focused picketing to be regulated disrupts religious activities, hinders reasonable access to religious activities by families with young children, disturbs the peace of those seeking to participate in such activities, and interferes with the use and safety of public sidewalks and streets that provide access to religious premises. City of Lincoln Ordinance No. 17413 § 1(b) (Sept. 21, 1998). According to the statement of legislative intent, "This ordinance is intended only to prohibit a certain specified form of disturbance of the peace which arises when one form of expressive conduct, focused picketing, tends to override another form of expressive conduct, namely free exercise of religion." City of Lincoln Ordinance No. 17413 § 1(c) (Sept. 21, 1998). The statement goes on to describe the specific injury targeted by the legislation:
The mechanism of such injury to individual freedom of religion operates as follows: infants and young children are emotionally vulnerable to focused picketing in close proximity to them, which is a typical characteristic of focused picketing at religious premises, and many of these children tend to react with fear, unhappiness, anxiety and other emotional disturbance when such activity is imposed on them. Families with infants and young children who must pass through the ring of focused picketing in order to attend or leave religious activities are for the time of entrance to the time of departure, captive audiences. Their option of foregoing their worship or other religious activity on the one hand, or risking pain and injury to their children on the other, amounts to a substantial and intolerable burden on their personal religious freedom. The technique of using focused picketing to disturb the very young so their families will feel coercion either to comply with picketers' wishes or forego their chosen religious activity entirely, is a pernicious and contemptible form of harassment. A buffer zone to protect against focused picketing at the immediate borders of religious premises is a reasonable remedy, and by reducing the proximity will provide some protection to the peace of mind of children and to the religious freedom of families.
City of Lincoln Ordinance No. 17413 § 1(e) (Sept. 21, 1998).
4. On September 30, 1998, within days of the ordinance's effective date, this court issued a temporary restraining order (filing 9) prohibiting enforcement of Lincoln Municipal Code 9.20.090, also known as Ordinance No. 17413. The temporary restraining order was to be effective for no longer than 10 days pursuant to Fed.R.Civ.P. 65(b), but counsel for the parties subsequently agreed to extend the temporary restraining order until November 2, 1998, or until issuance of this court's order granting or denying the plaintiffs' motion for preliminary injunction.
The Plaintiffs' Picketing Activity
5. Plaintiffs John Kelly, Theresa Lane, and Michelle Mann hold religious, political, and moral beliefs that abortion is wrong and that the appointment of Dr. Winston Crabb, a medical doctor, as a deacon and elder in the Westminster Presbyterian Church in Lincoln, Nebraska, violated Biblical requirements. (Affs. John Kelly, Theresa Lane & Michelle Mann ¶¶ 2-4.) These plaintiffs have engaged in protests and demonstrations on the public sidewalk that adjoins Westminster Presbyterian Church, carrying signs which read, "Winston Crabb, Abortionist and Elder," "1 Corinthians 5:13," "Dr. Crabb is Unfit to be an Elder," "Jesus Loves the Little Children," and "Life." These plaintiffs testified[6] that they have never intentionally or knowingly harassed, threatened, or intimidated anyone, including children, during their protests at Westminster Presbyterian Church. (Id. ¶¶ 5-7.) Specifically, they testified that:
While carrying protest signs at the church, I generally hold the sign in a manner that allows those walking along the sidewalk or driving by on the street to read the sign, *1097 but I have not used the sign to: (a) block or attempt to block anyone's access to the church, church parking lot, or public sidewalk; (b) touch, strike or attempt to touch or strike anyone; and (c) never attempted to force any apparently unwilling viewers to read my sign by either following viewers who have turned away from my sign, by intentionally moving my sign into the line of sight of viewers who have turned away from my sign, or by particularly targeting my sign at any apparently unwilling viewers.
(Id. ¶ 8.)
6. Larry Donlan is the director of Rescue the Heartland, the organization allegedly responsible for the picketing activities at Westminster Presbyterian Church. Donlan testified before the Lincoln City Council that the picketers at Westminster "agreed to exclude graphic photos of abortion from the south side entrance of the church. So, those who objected to them could enter on that side." Minutes of the Regular Lincoln City Council Meeting: Comments During Public Hearing on Amending Chapter 9.20 of the LMC by Adding a New Section Numbered 9.20.090, at 28 (Sept. 8, 1998) (comments of Larry Donlan, Director of Rescue the Heartland).
7. Lincoln Chief of Police Tom Casady testified that during police monitoring of the picketing at Westminster Presbyterian Church over the course of several months, the "officers have not seen anyone being chased. They haven't seen any physical confrontations nor anyone's way being blocked or impeded. They haven't seen anyone committing a crime. There's been dialog, but nothing that officers felt rose to the level of something for instance like terroristic threats or disturbing the peace." The police monitoring at Westminster included police officers in plain clothes patrolling the area and videotaping from a vehicle. Minutes of the Regular Lincoln City Council Meeting: Comments During Public Hearing on Amending Chapter 9.20 of the LMC by Adding a New Section Numbered 9.20.090, at 36 (Sept. 8, 1998) (comments of Chief of Police Tom Casady).
8. Members of the congregation and clergy of Westminster Presbyterian Church testified that the picketers have thrust five- to six-foot signs at children as they enter and exit the church, and "children, after being confronted with such signs, have experienced nightmares and problems with sleeping"; picketers have displayed "gruesome and graphic" signs against side and rear windows of cars in which children were seated when such cars were entering or exiting the Westminster vicinity; "part of Sunday routine included asking our children to remove their seat belts and sit on the floor as we pulled out of the church parking lot to avoid having them see the large graphic signs"; "When my children and I attend worship services together, my children are always confronted by the picketers with their graphic signs. When I attend services alone, I am rarely targeted by the picketers"; "it breaks my heart that my now aged 6 year old son is telling his younger sister, `Put your head down, don't look, those bad people are there'"; the picketers target religious services and activities children are likely to attend; and 15 families, most with young children, have left the Westminster Presbyterian Church because of the picketing at the church. (Aff. Rev. William C. Yeager ¶ 5; Aff. Jane Sievers ¶ 3; Aff. Joseph J. Morten ¶¶ 3, 4, 6; Aff. Maureen Allman ¶ 3; Aff. Mary E. Huenink ¶¶ 3-4; Aff. Carl Horton ¶¶ 3-6 & 9.)
The City Council's Vote
9. Before passing the challenged ordinance, the Lincoln City Council heard a substantial amount of anecdotal testimony describing the picketing activities at Westminster Presbyterian Church, as well as comments from law professors, Westminster's attorney, and City Attorney Bill Austin. Mr. Austin advised the City Council that, in his view, the proposed Lincoln ordinance was unconstitutional. Minutes of the Regular Lincoln City Council Meeting: Comments During Public Hearing on Amending Chapter 9.20 of the LMC by Adding a New Section Numbered 9.20.090, at 4 (Sept. 8, 1998) (comments of City Attorney Bill Austin).
II. CONCLUSIONS OF LAW
A. Preliminary Injunction Standards
In Dataphase Systems, Inc. v. CL Systems, Inc., 640 F.2d 109 (8th Cir.1981), the *1098 court, sitting en banc, clarified the standard district courts should apply when considering a motion for preliminary injunctive relief:
(1) the threat of irreparable harm to the movant; (2) the state of balance between this harm and the injury that granting the injunction will inflict on other parties litigant; (3) the probability that movant will succeed on the merits; and (4) the public interest.
Dataphase, 640 F.2d at 114. "No single factor in itself is dispositive; rather, each factor must be considered to determine whether the balance of equities weighs toward granting the injunction." United Indus. Corp. v. Clorox Co., 140 F.3d 1175, 1179 (8th Cir.1998).
At base, the question is whether the balance of equities so favors the movant that justice requires the court to intervene to preserve the status quo until the merits are determined....
....
[W]here the balance of other factors tips decidedly toward movant a preliminary injunction may issue if movant has raised questions so serious and difficult as to call for more deliberate investigation.
Dataphase, 640 F.2d at 113.
1. Probability of Success on the Merits
The First Amendment provides in part that "Congress shall make no law ... abridging the freedom of speech...." U.S. Const. amend. I. "There is no doubt that as a general matter peaceful picketing ... [is] expressive activit[y] involving `speech' protected by the First Amendment." United States v. Grace, 461 U.S. 171, 103 S.Ct. 1702, 75 L.Ed.2d 736 (1983) (statute prohibiting display of flags, banners, or devices designed to bring into public notice any party, organization, or movement on United States Supreme Court grounds held unconstitutional as applied to public sidewalks on the Court's grounds because statute did not substantially serve its purpose).
"It is also true that `public places' historically associated with the free exercise of expressive activities, such as streets, sidewalks, and parks, are considered, without more, to be `public forums.'" Id. "`[T]ime out of mind' public streets and sidewalks have been used for public assembly and debate, the hallmarks of a traditional public forum." Frisby v. Schultz, 487 U.S. 474, 480, 108 S.Ct. 2495, 101 L.Ed.2d 420 (1988) (quoting Perry Educ. Ass'n v. Perry Local Educators' Ass'n, 460 U.S. 37, 45, 103 S.Ct. 948, 74 L.Ed.2d 794 (1983)). The government's ability to restrict expressive activities in these public places is very limited.
In these quintessential public forums, the government may not prohibit all communicative activity. For the State to enforce a content-based exclusion it must show that its regulation is necessary to serve a compelling state interest and that it is narrowly drawn to achieve that end. The State may also enforce regulations of the time, place, and manner of expression which are content-neutral, are narrowly tailored to serve a significant government interest, and leave open ample alternative channels of communication.
Perry, 460 U.S. at 45, 103 S.Ct. 948 (citations omitted).
a. Content Neutrality
A governmental regulation of expressive activity will be deemed content-neutral if it is "justified without reference to the content of the regulated speech." Ward v. Rock Against Racism, 491 U.S. 781, 791, 109 S.Ct. 2746, 105 L.Ed.2d 661 (1989) (emphasis and internal quotation omitted).
The principal inquiry in determining content neutrality, in speech cases generally and in time, place, or manner cases in particular, is whether the government has adopted a regulation of speech because of disagreement with the message it conveys. The government's purpose is the controlling consideration. A regulation that serves purposes unrelated to the content of expression is deemed neutral, even if it has an incidental effect on some speakers or messages but not others.
Id.
At this early stage of the litigation, I assume the ordinance at issue is content-neutral because (1) the ordinance facially bans all "focused picketing" regardless of the apparent message sought to be conveyed; and (2) the Lincoln City Council's purpose in passing the ordinance and overriding the *1099 mayor's veto was, superficially at least, unrelated to the content of the expression conveyed by those engaging in focused picketing. However, I note that the evidence at trial could very well be otherwise.
There is evidence in the record that the ordinance at issue was written by the attorney for the Westminster Presbyterian Church; that the attorney, on the church's behalf, was attempting to target and regulate demonstrators who carry anti-abortion signs[7] outside of that particular church; and that the ordinance was drafted broadly for strategic purposes in order to avoid perceived constitutional difficulties.[8] Moreover, as noted earlier, the city attorney advised the city council that the ordinance was unconstitutional. Still further, the chief of police advised the city council that "officers have not seen anyone being chased. They haven't seen any physical confrontations nor anyone's way being blocked or impeded. They haven't seen anyone committing a crime. There's been dialog, but nothing that officers felt rose to the level of something for instance like terroristic threats or disturbing the peace." From this and other evidence, a plausible argument can be made that the ordinance's facial content-neutrality is but a pretext for siding with a large and influential church to the detriment of a few protestors who seek to express a contrary religious point of view.
b. Significant Interest
I find and conclude that the interest sought to be served by the ordinance is protecting very young children from being forced to view extremely graphic and quite disturbing images upon their entrance to, or exit from, church. Such images frighten the very young, and cause their parents to shy away from attending church with their children.
During argument on Plaintiffs' motion for a temporary restraining order, counsel for Defendants conceded that the interest sought to be protected by the ordinance was not adults, but "to protect children or their parents who are reluctant to go to church because of the impact on the child." (TRO Hr'g Tr. 38:17-40:21.) The "legislative intent and findings" section of Ordinance 17413 reflects this interest, stating that the ordinance seeks to protect several interests, "especially of children"; that picketing in a manner that disrupts religious activities or hinders access to such activities "by families with young children" disturbs the peace of individuals wishing to participate in religious activities and impedes public access to religious premises; and that "injury to individual freedom of religion" occurs when "infants and young children," who are "emotionally vulnerable to focused picketing in close proximity to them," are faced with such picketing, causing "fear, unhappiness, anxiety and other emotional disturbance when such activity is imposed on them." City of Lincoln Ordinance No. 17413 §§ 1(a), (b), (e) (Sept. 21, 1998).
Testimony before the Lincoln City Council regarding the ordinance included numerous references to "gruesome," "horrific," "scary," "gross," and "graphic" signs being displayed to children by the picketers at Westminster. *1100 Minutes of the Regular Lincoln City Council Meeting: Comments During Public Hearing on Amending Chapter 9.20 of the LMC by Adding a New Section Numbered 9.20.090 (Sept. 8, 1998). Comments by Westminster's attorney and various Lincoln City Council members at the voting session on the proposed ordinance indicate that protection of children from this graphic signage was the interest to be furthered by the ordinance.
For example, in response to a Council member's question regarding whether "this ordinance is going to take care of the problem," Westminster attorney Alan Peterson stated, "[I]t was the close proximity of the big signs or whatever form of material being involved was frightening, disturbing children.... [T]he experience of harm that has been shown has actually been the harm of frightening ... the children." City Council member Johnson stated, "The children are still the issue [and] the one's [sic] we are going to make a decision for," and City Council member Fortenberry commented, "While this ordinance is a well[-]intended effort by parents who have concern for their young children, its implications go way beyond its intended purpose." Minutes of the Regular Lincoln City Council Meeting: Verbatim Comments During Voting Session on Amending Chapter 9.20 of the LMC by Adding a New Section Numbered 9.20.090, at 2, 6, 10 (Sept. 14, 1998).
I also find that the city's interest in protecting very young children from frightening images is constitutionally important; that is, the interest is "significant," "compelling," and "legitimate."[9] For example, the United States Supreme Court has recognized "that there is a compelling interest in protecting the physical and psychological well-being of minors." Sable Communications of California, Inc. v. Federal Communications Comm'n, 492 U.S. 115, 126, 109 S.Ct. 2829, 106 L.Ed.2d 93 (1989). In another case, the Court stated: "[W]e have repeatedly recognized the governmental interest in protecting children from harmful materials." Reno v. American Civil Liberties Union, 521 U.S. 844, 117 S.Ct. 2329, 2346, 138 L.Ed.2d 874 (1997).
In fact, the plaintiffs concede: "There is no question here that the protection of children would qualify as a significant government interest, and in fact, there is really [no] question it would qualif[y] as a compelling government interest." (Prelim. Inj. Hr'g Tr. 8.) As for gruesome and frightening images displayed to very young children on a sidewalk as they are about to enter a church, the plaintiffs' counsel also conceded that he "can imagine a sign that is so gruesome that the City can justifiably ban it," such as "a dead fetus" or "pictures you see from the holocaust." (TRO Hr'g Tr. 64:4-9.)
Thus, the testimony received by the Lincoln City Council regarding the activity at Westminster Presbyterian Church, the language of the ordinance, and the city council's comments during the voting session on the ordinance establish that the interest sought to be served by the ordinance is protecting very young children from viewing graphic signs displayed by picketers upon the children's entrance to, or exit from, church. Such an interest is a constitutionally important one.
c. Narrowly Tailored
"A statute is narrowly tailored if it targets and eliminates no more than the exact source of the `evil' it seeks to remedy." Frisby, 487 U.S. at 485, 108 S.Ct. 2495 (quoting Members of City Council v. Taxpayers for Vincent, 466 U.S. 789, 807, 104 S.Ct. 2118, 80 L.Ed.2d 772 (1984)). The narrow-tailoring requirement is satisfied when the governmental regulation "promotes a substantial government interest that would be achieved less effectively absent the regulation." Ward, 491 U.S. at 799, 109 S.Ct. 2746 (internal quotation and citation omitted). However, this standard "does not mean that a time, place, or manner regulation may burden substantially more speech than is necessary to further the government's legitimate interests. Government may not regulate expression in such a manner that a substantial *1101 portion of the burden on speech does not serve to advance its goals." Id. Yet, this "narrowly tailored" analysis does not require a court to decide whether there are alternative methods of regulation that would achieve the desired end, but would be less restrictive of the plaintiffs' First Amendment rights. Id. at 797, 109 S.Ct. 2746.
This less-restrictive-alternative analysis ... has never been a part of the inquiry into the validity of a time, place, and manner regulation. Instead, our cases quite clearly hold that restrictions on the time, place, or manner of protected speech are not invalid simply because there is some imaginable alternative that might be less burdensome on speech.
Id. (internal quotation and citations omitted) (ellipses in original).
In other words, an ordinance regulating speech will not be held invalid simply because there may be a better way to regulate it. Rather, the test is whether a major part of the regulation's impact on speech fails to achieve the legitimate goals set by the government and instead bars substantially more speech than is necessary. So, for example, we must ask (1) whether the ban bars speech directed at adults, while failing to protect children, or (2) whether the ban significantly impacts adults and even children who wish to receive the prohibited speech.
Ordinance 17413 would prohibit a person, on the public sidewalks and streets abutting a religious premises during certain times, from passively holding a sign or banner that does not frighten children in any way, but may persuade adults to adopt the viewpoint of the speaker. For example, the ordinance bans signs saying, "Please Follow the Lord's Teachings" and other messages or images, no matter how innocuous to young children. Yet the ordinance does not ban a protestor from handing a young child a leaflet containing gruesome pictures of a dead fetus. Moreover, even messages that a congregation wants to see are banned. For example, the ordinance bars a Catholic priest from displaying to his willing flock on the cathedral sidewalk shortly before mass a sign that states: "Abortion is Wrong."[10]
The ordinance, then, fails in two respects. First, the ordinance bans speech that is harmless to very young children, yet potentially significant to adults, while failing to prohibit other speech (leaflets with pictures of aborted fetuses) that may terrorize a child. Second, the ban impacts all people who desire to receive the speech during "scheduled religious activities." Thus, the Lincoln ordinance does not "target[ ] and eliminate[ ] no more than the exact source of the `evil' it seeks to remedy." Frisby, 487 U.S. at 485, 108 S.Ct. 2495 (quoting Members of City Council v. Taxpayers for Vincent, 466 U.S. 789, 807, 104 S.Ct. 2118, 80 L.Ed.2d 772 (1984)). Stated another way, the city has attempted to "regulate expression in such a manner that a substantial portion of the burden on speech does not serve to advance its goals." Ward, 491 U.S. at 799, 109 S.Ct. 2746.
Therefore, the ordinance is not narrowly tailored to serve the constitutionally important governmental interest of protecting children from horrifying graphic images upon entry to, and exit from, church. While the government has a significant interest in protecting children from harmful materials, "that interest does not justify an unnecessarily broad suppression of speech addressed to adults." Reno v. American Civil Liberties Union, 521 U.S. 844, 117 S.Ct. 2329, 2346, 138 L.Ed.2d 874 (1997). Or, as the Supreme Court said when striking down a ban on signs around its own building, a "total ban [on signs] is no more necessary for the maintenance of peace and tranquility on the public sidewalks surrounding the [Supreme Court] than on any other sidewalks in the city." Grace, 461 U.S. at 182, 103 S.Ct. 1702.
Defendants argue that the court should extend the specific holding of Frisby v. Schultz, 487 U.S. 474, 108 S.Ct. 2495, 101 L.Ed.2d 420 (1988), to this case. Frisby involved an ordinance that made it unlawful *1102 to picket "before or about the residence or dwelling of any individual in the Town of Brookfield." The Supreme Court found the ordinance to be content-neutral and construed the ban to be limited to "focused picketing" occurring in front of a particular residence. The court found that, as narrowed, the ordinance was narrowly tailored to serve the significant government interest of protecting residential privacy. "`The State's interest in protecting the well-being, tranquility, and privacy of the home is certainly of the highest order in a free and civilized society.'" Id. at 484, 108 S.Ct. 2495 (quoting Carey v. Brown, 447 U.S. 455, 471, 100 S.Ct. 2286, 65 L.Ed.2d 263 (1980)). The defendants argue that the state has a similar interest in protecting one's "religious home."
In the absence of authority from a higher court to do so, I do not believe it is proper to analogize private residences to other places, like churches. In Frisby, the speech was directed at the residents of a home when the listener had no means of avoiding the unwanted speech. See Frisby, 487 U.S. at 484 & 487, 108 S.Ct. 2495 ("Although in many locations, we expect individuals simply to avoid speech they do not want to hear, the home is different.... The resident is figuratively, and perhaps literally, trapped within the home, and because of the unique and subtle impact of such picketing is left with no ready means of avoiding the unwanted speech.") (internal citations omitted). It is at best an imperfect analogy to suggest that adults and children attending church are similar to the residents of a home.
Extending the "residential captive audience" holding in Frisby to churches would be a gateway to government regulation of expressive activity at many places where adults and their children must go to receive the essentials of life a welfare office where parents and children go to receive food stamps, a school where children and their parents go to educate the young, or a doctor's office where children must go with their parents for medical care. Keeping in mind the fact that children are ubiquitous, the implications of extending Frisby beyond its facts, and the lack of precedent for extending Frisby, I decline to assume that churches, or anyplace else, are like homes for First Amendment purposes.
The defendants also argue that they have done the best they can; that is, if they sought to ban only gruesome images, the effort would be struck down as an unconstitutional content-based restriction. Therefore, the defendants argue that they must draw the ban broadly to prohibit innocuous speech and speech which the listener desires to hear in order to get at the speech which frightens children. There are two responses to this argument, and both suggest the inherent weakness of it.
Initially, and assuming for the sake of argument that a ban on gruesome pictures is content-based, the courts have never categorically prohibited content-based regulation of speech when it comes to children. See, e.g., Reno, 117 S.Ct. at 2346 ("[W]e have repeatedly recognized the governmental interest in protecting children from harmful materials."). While content-based restrictions are difficult to defend, it does not therefore follow that a ban on more speech than is necessary is justified.
Furthermore, the defendants' argument, that we cannot effectively ban gruesome pictures that frighten children without banning a great deal of other speech, is not accurate. For example, the courts have explicitly upheld content-neutral, narrowly tailored ordinances that create limited buffer zones for church entrances while allowing the adjacent sidewalk to carry the full range of speech customarily protected in that public forum. See, e.g., Edwards v. City of Santa Barbara, 150 F.3d 1213, 1215-1216 (9th Cir.1998) (city ordinance that prohibited demonstration activity within eight feet of entrances to places of worship was narrowly tailored to city's interest in ensuring access to religious worship and permitted ample alternative avenues of communication by placing no limit on speech or expressive activity outside narrow zone). There is no reason to think that a limited buffer zone ordinance could not be drafted that would effectively[11] shield children *1103 from having to pass by pictures of aborted fetuses in front of a church entrance, while still allowing protestors to use signs on the remaining portion of the sidewalk adjoining the church outside the buffer zone.[12]
d. Alternative Channels of Communication
The ordinance is not narrowly tailored to serve a constitutionally important government interest. Therefore, it is unnecessary to determine whether the ordinance leaves open ample alternative channels of communication.
Nevertheless, the ordinance effectively bans all signs around a large city block that is adjacent to a heavily traveled four-lane highway. Thus, the ability of protestors to communicate with those who drive by the busy streets adjoining the Westminster church has been significantly curtailed. To illustrate the pernicious nature of this ban, if the City of Lincoln's ordinance were imposed in New York City, protestors would be barred from displaying signs on the city block covered by St. Patrick's Cathedral 53 times a week when masses are taking place[13] within that huge building.[14] To say that such an ordinance meets constitutional muster because one can engage in other expressive activity is to wrongly depreciate the constitutional significance of public sidewalks and signs and banners. Grace, 461 U.S. at 182, 103 S.Ct. 1702 (declaring unconstitutional a ban that prohibited display of "any flag, banner, or device designed or adapted to bring into public notice any party, organization, or movement" on public sidewalks constituting outer boundary of the grounds of the Supreme Court, despite the fact that other expressive activity was not prohibited).
e. Conclusion Regarding Likely Success on Merits
Because Lincoln Municipal Code 9.20.090, also known as Ordinance No. 17413, is not narrowly tailored to serve an important government interest, in violation of the Free Speech Clause of the First Amendment, the plaintiffs are reasonably likely to succeed on the merits of their free-speech claim.
2. Irreparable Harm to the Plaintiffs
"The loss of First Amendment freedoms, for even minimal periods of time, unquestionably constitutes irreparable injury." Elrod v. Burns, 427 U.S. 347, 373, 96 S.Ct. 2673, 49 L.Ed.2d 547 (1976) (plurality). See also Marcus v. Iowa Public Television, 97 F.3d 1137, 1140 (8th Cir.1996) (citing Elrod and stating that if congressional candidates who moved for injunctive relief were denied their First Amendment rights when they were excluded from appearance on public television with other political candidates, they have suffered an irreparable injury under Dataphase); Kirkeby v. Furness, 52 F.3d 772, 775 (8th Cir.1995) (district court should have granted demonstrators' motion to enjoin enforcement of city ordinance restricting residential picketing; citing Elrod and stating that since demonstrators' right to speak had probably been violated, they would suffer irreparable injury under Dataphase if injunction did not issue).
Because I have concluded that the municipal ordinance at issue is not narrowly tailored to serve an important government interest, enforcement of the ordinance against the plaintiffs would deny them their First Amendment free-speech rights. Therefore, if a preliminary injunction barring enforcement of this ordinance did not issue, the plaintiffs would suffer irreparable harm under Dataphase.
3. Harm to the Defendants
If the court issues a preliminary injunction barring enforcement of Ordinance No. 17413, the defendants simply lose an opportunity to prosecute violators of new *1104 Lincoln Municipal Code § 9.20.090 while the court decides whether the plaintiffs are entitled to a permanent injunction. When balanced against the risk that Plaintiffs will be denied their First Amendment free-speech rights if a preliminary injunction does not issue, this potential harm to Defendants is minimal. Moreover, given that the city has numerous ordinances that protect churchgoers from threatening behavior and disturbances of the peace (exs.116-118), and the fact that the police have witnessed no threatening behavior on the part of the protestors, the harm to members of the Westminster church occasioned by issuance of the injunction is less than the harm to the plaintiffs occasioned by the failure to issue the injunction.
4. Public Interest
The court finds that the public interest in avoiding violation of the plaintiffs' First Amendment free-speech rights while the court considers the plaintiffs' request for a permanent injunction, and the public interest in encouraging laws to be written and passed in a constitutionally acceptable manner, outweigh the arguably significant public interest in enabling families to approach and exit their place of worship without fearing the effects of "focused" picketing upon their young children. This is particularly true because there is unbiased evidence from the city's chief of police stating that the allegations of the church members about improper conduct on the part of protestors cannot be documented.
B. "Facial" or "As Applied" Challenge
Plaintiffs in this case seek to bring a "facial" challenge to the city ordinance at issue because, they argue, there has not yet been a prosecution under the ordinance, leaving Plaintiffs with "no ability to have an as-applied challenge." (TRO Hr'g Tr. 29:13-25.) Based on my findings above, I will invalidate the ordinance on its face, but for different reasons than those suggested by Plaintiffs.
Outside the First Amendment free-speech context, a plaintiff challenging the constitutionality of a statute may bring an "as-applied" or "facial" challenge. A plaintiff bringing an "as-applied" challenge contends that the statute would be unconstitutional under the circumstances in which the plaintiff has acted or proposes to act. If a statute is held unconstitutional "as applied," the statute may not be applied in the future in a similar context, but the statute is not rendered completely inoperative. Ada v. Guam Soc'y of Obstetricians & Gynecologists, 506 U.S. 1011, 1012, 113 S.Ct. 633, 121 L.Ed.2d 564 (1992) (Scalia, J., Rehnquist, C.J., and White, J., dissenting from denial of petition for writ of certiorari).
In contrast, a "facial" constitutional challenge outside the First Amendment free-speech context is used when a plaintiff seeks to render a statute utterly inoperative. Facial challenges must be rejected "unless there exists no set of circumstances in which the statute can constitutionally be applied." Id. (emphasis in original).
The only exception to this rule recognized in our jurisprudence is the facial challenge based upon First Amendment free-speech grounds. We have applied to statutes restricting speech a so-called `overbreadth' doctrine, rendering such a statute invalid in all its applications (i.e., facially invalid) if it is invalid in any of them.
Id. (citing Gooding v. Wilson, 405 U.S. 518, 521, 92 S.Ct. 1103, 31 L.Ed.2d 408 (1972)).
Under the "free speech" exception to the "facial challenge" standard, a plaintiff may challenge a statute's vagueness or overbreadth as applied to others.
"Although a statute may be neither vague, overbroad, nor otherwise invalid as applied to the conduct charged against a particular defendant, he is permitted to raise its vagueness or unconstitutional overbreadth as applied to others. And if the law is found deficient in one of these respects, it may not be applied to him either, until and unless a satisfactory limiting construction is placed on the statute. The statute, in effect, is stricken down on its face. This result is deemed justified since the otherwise continued existence of the statute in unnarrowed form would tend to suppress constitutionally protected rights."
Gooding v. Wilson, 405 U.S. 518, 520-23, 92 S.Ct. 1103, 31 L.Ed.2d 408 (1972) (quoting *1105 Coates v. City of Cincinnati, 402 U.S. 611, 619-20, 91 S.Ct. 1686, 29 L.Ed.2d 214 (1971) (White, J., dissenting) (citation omitted)). See also Members of City Council v. Taxpayers for Vincent, 466 U.S. 789, 799, 104 S.Ct. 2118, 80 L.Ed.2d 772 (1984) (justification for allowing plaintiff to challenge statute on behalf of others based on First Amendment "overbreadth doctrine" is "`a judicial prediction or assumption that the statute's very existence may cause others not before the court to refrain from constitutionally protected speech or expression.'") (quoting Broadrick v. Oklahoma, 413 U.S. 601, 612, 93 S.Ct. 2908, 37 L.Ed.2d 830 (1973)).
Although courts and commentators often address the issue of facial challenges in the First Amendment context in a confusing manner, a review of numerous cases discussing such challenges has evidenced at least some guidelines for determining whether a law implicating the First Amendment should be invalidated on its face or as applied. Broadrick v. Oklahoma, 413 U.S. 601, 615, 93 S.Ct. 2908, 37 L.Ed.2d 830 (1973) ("It remains a matter of no little difficulty to determine when a law may properly be held void on its face and when such summary action is inappropriate." (internal quotation marks and citation omitted)). These guidelines are as follows:
1. "Facial" or "overbreadth" challenges to statutes in the First Amendment context have been used in cases in which (a) plaintiffs whose speech validly may be prohibited or sanctioned (i.e ., "unprotected speech") challenge a statute or ordinance because the statute or ordinance threatens others who are not before the court, Brockett v. Spokane Arcades, Inc., 472 U.S. 491, 503, 105 S.Ct. 2794, 86 L.Ed.2d 394 (1985); Secretary of State of Maryland v. Joseph H. Munson Co., Inc., 467 U.S. 947, 956-57, 104 S.Ct. 2839, 81 L.Ed.2d 786 (1984), and (b) plaintiffs challenge a statute that in all its applications directly restricts protected First Amendment activity, Munson, 467 U.S. at 965 n. 13, 104 S.Ct. 2839. See also Members of City Council v. Taxpayers for Vincent, 466 U.S. 789, 796, 104 S.Ct. 2118, 80 L.Ed.2d 772 (1984) (a statute or ordinance may be considered invalid "on its face" in two ways: the statute or ordinance may be unconstitutional in every conceivable application, or may be unconstitutionally overbroad because the statute or ordinance seeks to prohibit such a broad range of protected conduct). A statute that is void for overbreadth is one that "offends the constitutional principle that `a governmental purpose to control or prevent activities constitutionally subject to state regulation may not be achieved by means which sweep unnecessarily broadly and thereby invade the area of protected freedoms.'" Zwickler v. Koota, 389 U.S. 241, 250, 88 S.Ct. 391, 19 L.Ed.2d 444 (1967) (quoting National Ass'n for Advancement of Colored People v. Alabama, 377 U.S. 288, 307, 84 S.Ct. 1302, 12 L.Ed.2d 325 (1964)).
2. In contrast, when a statute or ordinance is challenged by plaintiffs (a) who engage in protected speech that the overbroad statute or ordinance seeks to punish or (b) who seek to engage in both protected and unprotected speech, there exists a proper party to challenge the statute and there is no concern that an attack on the statute will be unduly delayed or that protected speech will be discouraged. Under these circumstances, a facial challenge is not appropriate; instead, the statute or ordinance will "be declared invalid to the extent that it reaches too far, but otherwise left intact." Brockett, 472 U.S. at 503-04, 105 S.Ct. 2794. If the court finds nothing in the record to indicate that a statute or ordinance will have a different impact on third parties' interests in free speech than it has on the plaintiffs themselves, the court will not entertain a facial challenge based on overbreadth. Members of City Council v. Taxpayers for Vincent, 466 U.S. 789, 801, 104 S.Ct. 2118, 80 L.Ed.2d 772 (1984) (court deemed challenge to ordinance prohibiting posting of signs on public property to be as-applied challenge, rather than facial challenge based on overbreadth, when there was no evidence the ordinance would have different impact on third parties than on plaintiffs themselves; when plaintiffs failed to demonstrate that ordinance applied to any conduct more likely to be protected under the First Amendment; and when plaintiffs failed to argue that the ordinance can never be validly applied).
*1106 3. In some cases the United States Supreme Court has not invoked the facial overbreadth doctrine when "a limiting construction has been or could be placed on the challenged statute." Broadrick v. Oklahoma, 413 U.S. 601, 613, 93 S.Ct. 2908, 37 L.Ed.2d 830 (1973) (collecting cases); United Food & Comm'l Workers Int'l Union v. IBP, Inc., 857 F.2d 422, 431 (8th Cir.1988). "`[A] state statute should be deemed facially invalid only if (1) it is not readily subject to a narrowing construction by the state courts and (2) its deterrent effect on legitimate expression is both real and substantial.'" Kirkeby v. Furness, 52 F.3d 772, 775 (8th Cir.1995) (quoting United Food, 857 F.2d at 431). Federal courts must remember that they are "generally without authority to construe or narrow state statutes"; that they "may not impose [their] own narrowing construction ... if the state courts have not already done so"; and that their role in a case involving a state statute which has not been construed by the state courts is to determine if a construction urged by the parties is "reasonable and readily apparent" from the language and legislative history of the statute and the applicable rules of statutory construction. United Food, 857 F.2d at 431-32 (finding portions of statute facially overbroad when it was beyond federal court's power to rewrite the broad, straightforward language of the statute in order to avoid constitutional difficulties created by the statute's plain meaning) (internal quotation marks and citations omitted).
4. In deciding whether an overbreadth challenge should be allowed in a particular case, the United States Supreme Court has "weighed the likelihood that the statute's very existence will inhibit free expression." Members of City Council v. Taxpayers for Vincent, 466 U.S. 789, 799, 104 S.Ct. 2118, 80 L.Ed.2d 772 (1984). "`[T]he overbreadth of a statute must not only be real, but substantial as well, judged in relation to the statute's plainly legitimate sweep.'" Id. at 799-800, 104 S.Ct. 2118 (quoting Broadrick v. Oklahoma, 413 U.S. 601, 615, 93 S.Ct. 2908, 37 L.Ed.2d 830 (1973)). While "substantial overbreadth" is not easily defined, it is clear that "the mere fact that one can conceive of some impermissible applications of a statute is not sufficient to render it susceptible to an overbreadth challenge." Id. at 800, 104 S.Ct. 2118. Instead, the law must reach "`substantially beyond the permissible scope of legislative regulation'" and "there must be a realistic danger that the statute itself will significantly compromise recognized First Amendment protections of parties not before the Court for it to be facially challenged on overbreadth grounds." Id. at 800 n. 19 & 801, 104 S.Ct. 2118 (quoting Jeffries, Rethinking Prior Restraint, 92 Yale L.J. 409, 425 (1983)) (emphasis in case). A court will not strike down an ordinance on overbreadth grounds if the statute's legitimate reach "dwarfs its arguably impermissible applications." Excalibur Group, Inc. v. City of Minneapolis, 116 F.3d 1216, 1224 (8th Cir.1997) (internal quotation marks and citation omitted), cert. denied, ___ U.S. ___, 118 S.Ct. 855, 139 L.Ed.2d 755 (1998). "Because the overbreadth doctrine has far-reaching ramifications, however, it is strong medicine that should be employed only with hesitation, and then only as a last resort." Excalibur Group, Inc., 116 F.3d at 1223 (internal quotation marks and citations omitted).
5. The "normal rule" is that partial, rather than facial, invalidation of a statute or ordinance is the required course, except when partial invalidation would be contrary to legislative intent, as when the "legislature had passed an inseverable Act or would not have passed it had it known the challenged provision was invalid." Brockett, 472 U.S. at 504 & 506, 105 S.Ct. 2794. Further, a federal court should not extend invalidation of a statute or ordinance more broadly than necessary in order to dispose of the particular case before it. Rather, if a court deems a statute or ordinance unconstitutional in part, and if the constitutional and unconstitutional parts are "wholly independent of each other," the constitutional portion may stand, while the unconstitutional portion should be rejected. Id. at 502, 105 S.Ct. 2794 (internal quotation marks and citation omitted). "`It is not the usual judicial practice, ... nor do we consider it generally *1107 desirable to proceed to an overbreadth issue unnecessarily that is, before it is determined that the statute would be valid as applied.'" Jacobsen v. Howard, 109 F.3d 1268, 1274 (8th Cir.1997) (quoting Board of Trustees v. Fox, 492 U.S. 469, 484-85, 109 S.Ct. 3028, 106 L.Ed.2d 388 (1989)) (concluding that statutes were invalid as applied and it was therefore inappropriate to consider overbreadth challenge).
Application of each of the above five factors to the factual situation before the court yields opposite results. For instance, under factors (1) and (2), above, the court should invalidate the ordinance "as applied" because Plaintiffs seek to engage in protected speech that the Lincoln ordinance seeks to regulate. Further, there is no evidence that the ordinance will have a different impact on third parties' interests in free speech than it has on Plaintiffs themselves.[15] Therefore, there exist proper parties to challenge the ordinance on their own behalf, and it is not appropriate to entertain a facial challenge to the ordinance brought on behalf of others who are not before the court.
In contrast, application of the third and fourth factors to this case dictates that the court facially invalidate the ordinance. The ordinance can be deemed facially invalid only if "`(1) it is not readily subject to a narrowing construction by the state courts and (2) its deterrent effect on legitimate expression is both real and substantial.'" Kirkeby v. Furness, 52 F.3d 772, 775 (8th Cir.1995) (quoting United Food, 857 F.2d at 431). The Nebraska courts in this instance have not construed or narrowed the ordinance in any fashion. Defendants' suggestion that this court strike that portion of the ordinance prohibiting focused picketing "of a scheduled religious activity at any time within the period from one-half hour before to one-half hour after the scheduled activity, at any place ... on the religious organization's exterior premises, including its parking lots," City of Lincoln Ordinance No. 17413 § 2 (Sept. 21, 1998) (emphasis added), contradicts the principle that federal courts are "generally without authority to construe or narrow state statutes" and that they "may not impose [their] own narrowing construction ... if the state courts have not already done so." United Food, 857 F.2d at 431-32.
Even if the court did consider the defendants' proposed narrowing construction, it would conclude that the construction is not "reasonable and readily apparent" from the language and legislative history of the ordinance. There is no indication in the transcripts from the public hearings and voting session on the ordinance that it is reasonable to "construe" the ordinance to not include the above provision. The transcripts reflect much testimony about the display of large and horribly graphic images in the parking lot of Westminster Presbyterian Church. As discussed above, the ordinance at issue was enacted to regulate this very type of visual display. Deleting the above provision from the ordinance is simply not "reasonable and readily apparent" from the language and legislative history of the ordinance. United Food & Comm'l Workers Int'l Union v. IBP, Inc., 857 F.2d 422, 431-32 (8th Cir.1988) (finding portions of statute facially overbroad when it was beyond federal court's power to rewrite the broad, straightforward language of the statute in order to avoid constitutional difficulties created by the statute's plain meaning) (internal quotation marks and citations omitted). Thus, the ordinance is not readily subject to a narrowing construction, and the first prong of the Kirkeby test is satisfied.
Most importantly, and as discussed above in the court's analysis of the merits of the plaintiffs' First Amendment free-speech claim, a substantial portion of the burden on legitimate speech caused by the Lincoln ordinance does not serve to advance the Lincoln City Council's goals in enacting the ordinance. Due to its expansive reach, the ordinance's "deterrent effect on legitimate expression is both real and substantial," the second prong of the Kirkeby test. In this regard, I emphasize that the ban prohibits members of other religious sects from receiving speech on the sidewalks adjacent to their churches even if they fervently desire to receive the speech.
*1108 Under factors (4) and (5), then, the ordinance could be deemed facially invalid. This conclusion is buttressed by looking at whether the overbreadth of the ordinance is real and substantial in relation to its "`plainly legitimate sweep.'" Members of City Council v. Taxpayers for Vincent, 466 U.S. 789, 799-800, 104 S.Ct. 2118, 80 L.Ed.2d 772 (1984) (quoting Broadrick v. Oklahoma, 413 U.S. 601, 615, 93 S.Ct. 2908, 37 L.Ed.2d 830 (1973)). As stated above, the Lincoln ordinance reaches substantially beyond the constitutionally permissible scope of regulation, and it cannot be said that the ordinance's legitimate reach "dwarfs its arguably impermissible applications." Excalibur Group, Inc. v. City of Minneapolis, 116 F.3d 1216, 1224 (8th Cir.1997) (internal quotation marks and citation omitted). In fact, the opposite is true.
Finally, the rule that partial, rather than facial, invalidation of an ordinance (the fifth factor above) is the required course is not applicable here because the court cannot lift unconstitutional "parts" of the ordinance from wholly independent constitutional parts. Rather, the entire statute is not narrowly tailored to serve a significant government interest.
After consideration of the above factors, I conclude that the ordinance is facially void for overbreadth because it "offends the constitutional principle that `a governmental purpose to control or prevent activities constitutionally subject to state regulation may not be achieved by means which sweep unnecessarily broadly and thereby invade the area of protected freedoms.'" Zwickler v. Koota, 389 U.S. 241, 250, 88 S.Ct. 391, 19 L.Ed.2d 444 (1967). As discussed above, the ordinance's "plainly legitimate sweep" (protecting young children from graphic and disturbing images) is overshadowed by the effect this ordinance has on all types of speech conveyed via "a banner, placard, sign or other demonstrative material" around religious premises, and on the people who wish to hear such speech. Further, the ordinance has not been subject to a narrowing construction by the state courts, nor can it be by this court, and the ordinance's deterrent effect on display of all types of expression conveyed by all types of demonstrative material on or around religious premises is both real and substantial. Accordingly, the ordinance is facially invalid.
III. CONCLUSION
After analyzing the factors for granting preliminary injunctive relief set out in Dataphase Systems, Inc. v. CL Systems, Inc., 640 F.2d 109 (8th Cir.1981), I conclude that a preliminary injunction should issue, barring enforcement of Lincoln City Ordinance No. 17413, which created Lincoln Municipal Code § 9.20.090, because the ordinance is not narrowly tailored to serve an important government interest, in violation of the Free Speech Clause of the First Amendment, and is therefore facially invalid. Thus, the plaintiffs are reasonably likely to succeed on the merits of their free-speech claim. Further, when balanced against the risk that Plaintiffs will be denied their First Amendment free-speech rights if a preliminary injunction does not issue, the potential harm to Defendants is minimal if the preliminary injunction is granted. Finally, the public interest in avoiding violation of the plaintiffs' First Amendment free-speech rights while the court considers the plaintiffs' request for a permanent injunction weighs in favor of issuing a preliminary injunction.
IT IS ORDERED:
1. Plaintiffs' request for a preliminary injunction is granted as provided herein;
2. Defendants, and each of them, including their agents, servants, and employees, are preliminarily enjoined from enforcing Lincoln City Ordinance No. 17413, which created Lincoln Municipal Code § 9.20.090, against any person; and
3. The court determines under Fed. R.Civ.P. 65 that a bond in the amount of $500 is sufficient and directs Plaintiffs to post such a bond.
NOTES
[1] "Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech...." U.S. Const. amend. I. Pursuant to the Fourteenth Amendment, city ordinances fall within the scope of the First Amendment's limitation on governmental authority. Members of City Council v. Taxpayers for Vincent, 466 U.S. 789, 792 n. 2, 104 S.Ct. 2118, 80 L.Ed.2d 772 (1984) (citing Lovell v. City of Griffin, 303 U.S. 444, 58 S.Ct. 666, 82 L.Ed. 949 (1938)).
[2] "No State shall ... deprive any person of life, liberty, or property, without due process of law; nor deny to any person within its jurisdiction the equal protection of the laws." U.S. Const. amend. XIV.
[3] As noted later, the city attorney advised the city council that the ordinance at issue was unconstitutional. The chief of police also informed the city council that he had not been able to substantiate the claims of church members that the protestors engaged in picketing that was overtly threatening or physically abusive.
[4] I take this opportunity once again to praise counsel for each side. Their briefs have been well written, their arguments cogent and candid, and their commitment to professionalism and civility outstanding.
[5] During the hearing on Plaintiffs' motion for a preliminary injunction, both parties agreed that the court should not now decide Plaintiffs' remaining claims. In particular, the defendants indicated they had not submitted evidence to the court sufficient to establish a factual basis for the court to rule on those claims.
[6] The word "testified" will be used throughout this memorandum and order to apply to testimony given by deposition, affidavit, or live.
[7] For example, at the public hearing on Ordinance 17413, there was repeated testimony about the desirability of the ordinance because the protestors carried "an enlarged sign of a bloody, dead, mutilated baby," "six[-]foot signs of decapitated and dismembered babies," "a bloody baby picture," and "horrifying enlarged signs and photographs of dismembered bloody fetuses." Minutes of the Regular Lincoln City Council Meeting: Comments During Public Hearing on Amending Chapter 9.20 of the LMC by Adding a New Section Numbered 9.20.090, at 11, 14, 15, 18, 19 (Sept. 8, 1998).
[8] At the voting session of the Lincoln City Council on September 14, 1998, regarding Lincoln Municipal Code 9.20.090, Westminster attorney Alan Peterson stated:
And, while we could have tried to pair [sic] this down even narrower in a sense by saying that only certain signs which scare kids will be forbidden on that side of the street, the problem, ... is, if ... government starts selecting which signs are okay [and] which signs are not, that's content control. That's the worst thing [and] really faces [an] even tougher constitutional test. So, that's why we say all signs, banners, [and] placards. I think strategically, legally it's more defensible this way [and] especially since we're not saying don't use the signs. We're simply saying go across the street.
Minutes of the Regular Lincoln City Council Meeting: Verbatim Comments During Voting Session on Amending Chapter 9.20 of the LMC by Adding a New Section Numbered 9.20.090, at 3 (Sept. 14, 1998).
[9] Assuming the images are not gruesome, I do not agree that the city has a legitimate interest in shielding young children from the mere presence of persons carrying signs on the sidewalk. Absent a picture of a dead body or the like, there is no credible and unbiased evidence that the mere presence of a sign-carrying antiabortion protestor harms young children.
[10] In fact, the ordinance bars the priest from displaying his anti-abortion sign on the church's premises shortly before, during, and shortly after mass. The ordinance bans "focused picketing" on "the religious organization's exterior premises, including its parking lots." The defendants essentially concede that this portion of this ordinance is unconstitutional. (Prelim. Inj. Hr'g Tr. 30:2-6.)
[11] As stated earlier, Westminster attorney Alan Peterson testified before the Lincoln City Council that "it was the close proximity of the big signs or whatever form of material being involved was frightening, disturbing children." Minutes of the Regular Lincoln City Council Meeting: Verbatim Comments During Voting Session on Amending Chapter 9.20 of the LMC by Adding a New Section Numbered 9.20.090, at 2 (Sept. 14, 1998) (emphasis added).
[12] In so stating, I do not use a "least-restrictive-alternatives" test. Rather, I use this analysis to illustrate the weakness of the defendants' argument that the ban prohibits no more speech than is necessary.
[13] This information is taken from the automated information system made available to the public. (St. Patrick's Cathedral, 460 Madison Ave., New York, N.Y., telephone number X-XXX-XXX-XXXX.)
[14] And, as noted earlier, this would be true even if the worshipers wanted to receive the speech.
[15] However, I note that the plaintiffs apparently do not carry signs with gruesome pictures, while their colleagues, who are not parties, do carry those types of signs.
RELATED APPLICATIONS
BACKGROUND
SUMMARY
DESCRIPTION OF PREFERRED EMBODIMENTS
The present patent document is a continuation-in-part of U.S. patent application Ser. No. 13/153,551, filed Jun. 6, 2011 and claims the benefit of the filing date under 35 U.S.C. §119(e) of Provisional U.S. patent application Ser. No. 61/381,084, filed Sep. 9, 2010, which are hereby incorporated by reference.
The present embodiments relate to a computerized system for case management or accountable care optimization.
According to the American Case Management Association (ACMA), the case management process encompasses communication and facilitates care along a continuum through effective resource coordination. The goals of case management include the achievement of optimal health, access to care, and appropriate utilization of resources, balanced with the patient's right to self-determination. These goals encompass timely discharges, prompt and efficient use of resources, achievement of expected outcomes, and performance improvement activities that lead to optimal patient outcomes.
In a typical setting, a healthcare facility hires personnel (e.g., case managers or case management nurses) who typically fulfill the roles of utilization review manager, quality manager, or discharge planner. These case managers review charts for the use of interdependent hospital systems, timeliness of service, and safe and appropriate utilization of services. The case managers work with a physician to monitor the quality of services provided to the patient. For example, high-risk patients with high-risk diagnoses (e.g., stroke, myocardial infarction, or complicated pneumonia) are evaluated. If, after review of the patient's stay and utilization of services, a patient no longer needs to stay in an acute care setting, the case manager may request that the attending physician move the patient to outpatient or other services. The evaluation may not only impact quality of care and patient outcome, but also may have financial and legal implications for healthcare facilities. Financial implications exist for low-risk patients as well: the sooner a low-risk patient is discharged from the hospital, the higher the rates of reimbursement can be.
The case manager evaluates some, but not all, patients. To perform case management, the case manager manually reviews charts and clinical records for patients and in turn formulates care plans, assigns patients to diagnosis-related groups (DRGs), and creates a discharge timeline. Due to the laborious nature of the task, case management is typically done for a random sample of patients. In some institutions, computerized tools present data for a patient in a unified manner. These computerized tools merely provide a simpler presentation of data and rely on the case manager for action. Such tools may fail to fully improve the actual patient-level outcome and may fail to substantially increase the number of patients evaluated.
SUMMARY

In various embodiments, systems, methods and computer readable media are provided for computer-based patient management for healthcare. Case management is provided by a processor with or without further management by a person. Patient data is used to determine a severity, assign a patient to a corresponding diagnosis-related group, and/or provide a timeline for care. Reminders or alerts are sent to maintain the timeline for more cost effective care. As more data becomes available as part of the care, the care and timeline may be adjusted automatically for more efficient utilization of resources.
The case management is performed for a patient stay at a medical facility. The case management may additionally be performed outside the medical facility. Accountable care optimization is provided as part of case management. Automated care management before any injury or illness and automated care management after discharge are provided to optimize the health and costs for a patient. Reminders, suggestions, transitions between care givers, scheduling and other risk management actions are performed based on a cohort to which a patient is assigned. The patient is assigned to the cohort based on the patient data.
In a first aspect, a method is provided for computer-based patient management for healthcare. A processor gathers first clinical data for a patient of a healthcare facility. The processor establishes a workflow for care of the patient as a function of the first clinical data and a cost factor. The workflow is for multiple actions by different entities of the healthcare facility and includes a timeline for the actions. The processor obtains second clinical data after the establishing and as part of the workflow for the care of the patient. The processor updates the workflow for the care of the patient as a function of the first and second clinical data and the cost factor. The updating occurs while the patient is at the healthcare facility. The processor generates at least one alert for at least one of the multiple actions. The alert is generated as a function of the timeline.
In a second aspect, a system is provided for computer-based patient management for healthcare. At least one memory is operable to store data for a plurality of patients. A first processor is configured to classify each of the patients into diagnosis-related groups based on respective data for each of the patients, select a timeline to discharge as a function of the diagnosis-related group for each of the patients, alter the diagnosis-related group for at least one of the patients, the altering being based on a utilization and new data not used in the classifying, change the timeline for the one of the patients, the changing being a function of the altering, and monitor tasks across multiple medical professionals as a function of the timeline.
In a third aspect, a non-transitory computer readable storage medium has stored therein data representing instructions executable by a programmed processor for computer-based patient management for healthcare. The storage medium includes instructions for acquiring data for a patient, establishing, as a function of the data, first care for the patient prior to an admission to a healthcare facility, managing, as a function of the data, second care for the patient upon the admission to the healthcare facility, and establishing, as a function of the data, third care for the patient after discharge from the healthcare facility.
Any one or more of the aspects described above may be used alone or in combination. These and other aspects, features and advantages will become apparent from the following detailed description of preferred embodiments, which is to be read in connection with the accompanying drawings. The present invention is defined by the following claims, and nothing in this section should be taken as a limitation on those claims. Further aspects and advantages of the invention are discussed below in conjunction with the preferred embodiments and may be later claimed independently or in combination.
DESCRIPTION OF PREFERRED EMBODIMENTS

Case management is automated at healthcare facilities and/or for a patient before, after and/or during a stay at a healthcare facility. The severity of an illness or injury, diagnosis-related group, and/or a cohort to which a patient belongs is predicted from data for the patient. For case management, patient data is obtained or mined from electronic medical records (EMRs), such as patient information databases, radiology information systems (RIS), pharmacological records, or other form of medical data storage or representation. In an EMR or RIS, various data elements are normally associated to a patient or patient visit, such as diagnosis codes, lab results, pharmacy, insurance, doctor notes, images, and genotypic information.
The system combines information from multiple sources and produces the best DRG considering the symptoms, severity, and all morbidities and co-morbidities, while presenting the evidence for that output. This is valuable for multiple reasons: i) it may provide a better care plan for the patient, since all of the relevant information is mined and presented, some of which may have been missed by a manual review; ii) the financial outcomes are better for care providers, because the manual process could classify the patient into a lesser severity group than the one to which the patient actually belongs, which results in lower payments; and iii) the denial of claims may be minimal, as the evidence for including each DRG is clearly presented as part of the mining process.
Using the mined data, a computer system predicts the severity of an illness or injury and/or class to which a patient belongs for treatment. Based on the prediction and cost considerations, a workflow and/or timeline to care for the patient are created. Clinical records for a patient are combined with the clinical knowledge and case management guidelines to automatically perform the tasks for case management. Schedules, reminders, alerts, or other tasks are created and monitored to manage the care of the patient while optimizing utilization.
FIG. 1 shows a method for computer-based patient management for healthcare at a healthcare facility. The method is implemented by or on a computer, server, workstation, system, or other device. The method is provided in the order shown, but other orders may be provided. Additional, different or fewer acts may be provided. For example, act 414 may not be provided.
Continuous (real time) or periodic classification of the diagnosis-related group and/or severity is performed. Throughout a stay at a hospital or other healthcare facility, the care is tuned based on the most recent data and associated classification. The care of the patient is managed based on the current status of the patient derived from patient specific data and based on cost or other utilization considerations. As the time passes and as more data (e.g., new labs results, new medications, new procedures, existing history etc.) is gathered, the care plan for the patient may be updated and presented to a case or patient manager.
FIG. 1 is directed to case management at a healthcare facility. Healthcare facilities include hospitals, emergency care centers, or other locations or organizations for treating illness or injury. The patient may stay one or more days at a healthcare facility for diagnosis and/or treatment. In some cases, the stay may be only hours. FIG. 2 is directed to accountable care optimization, which may or may not include care at a healthcare facility. The care of the patient before, during, and after any stay at a healthcare facility is managed. Given the rise in accountable care where the care provider shares the financial risk, managing care before a stay, during a stay, and after the stay based on patient well being and associated cost considerations allows alteration of the care of the patient in such a way that the cost of care is kept low. For example, the care of the patient at a healthcare facility may be different (e.g., perform an extra task, such as education) in order to reduce costs for later care of the patient outside the healthcare facility or after discharge. It may also prevent an unplanned readmission of the patient, which results in increased cost and often penalties by payers such as CMS. In both FIGS. 1 and 2, a computer performs the case management and/or presents options to a case manager for the management of care.
Referring again to FIG. 1, the case management operation is triggered in response to an event. For example, an indication of admission of a patient to a healthcare facility or new data for the patient being available is received. The receipt is by a computer or processor. For example, a nurse or administrator enters data for the medical record of a patient. The data entry indicates admission to the healthcare facility, to a practice group within the healthcare facility or to a different practice. Similarly, indication of transfer or discharge to another practice group, facility, or practice may be received. As another example, a new data entry is provided in the electronic medical record of the patient. In another example, an assistant enters data showing a key trigger event (e.g., completion of surgery, assignment of the patient to another care group, completion of a task of the workflow for care of the patient, or a change in patient status). In alternative embodiments, the indication is not received and periodic or continuous operation is provided.
In response to the trigger, an automated workflow is started. The indication or other trigger causes a processor to run a case management process. The case management workflow determines a cohort or diagnostic-related group for the patient and then establishes a workflow of care for the patient.
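As a purely illustrative, non-limiting sketch (not part of the original disclosure), the event-triggered start of the case management process might be wired up as below. The event names, the TRIGGER_EVENTS set, and the run_case_management callable are hypothetical placeholders introduced only for this example.

# Illustrative sketch only: dispatch a case management run when an EMR event arrives.
# Event names and handler structure are assumptions, not part of the disclosure.

TRIGGER_EVENTS = {
    "admission",            # patient admitted to the facility or a practice group
    "new_record_entry",     # new data added to the electronic medical record
    "task_completed",       # a task in the workflow of care was completed
    "status_change",        # change in patient status entered by an assistant
}

def on_emr_event(event_type, patient_id, run_case_management):
    """Start (or re-run) the case management workflow for qualifying events."""
    if event_type in TRIGGER_EVENTS:
        # The case management process gathers data, classifies the patient,
        # and establishes or updates the workflow of care.
        run_case_management(patient_id)
    # Non-triggering events are ignored; periodic runs may be used instead.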
In act 402, clinical data about a patient is gathered. The case management workflow includes establishing the care for the patient. To establish the workflow of care, the case management workflow first gathers data for the patient.
A processor gathers clinical data for a patient of a healthcare facility. The data is gathered by searching or by loading from the medical record. In other embodiments, the information to be used for establishing the workflow of care for the patient is not available as specific values in the medical record or inconsistent data is provided. Rather than merely searching or loading data, the electronic medical record of the patient may be mined. Mining combines local and/or global evidence from medical records with the medical knowledge and guidelines to make inferences over time. Local evidence may include information available at the healthcare facilities, and global evidence may include information available from other sources, such as other healthcare facilities, insurance companies, primary care physicians, or treating physicians.
The classifier for case management has an input feature vector or group of variables used for establishing a workflow for care. The values for the variables for a particular patient are obtained by mining the electronic medical record for the patient. The electronic medical record for the patient is a single database or a collection of databases. The record may include data at or from different medical entities, such as data from a database for a hospital and data from a database for a primary care physician whether affiliated or not with the hospital. Data for a patient may be mined from different hospitals. Different databases at a same medical entity may be mined, such as mining a main patient data system, a separate radiology system (e.g., picture archiving and communication system), a separate pharmacy system, a separate physician notes system, and/or a separate billing system. Different data sources for the same and/or different medical entities are mined.
The different data sources have a same or different format. The mining is configured for the formats. For example, one, more, or all of the data sources are of structured data. The data is stored as fields with defined lengths, text limitations, or other characteristics. Each field is for a particular variable. The mining searches for and obtains the values from the desired fields. As another example, one, more, or all of the data sources are of unstructured data. Images, documents (e.g., free text), or other collections of information without defined fields for variables is unstructured. Physician notes may be grammatically correct, but the punctuation does not define values for specific variables. The mining may identify a value for one or more variables by searching for specific criteria in the unstructured data.
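To illustrate mining of unstructured data by searching for specific criteria, a minimal hypothetical sketch follows. The variable ("smoking status"), the regular expression, and the note text are invented for the example and are not taken from the disclosure.

import re

# Illustrative sketch: extract a structured value for one variable from
# unstructured physician notes by searching for specific criteria.
# The pattern and variable are hypothetical examples.

SMOKING_PATTERN = re.compile(r"\b(non[- ]?smoker|smoker|never smoked)\b", re.IGNORECASE)

def mine_smoking_status(note_text):
    """Return a structured value for the smoking-status variable, or None."""
    match = SMOKING_PATTERN.search(note_text)
    if match is None:
        return None
    found = match.group(1).lower()
    return "non-smoker" if "non" in found or "never" in found else "smoker"

# Example: mine_smoking_status("Patient is a current smoker.") -> "smoker"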
Any now known or later developed mining may be used. For example, the mining is of structured information. A specific data source or field is searched for a value for a specific variable. As another example, the values for variables are inferred. The values for different variables are inferred by probabilistic combination of probabilities associated with different possible values from different sources. Each possible value identified in one or more sources are assigned a probability based on knowledge (e.g., statistically determined probabilities or professionally assigned probabilities). The possible value to use as the actual value is determined by probabilistic combination. The possible value with the highest probability is selected. The selected values are inferred values for the variables of the feature vector of the classifier for case management.
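A minimal sketch of the probabilistic selection of a value is given below, for illustration only. The per-source probabilities and the simple additive combination of evidence are assumptions standing in for whatever combination the inference component actually uses.

from collections import defaultdict

# Illustrative sketch: combine candidate values for one variable from several
# data sources, each candidate carrying a probability, and infer the value
# with the highest combined evidence. The additive combination is an assumption.

def infer_value(candidates):
    """candidates: list of (value, probability) pairs gathered from the sources."""
    combined = defaultdict(float)
    for value, probability in candidates:
        combined[value] += probability          # accumulate evidence per value
    if not combined:
        return None, 0.0
    best_value = max(combined, key=combined.get)
    total = sum(combined.values())
    return best_value, combined[best_value] / total   # normalized confidence

# Example: infer_value([("diabetic", 0.7), ("non-diabetic", 0.2), ("diabetic", 0.5)])
# -> ("diabetic", 0.857...)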
U.S. Pat. No. 7,617,078, the disclosure of which is incorporated herein by reference, shows a patient data mining method for combining electronic medical records for drawing conclusions. This system includes extraction, combination and inference components. The data to be extracted is present in the hospital electronic medical records in the form of clinical notes, procedural information, history and physical documents, demographic information, medication records or other information. The system combines local and global (possibly conflicting) evidences from medical records with medical knowledge and guidelines to make inferences over time. Existing knowledge, guidelines, best practices, or institution specific approaches are used to combine the extracted data for case management.
U.S. Published Application No. 2003/0120458, the disclosure of which is incorporated herein by reference, discloses mining unstructured and structured information to extract structured clinical data. Missing, inconsistent or possibly incorrect information is dealt with through assignment of probability or inference. These mining techniques are used for quality adherence (U.S. Published Application No. 2003/0125985), compliance (U.S. Published Application No. 2003/0125984), clinical trial qualification (U.S. Published Application No. 2003/0130871), and billing (U.S. Published Application No. 2004/0172297). The disclosures of the published applications referenced in the above paragraph are incorporated herein by reference. Other patient data mining or mining approaches may be used, such as mining from only structured information, mining without assignment of probability, or mining without inferring for inconsistent, missing or incorrect information. In alternative embodiments, values are input by a user for applying the predictor without mining.
In act 404, the processor establishes a workflow for care of the patient. The workflow of care is established as a function of the gathered clinical data and one or more cost factors. Alternatively, the workflow of care is established as a function of the clinical data without specific cost factors. The workflow of care itself is based on clinical guidelines, hospital treatment standards, or other sources.
The clinical data is used to predict a severity and assign the patient to a diagnosis-related group as a function of the severity. For example, based on the past and current medical records of a patient, the patient is classified into a diagnosis-related group. Diagnosis-related groups are groups of patients to receive the same care, such as all acute myocardial infarction patients being in one group. For example, the patient is assigned to one of five types of myocardial infarction. Greater or lesser grouping may be provided, such as providing a single myocardial infarction group.
The severity may indicate the appropriate diagnostic-related group. By quantifying severity (e.g., low, medium and high), the specific diagnostic-related group may be determined. The severity may reflect the presence of complications or co-morbidities, resulting in a different diagnostic-related group (e.g., acute myocardial infarction in patients with diabetes being a different group than acute myocardial infarction). In alternative embodiments, the diagnostic-related group is established independently of severity.
To classify the patient into a diagnostic-related group and/or predict severity, the gathered clinical data is applied to a classifier or model. In one embodiment, different classifiers are provided for the respective different diagnosis-related groups and/or severities. In other embodiments, a single classifier distinguishes between different diagnosis-related groups and/or severities. For example, the classifier clusters based on the clinical data.
A probability of the patient being in each given class is output. The class associated with the highest probability is selected for the patient. In other embodiments, the classifier determines the class without a probability. Manually programmed criteria may be applied to distinguish among classes. In one embodiment, a machine-learned classifier uses the patient data to establish the workflow for care of the patient or at least provides severity and/or diagnosis-related grouping to be used for establishing the workflow for care.
A feature vector used for classifying is populated. By mining, the values for variables are obtained. The feature vector is a list or group of variables used to classify. The mining outputs values for the feature vector. The output is in a structured format. The data from one or more data sources, such as an unstructured data source, is mined to determine values for specific variables. The values are in a structured format—values for defined fields are obtained.
The mining may provide all of the values, such as resolving any discrepancies based on probability. Any missing values may be replaced with an average or predetermined value. The user may be requested to enter a missing value or resolve a choice between possible values for a variable. Alternatively, missing values are not replaced where the classifier may operate with one or more of the values missing.
The feature vector is populated by assigning values to variables in a separate data storage device or location. A table formatted for use by the classifier is stored. Alternatively, the values are stored in the data sources from which they are mined and pointers indicate the location for application of the classifier.
The diagnosis-related group and/or severity class is provided by applying the classifier. In one embodiment, the classifier is a machine-trained classifier. Any machine training may be used, such as training a statistical model (e.g., Bayesian network). The machine-trained classifier is any one or more classifiers. A single class or binary classifier, collection of different classifiers, cascaded classifiers, hierarchal classifier, multi-class classifier, model-based classifier, classifier based on machine learning, or combinations thereof may be used. Multi-class classifiers include CART, K-nearest neighbors, neural network (e.g., multi-layer perceptron), mixture models, or others. A probabilistic boosting tree may be used. Error-correcting output code (ECOC) may be used. In one embodiment, the machine-trained classifier is a probabilistic boosting tree (PBT) classifier. The detector is a tree-based structure with which the posterior probabilities of class membership are calculated from given values of variables. The nodes in the tree are constructed by a nonlinear combination of simple classifiers using boosting techniques. The PBT unifies classification, recognition, and clustering into one treatment. Alternatively, a programmed, knowledge based, or other classifier without machine learning is used.
For learning-based approaches, the classifier is taught to distinguish based on features. For example, a probability model algorithm selectively combines features into a strong committee of weak learners based on values for available variables. As part of the machine learning, some variables are selected and others are not selected. Those variables with the strongest or sufficient correlation or causal relationship to a cohort, severity, or diagnosis-related group are selected and variables with little or no correlation or causal relationship are not selected. Features that are relevant to case management or care are extracted and learned in a machine algorithm based on the ground truth of the training data, resulting in a probabilistic model. Any size pool of features may be extracted, such as tens, hundreds, or thousands of variables. The pool is determined by a programmer and/or may include features systematically determined by the machine. The training determines the most determinative features for a given classification and discards lesser or non-determinative features.
The classifier is trained from a training data set using a computer. To prepare the set of training samples, actual severity, diagnosis-related group, or cohort is determined for each sample (e.g., for each patient represented in the training data set). Any number of medical records for past patients is used. By using example or training data for tens, hundreds, or thousands of examples with known status, a processor may determine the interrelationships of different variables to the outcome. The training data is manually acquired or mining is used to determine the values of variables in the training data. The training may be based on various criteria, such as readmission within a time period.
The training data is for the medical entity for which the predictor will be applied. By using data for past patients of the same medical entity, the variables or feature vector most relevant to care management for that entity are determined. Different variables may be used by a machine-trained classifier for one medical entity than for another medical entity. Some of the training data may be from patients of other entities, such as using half or more of the examples from other entities with similar workflow of care, concerns, sizes, or patient populations. The training data from the specific institution may skew or still result in a different machine-learnt classifier for the entity than using fewer examples from the specific institution. In alternative embodiments, all of the training data is from other medical entities, or the classifier is trained in common for a plurality of different medical entities.
In alternative embodiments, the predictor is programmed, such as using physician knowledge or the results of studies. Input values of variables are used by domain knowledge to classify.
The classifier is trained to class patients in general. For example, the output of the classifier is an identity of one of multiple different classes. Alternatively, separate classifiers are trained for different classes, such as training a classifier for acute myocardial infarction and another classifier for angina. Different classifiers are trained to indicate a probability of a given patient being a member of a given class. By applying the multiple classifiers, the patient is assigned to a class or combination of classes based on relative probabilities. For example, the diagnoses above a 50% probability are used to identify a diagnosis-related group representing a combination of diagnoses. Similarly, the classifier may identify co-morbidities.
The learnt predictor is a matrix. The matrix provides weights for different variables of the feature vectors. The values for the feature vector are weighted and combined based on the matrix. The classifier is applied by inputting the feature vector to the matrix. Other representations than a matrix may be used.
For application, the classifier is applied to the electronic medical record of a patient. In response to the triggering, the values of the variables used by the learned classifier are obtained. The values are input to the classifier as the feature vector. The classifier outputs a class or a probability of the patient being in a given class based on the patient's current electronic medical record.
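Purely for illustration, applying a learned weight matrix to a populated feature vector might look like the sketch below. The two classes, the three features, the weights, and the softmax normalization are all hypothetical assumptions standing in for whatever learned model is actually used.

import math

# Illustrative sketch: score each class as a weighted combination of the
# feature vector and convert the scores to probabilities. The classes,
# features, and weights are invented for the example.

CLASS_WEIGHTS = {
    "acute_mi_with_complications": [0.9, 1.4, 0.3],
    "acute_mi":                    [0.8, 0.2, 0.1],
}

def classify(feature_vector):
    """feature_vector: list of numeric values, same order as the weights."""
    scores = {
        label: sum(w * x for w, x in zip(weights, feature_vector))
        for label, weights in CLASS_WEIGHTS.items()
    }
    normalizer = sum(math.exp(s) for s in scores.values())
    probabilities = {label: math.exp(s) / normalizer for label, s in scores.items()}
    best = max(probabilities, key=probabilities.get)
    return best, probabilities

# Example: classify([1.0, 1.0, 0.0]) favors "acute_mi_with_complications".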
The class is determined automatically. The user may input one or more values of variables into the electronic medical record, but the classification is performed without entry of values after the trigger and while applying the classifier. Alternatively, one or more inputs are provided, such as resolving ambiguities in values or to select an appropriate classifier (e.g., select a predictor of infection as opposed to trauma).
By applying the classifier to mined information for a patient, a probability of membership in a class is provided for that patient. The machine-learnt or other classifier outputs a statistical probability of class based on the values of the variables for the patient. Where the classification occurs in response to an event, such as triggering at the request of a medical professional or administrator, the class is provided from that time.
The classifier may indicate one or more values contributing to the probability. For example, the mention of myocardial infarction in physician notes is identified as being the strongest link or contributor to a probability of the patient being in a heart attack group. This variable and value are identified. The machine-learnt classifier may include statistics or weights indicating the importance of different variables to class membership. In combination with the values, some weighted values may more strongly determine an increased probability of membership. Any deviation from norm may be highlighted. For example, a value or weighted value of a variable a threshold amount different from the norm or mean is identified. The difference alone or in combination with the strength of contribution to the class membership is considered in selecting one or more values as more significant. The more significant value or values may be identified.
In alternative embodiments of creating and applying the classifier, the class is integrated as a variable to be mined. The inference component determines the class based on combination of probabilistic factoids or elements. The class is treated as part of the patient state to be mined. Domain knowledge determines the variables used for combining to output the class.
The classifier outputs a diagnosis-related group for the patient. Alternatively, the classifier outputs a severity. The severity, with or without other patient data, is used to determine the diagnosis-related group. For example, the patient belongs to the genus group of myocardial infarction. The severity indicates a more specific diagnosis-related group.
The case management workflow queries the results of the mining and/or classification of the patient into a diagnosis-related group or severity. The workflow uses the results or is included as part of the classifier application. Any now known or later developed software or system providing a workflow engine may be configured to initiate a workflow based on data.
To establish the workflow for care of the patient, the diagnosis-related group, severity, and/or variables associated with the class for a particular patient may be used to determine a mitigation plan. The mitigation plan includes instructions, prescriptions, education materials, schedules, clinical actions, tests, visits, examinations, or other jobs that may care for the patient. The next recommended clinical actions or reminders for the next recommended clinical actions may be output so that health care personnel are better able to follow the recommendations.
A library of workflows for care (mitigation plans) is provided. At least one workflow for care is provided for each diagnosis-related group. Separate care workflows may be provided for different diagnosis-related groups. The severity may be used to select between different workflows of care for a same diagnosis-related group. The workflow of care appropriate for a given patient is obtained and output.
The cost factor may be included in the selected workflow. For example, when the severity of the patient is predicted better due to the combination of all of the information, the workflow might suggest directly getting a CT instead of first getting an x-ray and then ordering a CT when the x-ray results are not sufficient to reach a conclusion. This would not only save an extra exam but would also cut the length of stay. Another example would be to create the optimal path or critical task map, where it becomes evident which tests or procedures can be done without waiting on results from others and which should be done in order, one after the other. This makes results available quickly and possibly saves on some procedures or tests.
For a given diagnosis, the most cost effective treatment with sufficient or better outcome for the patient is used as part of the workflow for that diagnosis. Rather than just rely on best or sufficient care, the best or sufficient care with optimized cost may be used. For example, testing for diabetes in myocardial infarction patients performed early in a patient's stay may result in more optimized care later by avoiding treatments not as effective for patients with this complication. By including a test for diabetes within a first day of the patient stay as part of the workflow, a better utilization of resources may be provided. The cost is built into the workflow.
In other embodiments, the cost factor is used as a factor for selecting the workflow. Different workflows for a given diagnosis may be provided. The workflow with a lesser cost to the healthcare facility may be selected. The selection may be based only on cost factor, such as where each workflow of care is appropriate, or based on cost factor and other variables, such as relatively weighting severity, cost factor to select between care workflow with a range of successful outcome, and/or data for the patient.
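A hypothetical sketch of selecting a workflow of care from a library while weighing outcome and cost is given below. The library contents, the scoring weights, and the field names are illustrative assumptions only.

# Illustrative sketch: pick a workflow of care for a diagnosis-related group.
# Candidate workflows, their outcome rates, and costs are invented examples.

CARE_LIBRARY = {
    "acute_mi": [
        {"name": "standard pathway", "expected_outcome": 0.92, "cost": 18000},
        {"name": "early-CT pathway", "expected_outcome": 0.93, "cost": 16500},
    ],
}

def select_workflow(drg, outcome_weight=0.7, cost_weight=0.3, max_cost=25000):
    """Score each appropriate workflow by outcome and (inverted, scaled) cost."""
    candidates = CARE_LIBRARY.get(drg, [])
    def score(wf):
        cost_score = 1.0 - (wf["cost"] / max_cost)     # cheaper -> higher score
        return outcome_weight * wf["expected_outcome"] + cost_weight * cost_score
    return max(candidates, key=score) if candidates else None

# Example: select_workflow("acute_mi") returns the early-CT pathway,
# which has comparable outcome at lower cost.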
In other embodiments, the classifier is trained to class based, in part, on the cost factor. For example, one or more cost factors are used as input features. As another example, the classes are defined based on cost factors, such as dividing one general class into specific classes that may be reimbursed differently.
The cost factor may be a cost of care, a reimbursement for the care, or other utilization. Workflows with a cheaper cost to the healthcare facility, such as having a nurse perform an action instead of a physician, may be selected. Follow up calls can be scheduled to make sure that the patient is taking medications or follows up with the primary care/nursing facility. The cost of a short call could avoid the cost of a possible readmission of complication due to patient non-compliance. Workflows with a higher rate of return or payment likelihood, such as a workflow avoiding non-reimbursable or experimental treatment, may be selected. Workflows with a less cost to the patient may be selected. A combination of cost factors may be used to select, in part, the workflow for care. Patient outcome, such as success rate or readmission avoidance, may be another or more greatly weighted factor for selection of the workflow of care.
In other embodiments, the case management is performed at a population or cohort level. In an accountable care setting, data is shared between the participating entities. For example, primary care providers and payers share data with the participating hospital and critical care facilities. A case manager in this setting can manage a cohort instead of a single patient. A group of patients with similar diagnoses, severities, or other similar characteristics can be managed within the facility or even outside it in an accountable care setting. The group of patients predicted to be riskier can be asked to follow up with participating primary care providers or nursing care facilities more often, and their hospital costs are kept to a minimum. Also, alerts are generated and sent to the corresponding stakeholders. For example, when a patient is discharged, the case management workflow generates an alert for a follow-up appointment with the primary care provider for a physical exam within a given timeframe, failing which a reminder is sent to the office or the patient. The cohort or group level management may also include cost factors, such as identifying the cohort of patients that is the most expensive for the organization and the items in the workflow that account for the majority of the costs. This can often help in optimizing the workflow by tuning the standard care procedure to that particular organization.
In one embodiment, the case management workflow suggests particular providers for a patient or cohort, such as specialists or critical care facilities that in the past have best managed such cases. This is performed by mining the patient outcomes and cost information for all the participating providers and then correlating them with, but not limited to, the DRG and other cohort information.
In one embodiment, the system simulates multiple workflows for a patient or group of patients and provides comparative effectiveness in terms of outcomes and also comparison of cost on the different possible outcomes. The system also provides the most optimal workflow, the workflow with best patient outcome and the workflow with minimal cost. The provider can select one of the many workflows and as more data is input into the system during the care of the patient, the workflow is updated accordingly.
Case management may include care management and optimization, risk management and optimization, financial management and optimization, and workflow management and optimization.
The workflow for care includes multiple actions by different entities of the healthcare facility in a timeline for the actions. The timeline may be maximized for efficiency and/or to provide savings. The actions are for tests, treatment, consultation, discharge, transfer or other tasks performed at the healthcare facility. The actions are performed by different people, such as nurses, physicians, administrators, techs, volunteers, or others. By providing a timeline, the different people involved may be coordinated to maximize the utilization of their time and healthcare facility resources.
In one embodiment, the workflow for care of the patient is established for review and monitoring by a case manager. The workflow for care may include actions or tasks to be performed by the case manager. The task entry may be to update patient data, arrange for clinical action, update a prescription, arrange for a prescription, review test results, arrange for testing, schedule a follow-up, review the probability, review patient data, or other action to reduce the probability of readmission. For example, where a follow-up is not scheduled during discharge and is not automatically arranged, arranging for the follow-up may be placed as an action item in an administrator's, assistant's, nurse's, or other case manager's workflow. As another example, a test is included in the timeline to be ordered to provide the missing information. Review of test results is placed in a physician's workflow by the case management workflow so that appropriate action may be taken before or after other actions. Missing information from the patient data may be identified. A workflow action is automatically scheduled for a case manager to contact the physician via phone or in person to inquire about performance of the test and/or review of the results.
The case manager may review the established workflow for care and alter the tasks or timeline. The workflow may be examined to determine if other action was warranted. Future workflow action items, discharge instructions, physician education, or other actions may be performed to avoid inefficiencies or care issues in other patients.
The case management workflow schedules, monitors, or otherwise implements the workflow for care. By accessing calendars, schedules or workflows, the case management workflow causes the workflow for care to be performed in conformance with the timeline.
The timeline of the selected or established workflow may be automatically altered to account for timelines of other workflows. For example, equipment (e.g., a medical imaging system), a person (e.g., a physician), or room (e.g., an operating or treatment room), a device (e.g., a catheter) or other resource may be unavailable due to other appointment, delivery timing, work schedule, or other reason. The closest availability to optimal may be selected. Other timelines may be adjusted, such as adjusting scheduled tasks for a timeline associated with a lesser overall cost due to delay and not adjusting a timeline for a workflow of care associated with a greater overall cost due to delay.
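The availability-based adjustment could be sketched as below, purely for illustration. The slot granularity, the resource calendars, and the choice of the closest free slot are assumptions made for the example.

# Illustrative sketch: shift a scheduled task to the nearest slot at which
# every required resource (room, scanner, physician) is free.
# Slots are integers (e.g., hour offsets from admission); data is invented.

def nearest_free_slot(preferred_slot, required_resources, busy, horizon=24 * 7):
    """busy maps a resource name to the set of slots already reserved."""
    for offset in range(horizon):
        for candidate in (preferred_slot + offset, preferred_slot - offset):
            if candidate < 0:
                continue
            if all(candidate not in busy.get(r, set()) for r in required_resources):
                return candidate
    return None  # no availability within the horizon

# Example: with the CT scanner busy at slot 10, a task preferring slot 10
# that needs {"ct_scanner", "radiology_tech"} moves to slot 11 (or 9 if 11 is taken).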
The workflow for care may include actions, such as for the case manager, to document for reimbursement or other purposes. Tasks specific to storing or obtaining proper documentation may be included in the timeline. The patient record may be mined or searched to identify the needed documents. Where the documents are not found, the processor may add tasks to the timeline for obtaining the documentation. The tasks for documentation may be assigned to personnel responsible for the creation or to a case manager responsible for getting the personnel to provide the documentation.
The timeline indicates an optimal discharge time. This prediction may be useful to the patient or for planning tasks to occur during the stay. For cost reasons, the discharge time may be longer to allow additional tasks. Despite the increase in cost for the current care at the facility, the cost for overall care including after discharge may be reduced. This cost may be part of the workflow for care and/or is based on data specific to the patient. For example, the workflow for care is different for two patients in the same diagnosis-related group since one patient has insurance and the other does not. The patient with insurance may be more likely to visit a physician for a follow-up, so is discharged earlier. The patient without insurance stays longer according to the workflow to allow follow-up. A given workflow for care may have data driven branches for tasks (job paths) or different workflows may be used.
In act 406, the processor obtains additional clinical data. The additional clinical data is obtained after establishing the workflow for care. The additional data may be generated as part of the workflow for the care of the patient. For example, the workflow includes a physician visit or diagnosis. The physician creates notes including the diagnosis where the notes are added to the patient record, or the physician enters information into a diagnosis system. As another example, a test is performed and the results are added to the patient record. By performing one or more of the actions in the workflow for care, additional clinical data is generated. The additional data may be acquired from actions not in the workflow for care.
In other embodiments, the additional clinical data was acquired before establishing the workflow for care, but was not previously available. The later availability results in the clinical data being additional data. Using a flag, trigger, scan, or other mechanism, the additional data or addition of additional data is detected by the processor. For example, any additional clinical data entered into the patient record triggers a review of the workflow. The case management workflow is triggered to review the workflow of care.
In act 408, the processor updates the workflow for the care of the patient. The update changes or replaces the current workflow with another workflow.
The update is performed as a function of the clinical data and the cost factor. For example, the same process for establishing the workflow is performed. In addition to the original patient data, the additional patient data is used. The additional patient data may indicate different values for variables (e.g., change from smoker to non-smoker due to further evidence). The different value or values may result in a different classification. For example, the severity is changed, resulting in a different diagnosis-related group for the patient. The workflow for the patient is reassigned due to the difference in severity. In alternative embodiments, the additional data is used to change any affected values without re-obtaining all of the values. In other embodiments, the additional data itself is used to identify any change to the workflow without reapplying the classifier and/or mining.
The case management workflow identifies any tasks or jobs in the updated workflow for care that have already been performed. Such tasks are marked or recorded as performed. Any tasks that should have been performed are scheduled with a greater priority in order to maintain the timeline for the updated workflow of care.
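For illustration only, carrying completed work into an updated workflow and elevating the priority of tasks that should already have occurred might look like this sketch; the task dictionaries and field names are assumptions.

# Illustrative sketch: merge an updated workflow of care with the record of
# tasks already performed, and raise the priority of tasks that are past due
# on the new timeline. Task structure and field names are invented.

def merge_workflows(updated_tasks, completed_task_ids, current_hour):
    """updated_tasks: list of {"id", "due_hour", "priority"} for the new workflow."""
    merged = []
    for task in updated_tasks:
        task = dict(task)                       # do not mutate the input
        if task["id"] in completed_task_ids:
            task["status"] = "done"             # already performed; keep the record
        elif task["due_hour"] < current_hour:
            task["status"] = "overdue"
            task["priority"] = "high"           # schedule with greater priority
        else:
            task["status"] = "pending"
        merged.append(task)
    return merged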
By reestablishing the workflow for care, the patient may be assigned to a more current or accurate severity or diagnosis-related group. A more optimized or appropriate workflow for care is performed. If performed in real time, suggestions and corrections can be made to improve the quality of care. For example, from the present symptoms, lab and imaging results, suggestions and communications can be made to elevate the severity of a patient from acute renal insufficiency to acute renal failure. This may not only yield a better outcome if confirmed but also may save time and resources.
The update occurs while the patient is at the healthcare facility, maximizing or increasing the opportunity to provide efficient and appropriate care. The update may result in cost savings, increased reimbursement opportunity, and/or more optimum care. The update may avoid increasing costs or reducing care due to performing actions not needed given the diagnosis-related group to which the patient now belongs.
The classification update is made during the patient stay. The classification may be repeated at different times during the patient stay. The classification is updated, such as based on any data entered after the original classification.
In act 410, tasks are scheduled based on the timeline. The tasks are scheduled automatically. The system populates the calendars or task lists of different personnel, equipment, rooms, or other resources. For example, a time for medical imaging equipment and room is reserved, and the calendar of a technician for the medical imaging system is changed to indicate an appointment for that time. Any task to be performed by someone or something is a job entry. Reservations may be scheduled in addition to or as a job entry. Tasks may be added to the workflows of different people.
In another embodiment, a job entry in a workflow of care is automatically scheduled. The computerized workflow system includes action items to be performed by different individuals. The action items are communicated to the individual in a user interface for the workflow, by email, by text message, by placement in a calendar, or by other mechanism.
The automated scheduling may be subject to approval by one or more people. The technician, physician, or nurse may be required to accept any scheduled appointment. Where an appointment is rejected, the timeline may be adjusted to a next optimal time. In another example of approval, a case manager may be required to approve of the entire timeline and/or any changes to the timeline before scheduling is attempted and/or completed.
In one embodiment, a job entry is added to the workflow of a case manager. In a retrospective analysis or in real time after identification of a problem or issue, the case manager may be tasked with avoiding the problem or issue for the same patient or other patients with a same or similar workflow of care. For example, a patient or threshold number of patients is readmitted to a hospital due to a complication. The case manager may be tasked with attempting to prevent readmission of other patients with the same workflow of care. To avoid readmission, the case manager identifies cost effective actions, such as education about post discharge treatment. The actions are added to the workflow for care as an update. The case management workflow system may monitor for issues and generate tasks or suggest changes to deal with the issues.
In act 412, the processor generates at least one alert. The system may be configured to monitor adherence to the action items of the workflow for care. Reminders may be automatically generated where an action item is due or past due so that health care providers are better able to follow the timeline.
The timeline provides a schedule. Alerts are generated for conflicts with the schedule, such as physician being double booked. Alerts are generated as reminders for an upcoming action. Alerts are generated for administrators, nurses or others to cause another person to act on time. Alerts are generated where an action should have occurred and data entered, but where data has not been entered. Alerts may be generated for any reason in an effort to keep to the timeline or limit further delay than has already happened.
Any type of alert may be used. The alert is sent via text, email, voice mail, voice response, or network notification. The alert indicates the task to be performed, the location, and the patient. The alert is sent to the patient, family member, treating physician, nurse, primary care physician, and/or other medical professional. The alert may be transmitted to a computer, cellular phone, tablet, bedside monitor of the patient, or other device. The alert may be communicated through a workflow system. For example, a task to be performed is highlighted when past due or due soon. The highlighting may indicate a cost for selecting between multiple past or currently due tasks. The alert may be sent to the workflow of others for analysis, such as to identify people that regularly fail to perform on time so that future costs may be saved through training or education.
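A minimal, assumption-laden sketch of timeline-driven alerting follows. The task structure, the one-hour reminder window, and the message wording are illustrative only and are not part of the disclosure.

# Illustrative sketch: generate reminder and past-due alerts from the timeline.
# The reminder window and the message format are assumptions.

def generate_alerts(tasks, current_hour, reminder_window=1):
    """tasks: list of {"id", "due_hour", "assignee", "patient"} dictionaries."""
    alerts = []
    for task in tasks:
        hours_until_due = task["due_hour"] - current_hour
        if hours_until_due < 0:
            alerts.append((task["assignee"],
                           f"PAST DUE: task {task['id']} for patient {task['patient']}"))
        elif hours_until_due <= reminder_window:
            alerts.append((task["assignee"],
                           f"Reminder: task {task['id']} for patient {task['patient']} "
                           f"is due in {hours_until_due} hour(s)"))
    return alerts  # each alert could then be sent by text, email, or pager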
The alert may include additional information. The alert may indicate a cost associated with failure to perform on time. The diagnosis grouping, recently acquired data, relevant data, the severity, a probability associated with treatment, treatment options, or other information may be included.
In one embodiment, the alert is generated as a displayed warning while preventing entry of other information. The user is prevented from some action, task, or data entry to require submission of documentation of the act or other acts. For example, in response to the user attempting to schedule discharge or enter information associated with the patient, the alert is generated and the user is prevented from entering or saving the information. The prevention is temporary (e.g., seconds or minutes), may remain until the missing information is provided, or may require an over-ride from authorized personnel (e.g., a case manager or an attending physician). The prevention may be for one type of data entry (e.g., discharge scheduling) but allow another type (e.g., medication reconciliation) to reduce the risk of costly extensions.
In act 414, the processor predicts a probability of meeting the timeline, a cost associated with meeting the timeline, and a strongest link to the probability indicating a risk of failure to meet the timeline. Additional, different, or fewer items may be predicted. The prediction is based on past performance or a study. For example, the rate of timeline compliance is measured as performing every action on time, discharging on time, or other measure. Previous implementations of a given workflow of care may be measured. The rate of compliance provides a probability of meeting the timeline. The probability is by physician, facility, practice group, or general. In alternative embodiments, the probability may be predicted by a machine-learned classifier based on training data of previous patients.
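To illustrate the rate-based prediction, the sketch below computes a compliance probability and the most frequent cause of delay from hypothetical historical records; the record format and field names are assumptions introduced only for this example.

from collections import Counter

# Illustrative sketch: estimate the probability of meeting a workflow's timeline
# as the historical on-time rate, and report the most frequent cause of delay
# as the "strongest link" to the risk of failure. Record fields are invented.

def timeline_risk(history):
    """history: list of {"on_time": bool, "delay_cause": str or None} records."""
    if not history:
        return None, None
    on_time = sum(1 for record in history if record["on_time"])
    probability = on_time / len(history)
    causes = Counter(record["delay_cause"] for record in history
                     if not record["on_time"] and record["delay_cause"])
    strongest_link = causes.most_common(1)[0][0] if causes else None
    return probability, strongest_link

# Example: 8 of 10 prior cases on time, with 2 delays caused by imaging backlog
# -> (0.8, "imaging backlog").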
The cost is predicted based on study or domain knowledge. For example, costs associated with performing the workflow of care over different timelines may be determined. The cost may be in terms of financial cost, resource utilization, reimbursement or difference between reimbursement and cost to perform the workflow. Based on financial study, the cost information may indicate the financial result of delay. Incentives and/or penalties may be associated with failure to perform on time. The costs may be broken down into components, such as the cost associated with each action or task.
The strongest link to the probability indicating a risk of failure to meet the timeline may be provided to a case manager or person associated with the strongest link. By providing the link, tasks may be handled more efficiently and/or to more likely avoid delay. The strongest link may be the most frequent cause for delay as compared to various causes of delay (e.g., people or equipment) or another less frequent cause associated with greater cost. The risk for delay may be linked to one of various variables. The variable with the strongest link is the most frequent cause, the cause of the longest delays, or cause associated with greater costs relative to other variables.
The probability, cost, or link may be specific to a hospital, physician, practice group or other entity, such as being calculated based on data for the hospital. Alternatively, the probability, cost, or link is based on peer performance or is general.
The implementation of the computerized case management may be based on a criteria set for the medical entity. For example, the medical entity may set the threshold for comparison to be more or less inclusive of different levels of performance or cost. As another example, the medical entity may select a combination of factors to trigger an alert. If one variable causes the case management system to regularly and inaccurately predict a class, then some values of that variable may cause an alert to be generated for a case manager to more completely review the established workflow of care.
The workflow of care for the patient is for the patient's stay at the healthcare facility. The same workflow or a further, different workflow may be established for the care of the patient after a discharge.
FIG. 2 shows one embodiment of a method for computer-based patient management for healthcare. The method is directed to accountable care optimization. In addition to or as an alternative to case management for patients at a healthcare facility, the patients are managed prior to and/or after any stay. By managing possible patients outside of the healthcare facility, the overall cost of healthcare may be reduced. The care optimization workflow is performed, at least in part, by a computer or automatically.
The acts are performed in the order shown or a different order. For example, act 508 is performed as part of act 504. Additional, different, or fewer acts may be performed. For example, one or two of acts 504, 506, and 508 are not performed. Acts 510, 512, 514, and 516 are performed in parallel, not provided, or are alternatives.
In act 502, clinical data is gathered. A processor acquires the data. Data for a patient is acquired by searching the medical record or other information sources for the patient. In one embodiment, mining or other gathering discussed for act 402 and/or act 406 is performed. For example, unstructured information is mined. The mining provides values for variables where the values are inferred from different possible values and probabilities assigned to the possible values.
In act 504, the processor establishes care for a patient not at a healthcare facility. The workflow for care is established prior to a given admission to a healthcare facility. For example, the workflow for care is an ongoing process established by an insurance company, medical facility, primary care physician, or other group to prevent injury or illness. The goal is to avoid any hospital admissions or more costly procedures. Thus, the workflow for care is established regardless of whether there will be a later admission (prior to any later admission). Alternatively, the workflow for care is established after an admission is planned but prior to the actual admission.
The care for the patient is established as a function of the data for the patient. A cohort for the patient is predicted. A classifier, such as a classifier discussed above for act 404, determines a cohort to which a patient belongs. The classification may be based on a diagnosis or not. For example, cohorts are groups of the population with similar health concerns or risks. Whether a patient has been vaccinated or not, the weight of the patient, allergies, which allergies, diabetes, and/or other information may be used to group patients into different cohorts. Each cohort is associated with different types of risk, levels or severity of risk, and/or combinations of risks.
The available workflows for care may define the possible cohorts. Different combinations of concerns may lead to different care. The care is provided to manage risk and avoid more expensive health complications. By establishing care prior to any more major illnesses or injuries, actions may be taken to reduce costs for later care. The care may be provided as part of accountable care optimization, such as attempting to reduce costs of healthcare by managing the person rather than case managing after injury or illness has occurred.
As new data is acquired, the care may be updated, such as disclosed in act 408 but for treatment or care outside of the healthcare facility. For example, a patient may suffer a fall or a plurality of falls within a given time frame. Such falls may be identified by data indicating calls to a healthcare provider, incident reports at a senior living facility, or other sources. This new data may be used to reassign the patient to a different cohort, resulting in different care processes. A personalized plan of care is provided for each patient. The term patient is used for people who may or may not be a patient of a physician, but who are patients in the sense of people for whom care is to be provided.
The workflow of care outside of the healthcare facility may include different actions. The actions may be for the patient to perform, such as membership at a health club or visiting a physician. The actions may be for others to perform, such as monitoring, home visits, calls, other contact, or other interaction to encourage, require, or test the patient. The care may be monitored by requiring entry of feedback by the patient and/or by acquiring data associated with the care.
The system may automatically monitor or schedule the care. The monitoring or scheduling may be in a timeline or otherwise arranged to minimize costs. For example, statistics may be used to determine the most cost effective approaches. Where visits to nurse practitioners are successful, such visits are arranged for a given task. Where such visits frequently lead to physician visits, the nurse practitioner approach is not used first.
In act 506, the care for the patient upon admission to a healthcare facility is managed. The management may be automated, such as discussed above for FIG. 1, may be managed by a case manager, or combinations thereof. The management is performed using data for the patient.
In act 508, the processor establishes care for the patient after any discharge from the healthcare facility. The care is established as discussed above for act 504, 404, or 408. The classifier is the same or different for each of these acts. For example, a general classifier appropriate for acts 504 and 508 is used. In another example, separate classifiers are trained or provided for acts 504 and 508 given the different circumstances.
Currently available data is used to classify the patient into a cohort for assigning a care plan to the patient. The data includes data associated with the treatment at the healthcare facility, so the classifier may classify the patient into a cohort associated with care appropriate for the patient given the diagnosis.
For example, an optimal follow-up strategy (e.g., phone call, in-home follow-up, or visit to a doctor) may be provided in the care plan. The follow-up strategy may be selected or determined based on the probability of readmission, probability of compliance, guidelines for care, and/or the variables associated with the patient. For example, an in-home follow-up is scheduled for a probability of compliance further beyond (e.g., below) the threshold (e.g., beyond another threshold in a stratification of risk), and a phone call is scheduled for a probability closer to the threshold (e.g., for a lower risk). As another example, the severity or cost of the illness, injury, or health risk is used to select the appropriate care. Possible and alternative care plans for optimal patient outcomes may be provided for selection with or without cost considerations.
Other predictors or statistical classifiers may be provided. One example predictor is for compliance by the patient with instructions. A level of risk (i.e., risk stratification) and/or reasons for risk are predicted. The ground truth for compliance may rely on patient surveys or questionnaires. The predictor for whether a patient will comply is trained from training data. Different predictors may be generated for different groups, such as by type of condition, cohort, or diagnosis-related group. The variables used for training may be the same or different than for training a predictor of timeline performance. Mining is performed to determine the values for training and/or the values for application.
The predictor for compliance is triggered for application at the time of discharge or when other instructions are given to the patient, but may be performed at other times. The values of variables in the feature vector of the predictor of compliance are input to the predictor. The application of the predictor to the electronic medical record of the patient results in an output probability of compliance by the patient. The reasons for the probability being beyond a threshold or thresholds may also be output, such as a lack of insurance or high medication cost contributing as a strong or stronger link to the probability being beyond the threshold. For example, a patient may be discharged to an unknown location (no home or hospice listed in the discharge location variable). An unknown location may occur for homeless patients who are less able to adhere to a care plan. The discharge location being unknown may be output so that a care provider may make subsequent care arrangements before discharge or assign a case worker to assist with adherence. Alternatively, the management workflow identifies the situation and arranges for assignment of a case worker, a visit by a case worker, or return visits to a healthcare facility.
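A minimal sketch of applying such a compliance predictor follows. The linear weights, variable names, and threshold are hypothetical stand-ins for a trained model, and the "reasons" are simply the variables with the strongest negative contributions.

    import math

    # Hypothetical weights for a logistic-style compliance predictor; not the
    # trained predictor described in the text.
    WEIGHTS = {"has_insurance": 1.2, "monthly_med_cost_usd": -0.004,
               "discharge_location_known": 0.9, "prior_no_shows": -0.6}
    BIAS = 0.3
    THRESHOLD = 0.5  # probabilities below this trigger a workflow action

    def predict_compliance(features: dict):
        contribs = {k: WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS}
        score = BIAS + sum(contribs.values())
        prob = 1.0 / (1.0 + math.exp(-score))
        # variables pushing the probability down, strongest first
        reasons = sorted((k for k, v in contribs.items() if v < 0),
                         key=lambda k: contribs[k])
        return prob, reasons

    prob, reasons = predict_compliance(
        {"has_insurance": 0, "monthly_med_cost_usd": 350,
         "discharge_location_known": 0, "prior_no_shows": 2})
    if prob < THRESHOLD:
        print(f"low compliance ({prob:.2f}); review:", reasons)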
The probability of compliance may be used to modify the discharge or other instructions and/or workflow action items. For example, the type of follow-up may be more intensive or thorough where the probability of compliance is low. As another example, a workflow action may be generated to identify alternative medicines where the cost of medication is high. A consultation with a social worker may be arranged and/or the discharge instructions based on lower cost alternatives may be provided where the patient does not have insurance. The timeline for care may be altered to provide for further tasks associated with compliance or reduction of other risks.
The probability of readmission may be predicted, such as disclosed in U.S. Published Application No. _______________ (Ser. No. 13/153,551), the disclosure of which is incorporated herein by reference. The probability of readmission may be used to identify care to avoid readmission. A risk of readmission is predicted and care to mitigate the risk may be provided in the workflow for care of the patient.
Other predictions may be used, such as predicting a financial impact and/or predicting a severity. Probabilities or other predictive information may be used to establish the care for the patient. Different tasks may be assigned as a function of the severity and/or financial impact. More severe or costly risks may result in a care plan with more intensive care and corresponding tasks to avoid the risk.
Any of the care plans of acts 504, 506, and 508 may be updated. As new data is acquired, the cohort assignment may be updated, resulting in a different care plan. Different cohorts may have different plans.
The care plans may be established, at least in part, based on utilization. Different actions are associated with different costs. Less expensive alternatives may be used where care does not suffer. For example, actions associated with less resource consumption are used while still satisfying treatment guidelines. The cost may be accounted for in any of the ways discussed above for FIG. 1 even in managing patients outside of healthcare facilities.
In act 510, tasks for the care are generated or scheduled by the processor. To reduce, minimize or avoid case manager workload, the management of the care is performed automatically. The reminders, tasks, scheduling, or actions are assigned and monitored. For example, visits to healthcare providers are scheduled. A notice may be sent to the patient, allowing the patient to interact with the schedule of the physician to arrange for a visit. Exercise, support group, or other activities may be scheduled or arranged. Any of the tasks for the care may be altered due to new data, case manager override, or other circumstances. The system reacts to changes by attempting to satisfy the care plan as currently provided.
The management of the care occurs across different medical providers associated with different institutions. The system may interact with different formats or systems at the different entities.
In act 512, the processor generates an alert, reminder, or task as part of the care before, during, and/or after admission to a healthcare facility. The alerts, reminders, or tasks are handled in the same way as provided in act 412. For care outside the healthcare facility, the alerts may more likely be to the patient or family member of the patient. The alerts may make compliance more likely. The alerts or reminders may be provided to a case manager, such as to arrange for a call or face-to-face consultation to increase the rate of compliance.
In act 514, information associated with the care before, during, or after a stay at a healthcare facility is presented to a user, such as a case manager, nurse, physician, administrator, patient, or other party. The information presented may include the care plan, the schedule or timeline, or other information. For example, the information indicates completed and incomplete tasks. Probabilities of compliance, of meeting the care plan, of readmission, or of other events may be presented. The probabilities may be provided with possible modifications to the care plan or indication of variables most likely to mitigate risk.
The information presented as part of the case management system may be different for different people. People with different roles receive access to different information. For example, a physician may access information on scheduled care tasks, but not financial information. A patient may receive information on the care plan, but not reminders to others. A case manager may receive access to the entire care plan, including financial information.
In act 516, the processor determines and indicates performance information. The performance information may be used by the case management system or a case manager to provide more effective care and/or more cost effective care. Physicians with patients that more likely comply or avoid admissions may be utilized more than other physicians. Healthcare facilities using less costly procedures or resources with similar success or care may be used over other facilities.
The performance is calculated based on data. Any criteria may be used for measurement. The data from past patients for a given physician, healthcare facility, or other entity is obtained and used to determine statistics. For example, the rates of vaccination of patients by different physicians are determined. Since vaccination may avoid later costs, the cost benefit associated with this statistic or the statistic itself is used to control the management workflow. The computer attempts to schedule visits with the physicians with a greater rate of vaccination first.
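For instance, the vaccination-rate statistic could be computed and used to order physicians for scheduling roughly as follows; the visit records are invented for illustration.

    from collections import defaultdict

    # Sketch: rank physicians by historical vaccination rate so the scheduler
    # tries the highest-rate physician first. Records are illustrative.
    visits = [
        {"physician": "A", "vaccinated": True},
        {"physician": "A", "vaccinated": False},
        {"physician": "B", "vaccinated": True},
        {"physician": "B", "vaccinated": True},
    ]

    counts = defaultdict(lambda: [0, 0])   # physician -> [vaccinated, total]
    for v in visits:
        counts[v["physician"]][0] += v["vaccinated"]
        counts[v["physician"]][1] += 1

    rates = {p: vac / tot for p, (vac, tot) in counts.items()}
    schedule_order = sorted(rates, key=rates.get, reverse=True)
    print(schedule_order, rates)           # ['B', 'A'] {'A': 0.5, 'B': 1.0}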
The performance may be indicated to a case manager for review. Workflows or limitations of operation of the management system may be altered to account for performance.
FIG. 5 is a block diagram of an example computer processing system 100 for implementing the embodiments described herein, such as computer-based patient management for healthcare. The systems, methods and/or computer readable media may be implemented in various forms of hardware, software, firmware, special purpose processors, or a combination thereof. Some embodiments are implemented in software as a program tangibly embodied on a program storage device. By implementing with a system or program, completely or semi-automated workflows, predictions, classifying, and/or data mining are provided to assist a person or medical professional.
The system 100 may be for generating a classifier, such as implementing machine learning to train a statistical classifier. Alternatively or additionally, the system 100 is for applying the classifier. The system 100 may also or alternatively implement associated workflows.
The system 100 is a computer, personal computer, server, PACs workstation, imaging system, medical system, network processor, or other now known or later developed processing system. The system 100 includes at least one processor 102 (hereinafter processor) operatively coupled to other components via a system bus 104. The program may be uploaded to, and executed by, a processor 102 comprising any suitable architecture. Likewise, processing strategies may include multiprocessing, multitasking, parallel processing and the like. The processor 102 is implemented on a computer platform having hardware such as one or more central processing units (CPU), a random access memory (RAM), and input/output (I/O) interface(s). The computer platform also includes an operating system and microinstruction code. The various processes and functions described herein may be either part of the microinstruction code or part of the program (or combination thereof) which is executed via the operating system. Alternatively, the processor 102 is one or more processors in a network and/or on an imaging system.
The processor 102 is configured to learn a classifier, such as creating a classifier for severity, diagnosis-related grouping, cohort grouping, predicting (e.g., compliance, readmission, ability to meet a timeline or other event), or clustering from training data, to mine the electronic medical record of the patient or patients, and/or to apply a machine-learnt classifier to implement case management or accountable care optimization. Training and application of a trained classifier are first discussed below. Example embodiments for mining follow.
For training, the processor 102 determines the relative or statistical contribution of different variables to the outcome—severity, diagnosis-related group, or cohort. A programmer may select variables to be considered. The programmer may influence the training, such as assigning limitations on the number of variables and/or requiring inclusion of one or more variables to be used as the input feature vector of the final classifier. By training, the classifier identifies variables contributing to establishing a workflow. Where the training data is for patients from a given medical entity, the learning identifies the variables most appropriate or determinative for that medical entity. The training incorporates the variables into a classifier for a future patient of the medical entity. For example, the training provides a classifier to output one or more diagnoses for a given patient so that the diagnoses may be used to select, in combination, the appropriate care workflow.
For application, the processor 102 applies the resulting (machine-learned) statistical model to the data for a patient. For each patient or for each patient in a category of patients (e.g., patients treated for a specific condition or by a specific group within a medical entity), the classifier is applied to the data for the patient. The values for the identified and incorporated variables of the machine-learnt statistical model are input as a feature vector. A matrix of weights and combinations of weighted values calculates a class, diagnosis-related group, or cohort.
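The "matrix of weights" step can be pictured as below. The feature order, weights, and class names are invented, and a real model would be learned rather than hard-coded; this is only a sketch of applying such a model.

    # Sketch of applying a learned weight matrix to a feature vector to pick a
    # class (e.g., a cohort or diagnosis-related group). Weights, feature order,
    # and class names are invented for illustration.
    FEATURES = ["age", "num_prior_admissions", "on_insulin"]
    CLASSES = {
        "low_acuity":    [-0.01, -0.30, -0.50],
        "medium_acuity": [0.01, 0.20, 0.10],
        "high_acuity":   [0.02, 0.60, 0.80],
    }

    def classify(values):
        scores = {c: sum(w * x for w, x in zip(ws, values))
                  for c, ws in CLASSES.items()}
        return max(scores, key=scores.get), scores

    label, scores = classify([72, 3, 1])
    print(label)  # high_acuity for this made-up vector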
The processor 102 associates different workflows of care with different possible classes of the classifier. The diagnosis-related grouping, cohort, probability, severity, and/or most determinative values may be different for different patients. One or a combination of these factors is used to select an appropriate workflow or action. Different classes may result in different jobs to be performed, different timelines and/or different cost considerations.
The processor 102 is operable to assign actions or to perform management workflow actions. For example, the processor 102 initiates contact for follow-up by electronically notifying a patient in response to identifying a care plan. As another example, the processor 102 requests documentation to resolve ambiguities in a medical record. In another example, the processor 102 generates a request for clinical action likely to provide better care and/or utilization. Clinical actions may include a test, treatment, visit, other source of obtaining clinical information, or combinations thereof. To implement case management, the processor 102 may generate a prescription form, clinical order (e.g., test order), treatment, visit, appointment, activity, or other workflow action.
In a real-time usage, the processor 102 receives currently available medical information for a patient. Based on the currently available information and mining the patient record, the processor 102 may indicate a currently appropriate class and/or establish a patient-appropriate workflow of care. The actions may then be performed during the treatment or before discharge. The processor 102 may arrange for actions to occur outside of a healthcare facility.
In one embodiment represented in FIG. 6, a database storing a patient record 310 is mined by a miner 350 implemented by a processor to output structured data 380. A same or different processor uses the structured data, such as in an input feature vector of a machine-learned classifier, as values for variables used to establish a workflow for care, or to implement the workflow for care as a case management system. For example, a severity 382 is predicted. The diagnosis-related group 384 is predicted from the structured data and/or is derived from the severity 382. Each of the patients is classified into diagnosis-related groups based on respective data for each of the patients, whether indirectly through classification of severity or directly by classification into the group. A care plan 388 and a timeline 386 are identified based on the diagnosis-related group 384 and/or the severity 382. For case management at a healthcare facility, the timeline 386 is to discharge and is selected as a function of the diagnosis-related group for each of the patients. The processor may then implement the timeline 386 and care plan 388.
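Read end to end, the flow is mine, classify, then plan. The toy pipeline below mirrors that shape; every function body is a placeholder for the corresponding component (miner, classifier, care-plan lookup), and the record, group labels, and timelines are invented.

    # Toy end-to-end sketch of the mine -> classify -> plan flow. Each function
    # is a placeholder for the component described in the text, not its actual
    # implementation.
    def mine(record: dict) -> dict:
        """Stand-in for the data miner: return structured values."""
        return {"age": record.get("age", 0),
                "hip_fracture": "hip fracture" in record.get("notes", "").lower()}

    def classify_drg(values: dict) -> str:
        return "DRG-hip-replacement" if values["hip_fracture"] else "DRG-other"

    CARE_PLANS = {  # diagnosis-related group -> (timeline in days, tasks)
        "DRG-hip-replacement": (4, ["surgery", "physical therapy", "discharge review"]),
        "DRG-other": (2, ["observation", "discharge review"]),
    }

    values = mine({"age": 81, "notes": "Admitted with hip fracture after a fall."})
    drg = classify_drg(values)
    days, tasks = CARE_PLANS[drg]
    print(drg, days, tasks)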
The diagnosis-related group may be altered for at least one of the patients. The altering is based on a utilization and new data not previously used in the classifying. The utilization may be realized by considering a cost factor in the creation of workflows, in the classification of the patient, and/or in actions selected in the timeline. The diagnosis-related group for the one or more patients is changed. The changing is a function of cost of the tasks, reimbursement for the tasks, and/or the new data. The new data may be obtained after the classifying and while the patient is being treated at a healthcare facility. Due to the change, the timeline 386 for the patient may change.
The processor monitors tasks across multiple medical professionals as a function of the timeline. To manage the care, an alert is generated in response to the monitoring, such as to more effectively implement the timeline.
Referring again to FIG. 5, the processor 102 implements the operations as part of the system 100 or a plurality of systems. A read-only memory (ROM) 106, a random access memory (RAM) 108, an I/O interface 110, a network interface 112, and external storage 114 are operatively coupled to the system bus 104 with the processor 102. Various peripheral devices such as, for example, a display device, a disk storage device (e.g., a magnetic or optical disk storage device), a keyboard, printing device, and a mouse, may be operatively coupled to the system bus 104 by the I/O interface 110 or the network interface 112.
The computer system 100 may be a standalone system or be linked to a network via the network interface 112. The network interface 112 may be a hard-wired interface. However, in various exemplary embodiments, the network interface 112 may include any device suitable to transmit information to and from another device, such as a universal asynchronous receiver/transmitter (UART), a parallel digital interface, a software interface or any combination of known or later developed software and hardware. The network interface may be linked to various types of networks, including a local area network (LAN), a wide area network (WAN), an intranet, a virtual private network (VPN), and the Internet.
The instructions and/or patient record are stored in a non-transitory computer readable memory, such as the external storage 114. The same or different computer readable media may be used for the instructions and the patient record data. The external storage 114 may be implemented using a database management system (DBMS) managed by the processor 102 and residing on a memory such as a hard disk, RAM, or removable media. Alternatively, the storage 114 is internal to the processor 102 (e.g., cache). The external storage 114 may be implemented on one or more additional computer systems. For example, the external storage 114 may include a data warehouse system residing on a separate computer system, a PACS system, or any other now known or later developed hospital, medical institution, medical office, testing facility, pharmacy or other medical patient record storage system. The external storage 114, an internal storage, other computer readable media, or combinations thereof store data for at least one patient record for a patient. The patient record data may be distributed among multiple storage devices or in one location.
The patient data for training a machine learning classifier is stored. The training data includes data for patients that have been classified, such as by a case manager or physician. The training data may additionally or alternatively include records of timeline implementation. The patients are for a same medical entity, group of medical entities, region, or other collection.
Alternatively or additionally, the data for applying a machine-learnt classifier is stored. The data is for a patient being managed. The memory stores the electronic medical record of one or more patients. Links to different data sources may be provided or the memory is made up of the different data sources. Alternatively, the memory stores extracted values for specific variables.
The instructions for implementing the processes, methods and/or techniques discussed herein are provided on computer-readable storage media or memories, such as a cache, buffer, RAM, removable media, hard drive or other computer readable storage media. Computer readable storage media include various types of volatile and nonvolatile storage media. The functions, acts or tasks illustrated in the figures or described herein are executed in response to one or more sets of instructions stored in or on computer readable storage media. The functions, acts or tasks are independent of the particular type of instructions set, storage media, processor or processing strategy and may be performed by software, hardware, integrated circuits, firmware, micro code and the like, operating alone or in combination. In one embodiment, the instructions are stored on a removable media device for reading by local or remote systems. In other embodiments, the instructions are stored in a remote location for transfer through a computer network or over telephone lines. In yet other embodiments, the instructions are stored within a given computer, CPU, GPU or system. Because some of the constituent system components and method steps depicted in the accompanying figures are preferably implemented in software, the actual connections between the system components (or the process steps) may differ depending upon the manner in which the present embodiments are programmed.
Healthcare providers may employ automated techniques for information storage and retrieval. The use of a computerized patient record (CPR) (e.g., an electronic medical record) to maintain patient information is one such example. As shown in FIG. 4, an exemplary CPR 200 includes information collected over the course of a patient's treatment or use of an institution. This information may include, for example, computed tomography (CT) images, X-ray images, laboratory test results, doctor progress notes, details about medical procedures, prescription drug information, radiological reports, other specialist reports, demographic information, family history, patient information, and billing (financial) information.
A CPR may include a plurality of data sources, each of which typically reflects a different aspect of a patient's care. Alternatively, the CPR is integrated into one data source. Structured data sources, such as financial, laboratory, and pharmacy databases, generally maintain patient information in database tables. Information may also be stored in unstructured data sources, such as, for example, free text, images, and waveforms. Often, key clinical findings are only stored within unstructured physician reports, annotations on images or other unstructured data source.
Referring to FIG. 2, the processor 102 executes the instructions stored in the computer readable media, such as the storage 114. The instructions are for mining patient records (e.g., the CPR), computer-based patient management for healthcare, predicting readmission, assigning workflow jobs, other functions, or combinations thereof. For training and/or application of the classifier or management of care, values of variables are used. The values for particular patients are mined from the CPR. The processor 102 mines the data to provide values for the variables.
Any technique may be used for mining the patient record, such as structured data based searching. In one embodiment, the methods, systems and/or instructions disclosed in U.S. Published Application No. 2003/0120458 are used, such as for mining from structured and unstructured patient records. FIG. 3 illustrates an exemplary data mining system implemented by the processor 102 for mining a patient record (FIG. 4) to create high-quality structured clinical information. The processing components of the data mining system are software, firmware, microcode, hardware, combinations thereof, or other processor based objects. The data mining system includes a data miner 350 that mines information from a CPR 310 using domain-specific knowledge contained in a knowledge base 330. The data miner 350 includes components for extracting information from the CPR 352, combining all available evidence in a principled fashion over time 354, and drawing inferences from this combination process 356. The mined information may be stored in a structured CPR 380. The architecture depicted in FIG. 3 supports plug-in modules wherein the system may be easily expanded for new data sources, diseases, and hospitals. New element extraction algorithms, element combining algorithms, and inference algorithms can be used to augment or replace existing algorithms.
The mining is performed as a function of domain knowledge. The domain knowledge provides an indication of reliability of a value based on the source or context. For example, a note indicating the patient is a smoker may be accurate 90% of the time, so a 90% probability is assigned. A blood test showing nicotine may indicate that the patient is a smoker with 60% accuracy.
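The smoker example suggests domain knowledge can be represented as per-source reliabilities. A minimal sketch of that idea follows; it reuses the 90%/60% figures from the text, but the data structure and function are assumptions.

    # Sketch: domain knowledge as per-source reliability for a finding. The
    # probabilities are the illustrative ones from the text; the structure is
    # an assumption.
    SOURCE_RELIABILITY = {
        ("physician_note", "smoker"): 0.90,
        ("nicotine_blood_test", "smoker"): 0.60,
    }

    def to_element(source: str, finding: str, value):
        """Wrap a raw finding as a probabilistic assertion (an 'element')."""
        prob = SOURCE_RELIABILITY.get((source, finding), 0.5)
        return {"name": finding, "value": value, "confidence": prob}

    print(to_element("physician_note", "smoker", True))
    # {'name': 'smoker', 'value': True, 'confidence': 0.9}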
Detailed knowledge regarding the domain of interest, such as, for example, a disease of interest, guides the process to identify relevant information. This domain knowledge base 330 can come in two forms. It can be encoded as an input to the system, or as programs that produce information that can be understood by the system. For example, a study determines factors contributing to timeline completion or class assignment. These factors and their relationships may be used to mine for values. The study is used as domain knowledge for the mining. Additionally or alternatively, the domain knowledge base 330 may be learned from test data.
The domain-specific knowledge may also include disease-specific domain knowledge. For example, the disease-specific domain knowledge may include various factors that influence risk of a disease, disease progression information, complications information, outcomes, and variables related to a disease, measurements related to a disease, and policies and guidelines established by medical bodies.
The information identified as relevant by the study, guidelines for treatment, medical ontologies, or other sources provides an indication of probability that a factor or item of information indicates or does not indicate a particular value of a variable. The relevance may be estimated in general, such as providing a relevance for any item of information more likely to indicate a value as 75% or other probability above 50%. The relevance may be more specific, such as assigning a probability of the item of information indicating a particular diagnosis based on clinical experience, tests, studies or machine learning. Based on the domain-knowledge, the mining is performed as a function of existing knowledge, guidelines, or best practices regarding care. The domain knowledge indicates elements with a probability greater than a threshold value of indicating the patient state (i.e., collection of values). Other probabilities may be associated with combinations of information.
Domain-specific knowledge for mining the data sources may include institution-specific domain knowledge. For example, information about the data available at a particular hospital, document structures at a hospital, policies of a hospital, guidelines of a hospital, and any variations of a hospital. The domain knowledge guides the mining, but may guide without indicating a particular item of information from a patient record.
The extraction component 352 deals with gleaning small pieces of information from each data source regarding a patient or plurality of patients. The pieces of information or elements are represented as probabilistic assertions about the patient at a particular time. Alternatively, the elements are not associated with any probability. The extraction component 352 takes information from the CPR 310 to produce probabilistic assertions (elements) about the patient that are relevant to an instant in time or period. This process is carried out with the guidance of the domain knowledge that is contained in the domain knowledge base 330. The domain knowledge for extraction is generally specific to each source, but may be generalized.
The data sources include structured and/or unstructured information. Structured information may be converted into standardized units, where appropriate. Unstructured information may include ASCII text strings, image information in DICOM (Digital Imaging and Communication in Medicine) format, and text documents partitioned based on domain knowledge. Information that is likely to be incorrect or missing may be noted, so that action may be taken. For example, the mined information may include corrected information, including corrected ICD-9 diagnosis codes.
Extraction from a database source may be carried out by querying a table in the source, in which case, the domain knowledge encodes what information is present in which fields in the database. On the other hand, the extraction process may involve computing a complicated function of the information contained in the database, in which case, the domain knowledge may be provided in the form of a program that performs this computation whose output may be fed to the rest of the system.
Extraction from images, waveforms, etc., may be carried out by image processing or feature extraction programs that are provided to the system.
Extraction from a text source may be carried out by phrase spotting, which requires a list of rules that specify the phrases of interest and the inferences that can be drawn therefrom. For example, if there is a statement in a doctor's note with the words “There is evidence of metastatic cancer in the liver,” then, in order to infer from this sentence that the patient has cancer, a rule is needed that directs the system to look for the phrase “metastatic cancer,” and, if it is found, to assert that the patient has cancer with a high degree of confidence (which, in the present embodiment, translates to generating an element with name “Cancer”, value “True” and confidence 0.9).
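A phrase-spotting rule of that kind can be sketched with ordinary regular expressions. The rule list below is illustrative only and far simpler than a production extractor, which would also need negation and context handling.

    import re

    # Sketch of rule-based phrase spotting, following the "metastatic cancer"
    # example above: each rule maps a phrase to an element with a name, value,
    # and confidence. The rule list is illustrative.
    RULES = [
        (re.compile(r"\bmetastatic cancer\b", re.I), ("Cancer", True, 0.9)),
        (re.compile(r"\bno evidence of malignancy\b", re.I), ("Cancer", False, 0.7)),
    ]

    def spot_phrases(note: str):
        elements = []
        for pattern, (name, value, conf) in RULES:
            if pattern.search(note):
                elements.append({"name": name, "value": value, "confidence": conf})
        return elements

    print(spot_phrases("There is evidence of metastatic cancer in the liver."))
    # [{'name': 'Cancer', 'value': True, 'confidence': 0.9}]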
The combination component 354 combines all the elements that refer to the same variable at the same time period to form one unified probabilistic assertion regarding that variable. Combination includes the process of producing a unified view of each variable at a given point in time from potentially conflicting assertions from the same/different sources. These unified probabilistic assertions are called factoids. The factoid is inferred from one or more elements. Where the different elements indicate different factoids or values for a factoid, the factoid with a sufficient (thresholded) or highest probability from the probabilistic assertions is selected. The domain knowledge base may indicate the particular elements used. Alternatively, only elements with sufficient determinative probability are used. The elements with a probability greater than a threshold of indicating a patient state (e.g., directly or indirectly as a factoid), are selected. In various embodiments, the combination is performed using domain knowledge regarding the statistics of the variables represented by the elements (“prior probabilities”).
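One very simple combination rule consistent with this description is to keep, per variable and time period, the assertion with the highest confidence above a threshold. The real system may instead weigh priors and multiple sources, so treat the following only as a sketch.

    # Sketch of the combination step: elements about the same variable in the
    # same time period reduce to one factoid. Here the factoid is simply the
    # highest-confidence assertion above a threshold; the text leaves the exact
    # rule (e.g., use of prior probabilities) open.
    THRESHOLD = 0.5

    def combine(elements):
        candidates = [e for e in elements if e["confidence"] >= THRESHOLD]
        if not candidates:
            return None
        best = max(candidates, key=lambda e: e["confidence"])
        return {"variable": best["name"], "value": best["value"],
                "confidence": best["confidence"]}

    elements = [{"name": "Cancer", "value": True, "confidence": 0.9},
                {"name": "Cancer", "value": False, "confidence": 0.6}]
    print(combine(elements))  # keeps the 0.9 'True' assertion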
The patient state is an individual model of the state of a patient. The patient state is a collection of variables that one may care about relating to the patient, such as established by the domain knowledgebase. The information of interest may include a state sequence, i.e., the value of the patient state at different points in time during the patient's treatment.
The inference component 356 deals with the combination of these factoids, at the same point in time and/or at different points in time, to produce a coherent and concise picture of the progression of the patient's state over time. This progression of the patient's state is called a state sequence. The patient state is inferred from the factoids or elements. The patient state or states with a sufficient (thresholded), high probability or highest probability is selected as an inferred patient state or differential states.
Inference is the process of taking all the factoids and/or elements that are available about a patient and producing a composite view of the patient's progress through disease states, treatment protocols, laboratory tests, clinical action or combinations thereof. Essentially, a patient's current state can be influenced by a previous state and any new composite observations. The diagnosis-related group, severity, cohort, or other item to be predicted, classified or clustered may be considered as a patient state so that the mining determines the item without a further application of a separate model.
The domain knowledge required for this process may be a statistical model that describes the general pattern of the evolution of the disease of interest across the entire patient population and the relationships between the patient's disease and the variables that may be observed (lab test results, doctor's notes, or other information). A summary of the patient may be produced that is believed to be the most consistent with the information contained in the factoids, and the domain knowledge.
For instance, if observations seem to state that a cancer patient is receiving chemotherapy while he or she does not have cancerous growth, whereas the domain knowledge states that chemotherapy is given only when the patient has cancer, then the system may decide either: (1) the patient does not have cancer and is not receiving chemotherapy (that is, the observation is probably incorrect), or (2) the patient has cancer and is receiving chemotherapy (the initial inference—that the patient does not have cancer—is incorrect); depending on which of these propositions is more likely given all the other information. Actually, both (1) and (2) may be concluded, but with different probabilities.
As another example, consider the situation where a statement such as “The patient has metastatic cancer” is found in a doctor's note, and it is concluded from that statement that <cancer=True (probability=0.9)>. (Note that this is equivalent to asserting that <cancer=True (probability=0.9), cancer=unknown (probability=0.1)>).
Now, further assume that there is a base probability of cancer <cancer=True (probability=0.35), cancer=False (probability=0.65)> (e.g., 35% of patients have cancer). Then, this assertion is combined with the base probability of cancer to obtain, for example, the assertion <cancer=True (probability=0.93), cancer=False (probability=0.07)>.
Similarly, assume conflicting evidence indicated the following:
1. <cancer=True (probability=0.9), cancer=unknown (probability=0.1)>
2. <cancer=False (probability=0.7), cancer=unknown (probability=0.3)>
3. <cancer=True (probability=0.1), cancer=unknown (probability=0.9)> and
4. <cancer=False (probability=0.4), cancer=unknown (probability=0.6)>.
In this case, we might combine these elements with the base probability of cancer <cancer=True (probability=0.35), cancer=False (probability=0.65)> to conclude, for example, that <cancer=True (prob=0.67), cancer=False (prob=0.33)>.
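The text does not spell out its combination formula, and the 0.93/0.07 and 0.67/0.33 figures are presented as examples only. One plausible scheme is a Bayesian update that treats an assertion's confidence as a likelihood ratio; the sketch below shows that idea and, with these inputs, gives roughly 0.83 rather than 0.93.

    # One plausible way to fold a base rate into an extracted assertion: treat
    # the assertion's confidence as a likelihood ratio in a Bayesian update.
    # This is an assumption; it does not reproduce the text's illustrative
    # 0.93/0.07 numbers.
    def update(prior_true: float, confidence_true: float) -> float:
        """Posterior P(True) after evidence asserting True with given confidence."""
        prior_odds = prior_true / (1.0 - prior_true)
        likelihood_ratio = confidence_true / (1.0 - confidence_true)
        posterior_odds = prior_odds * likelihood_ratio
        return posterior_odds / (1.0 + posterior_odds)

    print(round(update(prior_true=0.35, confidence_true=0.9), 2))  # ~0.83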
Numerous data sources may be assessed to gather the elements, and deal with missing, incorrect, and/or inconsistent information. As an example, consider that, in determining whether a patient has diabetes, the following information might be extracted:
(a) ICD-9 billing codes for secondary diagnoses associated with diabetes;
(b) drugs administered to the patient that are associated with the treatment of diabetes (e.g., insulin);
(c) patient's lab values that are diagnostic of diabetes (e.g., two successive blood sugar readings over 250 mg/d);
(d) doctor mentions that the patient is a diabetic in the H&P (history & physical) or discharge note (free text); and
(e) patient procedures (e.g., foot exam) associated with being a diabetic.
As can be seen, there are multiple independent sources of information, observations from which can support (with varying degrees of certainty) that the patient is diabetic (or more generally has some disease/condition). Not all of them may be present, and in fact, in some cases, they may contradict each other. Probabilistic observations can be derived, with varying degrees of confidence. Then these observations (e.g., about the billing codes, the drugs, the lab tests, etc.) may be probabilistically combined to come up with a final probability of diabetes. Note that there may be information in the patient record that contradicts diabetes. For instance, the patient has some stressful episode (e.g., an operation) and his blood sugar does not go up.
The above examples are presented for illustrative purposes only and are not meant to be limiting. The actual manner in which elements are combined depends on the particular domain under consideration as well as the needs of the users of the system. Further, while the above discussion refers to a patient-centered approach, actual implementations may be extended to handle multiple patients simultaneously. Additionally, a learning process may be incorporated into the domain knowledge base 330 for any or all of the stages (i.e., extraction, combination, inference).
The system may be run at arbitrary intervals, periodic intervals, or in online mode. When run at intervals, the data sources are mined when the system is run. In online mode, the data sources may be continuously mined. The data miner may be run using the Internet. The created structured clinical information may also be accessed using the Internet. Additionally, the data miner may be run as a service. For example, several hospitals may participate in the service to have their patient information mined, and this information may be stored in a data warehouse owned by the service provider. The service may be performed by a third party service provider (i.e., an entity not associated with the hospitals).
Once the structured CPR 380 is populated with patient information, it will be in a form where it is conducive for answering questions regarding individual patients, and about different cross-sections of patients. The values are available for use in classifying the patient to determine a workflow for care.
The domain knowledgebase, extractions, combinations and/or inference may be responsive or performed as a function of one or more variables. For example, the probabilistic assertions may ordinarily be associated with an average or mean value. However, some medical practitioners or institutions may desire that a particular element be more or less indicative of a patient state. A different probability may be associated with an element. As another example, the group of elements included in the domain knowledge base for a case manager may be different for different medical entities. The threshold for sufficiency of probability or other thresholds may be different for different people or situations.
Other variables may be use or institution specific. For example, different definitions of a primary care physician may be provided. A number of visits threshold may be used, such as visiting the same doctor 5 times indicating a primary care physician. A proximity to a patient's residence may be used. Combinations of factors may be used.
The user may select different settings. Different users in a same institution or different institutions may use different settings. The same software or program operates differently based on receiving user input. The input may be a selection of a specific setting or may be selection of a category associated with a group of settings.
The mining, such as the extraction, and/or the inferring, such as the combination, are performed as a function of the selected threshold. By using a different upper limit of normal for the patient state, a different definition of information used in the domain knowledge, or other threshold selection, the patient state or associated probability may be different. Users with different goals or standards may use the same program, but with the versatility to more likely fulfill the goals or standards.
Various improvements described herein may be used together or separately. Although illustrative embodiments of the present invention have been described herein with reference to the accompanying drawings, it is to be understood that the invention is not limited to those precise embodiments, and that various other changes and modifications may be effected therein by one skilled in the art without departing from the scope or spirit of the invention.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a flow chart diagram of one embodiment of a method for computer-based patient management for healthcare;
FIG. 2 is a flow chart diagram of one embodiment of a method for computer-based accountable care optimization for healthcare;
FIG. 3 shows an exemplary data mining framework for mining clinical information;
FIG. 4 shows an exemplary computerized patient record (CPR);
FIG. 5 shows a block diagram of a system for patient management for healthcare according to one embodiment; and
FIG. 6 is another exemplary data mining framework for mining in computer-based patient management for healthcare. |
Collapsing Interstellar Cloud Fragment
As the huge interstellar cloud collapses into many fragments, it is useful to consider the processes inside one of the individual cloud fragments as it continues to develop into a star. Most of these fragments are about one to two solar masses but can be about 100 times the size of the Earth's present solar system. The temperature of the cloud is about the same as when it started to condense, but it would have an increased density of about 10^12 particles per cubic meter in the center of the cloud fragment. The temperature has not changed because the density of the cloud is still too low to capture photons emitted from the gas, and the energy of the photons escapes to space instead of being absorbed by the cloud. However, the very center of the cloud may experience significant warming by this stage, perhaps to 100 K, as the gas is denser there and can absorb more of the radiation produced in the gas. As the cloud continues to shrink, it becomes denser so that eventually the cloud begins to trap the radiation across large regions, and the temperature of the whole cloud increases. This causes an increase in the internal pressure (equated with temperature and speed of particle movement), which grows strong enough to overcome the force of gravity that was pulling the cloud together. At this stage, the contraction of the cloud stops and the fragmentation of the original cloud stops. The Orion Nebula, in the constellation Orion, provides beautiful examples of cloud fragments that are lit by the absorption of radiation produced in the cloud.
Hertzsprung-Russell diagram showing stellar evolution. The diagram plots luminosity on the vertical axis and surface temperature (about 30,000 K to 3,000 K) or spectral class on the horizontal axis. (Infobase Publishing)
The time from initial contraction to the end of fragmentation of the interstellar cloud may take only a few tens of thousands of years, and by this stage the size of the cloud fragment is roughly the same as Earth's present solar system, but the central temperature of the cloud has now reached about 10,000 K, while the peripheral temperatures at the edge of the cloud are still close to their starting temperatures. Since the edges of the cloud are cooler than the denser interior, they are also thinner, and the cloud takes on the shape of a thick ball in the center surrounded by a flattened and outward-thinning disk.
The central density in this stage may be 10^18 particles per cubic meter.
The center of the collapsed fragment is now roughly spherical, dense, and hot and begins to resemble an embryonic protostar, as it continues to grow in mass as gravity attracts more material into its core, although the size of the internal embryonic protostar continues to shrink since the force of gravity in the core remains greater than the internal pressure generated by the gas temperature. At this stage the embryonic protostar develops a protosphere, a surface below which the material is opaque to the radiation it emits.
Infrared and visible light composite of Orion Nebula, 1,500 light-years from Earth, taken by Spitzer and Hubble Space Telescopes, November 7, 2006. The image shows a region of star birth, including four massive stars at the center of the cloud, occupying the region that resembles a yellow smudge near the center of the image. Swirls of green reveal hydrogen and sulfur gas that is heated and ionized by the intense ultraviolet radiation from the massive young stars at the center of the cloud. The wisps of red and orange are carbon-rich organic molecules in the cloud. The orange-yellow dots scattered throughout the cloud are infant stars embedded in a cocoon of dust and gas. The ridges and cavities in the cloud were formed by winds emanating from the four super massive stars at the center of the cloud. (NASA Jet Propulsion Laboratory)
Protostar
The protostar in the center of the collapsed disk continues to shrink and grow in density, its internal temperature increases, and the surface temperature on its protosphere continues to rise, generating higher pressures. About 100,000 years after the initial fragment formed from the interstellar cloud, the center of the protostar reaches about 1,000,000 K, and free electrons and protons swirl around at hundreds of miles (hundreds of kilometers) per second, but temperatures remain below the critical value (10^7 K) necessary to start nuclear fusion to burn hydrogen into helium. The protosphere has a temperature of a few thousand K, and the radius of the protosphere places it at about the distance of Mercury from the sun.
At this stage the protostar can be plotted on an H-R diagram, where it would have a radius of 100 or more times that of the present sun, a temperature of about half of the sun's present temperature, and a luminosity that is several thousand times that of the sun. The luminosity is so high because of the large size of the protostar, even though the temperature is lower than the present temperature of the sun. The energy source for the luminosity and elevated temperatures in this protostar is from the release of gravitational potential energy during collapse of the interstellar cloud.
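The claim that a large, cool protostar can still far outshine the sun follows from the Stefan-Boltzmann scaling of luminosity with radius and temperature. As a rough check with the approximate values quoted above (the exact figure depends on the assumed radius):

    % Rough check using the Stefan-Boltzmann law; the radius and temperature
    % ratios are the approximate values quoted in the text.
    \[
      \frac{L}{L_{\odot}}
      = \left(\frac{R}{R_{\odot}}\right)^{2}\left(\frac{T}{T_{\odot}}\right)^{4}
      \approx (100)^{2}\times(0.5)^{4} \approx 6\times 10^{2},
    \]
    % and for a radius a few times larger the ratio climbs to several thousand.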
Pressures build up inside the protostar at this stage from the elevated temperatures, but the gravitational force is still stronger than the thermal pressures, so contraction continues albeit at a slower rate. Heat diffuses from the core of the protostar to the surface, where it is radiated into space, limiting the rise in temperature but allowing contraction to continue. If this did not happen then the temperatures would rise in the star and it would not contract enough to reach densities at which nuclear fusion would begin, and it would not form into a true star, but would remain a dim protostar.
The protostar continues to move down on the evolutionary H-R diagram toward higher temperature and lower luminosity as the surface area shrinks. Internal densities and temperatures increase while the surface temperature remains about the same, but the surface can be intensely active and be associated with intense solar winds such as those that characterize T-Tauri stars. A possible example of a protostar in this stage of evolution in the Orion molecular cloud is known as the Kleinmann-Low Nebula, which emits strong infrared electromagnetic radiation at about 1,000 times the solar luminosity. Some protostars are surrounded by dense, dark dust clouds that absorb most of the ultraviolet radiation emitted by the protostars. This dust then reemits the radiation at infrared wavelengths, where it appears as bright objects. Since the source of the radiation is cloaked in a blanket of dust, these types of structures have become known as cocoon nebulae.
Horsehead Nebula in the Orion molecular cloud complex. This image was produced from three images obtained with the multimode FORS2 instrument at the second VLT Unit Telescope. (European Southern Observatory)
By the time the protostar has shrunk to about 10 times the size of the sun, the surface temperature is about 4,000 K and its luminosity is now about 10 times that of the sun. However, the central temperature has risen to about 5,000,000 K, which is enough to completely ionize all the gas in the core but not high enough to start nuclear fusion. The high pressures cause the gravitational contraction to slow, with the rate of continued contraction controlled by the rate at which the heat can be transported to the surface and radiated away from the protosphere. Strong presolar winds at this stage blow hydrogen and carbon monoxide molecules away from the protostar at velocities of 60 miles per second (100 km/sec). These winds encounter less resistance in a direction that is perpendicular to the plane of the disk formed from the flattened interstellar cloud fragment, since there is less dust in these directions. In this stage, therefore, some protostar nebulae exhibit a strong bipolar flow structure, in which strong winds blow jets of matter out in the two directions perpendicular to the plane of the disk. This strange-looking structure eventually decays, though, as the strong winds blow the dust cloud away in all directions.
Approximately 10 million years after becoming a protostar, when the temperature reaches about 10,000,000 K and the radius about 6,200 miles (10,000 km), nuclear fusion begins in the center of the protostar. At this stage, a true star is born and is typically a little larger than our present sun, the surface temperature is about 4,500 K, and the luminosity is about two-thirds that of the present sun.
| https://www.climate-policy-watcher.org/plate-tectonics/collapsing-interstellar-cloud-fragment.html
Innovation is the collective term for anything that advances science, technology, medicine, engineering, and other fields. Technological systems are generally defined as systems with technical aspects. The word technology is also used to refer to a range of inventions, the application of new methods in different fields, or the process of adapting established practices to new areas. Technological systems are typically developed to serve a social or practical need, producing new knowledge, new methods, and new forms of organization. In most cases, they are created to improve human well-being and quality of life.
Technological systems are developed to help people solve problems. Technology helps us make things faster, deliver better services, reduce costs, increase productivity, and offer a variety of new and improved products. Put simply, technological systems facilitate people's interaction with one another. Seen this way, technology is the application of scientific and mathematical knowledge to activities that improve people's lives in more everyday ways.
Technological systems come in many forms. Some of the most common include using technology to build, analyze, design, and maintain things such as cars and airplanes; to gather, store, and disseminate data and information; to build and implement computer networks; to develop and run information management systems; and to design and manufacture products. Other applications include using technology to communicate with others; to generate and harness electricity and power; to provide security services; to transmit and receive signals and information; and to control and make use of the physical structures of the world around us. Still other, more esoteric applications are hard to define precisely, yet they have no less of an effect on society.
In terms of what technology changes or improves, the possibilities are wide-ranging. One example is education, which is often considered a high-technology endeavor. Teachers use teaching and learning methods such as reading to pass information and knowledge on to students; they create and build things like school networks of students; and they use digital devices and technology to share large amounts of information. All of these can be considered advances in education. In addition, education is an industrialized activity, meaning that many of its processes occur at a very high rate in settings that do not always include schools or universities.
At the same time, some argue that the negative results of digital technologies can only be regarded as side effects. There are two main views on this. One view is that the existence of digital technologies has had adverse consequences, whether or not we choose to look at it that way. The second view is that digital technologies have not necessarily had negative consequences, and that the adverse ones that do exist can readily be changed. It is usually the second view that is used to justify the unchecked growth of high technology. The presence of services like Facebook and Twitter shows that there are still parts of the world where technology is being used without problems, suggesting that the growth of technology need not always have negative repercussions.
For many educators and policy makers, the trouble with the growth of digital technologies, especially artificial intelligence, is that it is not neutral. Rather, artificial intelligence is built for a purpose: to act as a machine that can perform tasks for people, providing them with answers to their questions or solutions to their problems, provided that those things are permitted by the users and their privacy is protected. Many people worry about the implications of artificially intelligent computers interacting unethically with human beings. Others fear the potential danger of artificial intelligence becoming too human-like, and becoming so capable that it starts to take control rather than being controlled by humans.
One argument that seems to represent the more pessimistic view of the influence of the second industrial revolution is that of Max Weber, who holds that in the second industrial revolution technology was merely the tool that enabled the development of the human condition. In his view, there would be no long-lasting impact of technological progress, because as people became less poor they became smarter; technology simply added to the capacities of the human condition, giving people access to new technical know-how. In his view, there would be little to worry about as long as people continue to use their minds to innovate and remain productive.
Nonetheless, there are also those who are pessimistic about the effect of technology on the human condition. They argue that as technology becomes more complex, it becomes harder for people to control, meaning that one can easily become a victim of the system. There may be unexpected consequences, such as when artificially intelligent software begins to operate beyond the control of its designers. Such unforeseen consequences can pose a risk to people's freedom and could bring about dire outcomes. For these reasons, it is vital to strike a balance between the desire for technological improvement and concern for the well-being of the human species.
The changes that technology has made in society are large and ongoing. Developments in technology have created new job opportunities, strengthened the economy, increased free time, and given consumers and businesses new ways to engage with each other. As new types of technology emerge, the ways people interact, conduct business, and connect will also change. The impact of technology on society continues to grow.
One of the biggest changes in innovation is the cell phone. Cellular phone have come to be a kind of innovation that people carry about with them constantly. People can correspond with others while on the go. Cell phone companies are continuously working to make cellular phone devices better as well as attractive.
Adjustments in Modern technology impact almost every aspect of our lives. In addition to mobile phone, various other modern technology staples such as televisions and computers are likewise ending up being integral parts of our daily lives. While these modern technologies remain to boost, they additionally often produce new obstacles for those that use them. Computer game are becoming extra complicated, creating the more youthful generation to play video games that are more difficult, which can cause a boost in the number of cases of computer viruses. As well, as culture remains to establish better ties with technology, it is most likely that government law of this technology will certainly end up being more stringent in the future. get more info
The changes that innovation have actually made in society as well as in our lives are substantial. Adjustments such as these have developed tremendous opportunities for those who are proficient in operation computers as well as technology. Those who get entailed early in the advancement of innovation will be able to take advantage of brand-new technologies as well as make the most of their productivity. Those that want studying these impact the growth of society can find a number of internet sites that concentrate on this topic. A fantastic instance of a web site that shows exactly how innovation has actually impacted society can be found at TechCrunch. | http://rebeldreams.net/2021/11/02/just-how-will-modern-technology-be-in-the-future/ |
This study aimed to examine the influence of anxiety sensitivity on task performance and physiological stress response, and to assess the effect of depression in this process for the youth population.
We presented participants with an uncontrollable stress situation where they were required to perform mental arithmetic, based on the Montreal Imaging Stress Task (MIST). A total of 29 participants volunteered for this study. They completed the Anxiety Sensitivity Index-Revised and Patient Health Questionnaire-9 to measure their levels of anxiety sensitivity and depression. Two saliva samples, one before and one after the experiment, were collected to assess the change in cortisol levels as an index of physiological stress response.
Participants with high anxiety sensitivity showed lower performance on the mental arithmetic tasks and a significant increase in salivary cortisol levels compared to those with low anxiety sensitivity. Furthermore, cortisol levels showed a remarkable increase where high anxiety sensitivity was coupled with depressed mood. In contrast, cortisol levels remained unchanged where high anxiety sensitivity was coupled with low depressed mood.
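The abstract does not spell out the statistical procedure behind this comparison; purely as a hypothetical illustration of how the change in salivary cortisol might be compared across high and low anxiety-sensitivity groups (all numbers below are invented), one could do something like:

```python
import numpy as np
from scipy import stats

# Hypothetical pre- and post-task cortisol readings (nmol/L) for two groups
high_as_pre  = np.array([8.1, 7.4, 9.0, 6.8, 7.9])
high_as_post = np.array([12.3, 11.0, 13.5, 10.2, 12.8])
low_as_pre   = np.array([7.8, 8.2, 6.9, 7.5, 8.0])
low_as_post  = np.array([8.1, 8.0, 7.2, 7.9, 8.3])

# Change in salivary cortisol as the index of physiological stress response
high_delta = high_as_post - high_as_pre
low_delta  = low_as_post - low_as_pre

# Welch's t-test on the cortisol change scores of the two groups
t, p = stats.ttest_ind(high_delta, low_delta, equal_var=False)
print(f"High-AS mean change: {high_delta.mean():.2f}, Low-AS mean change: {low_delta.mean():.2f}")
print(f"Welch t = {t:.2f}, p = {p:.4f}")
```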
Our results confirm that the interaction between anxiety sensitivity and depression affects participants' task performance and stress response, as measured through behavioral tasks and physiological data alongside self-report indices. The physiological data also showed that those with a high level of anxiety sensitivity exhibited maladaptive responses under highly stressful situations.
Numerous studies have found that coping style is a critical mediator in the relation between traumatic events and psychological outcomes. This study investigates the associations between coping strategies and post-traumatic stress symptoms (PTSS) in Korean adults.
Through an online survey, 554 non-clinical adult respondents were recruited. We assessed PTSS using the Impact of Event Scale-Revised (IES-R) scale and measured individual coping strategies using the Ways of Coping Checklist (WCCL). Based on the IES-R standard cut-off score, we categorized the respondents into 3 groups: normal (n=255), non-PTSS (n=185) and PTSS (n=144) after exposure to traumatic events.
The scores for each coping strategy in the PTSS group were generally higher than in either the normal or the non-PTSS group. In the logistic regression analysis, the PTSS group was 2.77 times more likely to use tension-reduction coping compared with the other two groups.
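The abstract reports the odds ratio but not the modelling details; the following is only a sketch, with synthetic data and invented variable names, of how a logistic regression of PTSS status on coping subscale scores might be run and its coefficients read as odds ratios:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300

# Synthetic stand-ins for WCCL coping subscale scores
df = pd.DataFrame({
    "tension_reduction": rng.normal(2.0, 0.8, n),
    "problem_focused":   rng.normal(2.5, 0.7, n),
})

# Simulated PTSS status (IES-R above cut-off) with an assumed effect, for illustration only
logit_true = -3 + 1.0 * df["tension_reduction"]
df["ptss"] = rng.binomial(1, 1 / (1 + np.exp(-logit_true)))

X = sm.add_constant(df[["tension_reduction", "problem_focused"]])
model = sm.Logit(df["ptss"], X).fit(disp=False)

# Exponentiated coefficients are odds ratios (the abstract reports an OR of about 2.77)
print(np.exp(model.params))
```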
Our findings suggest that PTSS is associated with a high inclination to apply emotion-focused coping, such as tension reduction, which contributes to psychological distress. These results point to the potential value of coping strategies in the prevention of, and therapeutic approaches to, PTSS for non-clinical adults.
What is Eudaimonic life?
Eudaimonic well-being refers to the subjective experiences associated with eudaimonia or living a life of virtue in pursuit of human excellence. The phenomenological experiences derived from such living include self-actualization, personal expressiveness, and vitality.
What are the 3 levels of happiness?
Some psychologists have suggested that happiness consists of three distinct elements: the pleasant life, the good life, and the meaningful life (Seligman, 2002; Seligman, Steen, Park, & Peterson, 2005).
What is eudaimonia example?
Ascribing eudaimonia to a person, then, may include ascribing such things as being virtuous, being loved and having good friends. But these are all objective judgments about someone’s life: they concern a person’s really being virtuous, really being loved, and really having fine friends.
How do you operationalize anger?
Since one of the measures of anger is loudness, the researcher can operationalize the concept of anger by measuring how loudly the subject speaks compared to his normal tone. However, this assumes that loudness is a uniform measure. Some might respond verbally while others might respond physically.
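As a purely illustrative sketch of this operationalization (the signals, sample length and scoring rule below are hypothetical, not taken from any actual study), anger could be scored as the increase in vocal loudness over the subject's own baseline:

```python
import numpy as np

def rms_db(signal):
    """Root-mean-square level of an audio signal, in decibels (relative units)."""
    rms = np.sqrt(np.mean(np.square(signal)))
    return 20 * np.log10(rms + 1e-12)

# Hypothetical audio samples (in practice, loaded from recordings of the subject speaking)
baseline_speech = np.random.default_rng(1).normal(0, 0.05, 16000)   # normal tone
response_speech = np.random.default_rng(2).normal(0, 0.20, 16000)   # response under provocation

# Operationalized "anger" score: loudness increase over the subject's own baseline
anger_score_db = rms_db(response_speech) - rms_db(baseline_speech)
print(f"Loudness increase over baseline: {anger_score_db:.1f} dB")
```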
Is operationalize a real word?
A: The verb “operationalize” may be clunky and relatively new, but it’s a legitimate word, with roots in ancient Rome. This more recent sense of “operationalize,” defined by the Oxford English Dictionary as “to put into effect” or “to realize,” dates from the early 1980s.
What is the highest good in life?
Happiness is the highest good because we choose happiness as an end sufficient in itself. Even intelligence and virtue are not good only in themselves, but good also because they make us happy. We call people “good” if they perform their function well.
How do I get Eudaimonia?
For Aristotle, eudaimonia was achieved through living virtuously – or what you might describe as being good. This doesn’t guarantee ‘happiness’ in the modern sense of the word. In fact, it might mean doing something that makes us unhappy, like telling an upsetting truth to a friend. Virtue is moral excellence.
What are two qualities of happiness according to Aristotle?
The Pursuit of Happiness as the Exercise of Virtue. According to Aristotle, happiness consists in achieving, through the course of a whole lifetime, all the goods — health, wealth, knowledge, friends, etc. — that lead to the perfection of human nature and to the enrichment of human life.
What are examples of happiness?
The definition of happiness is the state of joy, peace and tranquility. An example of happiness is a bride’s feeling of joy on her wedding day. The emotion of being happy; joy. (archaic) Good luck; good fortune; prosperity.
What is operationalization in business?
The definition of operationalize is to put into operation or use. To be operationalized, the plan should engage the whole company in the work, not just the C-suite. Once the plan is finished, the top level projects roll into actions that roll down to departments, etc.
What are the 7 keys to happiness?
7 Remarkably Powerful Keys to Happiness and Success (According to Science)
- Live (or work) in the moment. Don’t get stressed thinking about things you have to do later on.
- Find resilience.
- Manage your energy.
- Do nothing.
- Treat yourself well.
- Venture outside your comfort zone.
- Show compassion outwardly.
Can we measure how happy we are?
Can You Test and Measure Happiness Scientifically? A great number of measures for happiness are self-report assessments. This might make most of us think that happiness cannot be measured scientifically. However, these self-assessments are often created in a scientific manner through research, testing, and norming.
How do you operationalize an idea?
How to operationalize concepts
- Identify the main concepts you are interested in studying.
- Choose a variable to represent each of the concepts.
- Select indicators for each of your variables (a sketch of this mapping follows below).
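A minimal, hypothetical sketch of what such an operationalization plan might look like when written down (the concepts, variables and indicators below are only examples):

```python
# One way to record an operationalization plan: concept -> variable -> indicators
operationalization = {
    "happiness": {
        "variable": "subjective well-being",
        "indicators": ["life-satisfaction rating (1-10)",
                       "frequency of positive affect (daily diary)"],
    },
    "anger": {
        "variable": "vocal intensity relative to baseline",
        "indicators": ["loudness increase in dB", "observer-rated hostility"],
    },
}

for concept, plan in operationalization.items():
    print(f"{concept}: measure '{plan['variable']}' via {', '.join(plan['indicators'])}")
```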
What is the concept of happiness?
Happiness is an emotional state characterized by feelings of joy, satisfaction, contentment, and fulfillment. While happiness has many different definitions, it is often described as involving positive emotions and life satisfaction. Happiness is generally linked to experiencing more positive feelings than negative.
What is the first step to happiness?
10 Simple Steps to a Happier You
- GIVING. Do things for others.
- RELATING. Connect with people.
- EXERCISING. Take care of your body.
- APPRECIATING. Notice the world around you.
- TRYING OUT. Keep learning new things.
- DIRECTION. Have goals to look forward to.
- RESILIENCE. Find ways to bounce back.
- EMOTION. Take a positive approach.
What does Eudaimonia mean in English?
Eudaimonia, also spelled eudaemonia, in Aristotelian ethics, the condition of human flourishing or of living well.
What is the most important step in Operationalisation of the strategy?
Well-implemented strategic planning provides the vision, direction and goals for the organization, but operational planning translates that strategy into the everyday execution tactics of the business that will ultimately produce the outcomes defined by the strategy.
What are the two types of happiness?
The first type, known as eudaimonic well-being, is happiness associated with a sense of purpose or a meaning in life. The second, known as hedonic well-being, is happiness as the result of “consummatory self-gratification” or happiness not associated with a purpose but rather a response to a stimulus or behavior.
What are the 4 levels of happiness?
Four Levels of Happiness
- Level 1: Pleasure. The first level of happiness includes the fundamental drivers in your life — physical pleasure and immediate gratification.
- Level 2: Passion.
- Level 3: Purpose.
- Level 4: Ultimate Good.
- How 7 Summit Pathways Can Help.
Why is Eudaimonia not for everybody?
Eudaimonia is an end: we use all other goods to achieve it, and thus eudaimonia is the highest end for human beings (it requires reason, which is distinctly human). Many people will not reach eudaimonia because they do not have adequate resources, and they may well know they will never reach it.
What brings true happiness?
True happiness is enjoying your own company and living in peace and harmony with your body, mind and soul. True happiness is a state of mind: constantly being in love with yourself. To be truly happy you need neither other people nor material things. “Happiness is the consequence of personal effort.”
How do you operationally define anxiety? | https://www.kingfisherbeerusa.com/what-is-eudaimonic-life/ |
Poseidon is the Greek god of the sea who also had the ability to create storms and earthquakes. He is represented holding a trident, a large three-prong spear. He was married to Amphitrite, the queen of the sea, but had many children through other extramarital relationships.
Poseidon was one of the Twelve Olympians, and his brothers were Zeus and Hades. Zeus would become the god of the skies while Hades would become the god of the underworld. These are some of the most interesting facts about the Greek god of the sea, Poseidon.
Fact #1: Poseidon had Children with Medusa
Medusa was a Gorgon monster with snakes for hair who could turn anyone to stone when they looked into her eyes. She was once a beautiful woman, a maiden who served Athena in her temple.
While Medusa was in Athena’s temple, Poseidon raped her, leaving her pregnant with two children. Poseidon fled the temple right before Athena discovered that Medusa had been violated by the god of the sea within her temple.
When Athena discovered Medusa had been intimate with Poseidon in her temple, Athena cursed her with ugliness and snakes for hair and cast her out of the temple. She would come to live on an island near the Hesperides and Sarpedon.
When Perseus beheaded Medusa after she was turned into a monster, two children sprang from her neck. The two children were children of Poseidon. The children were born at the same time, and they were known as Pegasus and Chrysaor. Pegasus is a white-winged horse, and Chrysaor is a young man with a golden sword.
Fact #2: Poseidon Competed Against Athena for the City of Athens
The city of Athens was in need of a deity to protect and oversee it. Cecrops was the king of Athens and asked Poseidon and Athena to present him with a gift that would benefit the city. This gift would determine who was worthy of overseeing the city of Athens.
In some versions of the myth, Poseidon presents the city of Athens with a horse, while other versions say he struck the earth with his trident, which caused a stream of saltwater to enter the city.
Athena used her power of knowledge and wisdom to present her gift to Cecrops and the city. Her gift was the olive tree. The olive tree represented peace and prosperity on earth.
Cecrops was impressed with Athena’s gift and how it would greatly benefit the city of Athens. He granted Athena the city, which would be named after her.
Poseidon was angry with the decision. He and Athena quickly became rivals.
Fact #3: Poseidon, Greek God of the Horses
Poseidon was attempting to win the love and affection of Demeter, Greek goddess of the harvest, to no avail. Poseidon was repeatedly rejected by Demeter until she finally gave him a challenge: she asked him to create the most beautiful animal in the world.
Poseidon went to work and took a great deal of time creating the animal. He took so long that, by the time he finally finished the horse and presented it to Demeter, he found he was no longer in love with her.
Poseidon would come to be known as the god of the horses in addition to being the god of the seas and earthquakes.
When Poseidon rides in his chariot, he is pulled by horses; a symbolic connection to the animal that he spent so much time creating. | https://www.theoi.com/articles/poseidon-facts-some-interesting-points/ |
Artist’s rendering of a GRB, where life-damaging gamma rays are produced by two relativistic jets powered by a new-born black hole at the heart of an exploding massive star.
Image credit: ESO/A Roquette.
A new study confirms the potential hazard of nearby gamma-ray bursts (GRBs), and quantifies the probability of an event on Earth and more generally in the Milky Way and other galaxies. The authors find a 50% chance that a nearby GRB powerful enough to cause a major life extinction on the planet took place during the past 500 million years (Myr). They further estimate that GRBs prevent complex life like that on Earth in 90% of the galaxies.
GRBs occur about once a day from random directions in the sky. Their origin remained a mystery until about a decade ago, when it became clear that at least some long GRBs are associated with supernova explosions (CERN Courier September 2003 p15). When nuclear fuel is exhausted at the centre of a massive star, thermal pressure can no longer sustain gravity and the core collapses on itself. If this process leads to the formation of a rapidly spinning black hole, accreted matter can be funnelled into a pair of powerful relativistic jets that drill their way through the outer layers of the dying star. If such a jet is pointing towards Earth, its high-energy emission appears as a GRB.
The luminosity of long GRBs – the most powerful ones – is so intense that they are observed throughout the universe (CERN Courier April 2009 p12). If one were to happen nearby, the intense flash of gamma rays illuminating the Earth for tens of seconds could severely damage the thin ozone layer that absorbs ultraviolet radiation from the Sun. Calculations suggest that a fluence of 100 kJ/m2 would create a depletion of 91% of this life-protecting layer on a timescale of a month, via a chain of chemical reactions in the atmosphere. This would be enough to cause a massive life-extinction event. Some scientists have proposed that a GRB could have been at the origin of the Ordovician extinction some 450 Myr ago, which wiped out 80% of the species on Earth.
With increasing statistics on GRBs, a new study now confirms a 50% likelihood of a devastating GRB event on Earth in the past 500 Myr. The authors, Tsvi Piran from the Hebrew University of Jerusalem and Raul Jimenez from the University of Barcelona in Spain, further show that the risk of life extinction on extra-solar planets increases towards the denser central regions of the Milky Way. Their estimate is based on the rate of GRBs of different luminosity and the properties of their host galaxies. Indeed, the authors found previously that GRBs are more frequent in low-mass galaxies such as the Small Magellanic Cloud with a small fraction of elements heavier than hydrogen and helium. This reduces the GRB hazard in the Milky Way by a factor of 10 compared with the overall rate.
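The study's actual calculation is far more detailed, but the headline figure can be illustrated with a simple assumption: if lethal nearby GRBs are treated as a Poisson process, a 50% chance over 500 Myr implies a rate of roughly 1.4 such events per thousand-million years. A short sketch of that arithmetic:

```python
import math

# If lethal nearby GRBs occur as a Poisson process with rate r (events per Gyr),
# the chance of at least one event in a window t is P = 1 - exp(-r * t).
def prob_at_least_one(rate_per_gyr, window_gyr):
    return 1 - math.exp(-rate_per_gyr * window_gyr)

window = 0.5                          # 500 Myr expressed in Gyr
implied_rate = math.log(2) / window   # rate implied by a 50% probability
print(f"Implied rate: {implied_rate:.2f} lethal events per Gyr")
print(f"Check: P(>=1 event in 0.5 Gyr) = {prob_at_least_one(implied_rate, window):.2f}")
```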
The Milky Way would therefore be among only 10% of all galaxies in the universe – the larger ones – that can sustain complex life in the long-term. The two theoretical astrophysicists also claim that GRBs prevent evolved life as it exists on Earth in almost every galaxy that formed earlier than about five-thousand-million years after the Big Bang (at a redshift z > 0.5). Despite obvious, necessary approximations in the analysis, these results show the severe limitations set by GRBs on the location and cosmic epoch when complex life like that on Earth could arise and evolve across thousands of millions of years. This could help explain Enrico Fermi’s paradox on the absence of evidence for an extraterrestrial civilization. | https://cerncourier.com/a/gamma-ray-bursts-are-a-real-threat-to-life/ |
Limiting or even stopping misinformation from spreading is a concern that is currently challenging social media platforms, media players and citizens in general. Distinguishing facts from fiction, though, is not always a deterministic task; subjective forces like human values, beliefs and motivations come to play when judging a piece of information and deciding whether or not to promote it.
Problems of manual and automated fact-checking
Fact-checkers have played an important role in this battle by critically evaluating claims and making the information about their veracity and credibility available for citizens. They are usually journalists equipped with skills and a methodology to detect verifiable claims and look for pieces of evidence, committing to a set of principles that maximise neutrality and adherence to the truth. However, the volume and speed of information circulating on social media are beyond the reach of any network of fact-checkers, especially due to the fact that most of the curation assessment is still done manually. And there is also the problem of reaching the citizens, as can be seen by comparing the reach of debunked hoaxes with the corresponding fact-checking articles.
Automated and AI-featured tools are expected by some to be the solution for assessing the “truth” and defeating the different forms of misinformation. Regardless of the speed of computational approaches and their ability to process huge volumes of data, this expectation is very likely to fail if the intelligence in place is purely artificial.
At the current state-of-the-art, AI-based tools still face several limitations including their accuracy, when the output provided is wrong; the eventual non-neutrality in the judgments inherited by design, leading to bias and prejudice and, connected to that, the lack of transparency to end users on how the decisions are made.
As pointed out by the European Parliament, such limitations may pose a threat to freedom of expression, pluralism and democracy. For this reason, the EU Parliament argues in favour of regulating AI for content moderation only with strong human participation, a practice already adopted for some fact-checking agencies.
Extended Intelligence: combining efforts of fact-checking with the power of AI
Extended Intelligence is an approach that combines AI with the expertise of fact-checkers or citizens' critical thinking, rather than replacing humans in decision-making processes.
The collaboration between fact-checkers and AI needs to be quite tight. Experts' knowledge can be used to train AI models in learning features and extracting insights. Openly available collections of fact-checked claims can also be used to this end (e.g. Google Fact Check Explorer and DataCommons). Based on learned cues, AI models can predict the credibility of new stories, or they can provide related documents to speed up the fact-checkers' analysis.
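As a purely illustrative sketch of this idea (not Co-Inform's actual models; the claims, labels and model choice below are invented), a simple credibility classifier could be trained on claims that fact-checkers have already reviewed:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, made-up training set standing in for claims already reviewed by fact-checkers
claims = [
    "Vaccine X contains microchips that track citizens",
    "City council approves new budget for public schools",
    "Drinking bleach cures the virus within hours",
    "Central bank raises interest rates by 0.25 percentage points",
]
labels = ["not_credible", "credible", "not_credible", "credible"]

# Bag-of-words features feeding a linear classifier
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(claims, labels)

new_claim = "Miracle supplement reverses ageing overnight"
print(model.predict([new_claim])[0])
print(model.predict_proba([new_claim]))   # confidence scores a fact-checker can review
```

In an extended-intelligence workflow, such predictions would only prioritise and support the expert's review, with the final judgment and accountability remaining with the fact-checker.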
Transparent models allow experts to investigate and refine the assumptions and features that led to results. The experts then can also verify the predictions and validate the results before publishing them, as the accountability belongs to the journalists.
AI and experts merge their intelligence in perfect tandem, avoiding repetitive work by the experts and wrong decisions by AI tools.
Promoting critical thinking
Trusting information from social media platforms has been an increasing challenge to citizens in general. Major platforms (e.g. Facebook, YouTube) and other tools (e.g. browser extensions, searchable collections) are trying to better connect citizens with fact-checked information.
Properly providing accurate information is an essential measure. However, research has shown that being aware of the factual information is not enough to stop misinformation spreading. In a scenario of media pluralism, technical solutions have to take humans’ subjectivity into account and invite people to critically think, estimate risks and judge the impact of that information to the society.
Instead of censoring or banning content, tools should instigate users’ curiosity, inviting them to act as active thinkers by investigating details and explanations and, consequently learning how to discriminate good quality content.
Open challenges
With extended intelligence, AI is a bridge to convey and amplify expert opinion, both in terms of coverage and reach. Extended coverage means being able to predict the credibility of a certain piece of news with a certain level of confidence. Instead, the extended reach manifests in making the expert judgment more accessible, allowing more users to be in contact with it.
However, a set of challenges remains open. One example is defining the boundary between opinion and facts. How do we distinguish opinion and partisan interpretation from the misinforming manipulation of events, in a scenario where points of view are, and must be, many and different?
The answer to this question sets the direction between improving society by promoting transparency and dialogue or, on the other side, to block freedom of expression by using censorship. This is a problem that is not new with manual fact-checking, and with Extended Intelligence, we need to be sure to empower the efforts in the right direction.
For this reason, the tools that will be developed during the Co-Inform project will focus on widening the points of view, conveying the expert knowledge and stimulating critical thinking.
Although exercise for both fitness and recreation has become a routine activity for many women, the effects of physical exertion on pregnancy outcome are not known, since there has not been a prospective, randomized trial studying this issue. Decreased rate of weight gain and subcutaneous fat deposition in the third trimester has been demonstrated in one (nonrandomized) study, although overall weight gain remained within the normal range. 25 In general, uncomplicated pregnancy should not limit the ability to engage in moderate physical exercise. The American College of Obstetricians and Gynecologists recommends some modifications in exercise routines in view of the physiologic and morphologic changes of gestation. 5 Non-weight-bearing activity and activities that minimize the chance of even mild abdominal trauma are preferable. Exercise in the supine position should be avoided completely after the first trimester due to potential for decreased cardiac output. Although there does not appear to be a need to alter goal intensity as judged by heart rate, exercise should be stopped at the onset of fatigue rather than continuing to exhaustion. Extra attention should be given to augmentation of heat dissipation, hydration, and appropriate clothing during activities. Adequate dietary intake should be ensured. As for all individuals, regular activity is preferable to sporadic exertion. Subjective benefits of exercise both preconception and during early pregnancy have been reported.26 Specific recommendations for exercise during pregnancy need to be individualized and made with the knowledge that there is no conclusive evidence on which to base recommendations.
Here's a letter I wrote in response to the article below (which was never published):
I read with great interest the recent "Scientist at Work" piece in Science Times focusing on Nick Patterson. I work in a similar area to Dr. Patterson and liked the way the piece developed the concept of a "Data Analyst". In particular, I thought it important that it showed that in seemingly disparate fields such as Military Intelligence, Finance, and Genomics, there was a common thread of having to grapple with and analyze large amounts of data. Increasingly, the modern world is being transformed by vast sea of social and commercial information being tracked on the internet and by the huge amounts of data being generated by high-throughput scientific experimentation. Because of this, we are increasingly being confronted with large data sets in many fields. The challenge is how to mine them as Dr. Patterson does. A related issue is how we should educate a new generation of ace analysts who can take a more straightforward path to their problem than Dr. Patterson has.
http://www.nytimes.com/2006/12/12/science/12prof.html
Scientist at Work | Nick Patterson
A Cryptologist Takes a Crack at Deciphering DNA's Deep Secrets
By INGFEI CHEN
Published: December 12, 2006
Thirty years ago, Nick Patterson worked in the secret halls of the Government Communications Headquarters, the code-breaking British agency that unscrambles intercepted messages and encrypts clandestine communications. He applied his brain to "the hardest problems the British had," said Dr. Patterson, a mathematician. Today, at 59, he is tackling perhaps the toughest code of all — the human genome... Genomics is a third career for Dr. Patterson, who confesses he used to find biology articles in Nature "largely impenetrable." After 20 years in cryptography, he was lured to Wall Street to help build mathematical models for predicting the markets. His professional zigzags have a unifying thread, however: "I'm a data guy," Dr. Patterson said. "What I know about is how to analyze big, complicated data sets." In 2000, he pondered who had the most interesting, most complex data sets and decided "it had to be the biology people."...
On 8 July 2020 the Chancellor announced a temporary Stamp Duty Land Tax (SDLT) holiday for purchases concluded by 31 March 2021.
Those who are at an advanced stage in the buying process are likely to be able to take advantage of the lower SDLT rates.
However, it is generally acknowledged that, for those who instructed solicitors after 01 January 2020, time is running out for the new transactions to complete in time to take advantage of the Chancellor's concession, and that they will therefore have to pay SDLT.
The increase in sales volume generated by the SDLT holiday has created a number of capability gaps. Solicitors and surveyors are fully committed; however, banks and local authorities are struggling with the reduced capacity due to the pandemic forcing home working as well as an increase in the number of applications. This slows down the process, increases stress and makes buyers less discerning and more likely to accept sub optimal deals in the race to meet the 31 March deadline.
Latest position as at 03 March 2021
On 03 March 2021, the chancellor announced that there will be an extension to the temporary SDLT holiday.
The Stamp Duty rates that came into effect on July 8 2020 were originally proposed to end on 31 March 2021. However, these rates will now remain in place until 30 June 2021.
There will also be a further tapered reduction of Stamp Duty rates. The nil-rate band of £500,000 will continue until the end of June, following which the nil-rate band will be adjusted to £250,000 from 30 June 2021 to 30 September 2021. From 01 October 2021, Stamp Duty will return to its previous levels with the nil-rate band set at £125,000.
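As a rough illustration (assuming the standard residential SDLT bands for England and Northern Ireland, ignoring the 3% surcharge and first-time-buyer relief), the effect of the widened nil-rate band can be computed band by band; the figures reproduce the £15,000 maximum saving and the £2,118 charge on an average-priced property cited later in this article:

```python
def sdlt(price, bands):
    """Sum tax across progressive bands: bands = [(upper_limit, rate), ...]."""
    tax, lower = 0.0, 0.0
    for upper, rate in bands:
        if price > lower:
            tax += (min(price, upper) - lower) * rate
        lower = upper
    return tax

INF = float("inf")
# Assumed band structures for a main-residence purchase (illustrative only)
standard = [(125_000, 0.00), (250_000, 0.02), (925_000, 0.05), (1_500_000, 0.10), (INF, 0.12)]
holiday  = [(500_000, 0.00), (925_000, 0.05), (1_500_000, 0.10), (INF, 0.12)]

for price in (230_920, 500_000):
    saving = sdlt(price, standard) - sdlt(price, holiday)
    print(f"£{price:,}: standard £{sdlt(price, standard):,.0f}, "
          f"holiday £{sdlt(price, holiday):,.0f}, saving £{saving:,.0f}")
```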
Buyers purchasing a second home or a buy-to-let investment can still take advantage of the widened Stamp Duty bands during the temporary cuts. However, such buyers will still be obliged to pay the Stamp Duty surcharge of 3% (England and Northern Ireland) on each band.
The effective date for SDLT payments
HMRC’s SDLT Manual provides that the rate of SDLT payable is the rate in force on the “effective date” of a transaction.
Ordinarily, the effective date is the date of the property transaction completion i.e. when the contract is completed. An SDLT return must be submitted and any SDLT due must be paid within 14 days of the effective date.
How can a buyer bring forward an effective date?
Depending on the circumstances of each case, there may be a couple of options available to bring forward the effective date, but each option is less than ideal and brings its own complications. These options may only be available to cash buyers and to those willing to take the associated risks. They are not for the faint-hearted.
The two options:
- Complete a contract early; or
- Substantially perform the contract.
Completing the contract early
Where a property is not yet built, or where a seller is in a chain and cannot move out by 31 March, you could consider completing the contract early and deferring payment of the purchase price until the actual handover of the property.
The issues that could potentially arise from completing the contract early are:
- If purchasing with a mortgage, some lenders may refuse to lend;
- If the purchaser subsequently decides to withdraw, it would have to transfer its legal interest back (providing it has this option reserved as a right in the contract and no contractual disputes arise) and this may cause a second SDLT charge to arise which would have to be paid by the seller;
- If the seller is currently occupying the property, the buyer would have to grant a licence to occupy and if the seller subsequently refuses to leave, the buyer would have to start eviction proceedings in the courts.
Substantial performance of the contract
Where a contract is substantially performed the effective date will be the date of substantial performance, which is triggered by performance of the consideration specified in the contract. A contract may be deemed to be substantially performed if one of the following criteria are met:
- Substantial payment of the purchase price is made (i.e. more than 90% of the total consideration due under the contract);
- Any payment of rent is made;
- The purchaser is entitled to possession of the property (difficult for new builds, but potentially possible in other cases).
There are no fixed requirements for substantial performance as this will differ in each transaction and it is for the buyer to establish substantial performance has occurred. HMRC can potentially challenge assumptions.
The advantage of substantial performance over completing the contract is that should the actual land transfer not occur, the transaction is deemed not to have happened and the buyer can request a refund from HMRC.
What happens when a contract is rescinded or annulled?
Where SDLT is paid upon substantial performance and the contract is rescinded or annulled later on, the purchaser may be able to claim a refund of the SDLT paid plus any interest incurred. If the purchaser is eligible for a refund of the SDLT paid then a claim would need to be made within 12 months of the filing date of the SDLT return.
Where a contract has been rescinded or annulled and you submit a claim for a refund of SDLT paid, Monarch Solicitors will not be held liable if you cannot reclaim back any SDLT paid.
What happens at legal completion?
Where a contract has been substantially performed, the actual transfer on completion will be a separate notifiable land transaction. Therefore, another SDLT return will need to be submitted at legal completion irrespective of whether any SDLT payment is due. If the SDLT payable at legal completion exceeds the amount paid at substantial performance, then the shortfall will need to be paid within 14 days of legal completion. If the SDLT amount due at legal completion is lower, the SDLT paid at substantial performance cannot be reclaimed.
In the instance where the SDLT payable at legal completion exceeds the SDLT payable at substantial performance, you will need to pay the SDLT payment shortfall and Monarch Solicitors will not take any responsibility for the SDLT payable shortfall.
Weighing up costs against the benefits
Consideration should be given to the additional cost of both tax advice and the additional legal documentation required against any potential savings, and the risks of a challenge by HMRC and the risk of the seller not performing their part of the contract.
In addition, consideration should be given to market factors too. After the Chancellor's announcement of the SDLT holiday, the increased demand for property transactions resulted in house price inflation beyond the normal range.
Under the current scheme, the maximum saving available is £15,000 for properties with a value of over £500,000. According to Nationwide Building Society, the average property sale price in England and Wales was £230,920 in December 2020, which would ordinarily mean that SDLT to be paid is £2,118, or less than 1% of the purchase price.
If the increased demand for property during the SDLT holiday causes property prices to increase by over 1%, the increase in property prices could exceed any cost savings associated with the SDLT holiday. Therefore, in reality the buyer could potentially end up paying more for a property purchase. Similarly, where a property price has increased by more than £15,000, rather than benefiting from the SDLT holiday the buyer would in fact be in a net loss position even if the full SDLT discount is applicable.
According to Right Move, property prices have fallen in the months prior to 9 January 2021. This may be correlated to a decrease of activity in the property market due to the realisation that any new property transactions may not be possible to complete before 31 March 2021.
It is predicted that from Spring onwards the property market will quieten down as a result of a number of factors such as the end of the SDLT holiday, the pandemic and the economic downturn. With this in mind, there may be a decline in property prices and as a result, there may be a shift in terms of bargaining power in favour of the buyers, which may result in more opportunities and bargains in the property market.
For the majority of people, buying a property represents the largest single financial transaction they will make in their lives, so it is important to consider all the issues and risks and to strike the right balance in order to make an informed decision before proceeding with any property transaction.
Disclaimer
If you do choose to pay over 90% of the property purchase price before the legal completion date to benefit from the SDLT holiday, this will be at your own risk and discretion.
Monarch Solicitors shall not be liable if HMRC later challenge your submission and find that your actions do not constitute as substantial performance of the contract.
Monarch Solicitors are not tax specialists and so do not provide advice on tax. We recommend that you take independent tax advice from a tax specialist, such as an accountant or financial advisor who has the appropriate indemnity insurance in place.
Although Monarch Solicitors will strive to legally complete your property purchase before 31 March 2021 there is no guarantee that this will be the case, due to the increase in demand and the delays caused by homeworking and COVID-19.
Monarch Solicitors shall not be liable if your property purchase attracts a higher rate of SDLT if it completes after the SDLT holiday. | https://www.monarchsolicitors.com/guides-articles/sdlt-holiday-for-post-31-march-completions/ |
Published on October 26, 2016
Can more public participation fix our broken democracy?
Simon Burall is a Senior Associate of Involve. He has extensive experience in the fields of democratic reform, governance, public participation, stakeholder engagement, and accountability and transparency.
I’m fortunate to be attending the two week long Open State Festival in Adelaide, South Australia. There are over 60 events covering the future of leadership, of cities, money, technology and democracy. I’m speaking and participating; it’s been both exciting and really challenging to my thinking. In this blog post I thought I’d digest part of what I said to the annual conference of the International Association for Public Participation (IAP2) Australasia.
I started out drawing on two areas of our work. The first was the Open Government Partnership in the UK, and the second the UK government sponsored Sciencewise Programme. Both of these provide strong examples demonstrating how well-designed public and stakeholder participation can have significant impacts in a number of different ways, on:
Public officials and politicians by developing their capacity to design and commission public engagement processes that have more impact, or take place earlier in the policy process; and
Participants, many of whom discover that, contrary to their expectations, they have valuable contributions to make to complex, highly technical public policy decisions.
IAP2 members all have their own compelling stories from local, regional and national level that would add depth to this. However, exciting and profound as some of these impacts are, I’m not sure that they get to the root causes of the problems we are seeing in many democracies across the world. Drawing on our experience of designing and delivering public engagement processes, there are two key reasons we need to avoid complacency, and indeed need to think and act differently.
Firstly, both of the pieces of work I drew on, and many of the strong examples around the world, are fragile. They work around current democratic processes and systems. As a result, they are highly dependent on the sponsorship of key people and are at risk when these people move on.
Secondly, except in the rarest of cases, awareness of these processes is extremely limited and often non-existent. They often appear expensive when looked at in isolation, and their rigour and robustness lies hidden. As a result, the critical role that the public and civil society have played in influencing key decisions is effectively unknown. This means that controversial decisions appear less democratic than they were.
There are of course more immediate and obvious reasons that we need to shake off our complacency. The vote to leave the EU is dominating much of the political discussion in the UK. However, we are not alone and populism, on both sides of the political spectrum, is obvious in the US with the rise of both Donald Trump and Bernie Sanders, and across the rest of Europe with the Five Star Movement in Italy and Front National in France, for example.
Too many people draw the wrong conclusions from these political phenomena, in fact they often draw offensive conclusions. The vast majority of people who voted for Brexit, who are supporting Trump or the comedian in charge of the Five Star Movement are not stupid. They are not rejecting evidence or experts just for the sake of it. They have lost out through globalisation and the impact of government policies; rightly or wrongly they have been ignored and taken for granted for decades, and they are angry.
My challenge to public engagement professionals is that we are worrying about the wrong things when we design and deliver public engagement processes. We need to worry less about designing a robust process in the room; what I call the ‘how’ of engagement. Instead we need to worry much more about two very different things:
the ‘why’ of engagement. Are we dealing with the real issues that the public, particularly the most disenfranchised, are facing in their lives?
the ‘where’ of engagement. Are we intervening in the democratic system in a place that will have a sustainable impact on the way decisions are taken in the future? Are we making it more likely that a wider range of views will influence future decisions?
The first is that we have to move away from talking as if our democracy rests solely on democratic elections. A strong democracy does not just have lots of people turning up once every couple of years to put a cross in the box. The shift we need to make when we think about democracy is to think in terms of deliberative capacity; the extent to which a wide range of views and perspectives are visible, interacting and influencing decisions.
Deliberative Capacity; the extent to which a wide range of views and perspectives are visible, interacting and influencing decisions.
Next, we need to stop thinking of democracy as only happening in the chambers where elected officials sit and in the offices of government ministers. Drawing on the work of the academic John Dryzek, Room for a View identifies seven components of the deliberative system.
This system can be used to assess the strength of democracy at any level of decision making, from the lowest level of government, to local government, council or state, national all the way through to the international level. When we are invited to join the intergalactic federation, it’ll probably apply there too!
As professional facilitators, we can run the best processes in the world, but if they aren’t helping to bring new views and perspectives, particularly from those groups whose voices are rarely if ever heard, into a part of the system where real power lies, then we’re just papering over the cracks; we’re helping to hide the hollowing out of our democracies, to disempower people and communities. And we therefore shouldn’t be surprised if they vote in ways that appear destructive (to us), but may in fact be the only way to be heard and have their concerns recognised.
In today’s world, there is an ever-increasing need for intelligent systems: systems that can learn and adapt to a changing environment. In order to achieve this, intelligent systems usually incorporate, or are based on, models or simulations of the environment. The intelligent systems track will provide a forum for the presentation and discussion of recent research and applications of simulations for intelligent systems. It aims to encourage and facilitate interdisciplinary communication among university researchers and industry professionals applying techniques such as evolutionary and genetic algorithms, fuzzy logic and artificial neural network models.
Topics of interest include but are not limited to:
- computational intelligence and cognitive systems
- fuzzy systems
- evolutionary and genetic algorithms
- artificial neural networks
- intelligent machine learning systems
- rule-based systems
- case-based reasoning
- methodologies, models and algorithms, tools
Papers dealing with the applications of computational intelligence in a widest sense are welcome but are not limited to:
- classification
- data analysis
- decision support
- forecasting
- knowledge acquisition
- planning
- pre-processing of data
- process control
- robotics
- speech and image recognition
- web intelligence
- hybrid systems which combine different techniques
- simulation tools for research, education, development
- neural networks models of systems
- optimization
- interdisciplinary applications using soft computing methods. | https://ecms.unibs.it/tracks.html |
The invention relates to a shield tunnel segment variable thickness steel ring. The steel ring is formed by splicing steel sheets; starting from directly above the segments at 0 degrees, the angle of the spliced steel sheets increases clockwise. The areas between 345 degrees and 15 degrees, 60 degrees and 90 degrees, and 270 degrees and 300 degrees of the segments are special areas; the areas within 10 degrees on both sides of every special area are gradient areas, and the other areas of the segments are common areas. The thickness of the steel ring in the special areas is twice that in the common areas. The variable thickness steel ring provides a reasonable and scientific reinforcing steel ring thickness, thereby improving the reinforcing effect while reducing consumption of steel ring material as well as construction costs. The segments and the steel ring are anchored through expansion bolts; the areas clockwise between 330 degrees and 0 degrees, 120 degrees and 150 degrees, and 210 degrees and 240 degrees of the segments serve as reinforced anchoring areas, and the areas outside the reinforced anchoring areas serve as common anchoring areas. The segments and the steel ring are anchored through the expansion bolts at an interval of 20 degrees in the reinforced anchoring areas and an interval of 30 degrees in the common anchoring areas, so that the tensile strength of the segments and the steel ring can be enhanced effectively.
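As an illustrative reading of the angular zones described in the abstract (a hypothetical sketch, not part of the patent), the zone for any position on the ring could be classified like this:

```python
def in_arc(angle, start, end):
    """True if angle (degrees, clockwise from the top) lies in the arc start->end, handling wrap-around."""
    angle, start, end = angle % 360, start % 360, end % 360
    return start <= angle <= end if start <= end else (angle >= start or angle <= end)

SPECIAL = [(345, 15), (60, 90), (270, 300)]
# Gradient areas: within 10 degrees on either side of each special area
GRADIENT = [(s - 10, s) for s, _ in SPECIAL] + [(e, e + 10) for _, e in SPECIAL]

def zone(angle):
    if any(in_arc(angle, s, e) for s, e in SPECIAL):
        return "special (2x common thickness)"
    if any(in_arc(angle, s, e) for s, e in GRADIENT):
        return "gradient"
    return "common"

for a in (0, 20, 75, 105, 280, 320):
    print(a, "->", zone(a))
```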
The Swiss center will initially focus on two research projects – integration of central bank digital currencies (CBDCs) into a distributed ledger technology infrastructure and analysis of the rising requirements for tracking fast-paced electronic markets by central banks, the announcement notes.
The Swiss National Bank (SNB) and Bank for International Settlements (BIS) have signed an agreement to cooperate on the BIS Innovation Hub Centre in Switzerland.
Two major projects
According to an official press release on Oct. 8, the BIS’s first three innovation hubs will be established in Switzerland, Hong Kong and Singapore.
The first project will be conducted as part of a collaboration between the SNB and the Switzerland’s major financial service provider SIX Group in the form of a proof-of-concept. The press release says that blockchain-based CBDC would be “aimed at facilitating the settlement of tokenized assets between financial institutions.”
Monitoring tech insights for central banks
Meanwhile, the underlying objectives for the new hub would be identifying and developing insights in critical trends in technology affecting central banks and serving as a focal point for a network of central bank experts on innovation, the press release reads.
Thomas Jordan, Chairman of the Governing Board of the SNB, said that the central bank has been closely following the trend of digitization of the financial sector and technological innovation. He added that the new cooperation will allow the banks involved to further expand expertise in financial markets and their infrastructure.
Attitudes towards CBDCs
In early September, Jordan claimed that stablecoins pegged to foreign currencies could hamper Switzerland’s monetary policy in some circumstances. He argued that providing the general public with access to a CBDC could pose a threat to financial stability by increasing the likelihood of a bank run.
Similarly, BIS’ general manager Agustin Carstens previously expressed a negative stance towards CBDC, claiming that they could facilitate a bank run, enabling people to move their funds from commercial banks to central bank accounts faster, which will destabilize the system.
Swiss Digital Exchange Plans ‘Initial Digital Offering’ in 2020
Swiss Digital Exchange (SDX), a digital asset trading platform by Switzerland’s principal SIX Swiss Exchange, will reportedly launch its initial digital offering (IDO) in 2020.
SDX security token
The not-yet-launched SDX has reportedly set up a global consortium of financial institutions to back its IDO in the middle of 2020, Coindesk reports on Sept. 30.
Thomas Kindler, who took over as SDX CEO on Sept. 1, elaborated that the consortium comprises a group of investors such as banks and market infrastructure providers intending to legitimize the technology and raise capital.
According to the report, the IDO would be similar to a traditional initial public offering, except that the shares will be issued in the form of security tokens on SDX.
Consortium members not revealed
While not specifying the consortium members or the amount expected to raise, Kindler stated that SDX plans to have one large-scale level of investment featuring four or five big investors and another level with 10 smaller potential investors, the report notes.
The firm is reportedly planning to introduce its own SDX security token, shifting from its original plan to first tokenize traditional banking assets to other assets such as real estate. The SDX token will be issued on a blockchain that is based on the enterprise version of R3’s Corda technology.
The news comes a week after SIX postponed the launch of SDX for Q4 2020 after previously projecting to roll out the platform in mid-2019. However, on Sept. 23, SIX launched a prototype of its digital exchange and the central securities depository (CSD), noting that the objective of the prototype is to showcase the future of financial markets to the community as well as to demonstrate the integration of CSD with a central order-book stock exchange model. | https://stupen.com/blockchain/swiss-national-bank-to-research-cbdcs-at-new-bis-innovation-hub-centre/ |
Many articles have been written, and will continue to be written, on the topic of controlling food and beverage costs in private clubs. There is no lack of information on procedures for purchasing, receiving, storage, issuance, preparation, portion controls, serving, staffing and labor scheduling. The subject has been analyzed, scrutinized, sliced and diced by the top experts in the field, to the point where there is not much left to say.
Except for this: All of these treatments have missed the most important ingredient in cost controls—people. The human factor. The traditional approach has stressed a controlling atmosphere of strict procedures and measurement devices, assuming that employees cannot be trusted to do the right thing on their own.
Anyone who has looked at this subject from the human perspective understands that, if employees are not internally driven to control costs, no amount of pressure from external controls will be effective. This seems like an obvious point, but clubs often tighten the screws, in the form of more stringent procedures and controls, instead of taking the time to build employee loyalty and a sense of personal responsibility.
Why do clubs (and many other organizations) lean toward procedures and away from people?
First, building loyalty takes time. It also requires the personal involvement from management. Today’s club managers and department heads are so swamped with meetings, discussions and other administrative chores that they often lose sight of the need to spend time with their employees—all of their employees.
Second, some managers haven’t yet realized that, long term, people don’t respond well to heavy-handed control tactics. Used over a long period of time, these practices will drive away the best performers and leave the club with the rest – often a group of underperformers with nowhere else to go.
Here are some fundamental approaches used by the most successful of clubs to increase employee loyalty and personal responsibility:
- Acknowledge that employees are the club’s most valuable asset and treat them accordingly.
This point seems self-evident. Yet, this simple measure of human respect is often cast aside—replaced by layers of impersonal procedures, rules and guidelines. The bottom line is simple: employees who feel valued are more likely to adhere to the club’s rules and regulations than those who feel unappreciated.
- Treat all employees with dignity and respect.
Another obvious point, but countless stories are told of management’s insensitivity to basic employee needs. Angry tirades by the prima donna executive chef, scheduling favoritism by the dining room manager, an unwillingness to address difficult people issues by the clubhouse manager—all lead employees to lower their respect for management as a whole. As respect declines, so too goes concern for controls.
- Encourage employee involvement.
There are two good reasons for adopting this practice. First, involving employees in operational discussions and decisions makes them part of the solution, rather than part of the problem. Second, employees know things management does not. Many good ideas and suggestions will surface from employees if they are offered an opportunity to participate.
- Reward productivity and performance with higher wages.
Compensation politics in a club can destroy morale. Often, certain individuals are unjustly rewarded for longevity or “because the members like them” (even when they are insufferable to colleagues). This type of favoritism sends a negative message to other employees, who are often performing at a higher level, but paid less. Weeding out special cases may be the best thing for a club, as it focuses on fair and equal treatment of all employees.
- Expect higher employee productivity and lower costs.
A common complaint of employees in private clubs (as well as in other industries) is a lack of clear work objectives from management. Often, employees are unclear as to what is expected of them. Communicating specific expectations is critical to creating a productive dialog with employees and involving them in operational discussions.
In addition, it may be a good idea to raise performance expectations. In fact, behavioral research with groups has repeatedly shown that the mere expectation of better performance can produce dramatic—and lasting—improvements in actual performance. Simply asking more of employees can reap good results, but only when management’s appreciation of employee value is strong.
None of this is to say that procedures, rules and regulations are unnecessary. Controls are needed. But the effectiveness of those controls is tempered by the loyalty of the people expected to comply with them. Treat your employees with respect, encourage their involvement, reward them fairly and expect more of them. Those are the human keys to cost control success. | https://rsmus.com/our-insights/newsletters/eclubnews/the-human-side-of-food-and-beverage-cost-controls.html |
It has been hypothesized that both problematic drinking and communication problems can negatively affect the quality and selection of social relationships. One of the goals of some alcohol treatments is to improve patient social functioning by focusing on communication skills during the treatment process. A number of studies have focused on the association between relationship functioning and alcohol treatment outcomes. However, no studies were identified in which discrepant assessments, regarding relationship functioning across subject and partner, were used to "match" a patient to treatment. Equity Theory, an economic based theory, will be used to explain the merits, limitations and rationale for using relationship discrepancy scores to match patients to alcohol treatments. According to the Equity Theory equation, relationships consist of a series of ongoing exchanges receiving positive and/or negative consequences. Within the framework of Equity Theory, an ideal state occurs in dyadic relationships when there are equitable exchanges. Researchers in the alcohol field report that no couple based matching research exists and theoretically driven matching studies should be tested.
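The abstract does not give the exact scoring formula; one common formalization of Equity Theory (following Adams) compares each partner's outcome-to-input ratio, and a discrepancy score can be taken as the difference between the two ratios. A hypothetical sketch:

```python
def equity_ratio(outcomes, inputs):
    """Adams-style equity ratio: perceived outcomes divided by perceived inputs."""
    return outcomes / inputs

def discrepancy(subject, partner):
    """Difference between the partners' equity ratios; 0 indicates an equitable exchange."""
    return equity_ratio(*subject) - equity_ratio(*partner)

# Hypothetical (outcomes, inputs) ratings on some self-report scale
patient = (6.0, 8.0)   # feels under-benefited: high input, modest outcome
spouse  = (7.0, 5.0)   # feels over-benefited
print(f"Relationship discrepancy score: {discrepancy(patient, spouse):+.2f}")
```

In a matching study of the kind proposed, such discrepancy scores across subject and partner could then be used as the variable on which patients are matched to treatment.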
Learning Objectives: 1) understand the relationship between problematic drinking and interpersonal problems 2) recognize the merits and limitations of the Equity Theory 3) apply the Equity Theory to couple's social functioning discrepancy scores 4) understand the rationale for matching social functioning discrepancy score to treatment
Keywords: Alcohol Problems, Treatment
Presenting author's disclosure statement:
Organization/institution whose products or services will be discussed: None
I do not have any significant financial interest/arrangement or affiliation with any organization/institution whose products or services are being discussed in this session. | https://apha.confex.com/apha/128am/techprogram/paper_8222.htm |
BNA™ leverages machine learning, advanced algorithms, and large-population databases to provide a revolutionary new way to understand how the brain's neural networks are activated and to inform assessments of brain function.
BNA™ processes post-hoc neural patterns of time, location, amplitude and frequency data points in the brain related to specific functions evoked by repeatable tasks. From these it creates high-resolution, three-dimensional representations of the functional neural pathways that are activated in response to those tasks and that represent specific brain functions such as sensory processing, attention and memory.
By comparing a patient’s test results to their previous healthy-state baseline – or to a Reference Brain Network Model (RBNM) generated from an extensive population database – BNA™ can give physicians a unique perspective on brain function not available through traditional methods. These include:
- snapshot mapping of brain network function in comparison to a healthy/normative group;
- the ability to compare multiple tests over time;
- objective information to assist with better-informed medical decisions. | https://elminda.com/thesolution/ |
Behind the Scenes with Jenn Bruyer
We recently had the pleasure of hosting Jenn Bruyer as a guest artist this past month. She taught local workshops and performed in our local show (that was broadcast on Facebook live– hopefully you got to see it!), but the real work was done when she got behind the camera for 10 grueling hours of filming. She was the first guest artist to cover three apparatuses! We took photos and video of her on fabric, sling and trapeze. These videos are now in the editing queue and will hit the video library as soon as we get them done. We hope that you can get to know the artist behind the camera as we did. She’s a joy to be around.
Before you found aerial arts, who were you? How did the discovery change your life path?
Before I found aerial arts, I was primarily a rock climber. But, more to the point, I was someone who pushed my own personal limits with my chosen activities until I burned out and moved on… over ten years later… I’m still loving every minute of my air time in this circus life. Finding aerial changed my life in every imaginable way. It’s my work, my life and my love. It gives me a venue for creative expression and also satisfies my need for physicality… the two things my body and brain crave most.
What are your pre-performance habits/routines? How do you get “in the zone”?
I usually don’t do much… unless you consider the incredible amount of prep work that goes into creating something that you really feel is ready to present. Other than that… I tend to just show up and try to be the best version of myself I can be that day.
What motivates you to train? When do you feel the most creative?
I’m motivated by the need to solve problems and answer the question: what’s next!?!
What skill is your nemesis?
Trapeze – Pull over to ankles. I mean, what’s wrong with my butt?!?
What is your favorite trait to discover in a new student?
The willingness to accept change.
Below is the trapeze piece that she debuted while she was here at our Born to Fly Curriculum headquarters. The following quote sums up the intent behind the dance:
“One day, whether you are 14, 28 or 65,
you will stumble upon someone who will start a fire in you that cannot die.
However, the saddest, most awful truth you will ever come to find––
is they are not always with whom we spend our lives.”
― Beau Taplin
And below is an example of one of the videos from Jenn that just got released in our video library. This combination puts together moves from our Born to Fly Sling Curriculum. She has given us lots of new sling things to inspire and explore!
Look for more fun things from Jenn in the months to come in our video library!
Jenn is driven by her focus on fabric and sling (hammock) but also enjoys exploring cord lisse, cloud swing, trapeze, lyra, net and rope & harness. She has coached, choreographed and performed across the US from New York to Alaska, and has recently resettled in Seattle, WA after completing a 5 month, 25 city workshop teaching tour, which you may have met her on! You can learn more about her on her website: heelhang.com
Veda in Sanskrit derives from the word Vid, which means ‘to know’. The four Vedas in Hinduism – Rig Veda, Yajur Veda, Sama Veda and Atharva Veda – form the four pillars of the civilisation as we know it. Maharishi Vedavyasa is said to have handed down the knowledge of these religious Vedas to his disciples, who in turn passed it on orally to their pupils over centuries. It was only later, between 1500 and 1100 BCE, that these texts were documented in transcripts and compiled into a collection.
Of all of them, the Rig Veda is known to be the oldest written record of the Aryan civilization, documented in an Indo-European language. It is a compilation of verses, known as ‘Rik’ in Sanskrit. A vivid collection of sacred hymns praising the gods, it also contains various mythic and poetic accounts of the beginning of the world. While the original text was much smaller, over a period of time the Rigveda underwent metamorphosis, with new additions and changes that lasted till 1100 BCE.
The entire collection is in 10 different books, known as Mandalas. Each Mandala is divided into subsections known as Anuvakas, and each Anuvaka consists of hymns, i.e., Suktas, each of which is a collection of verses, or Mantras. The number of mantras varies across Suktas. Together it forms the largest text of all four Vedas, with 10 Mandalas, 85 Anuvakas, 1028 Suktas, and 10552 mantras. These mantras are arranged according to the deities for whom the praises are sung. Hence the referral mechanism is quite handy, with a convenient classification system adopted on the basis of the age-old practice.
Although ancient, the Rigveda is a foundation of Hindu philosophy and religious practices, as it documents various puja rituals and other sacrificial procedures followed during the Iron Age, much of which is prevalent even today. Many of the texts consist of hymns of praise dedicated to many gods – whether praising their valour in battle, or asking their blessings for wealth, good fortune, long life, and protection against evil. A large number of hymns are predominantly dedicated to Lord Indra – the supreme sky lord, Lord Agni – the fire god, Lord Surya – the sun god, and Lord Rudra, which is the earliest known reference to Lord Shiva. Most of the mantras in the Rig Veda are used even today during puja and yagna ceremonies.
The Rig Veda also documents the way of life adopted by many during the Iron Age, including religious beliefs and the caste system, along with the practice of Vedic sciences like yoga, meditation and ayurveda. The rituals followed during Hindu marriage and the cremation or burial ceremony also find mention in the old sacred texts.
The religious records were often seen as an interaction between man and god, a practice which is alive even today after centuries. These rituals were then considered sacred and essential to build an order in a nomadic way of life prevalent back then.
Rigveda represents a pool of knowledge along with a philosophy, which beats like a pulsating vein through generations, breathing more than just life in them. | https://www.thehinduportal.com/2015/10/rig-veda-short-descrption-about-rig-veda.html |
Every rental car company in Vila Real has a different cancellation policy. At momondo, we will always disclose in the car rental search results whether or not an agency has a cancellation fee. Some of the top car rental companies in Vila Real include Sixt, Hertz, and Europcar, so be sure to check if they offer free cancellation.
Vila Real’s speed limit is 50 km/h when you are driving in the city. This speed limit might also apply to surrounding towns and neighbourhoods. The suburban speed limit in Vila Real is 100 km/h and the speed limit for motorways is 120 km/h. However, be sure to look for posted signage as these speeds may vary. | https://www.momondo.co.nz/car-rental/Vila-Real-49909 |
Job Description:
Job Title: Kitchen Assistant
Contract Type: Temporary
Starting Date: ASAP
Job Location: SE1
Salary Rate: £10.00-£12.00 per hour
We are looking to hire a dedicated and reliable kitchen assistant for our client to assist the cook with ingredient preparation as well as perform all washing and cleaning duties required in the kitchen. The kitchen assistant’s responsibilities include assisting with inventory control, removing the garbage, washing garbage cans, and cleaning refrigerators, freezers, and storage rooms. You should also be able to record notable food wastage as seen from customers’ leftovers.
To be successful as a kitchen assistant, you should exercise excellent time management and ensure that all duties are completed in a timely manner. Ultimately, an outstanding kitchen assistant should be able to comply with all food health and safety regulations.
Responsibilities Of Kitchen Assistant Jobs In Cricklewood:
- Properly cleaning and sanitizing all food preparation areas according to established standards of hygiene.
- Washing and properly storing all cooking appliances, instruments, utensils, cutting boards, and dishes.
- Assisting the cook with the preparation of meal ingredients, which includes washing, cleaning, peeling, cutting, and chopping fruit, vegetables, poultry, and meat.
- Sweeping and mopping the kitchen floors as well as wiping down kitchen walls.
- Assisting with the unloading of delivered food supplies.
- Organizing and correctly storing food supplies.
- Promptly transferring meal ingredients from storage areas to the kitchen as per the Cook’s instructions.
- Stirring and heating soups and sauces as well as preparing hot beverages.
Kitchen Assistant Requirements:
- High school diploma or GED.
- Proven experience assisting in kitchens.
- A food handler’s license.
- Sound knowledge of food health and safety regulations.
- The ability to stand for extended periods.
- The ability to work in a fast-paced environment.
- The ability to work in a team.
- Excellent organizational and time management skills.
- Effective communication skills.
To apply, send your CV to arrange an immediate placement. | https://catering.jobs/jobs/kitchen-assistant/in-london/kitchen-assistant-jobs-in-cricklewood/ |
Evaluate the usability of a product by asking participants to complete tasks and then observing their behavior.
Varies depending on the number and breadth of tasks. Rapid usability tests can take minutes, while more in-depth tests can take up to an hour per participant.
Evaluating the usability of a website or digital product at any stage. You can test the usability of products early in the design process to inform your design direction, as long as you have an interactive prototype. You can also test products that already exist out in the world to identify pain points and areas of improvement.
Identify your research questions. What do you want to learn? If you are testing a website for the first time and don't know where to start, try identifying your users' top tasks and prioritize those for testing. Web analytics data, user interviews, and surveys can help you identify the top 5-10 tasks.
We recommend testing tasks that:
Tasks are what you want participants to do and should be succinct. These are for internal use and it's ok if they include jargon.
Scenarios are what you tell participants to do and should be believable and unambiguous. Scenarios should avoid leading language, such as the link label you expect the participant to click on. Try out your scenarios in a practice run to ensure they are worded clearly. | https://theuxcookbook.com/85314c48c30f4382b81975873a4e091e |
Itchy Skin at Night
Itchy skin at night may not only cause you a lot of discomfort, but also ruin your sleep and keep you lethargic all through the next day! Have you ever wondered why the itchy feeling intensifies during the night?
Mamta Mule
Last Updated: Mar 23, 2018
Two types of Itchiness
The medical term for itchy skin is pruritus. There are two kinds of itchiness that normally occur - one, where there is a change in the texture of the skin at the site where you scratch (scaly skin, bumps, dryness, etc.); and two, where there is an itchy sensation but the skin appears to be normal. This classification brings us to the next important point; i.e., the causes of itchy skin could be external or internal.
It is very important to note the changes in your environment when the itching begins. Is there any activity that you do around the time when the problem occurs? Since when have you been dealing with this problem? Was any change in your lifestyle made during the time? If you are unable to identify the cause behind your itchy skin, then pondering over these questions is necessary.
Itchiness Caused by External Factors
By external factors, I mean the causes which are not related to your health. The external contributors to your skin irritation are usually the elements present in your surroundings. Even certain activities which you tend to do may make your skin prone to itchiness.
Mentioned below are some of the commonly observed reasons that may disturb your sleep at night.
Do you take a hot shower at night?
When you expose your skin to extremely hot temperatures, it tends to get dry. A hot shower may be a good way to wash off all the tiredness of the day, but it also washes off the natural oils present in the skin, leaving it dry and stretched out. Dry skin is one of the main reasons behind itchiness! Another point to be noted here is that many people experience itchy skin if the water supply in their area is hard water.
What can you do?
If you think this is the reason behind your problem, then try taking a cold shower, or use lukewarm water for bathing. Use mild soaps for washing your body. Soaps that have very strong fragrance are usually harsh on the skin. Do not forget to moisturize your body properly to prevent the dryness.
Could it be an insect biting you in the night?
Though this may not be pleasant news to you, there are several instances wherein the itching is caused by none other than bedbugs biting you all night. If this is the case, you will see small red bumps on your skin, very similar to those of a mosquito bite. Another cause could be the presence of ticks. Do you sleep with your pets? Pets like dogs and cats tend to have ticks and fleas that can bite us humans, causing itchy bumps on the skin.
What can you do?
Check for bedbugs in the house or on your bed. The most common places where you can find them include the corners of the mattresses and pillows. If you see black spots on these areas, you have found the culprit! Call for professional assistance immediately to get rid of them. If you have pets in the house, then make sure you sleep with them only after the ticks and fleas have been removed completely. Consult your vet for advice.
Could it be an allergy?
As I mentioned before, observe the changes around you, especially around the time when you start feeling itchy. Could it be an allergy to something that you eat at that time? Or to any medication that you take before going to bed? Many times, itchiness may be caused by an allergic reaction to the fabric of the bed sheets or blankets. Also, if detergents with harsh chemicals have been used to wash the bed linens, or if you haven't washed them at all, then you may experience itchiness resulting from an allergy.
What can you do?
Notify your doctor about the medications and foods you are consuming, especially at night. There may be an ingredient that you are allergic to. If you think the source of allergy may be the fabric of the blanket, bed sheets, or the clothes you are wearing, then it would be best to switch to a mild detergent. Also, wear a fabric that is comfortable for the skin, like cotton.
Itchiness Caused by Internal Factors
Internal factors are those relating to the health of your body. There are many health conditions that may cause itchiness, which may become intense during the evening hours. If you were unable to associate with the causes mentioned above, then there are chances of an underlying condition causing the problem. It is mandatory to check with your physician for proper diagnosis without any delay.
Some of the causes mentioned below are also fatal in nature.
Thyroid Problems
Thyroid problems, like hyperthyroidism (where the thyroid gland ends up producing excessive hormone), may cause itchiness. Hyperthyroidism caused by Graves' disease may result in the thickening of the skin in certain areas of the body. The affected area might have skin that is hard, red and itchy. The itching is likely to get intense during the night. Even hypothyroidism (when the thyroid glands are underactive) may also contribute to itchy skin.
Kidney Failure
Problems related to damaged kidneys cause itching in various parts of the body. People who are going through hemodialysis often experience itchiness, especially after the treatment session. Also, when your kidneys do not function properly, there could be elevated amounts of phosphorus and low amounts of calcium in the blood, which may lead to itching. When the body detects low calcium levels, it triggers the parathyroid glands to produce more parathyroid hormone. This hormone is responsible for drawing calcium from the bones into the blood. An increase in parathyroid hormone levels may also cause itchy skin.
Liver Disease
Did you know that itchy skin is one of the major symptoms of liver disease? Our liver produces a fluid called bile, which helps in the process of digestion. In case of a liver disease, bile acids deposit in the blood and travel to the surface of the skin, causing itchiness which usually worsens during the nighttime.
Cancer
Cancers like leukemia and lymphoma are known to cause itchy skin. In fact, many experts consider this symptom to be among the early signs of cancer. The itchy sensation may either be in a specific area (most commonly in the lower legs), or may be experienced all over the body.
Skin Problems
There are many forms of skin disorders which may be responsible for itchy skin. Conditions, like scabies, psoriasis, hives, folliculitis (inflammation of the hair follicle), eczema, etc., may be responsible for the same. Among this list, scabies is specifically known to cause itching that intensifies during the night.
Menopause
If you are a woman going through menopause, then that may be the reason behind your discomfort. During menopause, a lot of hormonal changes take place in the body. One such change is the lowered levels of the hormone estrogen. Estrogen is responsible for regulating the moisture and natural oils present in the skin. If the levels are lowered, the skin becomes dry, and we already know that dryness leads to itchy skin.
Fungal Infections
Despite its name, ringworm is not a worm but a fungal infection. In this case, the pattern of the skin changes, and the infected area appears as a round patch with itchy skin. Other conditions like athlete's foot, jock itch, swimmer's itch, and infestations of pinworms and body lice may also cause itching.
Other Causes
Conditions like anemia, sexually transmitted diseases (causing itchy skin in and around the genital areas), stress and anxiety, hemorrhoids, nerve disorders (diabetes, multiple sclerosis, shingles, etc.), and pregnancy, may also act as causative agents of an itchy skin.
Seek Doctor's assistance
If you think that the specific cause of your itchy skin is a medical condition, do not delay by looking for ways to self-treat. It is strongly advised to seek a doctor's assistance, so that proper diagnosis and treatment can be carried out without delay. The treatment may include topical application of ointments, creams and lotions. Oral antihistamines and topical anesthetics may also be suggested depending upon the condition.
Additional Tips and Remedies for Itchy Skin
While professional assistance is necessary, there are certain precautionary steps that may be followed to prevent the itchiness. The tips mentioned below will help you cope with the problem to some extent.
Prevent dryness
The first precautionary measure that you can take is to prevent dryness. Apart from hot showers, constant exposure to hot temperature (direct sun exposure) can make your skin dry. Wise application of sunscreen lotions is recommended while going out in the sun.
Cold/lukewarm showers
Have cold/lukewarm showers in the evening. Use oatmeal bath powders while bathing; they prove to be helpful to treat itchy skin.
Moisturize your skin
To prevent itchiness, moisturize your skin thoroughly. You may use natural oils including coconut oil, olive oil, and avocado oil, which can make the skin soft and supple. The pulp of the aloe vera plant is also great for your skin. Although, make sure that you are not allergic to any of these substances. Some people have reported of being allergic to aloe vera.
Minimal scratching
Another tip that will help you cope with the itchiness is to keep scratching to a minimum. In many cases, vigorous scratching has caused severe bacterial infections, leading to breakage of the skin and bleeding! Do not use your fingernails to scratch; try using the tips of your fingers instead. This may seem difficult, but it will ensure fewer complications and quicker healing.
Apple cider vinegar
Using apple cider vinegar is another remedy. It has antifungal and antiseptic properties that can help you get rid of the itchiness. Just soak a cotton ball in apple cider vinegar and apply it to the affected area. If a large area is affected, then add 2-3 cups of apple cider vinegar to your bathtub and soak your body in it for 30 minutes. Using organic apple cider vinegar would be more beneficial. Consult your doctor before using this remedy in case you have sensitive skin.
Ice packs
So, what should you do when you feel like scratching? Apply cold packs! Yes, application of cold packs will reduce the irritation and make you feel better. You can keep an ice pack nearby, in case of emergency. You can also wash the area with cool tap water, or place a clean cold cloth over the area that itches.
From the information given above, it is clear that a simple (yet annoying) problem, like itchy skin can also be a sign of some serious ailments. If the preventive measures mentioned above are not proving to be of help, then please contact your healthcare provider as soon as possible.
Disclaimer: This HealthHearty article is for informative purposes only, and should not be used as a replacement for expert medical advice. | https://healthhearty.com/itchy-skin-at-night |
FIELD OF THE INVENTION
The present invention relates to all-natural skin cream compositions, and more particularly to all-natural skin cream compositions containing bovine colostrum to improve skin appearance and texture and to improve or relieve various skin conditions including urticaria, dryness, eczema, and psoriasis.
BACKGROUND OF THE INVENTION
The beauty market, spurred by a desire to maintain youthful looks even as one ages, has given rise to numerous creams, treatments, and the like that seek to avoid or ameliorate the formation of wrinkles. The aisles of drug stores, department stores, and even discount stores are lined with numerous products, each claiming to have a beneficial effect on the skin, or to bleach age spots, moisturize, condition, soften, or firm the skin, particularly the skin on women's faces and around their eyes, where signs of aging appear most acutely or noticeably. Many skin care products, however, are also being marketed to men.
Of course, numerous skin creams claim numerous benefits. For example, U.S. Pat. No. 5,658,580 (Mausner) discusses a skin cream that claims to minimize age spots and improve skin color, using a formula that includes sodium lactate, fatty acid esters of vitamins C and E, witch hazel, and horsetail extract. U.S. Pat. No. 7,557,094 (Wadstein) discusses skin creams featuring chitosan derivatives to treat atopic dermatitis and eczema, and to improve skin thickness and elasticity. U.S. Pat. No. 8,637,075 (Pedersen) provides a skin cream formulation comprising colostrum and a hydrocolloid, which are bioconjugated. These compositions are also said to be helpful for treating various skin conditions.
Other references also discuss the use of colostrum in skin care products. For example, WO 1994021225 (Wadstein) adds colostrum to a composition including a conjugate of a vitamin derivative and a cyclodextrin, among other ingredients. CN 1098677 relates to a skin care product including colostrum, surfactants (including glycerol monostearate and PEG 400 dioleate), and water.
Each of the foregoing references is incorporated herein by reference in its entirety. There exists a need for an all-natural skin care cream that includes colostrum without including any artificial ingredients.
SUMMARY OF THE INVENTION
The invention provides a skin cream comprising: about 0.5 to 15% of colostrum; about 0.5 to 15% of a polyacrylamide based emulsifier; about 1.0 to 20% of an emulsifier containing lecithin or a derivative thereof; about 1.0 to 20% of olive oil, having an acidity ranging from about 0.2 to 2.0% and a usage range of 5.0% to 50%; about 1.0 to 10% of dimethicone; about 0.01% to 4.0% of alpha-tocopherol or an active derivative thereof; about 0.01% to 0.5% of vitamin A or an active vitamin A derivative; about 0.5 to 4.0% of a natural preservative; and about 10 to 90% of purified water to make up 100 percent. The invention further provides a skin cream wherein the colostrum is bovine, ovine, porcine, equine, or human derived, or a mixture thereof. The invention also provides a skin cream wherein the dimethicone is cetyl dimethicone. Application of the skin cream of the present invention should smoothly lubricate and moisturize the user's skin, and lessen the appearance of wrinkles. The skin cream of the present invention will make the skin appear smoother and more vibrant.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
This invention relates to the preparation of a soft, smooth, easy-to-apply cosmetic cream for the enhancement and preservation of skin freshness and suppleness by means of the addition of colostrum to the cream preparation. Such a cream, composed of all-natural ingredients and including specific amounts of ingredients such as vitamins, emulsifiers, antioxidants and others essential for the preservation of skin freshness, is an ideal body treatment for the temporary preservation of epidermal freshness and subjective beauty. Colostrum, which is the first 48 hours post-partum milk, is incorporated in the various cream formulations of the present invention in specific compositions depending on the intended target skin area to which the cream will be applied. The colostrum used in these product preparations is of mammalian origin, preferably bovine. Bovine colostrum is similar in its proteins, antibodies, growth hormone and chemical composition to other mammals' colostrum, including that of humans. Bovine colostrum is the most convenient, abundant and commercially available type for this application. Colostrum has GRAS status, as it finds applications in food preparations for food enrichment. The present invention incorporates this ingredient into a topical class of cosmetic-like creams.
Bovine colostrum, also referred to as "colostrum," is the first milk secreted at the time of parturition, differing from the milk secreted later. It contains more lactalbumin and lactoprotein, and is also rich in antibodies that confer passive immunity to the newborn; it is sometimes called "foremilk." Human Colostrum (HC) and Bovine Colostrum (BC) are both rich in protein, immunoglobulin, lactoferrin and growth factors. Recent studies suggest that colostrum components, immunoglobulin and growth factor benefit physically active persons and can be used to treat autoimmune conditions and a wide variety of gastrointestinal conditions, including non-steroidal anti-inflammatory drug-induced gut injury, H. pylori infection, immune-deficiency related diarrhea, as well as infective diarrhea. See http://www.foodandnutritionjournal.org/pdf/ vol1 no 1/1_1_4_p37_47_Colostrum_MEENA.pdf, and Current Research in Nutrition and Food Science, vol. 1(1), 37-47 (2013), both of which are incorporated by reference.
Humans have long had a desire to preserve body freshness as an expression of reduced evidence of body aging. Throughout civilization, preservation of body freshness has been practiced, to the extent that the Egyptian civilization extended the art of body preservation to after physical death. Women were the first group of humans to practice body embellishment, primarily on the face and hands, with botanical preparations applied to the skin. As time went on and knowledge increased, body beauty treatments became more and more sophisticated. Today, even in the most underdeveloped countries, we see the enhancement of feminine beauty by means of cosmetics.
Today the beautification of the body, by the aid of creams, lotions, or perfumes (all identified as body treatments), has developed into a significant industry that includes both males and females. Cosmetic preparations have been and are being developed for the sole purpose of enhancing the illusion of beauty and the preservation of youthful appearance. Many creams are on the market today making a multitude of performance claims and consumers subjectively select the one that best fits their illusion of youth preservation.
The present invention relates to a cream that is primarily composed of all-natural ingredients. This cream has been formulated as a functional cream for different parts of the human body with the intent to help the epidermis remain soft, supple and fresh against the challenge of time. The basic cream formula has been modified to be most effective for each different targeted area of the body. It is well known that the skin is basically the same throughout the body, but it is also known that it has different functional properties relating to specific parts of the body. Skin elasticity is the most evident and apparent property that decreases over time and is at its best at a young age. Therefore, the current invention involves the addition of colostrum throughout the cream line independently of its specifically designed point of application. As we have seen, colostrum has significant valuable properties, including the capability to restore some lost epidermal freshness through the absorption of specific proteins present in the colostrum, such as growth hormone, IgG, IgA, and interferon, among others. The cream containing colostrum also delivers beneficial quantities of vitamins, emulsifiers and surfactants, all incorporated in a smooth, easy-to-apply, all-natural formula.
Cream formulations are made to contain all-natural ingredients, present in the following ranges:
- Bovine colostrum (Immuno Dynamics Inc.): about 0.5% to 15%
- Sepigel EI 305 emulsifier (Essential Ingredients Inc.): about 1.0% to 20%
- Lecithin, an emulsifier (American Lecithin Inc.): about 0.1% to 5.0%
- Olive oil (Bertolli or any commercially available olive oil), with an acidity range from about 0.2% to 2.0%: usage range of about 5.0% to 50.0%
- Vitamin E as alpha-tocopherol (HMS Sutton Labs): about 0.01% to 4.0%
- Vitamin A as retinol palmitate or acetate (Glambia/BASF The Chemical Co.): about 0.001% to 0.5%
- Linatural MBS-1, a natural preservative (Lincoln Fine Ingredients): about 0.5% to 4.0%
- Menthol, a skin toner and topical bacteriostatic complementary to the Linatural preservative (Flavor and Fragrance Specialties): about 0.01% to 2.0%
- Fragrance (Flavor and Fragrance Specialties): about 0.1% to 0.8%
- Dimethicone, as cetyl dimethicone (BASF, The Chemical Co.): about 1.0% to 10%
- Purified water in an amount sufficient to make a working cream: about 10% to 90% of the preparation

Cetiol Ultimate may be used instead of dimethicone. All these ingredients contribute to the composition of a facial and body cream composed of all-natural ingredients. Other ingredients, like mineral oil (Witco Co.) of any type and different viscosities, but preferably the one identified as "Carnation," can be used as a partial or total replacement of the olive oil.
Method of Preparation
The colostrum powder is dissolved in water heated to about 60° C., and preferably 50° C. to 90° C. When all is dissolved, the preservative (Linatural MBS-1) is added and mixed very well. Separately, the oil-soluble ingredients (lecithin, vitamin E, vitamin A, dimethicone and menthol) are dispersed at room temperature into the olive oil to a very liquid state.
To the colostrum-water solution, which now has a temperature of about 40° C. to 60° C., add the Sepigel and start to agitate with any suitable high-shear mixer to form a cream mixture. Slowly add the olive oil blend and allow it to disperse and integrate into the water phase. When very well dispersed, add the fragrance and continue to agitate at high shear. When fully agitated and the cream forms a nice creamy peak, the cream is ready for packaging. The cream is then dispensed by means of a piston filler in pre-measured quantities into selected jars or plastic tubes or any suitable consumer container package.
EXAMPLES
The following examples indicate the compositions of different functional creams, each specific to the beautification of a different part of the body, and are not meant to be limiting:
Example 1: Facial Wrinkle Cream

| Formula (12162014-C-1) | % | Batch size 350 gr. |
|---|---|---|
| Water (DI and hot) | 61.630 | 308.15 |
| Colostrum (Bovine & pwd) | 0.500 | 2.50 |
| Sepigel E1-305 | 5.000 | 25.00 |
| Olive oil (acidity 1.5%) | 25.000 | 125.00 |
| Lecithin | 0.300 | 1.50 |
| Vitamin E (700 IU) | 0.500 | 2.50 |
| Vitamin A/Palmitate | 0.010 | 0.05 |
| Linatural MBS-1 (preserv) | 2.000 | 10.00 |
| Menthol (Crystal) | 0.030 | 0.15 |
| Fragrance FFS #55174 | 0.030 | 0.15 |
| Cetyl Dimethicone | 5.000 | 25.00 |
| Total | 100.000 | 500.00 |
Method of Preparation
Colostrum and preservative are dissolved into relatively warm purified water to form a very uniform solution. The oil-soluble ingredients (including vitamins A and E, lecithin, menthol, dimethicone and fragrance) are dispersed into the olive oil at room temperature. Sepigel, the emulsifier of choice, is added to form a cream under sustained shear while the oil blend is slowly added to further intensify the creamy texture being formed under high shear. The resulting cream is then placed into a suitable container or jars as desired at a temperature in the range of about 40° C. to 60° C. and allowed to cool.
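For readers who want to scale these formulas to other batch sizes, here is a minimal sketch of the arithmetic implied by the table above: each ingredient's gram amount is simply its percentage of the total batch weight. The ingredient names and percentages are taken from the 12162014-C-1 formula; treating the batch as 500 g (the value the gram column actually sums to) is an assumption, and the code itself is purely illustrative and not part of the patent.

```python
# Illustrative only: scale the 12162014-C-1 percentages to gram amounts.
# The 500 g batch size is an assumption read off the gram column above.

FORMULA_C1 = {
    "Water (DI and hot)": 61.630,
    "Colostrum (bovine, powder)": 0.500,
    "Sepigel E1-305": 5.000,
    "Olive oil (acidity 1.5%)": 25.000,
    "Lecithin": 0.300,
    "Vitamin E (700 IU)": 0.500,
    "Vitamin A/Palmitate": 0.010,
    "Linatural MBS-1 (preservative)": 2.000,
    "Menthol (crystal)": 0.030,
    "Fragrance FFS #55174": 0.030,
    "Cetyl dimethicone": 5.000,
}

def scale_batch(formula, batch_grams):
    """Convert percentage entries to gram amounts for the requested batch size."""
    total_pct = sum(formula.values())
    if abs(total_pct - 100.0) > 0.01:  # the percentages should close to 100%
        raise ValueError(f"percentages sum to {total_pct}, expected 100")
    return {name: round(pct / 100.0 * batch_grams, 2) for name, pct in formula.items()}

if __name__ == "__main__":
    for ingredient, grams in scale_batch(FORMULA_C1, 500).items():
        print(f"{ingredient:32s} {grams:8.2f} g")
```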
Example 2: Face and Neck Cream

| Formula (12162014-C-2) | % | Batch size 350 gr. |
|---|---|---|
| Water (DI and hot) | 65.200 | 326.00 |
| Colostrum (Bovine & pwd) | 0.500 | 2.50 |
| Sepigel E1-305 | 5.000 | 25.00 |
| Olive oil (acidity 1.5%) | 25.000 | 125.00 |
| Lecithin | 0.300 | 1.50 |
| Vitamin E (700 IU) | 2.000 | 10.00 |
| Linatural MBS-1 (preserv.) | 2.000 | 10.00 |
| Total | 100.000 | 500.00 |
Method of Preparation
Colostrum and preservative are dissolved into relatively warm purified water to form a very uniform solution. The oil-soluble ingredients (including vitamins A and E, lecithin, menthol, dimethicone, and fragrance) are dispersed into the olive oil at room temperature. Sepigel, the emulsifier of choice, is added to form a cream under sustained shear while the oil blend is slowly added to further intensify the creamy texture being formed under high shear. The resulting cream is then placed into a suitable container or jars as desired at a temperature in the range of about 40° C. to 60° C. and allowed to cool.
Example 3: Hand and Body Cream

| Formula (12162014-A-3) | % | Batch size 350 gr. |
|---|---|---|
| Water (DI and hot) | 63.170 | 315.85 |
| Colostrum (Bovine & pwd) | 0.500 | 2.50 |
| Sepigel E1-305 | 5.000 | 25.00 |
| Olive oil (acidity 1.5%) | 25.000 | 125.00 |
| Lecithin | 0.300 | 1.50 |
| Vitamin E (700 IU) | 4.000 | 20.00 |
| Linatural MBS-1 (preserv) | 2.000 | 10.00 |
| Fragrance FFS #55174 | 0.030 | 0.15 |
| Total | 100.000 | 500.00 |
Method of Preparation
Colostrum and preservative are dissolved into relatively warm purified water to form a very uniform solution. The oil-soluble ingredients (including vitamins A and E, lecithin, menthol, dimethicone and fragrance) are dispersed into the olive oil at room temperature. Sepigel, the emulsifier of choice, is added to form a cream under sustained shear while the oil blend is slowly added to further intensify the creamy texture being formed under high shear. The resulting cream is then placed into a suitable container or jars as desired at a temperature in the range of about 40° C. to 60° C. and allowed to cool.
Example 4: Wrinkle Cream (Olive Oil Reduced by 50% and Replaced With Mineral Oil)

| Formula (05071214 E-1 Complete) | % | Batch size 500 gr |
|---|---|---|
| Water | 65.46 | 327.30 |
| Colostrum (powder) | 0.50 | 2.50 |
| Sepigel 305 or (EI 305) | 5.00 | 25.00 |
| Mineral oil (Carnation) | 12.50 | 62.50 |
| Olive oil | 12.50 | 62.50 |
| Lecithin | 0.30 | 1.50 |
| Vitamin E 70,000 IU | 0.50 | 2.50 |
| Vitamin A/Beta Carotene | 0.01 | 0.05 |
| Linatural MBS-1 (preserv) | 2.00 | 10.00 |
| Menthol (crystal) | 0.03 | 0.15 |
| Fragrance FFS (1111-2) | 0.20 | 1.00 |
| Dimethicone | 1.00 | 5.00 |
| Total | 100.00 | 500.00 |
Method of Preparation
Colostrum and preservative are dissolved into relatively warm purified water to form a very uniform solution. The oil-soluble ingredients (including vitamins A and E, lecithin, menthol, dimethicone and fragrance) are dispersed into the olive oil at room temperature. Sepigel, the emulsifier of choice, is added to form a cream under sustained shear while the oil blend is slowly added to further intensify the creamy texture being formed under high shear. The resulting cream is then placed into a suitable container or jars as desired at a temperature in the range of about 40° C. to 60° C. and allowed to cool.
Example 5: Wrinkle Cream (Olive Oil Replaced Entirely by Mineral Oil)

| Formula (05071214 D-1 Complete) | % | Batch size 500 gr |
|---|---|---|
| Water | 65.46 | 327.30 |
| Colostrum (powder) | 0.50 | 2.50 |
| Sepigel 305 or (EI 305) | 5.00 | 25.00 |
| Mineral oil (Carnation) | 25.00 | 125.00 |
| Lecithin | 0.30 | 1.50 |
| Vitamin E 70,000 IU | 0.50 | 2.50 |
| Vitamin A/Beta Carotene | 0.01 | 0.05 |
| Linatural MBS-1 (preserv) | 2.00 | 10.00 |
| Menthol (crystal) | 0.03 | 0.15 |
| Fragrance FFS (111-2) | 0.20 | 1.00 |
| Dimethicone | 1.00 | 5.00 |
| Total | 100.00 | 500.00 |
Method of Preparation
Colostrum and preservative are dissolved into relatively warm purified water to form a very uniform solution. Sepigel, the emulsifier of choice, is added to form a cream under sustained shear while the oil blend (including the oil-soluble ingredients) is slowly added to further intensify the creamy texture being formed under high shear. The resulting cream is then placed into a suitable container or jars as needed at a temperature in the range of about 40° C. to 60° C.
These cream preparations provide a combination of all-natural ingredients for the beautification of the body and the preservation of skin freshness. | |
Regarding the rules as to how to put up a mezuzah here are the terms used by the poskim:
- Chelkat Yakov 161 - derech biyatcha, rov tashmish, door, rov kenisot. His reasoning is that the idea of rov kenisot is not found in the gemara, rishonim, or earlier poskim. The gemara when it says we follow the ragil it means only to determine if a doorway is obligated in mezuzah or not but not to determine the direction. The idea of rov tashmish helps determine which way is an entry.
- Rav Schachter - derech biyatcha, rov tashmish, rov kenisot, door. His reasoning is that rov tashmish helps clarify which way is an entry and rov kenisot does that also but it isn't as clear of a determining factor as rov tashmish since where a person spends more time determines the direction of entry and if that's not clear look at the way when majority of the time he actually walks.
- Igrot Moshe 4:43:4 - derech biyatcha, rov kenisot, rov tashmish, door. Teshuva Mahava 1:61, Minchat Yitzchak 1:89, Aruch Hashulchan 289:8, Chovat Hadar 8:1:4. Rov kenisot is the explanation of the gemara's idea of ragil and an explanation of the deoritta halacha of derech biyatcha. We follow that before we follow the doorway since the doorway was only used in the gemara when we're in doubt. The idea of rov tashmish is only an indication of what was the rov kenisot.
- Daat Kedoshim 289:11 - derech biyatcha, door, rov tashmish, rov kenisot. Since door is in the gemara we follow that before we look at any other factor. The other factors are just tie breakers and it is based on logic. It is logical that rov tashmish is more significant than rov kenisot.
Here is one piece of the argument of how to order these rules.
- The gemara Menachot 33a speaks about a house with two rooms one next to another one for the wife to do her work and one for the man to run his business and invite guests. The gemara says that the direction of the mezuzah is determined if a door is put up and see which way it opens. This is cited in Shulchan Aruch YD 289:3.
- Why didn't the gemara use the rule of usage? The Chelkat Yakov YD 161 writes that the rule of following whichever room is used more doesn't apply since the man and woman each use the room the whole day. Therefore, a door is necessary to establish the direction of the mezuzah.
- Why didn't the gemara use the rule of majority of walking? The Chelkat Yakov points out that even though people will walk from the man's door into the woman's room far more often than vice versa, considering that the man is using his side for guests, that doesn't establish which direction is an entrance. Rav Moshe YD 1:176, however, argues that the reason the gemara needed the door to determine the direction is that the majority of her walking in one direction offsets the way that others walk. However, if everyone would use a doorway as an entrance a majority of the time, then the mezuzah would be established in that direction. This dispute impacts whether the rule of majority of walking precedes the rule of doorway; according to Chelkat Yakov it doesn't, while according to Rav Moshe it does.
Rabbi Yitzchak Yaakov Weiss (1902-1989), ashkenaz dayan and posek, Rav and Av Beit Din in Romania, then in Manchester, England. Headed the Eidah Charedis in Yerushalayim, author of Sh"t Minchat Yitzchak.
Rabbi Yechiel Michel Halevi Epstein (1829-1908). He was a community rabbi and a posek in Novardok, Lithuania. | https://www.halachipedia.com/index.php?title=Talk:Mezuzah |
Faculty: Faculty of Informatics
Course unit code: BIAX10031
Course unit title: Data Structures and Algorithms
Planned learning activities and teaching methods:
Credits allocated: 6
Recommended semester/trimester: Applied Informatics, 1. year, 2. semester
Level of study: 1.
Prerequisites for registration: none
Assessment methods:
in-class tests and programming assignments - 40 %
final written exam - 60 %
Assigned Marks
A = 94 - 100 points
B = 86 - 93 points
C = 76 - 85 points
D = 66 - 75 points
E = 56 - 65 points
FX = 0 - 55 points
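As a quick illustration of how the stated weighting (in-class work 40 %, final exam 60 %) combines with the mark bands above, here is a minimal sketch; the function names and the assumption that both components are scored out of 100 points are illustrative and not taken from the course catalogue.

```python
# Minimal sketch of the assessment arithmetic above; illustrative only.
# Assumes both components are scored on a 0-100 scale.

GRADE_BANDS = [(94, "A"), (86, "B"), (76, "C"), (66, "D"), (56, "E")]

def final_points(coursework, exam):
    """Weighted total: in-class tests/assignments 40 %, final written exam 60 %."""
    return 0.40 * coursework + 0.60 * exam

def assigned_mark(points):
    for threshold, mark in GRADE_BANDS:
        if points >= threshold:
            return mark
    return "FX"  # 0 - 55 points

if __name__ == "__main__":
    total = final_points(coursework=90, exam=80)  # 36 + 48 = 84
    print(total, assigned_mark(total))            # 84.0 C
```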
Learning outcomes of the course unit:
Students gain basic knowledge about data abstraction, abstract data types and other data structures, including stack, queue, tree, graph, and list, together with their specifications and various implementations. In addition, the course deals with the analysis of algorithms for sorting and searching, stressing their complexity.
Course contents:
1. Data abstraction, abstract data types, specification and implementation of abstract data types and their initialization.
2. Introduction into data structures, data types and data structures, overview of data structures: stack, queue, array, table, set, list, tree and graph.
3. Design and implementation of abstract data types, implementation of data structures: string, stack, array, hash table, set.
4. Pointers and dynamic data, pointer data type, concept of dynamic data, allocation and freeing of dynamic memory, dynamic programming.
5. Linked lists, their implementation via arrays and dynamic memory, doubly-linked lists, circular lists, trees and graphs.
6. Recursion: definition, recursive functions, infinite recursion, implementation and complexity of recursion.
7. Complexity analysis of algorithms: memory and operating complexity of algorithms.
8. Sorting algorithms classification. Algorithms with quadratic operating complexity.
9. Sorting algorithms with complexity n log n, special sorting algorithms.
10. The searching problem.
11. Associative searching and search trees.
12. Multidimensional search and multidimensional trees; search algorithms with backtracking.
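As a small taste of the material in items 3 and 5 above, here is a sketch of a stack implemented on top of a singly linked list. The course contents do not prescribe a programming language, so Python is used here purely as illustration.

```python
# Illustrative sketch for items 3 and 5: a LIFO stack backed by a singly linked list.
from typing import Optional

class _Node:
    def __init__(self, value, nxt=None):
        self.value = value
        self.next = nxt

class LinkedStack:
    """Push and pop both run in O(1) time; memory grows by one node per element."""

    def __init__(self):
        self._head: Optional[_Node] = None

    def push(self, value):
        self._head = _Node(value, self._head)  # the new node becomes the head

    def pop(self):
        if self._head is None:
            raise IndexError("pop from empty stack")
        value = self._head.value
        self._head = self._head.next
        return value

    def is_empty(self):
        return self._head is None

if __name__ == "__main__":
    s = LinkedStack()
    for ch in "abc":
        s.push(ch)
    print(s.pop(), s.pop(), s.pop())  # c b a
```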
Recommended or required reading:
Language of instruction: Slovak, English
Notes:
Courses evaluation:
Assessed students in total: 382
Name of lecturer(s):
prof. RNDr. Frank Schindler, PhD. (examiner, instructor, lecturer, person responsible for course)
Ing. Erich Stark, PhD. (instructor)
Last modification:
18. 12. 2019
Supervisor:
prof. RNDr. Frank Schindler, PhD.
Last modification made by Ján Lukáš on 12/18/2019. | https://is.paneurouni.com/katalog/syllabus.pl?predmet=50349;zpet=../pracoviste/predmety.pl?id=47 |
Root Cause Analysis (RCA)– Important Steps
Let’s look now at an important tool used in continuous improvement efforts. The tool is Root Cause Analysis (RCA). We will cover steps you can take when using this tool to help solve problems and resolve issues.
We should note that Root Cause Analysis is a way of finding a fundamental cause for a problem, not just a ways to address a symptom. For example, imagine you are manufacturing plastic cups and scrap 100 of them in a day. A cause of this could be that the technician did not maintain equipment properly. The root cause could be that the maintenance procedure is not clear and the training did not cover maintenance. To fix the problem properly you must maintain the equipment and improve the overall process of documentation and training.
We should also note that Root Cause Analysis, when done properly, is an iterative process to help with continuous improvement in an organization. Root Cause Analysis can use approaches that depend on the application. These can be quality industrial applications, safety concerns, process concerns, business processes, and systems approaches.
General principles
Here are some of the general best practices for Root Cause Analysis:
• Make the aim of Root Cause Analysis to determine how to fix a particular problem and other related problems.
• Use Root Cause Analysis in a systematic way.
• Remember a problem can have more than one root cause.
• Keep cost in mind when determining a fix to a problem.
• Keep in mind a fix needs to be sustained.
• Understand that Root Cause Analysis can cause a change to a culture and resistance from those who will implement the change.
Here are some steps to taking action based on Root Cause Analysis:
1. Define the problem.
2. Collect data.
3. Ask why. This means determining the factors that led to the problem.
4. Determine which factors are root causes and not just symptoms.
5. Identify corrective actions.
6. Identify solutions that will help the problem from recurring and do not cause other problems.
7. Implement the solution.
8. Determine if you can use this solution with other problems.
More guidelines
Remember that Root Cause Analysis is one of the basic tools you should use for continual improvement. Your goal using it is to understand an issue and what is causing it. You can then resolve the issue, not repeat the problem, and improve a process.
Here are some guidelines:
• Use Root Cause Analysis as soon as you recognize a problem.
• Do not wait until a problem becomes severe.
• Use Root Cause Analysis in an iterative way.
• Give priority to the problem that is most urgent.
• Be precise.
• Pick a problem that is solvable.
• Use the 5 Whys technique (a minimal sketch follows below).
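As referenced in the last bullet, here is a minimal sketch of how a 5 Whys chain can be recorded; the example answers reuse the plastic-cup scenario from earlier in this article and are purely illustrative.

```python
# Illustrative 5 Whys recorder; the example chain below is hypothetical.

def five_whys(problem, answers):
    """Print the why-chain and return the last answer as the candidate root cause."""
    print(f"Problem: {problem}")
    for depth, answer in enumerate(answers, start=1):
        print(f"  Why #{depth}? -> {answer}")
    return answers[-1]

if __name__ == "__main__":
    root_cause = five_whys(
        "100 plastic cups were scrapped in one day",
        [
            "the moulding equipment drifted out of tolerance",
            "the technician did not maintain the equipment properly",
            "the maintenance procedure is not clear",
            "the procedure documentation was never reviewed",
            "training does not cover maintenance of this machine",
        ],
    )
    print(f"Candidate root cause: {root_cause}")
```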
When you implement a corrective action, communicate to all involved the:
• reason for the action.
• benefits of the action.
• time needed to implement.
After implementing a solution remember to iterate.
Here are some related best practices:
• Ask if the solution is effective.
• Review the results of the action.
• Modify the action as needed.
• Recognize you may need a different approach if the problem continues.
• Update procedures.
After you update procedures check that everyone is following the procedures. In time revisit the issue to make sure the fix is still working and everyone is following the procedures properly. Remember that training could be involved once you determine a resolution of a root cause problem.
Remember also that when you do a root cause analysis the next step is critical. You must have an effective corrective action plan and a preventive action plan too. Make sure your organization follows the plan, provides proper documentation, trains all involved, and continuously follows up on additional improvements to the process.
We can positively help you plan and change the culture and operations of your organization. We offer Six Sigma Green Belt and Six Sigma Black Belt training programs, as well as root cause analysis training.
SixSigma.us offers both Live Virtual classes as well as Online Self-Paced training. Most options include access to the same great Master Black Belt instructors that teach our World Class in-person sessions. Sign-up today! | https://www.6sigma.us/etc/root-cause-analysis-important-steps/ |
Nona Gaprindashvili. Fighting for Equality
A chapter from Nona Gaprindashvili's 1977 book I Prefer Risk. Many of her thoughts on women's chess still seem to be relevant even now, 40 years later.
I've included some chess games to divide the wall of text somewhat.
Fighting for Equality
Have you ever heard phrases such as "women's chess", "plays like a woman"? I have, many times. Sometimes I heard them from my male opponents, in quite a condescending tone. It never occurred to the people who said them that it was simply impolite. For them, these terms seemed absolutely normal, objectively reflecting the natural (from their point of view) phenomenon - that women chess players are obviously weaker than men.
Yes, women are weaker (at least currently). There are both objective and subjective reasons for that, which are, sadly, mostly ignored.
There's women's track and field, women's volleyball, women's cycling, but there's nothing derogatory or ironic in these terms. Even though discs and shot-putting shots for women are lighter, the volleyball net hangs way lower, and in women's cycling, some very interesting forms of competitions are absent, such as multi-day races.
Therefore, there are some objective physiological peculiarities that make it impossible for the "weaker sex" to compete with men on equal terms in most athletic disciplines.
Well, in chess, these physiological peculiarities of the beautiful half of humanity are just ignored. Like men, women play long tournaments that sometimes last for a full month - with sleepless nights for analysis, with special preparation for each opponent, anxiety attacks, etc. Like men, we get 2.5 hours for 40 moves - so, like men, women get into time trouble and experience the same stresses as the men.
Does that mean that I think that we should introduce some concessions for the female players, such as setting time control after 35 moves rather than 40? No! The most obvious reason is that the decrease in control moves will only lead to more adjourned games.
Female players don't need any concessions. Women, due to their physiological and psychological traits, possess less fighting qualities than the men, but they do have some other qualities to compensate.
Yes, as yet, women can't play chess as strong as men. But if we consider the creative side of things, which is probably the most important in evaluating the chess mastery, playing quality (not playing strength!), the gap between women and men isn't as big. In other words, the gap in playing quality between male and female players is smaller than the gap in playing strength. Some strategical ideas, combinations and tactics of female players are very deep and brilliant.
So, when I say that women obviously can't play chess as strong as men, I mean that this can be explained by the advantages that the men have in other aspects of chess struggle, chess sport, not the intellectual aspect. If one understands that, they would treat women's chess with greater respect - perhaps even with the respect it deserves.
The uninformed people usually think that chess puts only nervous, psychological strain on the players (it's usually considered that women can handle that strain as well as men), not physical. Actually, women are usually more emotional, so they feel more vulnerable under the strain, and for long tournaments with many players, you also simply need physical endurance.
In the book Three Matches of Anatoly Karpov, Botvinnik writes, "Chess is an intellectual, but very intensive work, a chess player must be able to withstand tough strain." Then Botvinnik tells us how much time and strength he spent to analyze the 20th game of his return match against Tal (the game was adjourned twice). He continues, "Can a man withstand such a pressure, if he's not a true chess player? Highly unlikely." And how do the true female chess players feel? Is it somehow easier for them?
No wonder that chess is called an amalgamation of science, art form and sport; in the 1970s, chess became much more of a sport, which somewhat suppressed the artistic, creative side of things.
The tough, even cruel qualification system that works on every level of the chess hierarchy, of course, gives order to the system of chess competitions, and it's very valuable and useful because of that. But the play-off qualification style, where the loser gets knocked out of the competition, makes many players ignore the purely creative goals, become more practical and sometimes risk-averse.
In the atmosphere of such a pressure, where every tournament is a "qualification" for something, where each mistake can be ruthlessly punished, it's very hard, if not outright impossible, for women to compete with men on equal footing. Because there seems to be something in men's biology, in their character that developed in the course of millennia of human history and allows them to withstand this pressure easier, to remain in top condition until the very last move of the very last round, to never relax unnecessarily.
Thankfully, women don't have to qualify through men's tournaments, the two qualification systems are separate. However, in the last years, we've seen quite a few mixed tournaments, even championships (for instance, the U.S. Championship) where men and women play together.
I have noticed that when I played in men's tournaments, I would often make disappointing errors in the last playing hour, particularly in the last 15 to 20 minutes. I made these errors most often when I had gained a big advantage. If I hadn't blundered away wins so often, my results in men's tournaments would have been much higher!
So, am I just unprepared for the fifth hour? No, up until now, I've been in good physical shape, and I didn't make more mistakes during the fifth hour than during any of the other hours.
But I have noticed that even in the most hopeless positions, men defend persistently and fiercely, until the last drop of blood - such is the man's character. Women, on the other hand, tend to quickly stop fighting in lost positions. I lost many a half-point before I finally understood this trait of men's character, and even after that, I would still sometimes let them off easy. This can most probably be explained by the fact that it's much more difficult for a woman with a softer character to remain fully concentrated and ruthless until the very last move.
In short, chess as a sport is as demanding for women as it is for men - and women don't get any concessions even in the mixed tournaments. Nevertheless, female players are fully capable of creative and sporting successes - even against men. But the traditional view of women's play as imperfect and weak has taken such deep root that most men can't even believe that a woman can develop and implement a spectacular idea, or play a brilliant game.
I want to recall an incident that happened during one of the Soviet Union - Yugoslavia matches. While Kira Zvorykina thought about a move for a long time, some grandmasters from our team became angry. There was apparently a simple but pretty combination, and the men fumed: why couldn't Zvorykina see it?
Zvorykina continued to think and ultimately made a much more prosaic move that still led to a win. When the grandmasters smugly showed Zvorykina that combination in post-mortem, she, without even bothering to hide the irony, played out a refutation. Zvorykina understood the position deeper than her rash critics!
I hope that my readers won't think that I'm immodest, but I can also show you an example from my own practice. In 1974, I played in a tournament in Dortmund. In the third round, I played against the German master Servaty. In this game, I played a beautiful combination with a double rook sacrifice. Servaty had to resign at the 17th move due to impending mate.
This win gave me immense creative pleasure. The game was featured in all the world's chess media. Later, Mikhail Tal, the world's best combination expert, rated this game very highly. Another famous grandmaster, writing in the Ogonyok magazine, gave it a lot of compliments, even calling the game "immortal" and saying that "it should feature in any collection of the greatest chess masterpieces."
Of course, this Ogonyok article was very flattering, but still, I felt some resentment behind it. Like, look, a woman can also play beautifully! If a man had played such a combination, I think he would be commended for it, but without such amazement.
By the way, there was also a curious (but very typical!) detail. Before my match against Nana Alexandria, one journalist wrote an article about me, mentioning this game and saying that "Gaprindashvili launched a vicious attack on Servaty and quickly forced her to resign". It seemingly didn't occur to him that I could defeat a man in such a spectacular style!
I must admit that this is not the only and, of course, not the most serious example of inadvertent discrimination against female chess players. There are much worse things: for decades, FIDE treated women's chess with open disrespect. FIDE's motto is "We're all a family". A family they might be, but they retained the Domostroy values for quite a long time.
It's enough to say that women's chess Olympiads started 30 years later than men's, and in the first years, they were held quite irregularly, with long intervals. Only after 1976 (at least thanks for that!) did the number of players per team increase from two to three, with one reserve. (By the way, in the 1974 Soviet book about the history of chess Olympiads, there's not a single word about women's Olympiads, as though they never existed!)
Only in 1975 was a Women's Grandmaster title introduced, even though some female players had defeated the strongest grandmasters in some of their games.
There's still no youth world championship for women, while there's even a European youth championship for men.
Let me show you another example of how women's chess is treated as second-class, not deserving any serious respect. The famous Soviet master Ilya Kan wrote an article for 64 magazine in 1975, remembering the Moscow international tournament of forty years earlier. Recalling Lasker, Kan wrote, "Despite being the oldest participant of the tournament, the veteran in private talks expressed his desire to challenge for the World Championship again. However, he said that he would only agree to play a match if he would have to play for no more than three days a week. Now, this wish might seem strange, because that's the standard current schedule for the Candidates' matches (even for women's)."
Oh, how cute that "even" sounds! That's so great: even the women got the right to play every other day and have one additional rest day per week! Just remember the 1975 Candidates' final between Alexandria and Levitina. The score was level after 12 games, so they had to play until the first win, and they had to play five more games under such colossal strain! I can surely say that no men's Candidates' final was ever so dramatic and intense.
They played 17 games! Let that sink in. And then remember that, for instance, Botvinnik said that 20 games is the optimal number for men's matches. Still, the women withstood this heavy pressure, fighting until their last breath.
Still, even this schedule, which seemed too forgiving for women to the renowned master, forced both opponents to push the limits of their physical and psychological strength. Small wonder: before that, they had played in an Interzonal where they shared 2nd-5th places, then played a double round-robin to eliminate one of those four, and then played semi-finals!
It's not hard to see that the match between Alexandria and Levitina was difficult both for the players and their coaches. At any rate, Nana Alexandria's coach, GM Bukhuti Gurgenidze, was quite pale and haggard when he returned. In an interview given after this drawn-out fight, Gurgenidze said that in his opinion, the challenger for Women's Chess Championship should be determined in a tournament with six or four participants, and grueling series of matches should only remain in men's championship cycles.
If even a grandmaster came to the conclusion that women had to play in more difficult conditions than the men do, then it must be true. I don't want to discuss either Gurgenidze's suggestion or the championship cycle in general here. But it's a fact that the FIDE executives made a mistake when they copied the men's competition system for women (which was, of course, the simplest thing to do).
It's obvious that the multi-tiered qualification system poses excessively difficult problems for female players. In comparison, let me remind you that I got the right to challenge Bykova after competing in just two tournaments: the Soviet championship (which doubled as a zonal) and the Candidates' tournament.
I don't doubt that this mistake of FIDE's will be rectified, but still, it can be explained by the fact that the FIDE bosses can't or don't want to understand that female chess players shouldn't be playing in the same (or even harder) conditions as the male players. I'm not asking for any preferences: I only want the competition organizers to take the physical and psychological traits of women's organisms into account. Who needs excessive strain, after all?
Someone might think that I'm being too vehement here. Perhaps I am. But I see it as my duty as world champion to champion the rights and dignity of all female players.
But still, let's return to the reasons that still don't allow women to reach the same heights in chess as men. One of those reasons is that women have only recently - and only in the socialist countries - achieved equal rights with men. For centuries, only a select few high-class women took some interest (just interest!) in chess. So, women's chess has next to no tradition, next to no history, and that's very important.
Former world champion Bykova did a colossal job by unearthing extensive material about some talented female players of old. But still, those were only individual players. The women's chess movement formed only after World War I. Some nations, however, encouraged women to play chess; it was seen as a sign of gentility, education, and delicate taste. In Georgia, for instance, there's a tradition to include a chess set in the bride's dowry. Still, this obviously can't be compared with the popularity of chess among men.
Women's chess traditions are essentially only forming right now. So it's no wonder that in some countries, women's chess has only recently been introduced, or has still not been introduced at all.
But even when women - in a country such as ours - have the same social rights as men, the conditions are still far from equal. Men usually don't burden themselves with doing home chores, they delegate them all, including raising children, to their wives. And this allows them to fully concentrate on chess, both during their preparation and during tournaments.
Female players lack this privilege. Even when everything is all right at home, when there's someone to care for their children, they still think back to their home and family - sometimes even during games! Should I even explain how this affects playing strength? Remember: "Chess requires full, complete dedication..."
I know many great players, such as GMs Bronstein and Geller, who work 5-6 hours a day on chess; GM Portisch from Hungary works on chess for 8 hours a day - a whole work day! (By the way, Portisch's wife accompanies him to many important tournaments.) There's not a single female player in the world who can afford such a luxury! So, it's no wonder that in theoretical preparation and positional understanding, in endgame play, in general playing technique, men are stronger than women.
For many talented female players, familial obligations and anxieties became the ultimate obstacle to further successes. I am personally very lucky in this regard, but even I, when I leave my home for extended periods of time and go to a tournament, still worry about how my family feels, how my husband Anzor and little son Datiko are doing. And in two men's (and many women's) tournaments, I've had to play without much preparation because of household work and university exams.
There are even more specific reasons that can seem ridiculous and laughable, but they do actually interfere with women's ability to fully concentrate on the game. A male player probably wouldn't notice which suit and tie his opponent wore to the game, while women - we can't do anything about that, nature created us this way - are interested in pretty much everything: the dresses or suits of other players, their brooches, or rings, or whatever. You can laugh, but for female chess masters, however strong and dedicated to the game they are, these details aren't insignificant, and they do provide some distraction.
And still... Readers have probably noticed that I said that women are, for now, playing weaker than men. Yes, despite all the specific difficulties and obstacles, the class of female players constantly grows, and the gap between women and men decreases. This isn't an isolated process or an achievement of singular gifted individuals, but a logical consequence of great societal changes in the world that allow women to equal and even surpass men in many aspects of cultural and spiritual life.
The participation of women in correspondence tournaments is also important. Not so long ago, 15 or 20 Soviet female players were taking part in men's correspondence tournaments; there were no women's tournaments of that type. But now, there are both individual and team Soviet women's correspondence chess championships, with five or six semi-finals, and there's even a world women's correspondence chess championship.
I'm also delighted that women's chess isn't just becoming stronger - it's also becoming younger.
I remember when Nana Alexandria and I played in the Soviet championships at the age of 15, it was a sensation. But now, Maya Chiburdanidze became an International Master at 13 and played in the Soviet Championship; a year later, 13-year-old Nino Gurieli also played in the Soviet Championship with her; then Nana Ioseliani, who's even younger, won a medal at the All-Union Central Council of Trade Unions championship - and all that was regarded as normal.
Because of that, the style of women's chess is changing before our eyes. The girls play with reckless bravery, not fearing anyone, without any reverence for big names, and some of their attacks can be shown as examples to men. This may sound strange, but some young girl players are bolder, more audacious and sharper in their games than boys of the same age.
In 1973, during the USSR - Yugoslavia match, the then 12-year-old Maya Chiburdanidze astonished everyone. Many people saw that she could play a combination against Kalchbrenner after some preparatory moves, but Maya played this spectacular combination immediately, without any preparation!
Note: I don't know whether it's this game Gaprindashvili referred to, but it's still very pretty.
Natasha Ruchyeva, Nana Ioseliani, the aforementioned Gurieli and many other talented girl players are also very brave. "Women's chess" has become much younger, turned into, should I say, "girls' chess", and this has affected its character - it has become much firmer.
I should also say that in saying this, I'm also basing it on my own practice and on the practice of some of the other strongest female players. Nana Alexandria and I quite often play in men's tournaments in Georgia, and sometimes (alas, quite rarely) in international men's tournaments too. I'm absolutely sure that if I played in at least three men's tournaments a year, I would get enough practice in the "men's style" to achieve men's grandmaster norms (Gaprindashvili became a "men's" grandmaster in 1978). Even though I rarely play in men's international tournaments, I've managed to get rid of most women's chess "complexes" and, I hope, earned some respect as a player.
It was quite a task to earn that respect from men, even though I've successfully competed with them many times. For instance, I've played with Geller in Gothenburg and finished third. In Hastings, I won the masters' tournament and earned the right to play in the main tournament, where I finished 5th next year (let's remember that Botvinnik shared 5th-6th in his first Hastings tournament). In Dortmund 1974 tournament, I shared 3rd-4th, and in Sandomir 1976, I finished second, just half a point behind Bronstein.
These are my best results in men's tournaments. But you should bear in mind that my conditions weren't equal to all other players. Not only because men, as I already said, have more stamina, both physical and psychological. Not only because men are always fighting until the last breath, clinging to all chances they get - this rarely occurs in women's tournaments. There are other reasons as well.
For instance, men seem to feel ashamed to lose to a woman, even to the women's world champion. They play against me with all their strength, even risking losing their next game due to exhaustion. That's why, for example, the players who come last in tournaments and lose to grandmasters without much of a fight always play against me with such inspiration as though the fate of the first prize depended on the game. A lot of time passed before I understood why I was losing so many half-points and even full points against relatively weak players.
But it wasn't only the fear of losing to a woman that guided my opponents in men's tournaments. They obviously didn't believe in my skills. Do you think that the grandmasters show any chivalry when they play against women? Not at all! When playing with me, they even tend to forget good chess manners, doing things they would never do if playing against another man.
I'll never forget one grandmaster, a gentleman with immaculate manners. In a completely equal position, he played on against me for hours, down to the bare kings.
In Gothenburg, Spassky adjourned a game against me with some microscopic winning chances. After the first play-off, his chances became purely symbolical, and I'm sure that Spassky would have agreed to a draw with any other player. During the second play-off, I just had to be accurate. For two hours, I held very well, but then, either because of tiredness or resentment, blundered horribly and lost.
Perhaps my most principled rival was Victor Ciocâltea. He won an equal game against me at the Tbilisi international tournament. At a luncheon after the tournament, I publicly promised to him that I'll avenge my loss, and he replied, half-jokingly, half-seriously, that he'll never lose to a woman in his entire life. Later, we met at a tournament again.
He usually begins the game with the move of the King's pawn, but this time, he suddenly played 1. f4. I looked at him with a perplexed smile, and Ciocâltea said quite rudely, "Over the board, you have to think, not play out some prepared variations." His words offended me, and I've never forgotten them; Ciocâltea later said, however, that he was afraid of the Scandinavian Defence after 1. e4. Still, I lost the second game too, and the desire for revenge became even stronger.
And so, at the Vrnjacka Banja tournament, Ciocâltea again played a different opening against me, beginning with 1. Nf3. I asked him, "So, you want to 'think over the board' again?" I accepted the challenge; after the 15th move, White's position was so difficult that Ciocâltea... offered a draw. I, of course, refused that kind offer and won the game, finally keeping my word.
The ambitious Ciocâltea avoided me for three days after that, he didn't even greet me, and this became a subject of many jokes from other players. Then we finally mended fences, and Ciocâltea started to treat me seriously as an opponent after that.
There was another obstacle that adversely affected my performance in men's international tournaments. Most participants, at the very least, knew my women's world championship games, so they knew at least something about my playing style. I, on the other hand, had to learn the playing styles of most of my opponents during the games themselves. I only got an opportunity to prepare specifically for every opponent after I started to play in men's tournaments together with my coach GM Aivars Gipslis, a deep theoretician. After getting access to his immense database of games, I finally breathed a sigh of relief, stopped wandering blind and understood how much of an advantage I had been giving my male opponents.
Talking about women playing in men's tournaments and women's chess evolution, I should also mention the first women's world champion and the first female player to successfully play in men's tournaments. Vera Menchik, who was women's world champion from 1927 to 1944, was way ahead of her time and played an invaluable role in popularizing chess among women.
Menchik, the daughter of a Czech man and an English woman, was born in Moscow in 1906 and lived there until autumn 1921, before departing for England with her parents. She knew the Russian language very well. Interestingly, at the first women's chess tournament in 1927, Menchik, since she came from Soviet Russia and spoke Russian, was listed in the table as representing Russia.
Vera Menchik retained her love for our country until the end of her life, and commanded immense respect here. When she played in the 1935 Moscow International tournament, the spectators loudly supported the women's world champion. And when Menchik drew against GM Flohr, who fought for the first place, she received a lot of telegrams with congratulations from many of our cities.
Menchik won seven Women's World Championships, four of them with a perfect score. It's little wonder that, finding no equal in women's tournaments, she turned her attention to men's competitions. According to Elizaveta Bykova's count, Menchik played nine games against Capablanca, eight against Alekhine, five against Euwe, and two against Botvinnik; she also had games against Flohr, Keres, Fine, Reshevsky, Tartakower, Vidmar, Maroczy, Lilienthal, Najdorf and other grandmasters and masters. She played in a total of 50 men's tournaments (I'm saying that with envy!) and played 487 games - 147 wins, 147 draws, 193 losses.
People sometimes ask me how I would perform in a match against Vera Menchik. Some flatterers went as far as to proclaim that Gaprindashvili was undoubtedly stronger than Menchik. Even the late master Vasily Panov couldn't resist the temptation and wrote an article "Menchik vs. Gaprindashvili: The Match", even though he criticized us both quite severely.
I think that all those comparisons and discussions are ridiculous. Vera Menchik and I played in very different periods of chess history, and you cannot compare our playing level, because we have never even played against the same opponents (actually, both Menchik and Gaprindashvili played Paul Keres, but Gaprindashvili referred specifically to female opponents - Sp.), let alone against each other. I think that there's a place in chess history for us both, and we don't have to compete for it.
Perhaps the future chess historians will find some objective criteria to directly compare the playing strength and quality of female players of different eras. But Vera Menchik still has one achievement that no-one will ever replicate. She was the first woman to prove that even in such a sporting and intellectual sphere as chess, women can achieve the same successes as men.
Speaking without any false modesty, I think that I, with my playing style, attacking tendencies and some wins against strong male opponents, also played some part in making the female chess masters' chess thinking richer, their playing styles - braver, and making them feel less anxious and inferior when facing men.
In earlier times, female players did everything to protect their king - safety first. They instinctively avoided double-edged positions where everything is unclear, where even the smallest carelessness (or extreme carefulness!) could lead to immediate catastrophe. Now, such players as Nana Alexandria or her Candidates' final opponent Irina Levitina play in a style that's a lot sharper than that of some male masters.
Tatiana Zatulovskaya is an example of a creative approach to chess. Her deep positional understanding is organically coupled with combinational talent. If Zatulovskaya weren't so womanly emotional and didn't waste so much nervous energy after losses, she would surely have played a World Championship match.
Unlike Zatulovskaya, who lacks the necessary sporting qualities, Levitina is a true sportswoman. Her understanding of chess is very mature, and at the same time, her playing style is bold, confident and sharp, with an emphasis on the middlegame. Levitina is very talented, but, as sometimes happens with gifted people, she's not diligent enough. If she manages to overcome this serious flaw and dedicates herself to determined chess work, Levitina would also surely challenge for the world championship.
Other female players who have adopted the "men's style" include Elena Fatalibekova (Olga Rubtsova's daughter), who was among the leaders in the Central Chess Club men's championship and ultimately shared fifth place, the Hungarians Zsuzsa Verőci and Maria Ivanka, and many others.
So, the time of so-called "women's chess" is gone! And I'm proud of helping to promote women's creative emancipation in chess, helping them to overcome psychological barriers that separated them from "men's" chess... | https://www.chess.com/blog/Spektrowski/nona-gaprindashvili-fighting-for-equality |
This week your assignment will be focusing on conducting a community assessment and writing a health policy brief. You will need to have a community health provider to interview. The interview does not have to be in person but needs to be with a community health provider that works in the community being assessed. You will use the Functional Health Patterns document provided in the Study Materials Folder to conduct a “windshield” survey of your community. Note: you will need to do some research to find supportive data for many components of the Functional Health Patterns that cannot be determined through a windshield survey. Ex. Child abuse rates, STI rates, Obesity %, etc. The Functional Health Patterns data is part of your PPTX. Speaker notes should be provided for content slides.
The RN to BSN program at Grand Canyon University meets the requirements for clinical competencies as defined by the Commission on Collegiate Nursing Education (CCNE) and the American Association of Colleges of Nursing (AACN), using nontraditional experiences for practicing nurses. These experiences come in the form of direct and indirect care experiences in which licensed nursing students engage in learning within the context of their hospital organization, specific care discipline, and local communities.
This assignment consists of both an interview and a PowerPoint (PPT) presentation.
Assessment/Interview
Select a community of interest in your region. Perform a physical assessment of the community.
1. Perform a direct assessment of a community of interest using the “Functional Health Patterns Community Assessment Guide.”
2. Interview a community health and public health provider regarding that person’s role and experiences within the community.
Interview Guidelines
Interviews can take place in-person, by phone, or by Skype.
Develop interview questions to gather information about the role of the provider in the community and the health issues faced by the chosen community.
Complete the “Provider Interview Acknowledgement Form” prior to conducting the interview. Submit this document separately in its respective dropbox.
Compile key findings from the interview, including the interview questions used, and submit these with the presentation.
PowerPoint Presentation
Create a PowerPoint presentation of 15-20 slides (slide count does not include title and references slide) describing the chosen community interest.
Include the following in your presentation:
1. Description of community and community boundaries: the people and the geographic, geopolitical, financial, educational level; ethnic and phenomenological features of the community, as well as types of social interactions; common goals and interests; and barriers, and challenges, including any identified social determinates of health.
2. Summary of community assessment: (a) funding sources and (b) partnerships.
3. Summary of interview with community health/public health provider.
4. Identification of an issue that is lacking or an opportunity for health promotion.
5. A conclusion summarizing your key findings and a discussion of your impressions of the general health of the community.
While APA style is not required for the body of this assignment, solid academic writing is expected, and documentation of sources should be presented using APA formatting guidelines, which can be found in the APA Style Guide, located in the Student Success Center.
This assignment uses a rubric. Please review the rubric prior to beginning the assignment to become familiar with the expectations for successful completion. | https://essaysolving.com/community-assessment-and-writing-a-health-policy-brief/ |
NWS Weather Alert
NOTE: This information is provided by the National Weather Service. Forecast may differ from local information provided by our own 69News Meteorologists
...HEAT ADVISORY IN EFFECT FROM NOON WEDNESDAY TO 8 PM EDT
THURSDAY...
* WHAT...Heat index values to around 100 on Wednesday and up to
104 Thursday.
* WHERE...In New Jersey, Sussex, Warren, Eastern Monmouth and
Coastal Ocean. In Pennsylvania, Berks, Lehigh and Northampton.
* WHEN...From noon Wednesday to 8 PM EDT Thursday.
* IMPACTS...Hot temperatures and high humidity may cause heat
illnesses to occur.
* ADDITIONAL DETAILS...An extended period of hot and humid
conditions is expected Wednesday and Thursday and may continue
through Friday. Overnight low temperatures in the low to mid
70s will not provide much relief from the heat. The hottest
period is expected Thursday and Thursday night.
PRECAUTIONARY/PREPAREDNESS ACTIONS...
Drink plenty of fluids, stay in an air-conditioned room, stay out
of the sun, and check up on relatives and neighbors. Young
children and pets should never be left unattended in vehicles
under any circumstances.
Take extra precautions if you work or spend time outside. When
possible reschedule strenuous activities to early morning or
evening. Know the signs and symptoms of heat exhaustion and heat
stroke. Wear lightweight and loose fitting clothing when
possible. To reduce risk during outdoor work, the Occupational
Safety and Health Administration recommends scheduling frequent
rest breaks in shaded or air conditioned environments. Anyone
overcome by heat should be moved to a cool and shaded location.
Heat stroke is an emergency! Call 9 1 1.
SILVER SPRING, Md., Aug. 10, 2021 /PRNewswire/ -- As you prepare to send your children back to school, check out the new Nutrition Facts label. The Nutrition Facts label has been updated for the first time in more than 20 years to reflect updated scientific information and new nutrition research. Use the label to compare packaged foods and beverages as you make lunches, pack snacks, and prepare meals for your loved ones.
Understanding and using the Nutrition Facts label:
This fact sheet: Understanding and Using the Nutrition Facts Label was developed by the U.S. Food and Drug Administration (FDA) in collaboration with the American Academy of Pediatrics. The fact sheet explains how you can use the Nutrition Facts label to make informed food choices that contribute to lifelong healthy eating habits. Some tips to get the whole family involved:
Lunchtime is a great time to read the label. Make it a family habit when packing lunches to look at the Nutrition Facts label on packaged foods and drinks…and remind your children to check out the label in the school cafeteria.
Measure out single servings of snacks. When your kids reach for their favorite snacks, challenge them to measure out what they think is one serving. Then have them measure out the serving size according to the label. Keep single servings in resealable plastic bags or containers so you can quickly grab-and-go!
Make the shopping list together. Have your children read the label on food and beverage packages in your pantry and refrigerator and add items to your family's shopping list that are higher in nutrients to get more of and lower in nutrients to get less of.
Go food shopping together. Take your kids grocery shopping! It's a great chance for them to read the label and compare and contrast their favorite foods and drinks.
Encourage your kids to learn more about the Nutrition Facts label. The Snack Shack Online Games is a cool place where kids ages 9–13 can play two games that test their knowledge about using the Nutrition Facts label and making healthier snack choices. Parents can also check out the Read the Label Youth Outreach materials with fun tools and tips to explore the Nutrition Facts label together.
|
How Will Brexit Affect Oil Prices? Market Takes A Hit After EU Referendum, But It’s No Long-Term Blow
Crude oil prices dipped Friday as fear and uncertainty roiled financial markets in the wake of Britain’s decision to exit the European Union. But the drop is likely to be a temporary stumbling block for the steadily strengthening oil industry, analysts said.
Global benchmark Brent crude sank by more than 6 percent early Friday to below $48 a barrel, although its price recovered slightly to $48.50, representing a fall of 4.75 percent. U.S. benchmark crude was down 4.49 percent to $47.86 a barrel at 2:03 p.m. EDT.
Oil prices are still almost 80 percent higher than they were in the first quarter of 2016, when Brent and U.S. crude dropped to 13-year lows. Widespread disruptions in the oil supply and signs of rising demand are steadily chipping away at the global crude glut that’s weighed on markets the past two years.
While the Brexit vote Thursday may make oil prices more volatile in the short term, the U.K.’s decision is unlikely to drastically disrupt the oil market’s fundamental drivers, production and consumption, said Robert Johnston, CEO of Eurasia Group, a political risk research and consulting firm. “The market is mostly focused on what the Saudis are doing and what’s happening in the U.S. shale sector,” he said. “Brexit is less of a factor.”
Surging output by Saudi Arabia and U.S. shale oil producers in recent years helped fuel the supply glut that ultimately pushed down prices. Saudi officials in the last year have resisted calls by fellow OPEC members to cap or freeze production in a bid to boost oil prices. The oil-rich kingdom instead has protected its market share, in turn driving out some of its competition from the American oil patch.
U.S. crude production has steadily dropped in recent months as low oil prices forced drillers to cancel projects, delay tapping new wells or slash investments. Domestic production fell 39,000 barrels a day last week to 8.7 million barrels a day — about 1 million barrels a day below the peak of 9.7 million barrels a day in April 2015, the U.S. Energy Information Administration reported this week.
The drop in U.S. production is adding to significant supply disruptions in Canada, Libya and Nigeria. Massive wildfires in Canada's oil sands region in May knocked out as much as 1.5 million barrels a day in capacity as producers shuttered facilities and workers fled for safety. Attacks on Libya's key production facilities have delayed the country's attempts to restore oil output to pre-2011 levels. And in Nigeria, militant attacks on facilities and pipelines in the oil-rich Niger Delta region pushed oil production down to 30-year lows.
Global oil supplies in May fell by 0.6 percent to 95.4 million barrels a day compared to a year earlier, marking the first significant drop since early 2013, the International Energy Agency (IEA) reported June 14.
In contrast, oil demand nudged upward on signs that lower fuel prices were spurring higher consumption in emerging economies. Global oil demand grew an estimated 1.6 million barrels a day in the first quarter of this year over the same period last year, the IEA said.
“At halfway in 2016, the oil market looks to be balancing,” the Paris-based energy agency said in its monthly oil market report last week.
The world’s oil markets won’t be completely immune to any financial fallout from the Brexit. The referendum result could weaken oil demand by hampering Europe’s economic growth, and it may slow investment in the U.K.’s North Sea oil and gas operations.
But Europe isn’t the main driver of global oil demand growth, and the North Sea accounts for a small and shrinking share of the world’s total oil production. More likely to influence oil markets are the chances that U.S. oil production could quickly recover and the broader questions around Saudi Arabia’s long-term oil strategy.
The number of U.S. drilling rigs has risen for nearly three straight weeks as oil prices climb, fueling concerns that American producers could flood the still-oversupplied crude market. Meanwhile, Saudi Arabia has launched an ambitious agenda to diversify its oil-dependent economy and sell off stakes in its state-owned oil giant Saudi Aramco.
“Those are the two most important stories,” said Eurasia Group’s Johnston.
| https://www.ibtimes.com/how-will-brexit-affect-oil-prices-market-takes-hit-after-eu-referendum-its-no-long-2386507
Example Finance Essay
Financial Ratio Analysis - Harry's Hamster Limited
Financial statements are useful as they can be used to predict future indicators for a firm using the financial ratio analysis. From an investor's perspective financial statement analysis aims at predicting the future profitability and viability of a company, while from the management's point of view the ratio analysis is important as it helps anticipate the future conditions in which the firm should expect to operate and facilitates strategic decision making (Brigham and Houston 2007, p. 77).
Profitability analysis
Harry's Hamster Limited (HHL) experienced growth in its profitability from 2007 to 2008; however, net income fell significantly during 2009. The return on equity (ROE) was 4.24 percent in 2007, increased to 14.68 percent in 2008 and decreased back to 5.10 percent in 2009. Similarly, the return on assets (ROA) also initially increased and later declined in 2009; the decline was sharper than the decline in ROE, as the ROA in 2009 of 1.73 percent is lower than the 2.08 percent of 2007. The ROE comprises two main components: the return on net operating assets (RNOA) and the return on debt (ROD). RNOA for HHL also deteriorated during 2009, decreasing from 16.61 percent in 2008 to 5.08 percent in 2009. The RNOA is used to weigh the overall performance of the HHL management. The ROD component of the ROE has also deteriorated, from 13.68 percent in 2008 to negative 3.32 percent in 2009 (Kemsley 2009, pp. 12-16).
The ROCE was the highest in 2008, at an estimated 11.39 percent. This implies that the capital employed by HHL yielded high returns before the expansion period and that the company was significantly profitable. The considerable decline to 4.82 percent in 2009 could look unfavourable to investors; however, as the company has not sold its shares to the public, a reduction in this ratio for a temporary period is not a major concern for the current owners.
The operating profit margins for HHL initially increased from 10 percent in 2007 to 17.45 percent in 2008; however, the company reported lower margins of 8.53 percent in 2009. The decline in the operating profit margins of HHL is largely attributed to the increase in costs associated with the expansion of the business. The operating margins are expected to recover over the next year, assuming that the new operations will become profitable as sales increase. The cost of goods sold has increased in absolute terms, but the overall gross profit margin has improved from 35 percent in 2007 to 42.01 percent in 2009. This implies that the company is effectively managing its relations with suppliers and has kept control over the costs of buying the hamsters for breeding, but the operating costs have increased due to the low sales activity in the new operations.
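For readers who want to check figures like these themselves, the arithmetic behind the headline profitability ratios is simple. The sketch below (Python) is purely illustrative: HHL's underlying statement figures are not reproduced in this essay, so the inputs are hypothetical placeholders chosen only to be consistent with the 2009 ratios quoted above.

```python
# Illustrative profitability ratios. The inputs are hypothetical placeholders,
# not HHL's actual accounts - they are chosen only to be consistent with the
# quoted 2009 figures (ROE ~5.10%, ROA ~1.73%).

def return_on_equity(net_income, shareholders_equity):
    """ROE = net income / shareholders' equity."""
    return net_income / shareholders_equity

def return_on_assets(net_income, total_assets):
    """ROA = net income / total assets."""
    return net_income / total_assets

def operating_margin(operating_profit, revenue):
    """Operating profit margin = operating profit / revenue."""
    return operating_profit / revenue

net_income, equity, assets = 50_000, 980_000, 2_890_000  # hypothetical
print(f"ROE: {return_on_equity(net_income, equity):.2%}")  # ~5.10%
print(f"ROA: {return_on_assets(net_income, assets):.2%}")  # ~1.73%
```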
Liquidity analysis
The current ratio of HHL remains above the minimum threshold of one and is currently 1.22; historically, the ratio had remained between 2.73 and 3.25 times. However, the quick ratio for the company reveals serious concerns, as it has decreased from 1.67 in 2008 to 0.22 in 2009. The low quick ratio implies that a considerable portion of the company's current assets is tied up in inventory (Bragg 2007, pp. 14-16). This could also mean that HHL might be unable to sell the hamsters and that sales might be suffering. The company must increase its working capital to meet its near-term current liabilities and retain its solvency (Brigham and Houston 2007, pp. 42).
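A short sketch makes the inventory effect on the two liquidity measures explicit. The balances below are again hypothetical (HHL's actual balance sheet is not given here) and are chosen only so that the outputs match the 1.22 and 0.22 figures quoted above.

```python
# Illustrative liquidity ratios with hypothetical balances.

def current_ratio(current_assets, current_liabilities):
    """Current ratio = current assets / current liabilities."""
    return current_assets / current_liabilities

def quick_ratio(current_assets, inventory, current_liabilities):
    """Quick (acid-test) ratio excludes inventory, the least liquid current asset."""
    return (current_assets - inventory) / current_liabilities

ca, inv, cl = 610_000, 500_000, 500_000  # hypothetical balances
print(f"Current ratio: {current_ratio(ca, cl):.2f}")     # 1.22
print(f"Quick ratio:   {quick_ratio(ca, inv, cl):.2f}")  # 0.22
```

The gap between the two numbers is the point of the analysis: once the slow-moving hamster inventory is stripped out, very little liquid cover remains for the near-term liabilities.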
Efficiency analysis
The firm's efficiency has not necessarily decreased during the last year; an analysis of the efficiency ratios suggests a trend that differs from what is seen in the profitability and liquidity ratios. The inventory turnover has slightly deteriorated from 3.00 in 2007 to 2.89 in 2009, similarly pushing the days inventory on hand from 121.67 to 126.35 over the same period. The long inventory holding period suggests that the company needs to improve its liquidity position; to maintain its efficiency it should aim to shorten its inventory holding period significantly (Brigham and Ehrhardt 2008, pp. 57-62). The days of accounts receivable have fallen from 45.63 in 2007 to 40.05 in 2009, and at the same time the days of accounts payable have fallen even more drastically, from 40.56 to 28.08. The operating asset turnover for HHL has deteriorated considerably from 0.87 in 2007 to 0.60 in 2009, owing to the long inventory holding period and quick payment of accounts payables.
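The efficiency measures follow the same pattern - a turnover ratio and the number of days it implies. The inputs below are hypothetical, chosen only to reproduce the 2.89 turnover and roughly 126 days on hand quoted for 2009.

```python
# Illustrative efficiency ratios with hypothetical figures.

DAYS_IN_YEAR = 365

def inventory_turnover(cost_of_goods_sold, average_inventory):
    """Inventory turnover = cost of goods sold / average inventory."""
    return cost_of_goods_sold / average_inventory

def days_inventory_on_hand(turnover):
    """Days inventory on hand = 365 / inventory turnover."""
    return DAYS_IN_YEAR / turnover

def days_receivable(accounts_receivable, credit_sales):
    """Days sales outstanding = receivables / credit sales * 365."""
    return accounts_receivable / credit_sales * DAYS_IN_YEAR

turnover = inventory_turnover(cost_of_goods_sold=1_445_000, average_inventory=500_000)
print(f"Inventory turnover:     {turnover:.2f}")                          # ~2.89
print(f"Days inventory on hand: {days_inventory_on_hand(turnover):.1f}")  # ~126.3
```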
Capital structure analysis
The capital structure has changed significantly over the past two years, as HHL has increased its financial leverage and is using considerable debt to finance its expansion activities. The debt ratio of the firm has increased from 0.47 in 2007 to 0.60 in 2009, implying that HHL is now funding 60 percent of its assets through debt (Berry 2006, pp. 68-71). The interest coverage ratio of the company had improved considerably in 2008, reaching 4.29, but it has since deteriorated to 1.89, raising additional concerns for the banks. The ROD for the company has fallen considerably but remains positive, implying that the current level of financial leverage is still generating additional returns for the company. Operating cash flows (OCFs) for the company remain negative, which is typical of young firms experiencing a high growth rate, but the ability of HHL to raise additional financing is limited; therefore, negative OCFs raise serious concerns for the bank management.
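The gearing measures can be sketched the same way. Once more, the inputs are hypothetical figures consistent with the quoted debt ratio of 0.60 and interest cover of 1.89, not HHL's actual accounts.

```python
# Illustrative capital-structure ratios with hypothetical figures.

def debt_ratio(total_debt, total_assets):
    """Debt ratio = total debt / total assets."""
    return total_debt / total_assets

def interest_coverage(operating_profit, interest_expense):
    """Interest coverage = operating profit (EBIT) / interest expense."""
    return operating_profit / interest_expense

print(f"Debt ratio:        {debt_ratio(1_734_000, 2_890_000):.2f}")   # 0.60
print(f"Interest coverage: {interest_coverage(85_000, 45_000):.2f}")  # ~1.89
```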
Report to credit committee
Analysis for reasons of results
HHL has a long-term debt facility of £0.45 million and has also drawn an overdraft of about £35,000 on its current facility. The company performed exceptionally well during 2008, which recently led to an increase in its debt facility from £0.275 million to £0.45 million. The recent financial results revealed a tightening credit position during 2009, which led to concerns regarding the company's excess usage of the overdraft facility. Recent communication with the company reveals that it is facing liquidity problems due to its ambitious expansion programme; however, the problem can be solved, depending on the ability of the management to realise the seriousness of the situation (Madura 2006, pp. 17-32).
The company is running an overdraft without any immediate plan for paying back this short-term loan. The overdraft is being used to fund the working capital needs of the company, which it did not anticipate during its expansion into southern England. The success or failure of the new operations is yet to be seen, and the position will only become clear by next year. The current assets are largely financing the inventory requirements of the company, while the inventory cycle is long and cannot be liquidated at short notice. The company needs to introduce additional capital in order to solve its working capital problems.
The working capital position of HHL can also improve by increasing the days of accounts payable ratio to higher levels or by reducing the inventory cycle if possible (Myers 1984, pp. 126-128). However, both options seem unlikely leading us to prescribe alternative solutions. The company has seen deterioration in the profitability ratios, which has reduced its ability to pay the interest commitments on the outstanding loan. However, the company still maintains an interest coverage ratio of 1.89 and should be able to regain its position once the new operations become profitable.
The efficiency ratios of the firm have remained relatively stable with a slight decrease in the inventory turnover, an improvement in the accounts receivables turnover and a significant drop in the operating assets turnover. The company maintains a high debt ratio and about 60 percent of its assets are funded using debt; however, this is typical of most firms under the initial expansion phase.
The company remains committed to making profits but has not considered rising outside capital by going public in the near future; the only way to maintain its current pace of growth will be either through an injection of personal equity or through the offering of company stock to the public (Ronen and Yaari 2007). The owners have invested most of their life savings into the business and the company cannot possibly raise any further internal financing.
Recommendations regarding bank arrangements
The credit committee is recommended to raise concerns regarding the current liquidity position of the company and to prepare a schedule for the repayment of the overdraft amount over the next six months. The company is expected to recover from the current situation during the next year, but it is important to remain cautious until the sales position appears to improve. Applying a degree of pressure on the management should also clearly communicate the bank's position to the firm (Gibson 2009, pp. 212-216). The intention is to educate the company management about the gravity of this situation and to ensure that it is able to recover smoothly from the liquidity crunch, while at the same time minimising the bank's exposure to the business risk HHL is facing.
The Managing Director of HHL is consistent in maintaining regular contact with the bank; therefore we need to educate him with the possible solutions for recovering from the credit crunch faced by the company. The recommended solutions include a consolidation of the business before considering any further expansion projects, a reduction in the days inventory on hand, increase in the days accounts payables, the retention of profits into the business allowing for no dividend payments over the next quarters, an injection of equity from any other sources available, an increase in collateral to support the bank's claims and a phasing out of the bank overdraft over the next six months as revenues from the sales are realised (Harvard Business School 2006, pp. 3-12).
Recommendations to management about improving finances of the company
Mr. Michael,
Thanks for a quick response pertaining to the overdraft issue. We have analysed the situation faced by HHL based on the recent financial statements and the qualitative information that we received during our recent correspondence. It is understood that your company has recently gone a major expansion and the short-term impacts are apparent on the financial results in terms of lowered profitability as anticipated. The concern raised by the bank is not directly related to the profitability of your company and we remain concerned about the liquidity position of HHL in months to follow (Bissessur 2008, pp. 142-146).
The understanding between the bank and the company was that the expansion will be fully funded by the increase in the loan facility. This increase in loan was to support both the fixed investment in the expansion project as well as the working capital needs of HHL. However, as it is seen the actual expansion investment has exceeded the anticipated amounts and the company is facing a severe liquidity crunch that needs to be resolved.
The credit committee is concerned regarding the profitability of the expansion project and is not prepared to enhance the overdraft limit until the latest results for the company become available. HHL will have to solve this liquidity crunch independently, either through an injection of equity to facilitate the increased working capital requirements or by raising additional external capital. The company's intention to continue with its expansion projects can best be facilitated through a public listing of the company to raise additional capital (Hill and Jones 2009, pp. 28-29).
The bank would require the company to pay the entire overdraft drawn in instalments over the next six months. This payment schedule has been drafted after a careful consideration of the credit history of your firm with the bank; in usual circumstances we would have required the repayment of the whole overdraft instantly. Moreover, it must be understood that this correction is in the best interest of your company as it serves to facilitate your understanding of the gravity of the situation faced by HHL.
A large proportion of the current assets held by HHL are tied up in the inventory and the company has no cash reserves available to pay for the maturing current liabilities including the bank's interest payments. It is important to understand that the company would have filed for bankruptcy if the current overdraft was not available. Therefore, it is a very serious concern which should be resolved as soon as possible (Capon 1990, p. 1145).
The company can adopt some emergency measures to immediately improve its cash position, including a maximum delay in the payment to creditors that might be possible without significantly harming the supplier relations, a quicker recovery of accounts receivables without significantly harming the sales position and an immediate sale of ready inventory on a cash payment discount (David 2006; Ebert and Griffin 2005). Moreover, the company must not withdraw any retained earnings in the form of dividends until the liquidity position is resolved.
Waiting for your response,
Nick Cameron
| |
UWCFA helps individuals, families and corporate and community groups find flexible volunteer opportunities with Cape Fear Area service organizations. UWCFA volunteers are at work every day of the year, building community and meeting critical needs in schools, shelters, senior centers, food banks, low-income neighborhoods and more.
If your group would like UWCFA to coordinate a volunteer project, please contact Phillip Hedgepeth at [email protected].
One of the best ways to accomplish three goals – develop leadership skills, make new connections, and volunteer in the community – is to join a board of directors. To help more women in our region join boards, WILMA’s Women to Watch Leadership Initiative is partnering with United Way of the Cape Fear Area and QENO (Quality Enhancement for Nonprofit Organizations).
The Get On Board! program offers training to potential board members and helps match them with board opportunities based on their skills and interests.
QENO, which will help train potential board members, is a partnership between UNCW and others in the community to assist nonprofit organizations. The two-hour “Get On Board!” Training will cost participants $20.
United Way will then help connect individuals with potential board opportunities. In the future, we plan to add connections to help people join government and for-profit boards. | https://www.uwcfa.org/volunteer-locally/ |
Mental health is a diverse topic, and there's a wide range of mental disorders that affect people (some of whom are not even aware of it). In this article I want to address the most common disorders: a wide range of conditions that affect mood, thinking and behavior.
Most common types: clinical depression, anxiety disorder and bipolar disorder
1. CLINICAL DEPRESSION
Also called: Major depression
A mental health disorder characterized by persistently depressed mood or loss of interest in activities, causing significant impairment in daily life. The persistent feeling of sadness or loss of interest that characterizes major depression can lead to a range of behavioral and physical symptoms. These may include changes in sleep, appetite, energy level, concentration, daily behavior or self-esteem. Depression can also be associated with thoughts of suicide. | https://epilepsycerebralpalsy.com/2021/04/30/mental-health-3-common-disorders-you-should-be-aware-of/ |
Thousands of years ago all human beings hunted and gathered – today only a few remain of the hunting and gathering societies. The survival of these indigenous minorities is seriously threatened by the greed and insensitive economic requirements of those who rule and administer their land. The Wanniyala Aetto [also known as Veddahs in Sinhalese] of Sri Lanka and the Jarawas, who live in The Andaman Islands belonging to India, are the First Citizens of their respective habitats – they are the original Forest beings – people who understand and respect their environment as no other ‘progressive’ and ‘civilized’ group does. Their numbers are fast dwindling and with them will die the superior knowledge of their flora and fauna, their spiritual traditions, rituals, ceremonies, their social order, their expertise in indigenous medicine, and of course their language.
Indigenous societies such as the Wanniyala Aetto and the Jarawa have always lived in the same place for generations – forest is their home and animals and birds their neighbours and friends. They give back to the forest what they take from it. Unfortunately, these societies have been marginalised by political and economic greed, and their freedom violated.
The Wanniyala Aetto and the Jarawa and other tribes of the Andaman Islands have been through an almost similar cycle of history and a similar social exercise in rehabilitation, at a very high cost. They have survived waves of migrants and colonists but fallen prey to Government policies which looked upon them as 'primitive' and in dire need of 'development'. The development policy of the Governments meant encroaching on their traditional hunting grounds, clearing the forests to settle thousands of migrants, relocating the indigenous people to 'settlements', splitting communities that had always lived together, and introducing them to an alien way of life, language and religion. Such changes have impacted their physical and mental health. Contact with non-indigenous people exposed these groups to diseases to which they had no resistance. An epidemic of measles last year wiped out ten percent of the Jarawa population. [There are only 300-400 Jarawas.] Alcoholism, obesity, diabetes and depression are other ailments which are now appearing among those who have been 'relocated' to 'civilisation'.
Most indigenous societies are highly evolved groups, that have, over thousands of years, developed a symbiotic relationship with their environment and live in close harmony with nature. Land is sacred. The Wanniyala Aetto, who had lived in their forest abode for time immemorial, clear and cultivate small plots of land within the forest for 1 or 2 years and then let the land rest for 7 to 8 years. They gather forest produce such as honey, plants, roots and hunt for jungle fowls and fish. Similarly, the Jarawa, who have lived in their rainforest home forever, hunt wild pigs, monitor lizards, fish and gather fruits and berries. Their lives are synchronised with their environment. More they do not need. | http://the-south-asian.com/July-Aug2000/First_People1.htm |
Passion with sincerity leads to greatness. This founding principle of Daio represents the basis for all our decisions, leading us to enrich lives around the world by taking on new challenges daily with sincerity and passion.
Our dedication to society and local communities drives us to innovate and deliver new value born from attention to regions, resources, and realizations.
Our attention to individual cultures and regions drives us to contribute and work in harmony with local communities, demonstrating our standing as good corporate citizens.
We will strive to maintain a diverse and friendly corporate culture that offers new challenges and a sense of security and trust to employees.
We will actively work to grow organically, solving environmental problems and realizing a sustainable society for the world over.
Supply high-quality and value-added products and services. As a manufacturer, we are most familiar with our customers’ needs around the world. So, what is made by Daio is sold only by Daio sales representatives, continuing in our founder’s spirit and giving us a direct connection to our customers to serve their needs and build trust.
Respond to all our stakeholders’ needs, including customers, partners, shareholders, society and the global community. We will be agile and flexible to respond to sudden changes in the management environment, and we will work to grow our business in a way that is sustainable while strengthening our management foundations.
Be good corporate citizens and earn the trust of the world where we work. We will take part in activities that contribute to society, including volunteer work, sporting events, and cultural activities, to grow together with the countries and regions and contribute to growth and development.
Work safely and energetically. We will continue to maintain safe and vibrant workplace environments that offer employees challenges and growth potential.
Respect diversity and personalities of employees and coworkers. We will strive to foster an environment that allows every employee to achieve their highest potential. We nurture employees who reflect our roots as a small company and understand the value of taking on responsibilities outside their sphere of work: Employees who act with consideration, good judgment, and proactivity.
Respect the laws of each country and region as well as international standards. We will conduct all corporate activities with consideration for cultures and customs while championing the advancement of lifestyles, industries, and cultures around the world.
Conserve biodiversity and contribute to the global environment. We will aim to reduce CO2 emissions and promote energy savings and recycling as per the DAIO Global Environment Charter. | https://www.daio-paper.co.jp/en/company/policy/ |
The JD is attached below:
Job description
The Presales Solutions Architect (SA) envisions, strategizes and defines the overall solution architecture for the NextGen O/BSS offerings to solve customer challenges in the O/BSS domain. The position works closely with sales, account managers, pre-sales, and delivery teams to develop proposals that include requirements, resources, schedules, and pricing. The emphasis is on winning new business and strengthening existing business relationships.
The selected candidate will have excellent presentation, written and verbal communication skills and be able to build relationships with senior level decision makers (CIO, VP, and director). In addition, the candidate will have proven sales experience in both managed and advisory services.
The SA role requires a broad technology background in O/BSS applications and one or more emerging technologies such as Cloud Based System Integration, Software Defined Network (SDN), Master Data Management (MDM), Big Data, BI & Analytics, and Digital Services. Specifically, the candidate must have strong knowledge of the telecom vertical from previous working experience at an O/BSS product vendor or system integrator. Previous experience in managing an application or hands-on enterprise architecture is preferred.
ROLES AND RESPONSIBILITIES
Envision and define Next Generation O/BSS offerings and related solution architectures
Define internal and external sales presentations for the NextGen Offerings
Play an advisory role to clients in O/BSS domain by providing business & technology consultancy including problem definition, discovery, solution blueprint, product/technology evaluation; driving feasibility PoC, solution recommendation & architecture definition
Evangelize the usage of existing reusable frameworks and artifacts to further extend the NextGen Offerings
For client engagements, define, design and document the end-to-end solution architecture of an IT business solution, ensuring the solution is aligned to client product road maps and defined enterprise architecture standards.
Participate in RFI and RFP response preparation from a technical/architecture perspective; drive and collate estimations
Build an understanding of the customer's business, industry, and initiatives
Understand technology trends, products, and services and how they solve business challenges
Support marketing initiatives by creating and delivering presentations, workshops, and briefs
Desired Skills and Experience
JOB REQUIREMENTS
Minimum of 5 years of broad technology experience across O/BSS applications and enterprise architecture teams
Minimum 1 year of experience in one or more emerging technologies such as Cloud Based System Integration, Software Defined Network (SDN), Master Data Management (MDM), Big Data, BI & Analytics, Digital Services
Experience in defining eco-system solutions, modeling costs, and developing ROI and TCO analysis
Sales or presales experience
Excellent relationship building skills across customers, partners, and internal resources
Professional executive presence and image
Strong written and verbal communication skills
50 to 60% Travel
PERSONAL CHARACTERISTICS
* Professional executive presence and image.
* Motivated by challenges and has a strong sense of commitment to objectives.
* Helps to effect change and expects success; plans carefully to avoid the possibility of failure.
* An inquisitive nature that is open to learning from all sources.
* Integrity and clearly defined values; a demonstrated understanding of the importance of honesty, fairness, and civility.
* Detail oriented and organized in professional and personal pursuits.
SUPERVISION
* Reports directly to VP, O/BSS Practice Unit Head
* General setting of goals and objectives without direct day-to-day supervision. | http://www.headhonchos.com/jobs/pre-sales-solution-architect-telecom-services-sales-business-development-account-management-it-technology-telecom-software-chennai-245588.html |
Thank you for visiting. If you'd like to contact me, please email me at [email protected].
Want to get in touch?
| http://www.awalkinthechalk.com/p/contact_1.html |
The workplace physical wellbeing dilemma: the need to work in partnership to drive genuine change
The impact on business of poor employee physical health has become clearer than ever over the last year, both through the direct effects of Covid-19, and as employees struggled through lockdowns with inappropriately designed workspaces, limited exercise or poor sleep.
While the pandemic might have focused employers’ minds, many physical health risks such as inactivity were already present in the workplace and will continue to have a damaging effect on employees’ ability to perform at their best, even after Covid-19 has started to recede.
To drive genuine change, employers and employees need to work in partnership. Our research showed that businesses clearly believe they have a significant role to play in employees’ wellbeing, with almost a third (32%) saying that it is their responsibility to a great extent.
But, while we found that almost all (91%) of our respondents said that they have a physical wellbeing strategy, more than half are yet to see it have an impact on employee behaviour. Encouraging all employees to engage long-term with their physical wellbeing - not just those that are already fit - is vital to ensure that individuals and the organisation as a whole benefit over time.
But employers can’t do this on their own. Employees also have to be sufficiently engaged with their physical health to take responsibility for their own wellbeing – and just over half of our respondents said that a key priority for next year is to encourage employees to do just that.
As approaches to hybrid working continue to evolve and other aspects of working life are reshaped through digitalisation, shifting skills needs and globalisation, the risks posed to business by physical ill-health will also continue to change.
Musculoskeletal conditions are still considered to be the biggest risk factor for businesses, but the basics of poor sleep, nutrition and lack of exercise are also causes for concern. These are all crucial factors in preventing more serious health conditions, such as diabetes, in future. Employers can support good health practices by understanding workforces’ genuine needs, offering appropriate support and measuring impact over time.
However, measuring the direct impact that these factors have on business performance is still a work in progress. While 57% said that their strategy has yet to have an impact on their employees, 38% said that they can't or don't measure the impact of their physical wellbeing benefits.
There is evidence that this looks set to change in the future. A small yet significant one in ten respondents said that they are now looking at quantifying the risks posed to their business by physical ill-health. That, in time should feed through to helping define exactly what types of support employees need, how to deliver it to them, and how to help them engage with their physical wellbeing over the long term to drive genuine, long-lasting change.
The author is Maggie Williams, content director at REBA.
This article is taken from REBA’s The Workplace Physical Wellbeing Dilemma research, carried out with the support of Howden Employee Benefits & Wellbeing.
| https://reba.global/content/the-workplace-physical-wellbeing-dilemma-the-need-to-work-in-partnership-to-drive-genuine-change |
Science Data Bank (ScienceDB) is a public, general-purpose data repository that provides data services (e.g. data acquisition, long-term preservation, publishing, sharing and access) for researchers, research projects/teams, journals, institutions, universities, etc., and supports a variety of data acquisition and licensing options. ScienceDB is dedicated to making data findable, citable and reusable while protecting the rights and interests of data owners, and it is built and operated by the Chinese Academy of Sciences Computing and Network Information Center.
- Our Mission
ScienceDB is devoted to becoming a repository for long-term data sharing and data publishing in China. The repository ensures the long-term management of, and persistent access to, the data it publishes. ScienceDB provides data publishing and access services to the international science community, academic journals, publishers, and other stakeholders. Its mission is to foster compliance of published scientific data with agreed data standards and conventions, to promote the findability, accessibility, interoperability and reusability of scientific data, and to cultivate a data-sharing culture in China's science community.
- Key features
Data findability
ScienceDB is now equipped with an optimized search engine to enhance the findability of all published data.
Unique data identifier
ScienceDB is equipped with automated DOI registration and administration to ensure a unique identifier for each published dataset. It supports the registration and authentication by both DOI (Digital Object Identifier) and CSTR (Chinese Science and Technology Resource) systems.
Open and sharing data
While respecting authors' intellectual property, ScienceDB advocates and supports open data and data sharing under the Creative Commons CC0 license.
Data accessibility
ScienceDB provides online access to the metadata and data files of any published dataset.
Data citable
ScienceDB is equipped with tools for standardized metadata collection and automated data identifier registration. A standard citation format is recommended for each data resource to ensure it is citable.
Data traceability
ScienceDB enables provenance tracking for all uploaded data resources by recording, publishing, and annotating all historical versions.
Compatibility
ScienceDB is a database covering all subjects that supports the uploading of data files of any format.
Permanent accessibility
ScienceDB guarantees that all uploaded data and published resources are permanently accessible.
Impact statistics
At ScienceDB, the impact of each dataset is reported across multiple dimensions and at various granularities.
Data manageability
ScienceDB provides data management services including quality control, access control, and version management.
Open API
Programs and third-party services can use published datasets and open services of ScienceDB through the Open APIs provided by the website. | http://www.scidb.cn/en/English/ours/use |
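To illustrate how a program might consume such an Open API, here is a minimal Python sketch that fetches a dataset's metadata over HTTP. The base URL, endpoint path, query parameter and response fields below are hypothetical placeholders, not ScienceDB's documented interface; consult the repository's actual API documentation for the real calls.

```python
import requests

# Hypothetical base URL and endpoint -- placeholders for illustration only,
# not the documented ScienceDB API.
SCIENCEDB_API = "https://www.scidb.cn/api"

def fetch_dataset_metadata(doi: str) -> dict:
    """Retrieve metadata for a published dataset by its DOI (illustrative sketch)."""
    response = requests.get(
        f"{SCIENCEDB_API}/dataset",      # assumed endpoint path
        params={"doi": doi},             # assumed query parameter name
        timeout=30,
    )
    response.raise_for_status()          # fail loudly on HTTP errors
    return response.json()               # metadata as a Python dict

if __name__ == "__main__":
    meta = fetch_dataset_metadata("10.11922/sciencedb.00000")  # placeholder DOI
    print(meta.get("title"), meta.get("license"))
```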
- Listen to the song on your headphones and pay attention to the drum beats. Try to isolate them from the sound of the other instruments and the vocals. If you're new to this, try an instrumental version of the song first, if available; this usually makes things easier.
- Note that most drum beats in modern music (hip hop, house, funk, etc.) come in 4/4 bars. This means each measure has four beats, typically marked by alternating kick drum and hi-hat hits: bass, hi-hat, bass, hi-hat, and so on.
- Hold a stopwatch in one hand and press the play button with the other. Start both at the same time and begin counting, but instead of counting in groups of four, count every single drum or bass beat you hear: one, two, three, four, five... Every beat you mark by nodding your head up and down counts as one.
- Stop counting when the stopwatch reads 15 seconds. If you counted, say, 24 beats in those 15 seconds, multiply that number by four to get the beats per minute, or BPM. For our example, 24 x 4 = 96, which means our song is 96 BPM.
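If you prefer to let a script do the multiplication, the short Python sketch below (a generic helper, not tied to any particular DJ software) turns a manual beat count over a timed window into BPM, using the same 15-second rule described above.

```python
def bpm_from_count(beats_counted: int, window_seconds: float = 15.0) -> float:
    """Estimate beats per minute from a manual beat count.

    beats_counted  -- every drum or bass hit tallied while the stopwatch ran
    window_seconds -- how long you counted for (15 s means multiply by 4)
    """
    if window_seconds <= 0:
        raise ValueError("window_seconds must be positive")
    return beats_counted * (60.0 / window_seconds)

# Example from the text: 24 beats counted in 15 seconds -> 96 BPM
print(bpm_from_count(24))        # 96.0
# Counting over a full 30 seconds gives the same tempo with less rounding error
print(bpm_from_count(48, 30))    # 96.0
```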
Tips
- Be sure to repeat this process two or three times for each song, because you may find small differences in the results: you might get 96 one time, then 95 or 97 the next. Check each track at least twice.
- Most songs in the hip hop genre fall between 88 and 112 BPM. House music usually comes in between 112 and 136.
- Some machines can calculate the BPM mechanically and are much more accurate. In addition, some mixers come with this device installed.
- A great help for any DJ is to write the BPM of each song in its respective label and sort from lowest to highest. That way you will know in what order to put them.
- Keep in mind that mixing is not the only way to combine two songs: you can cut one and follow it with the next after a short gap; that way the songs do not necessarily have to match in BPM.
- Do not mix songs that are more than 5 BPM apart, and always mix upward, that is, always raising the BPM (unless you are starting a new group of songs).
- If you're mixing music that was recorded before the eighties, you may find that the BPM is not constant within a single song. These songs fluctuate because the drums were recorded live in the studio. | https://songwritingservice.com/calculate-beats-per-minute-bpm/ |
The Stone had an interesting post last week by Amy Allen on the ‘Mommy Wars’. (For those not familiar with the term, ‘Mommy Wars’ refers to the ongoing bitterness between working mothers and stay-at-home mothers on which lifestyle is most suitable for mothers and children, and more in sync with the ideals of feminism.) Allen offers a compelling genealogy of how each of the two positions emerged from different responses to the dichotomies identified by second-wave feminism:
Much work in second wave feminist theory of the 1970s and 1980s converged around a diagnosis of the cultural value system that underpins patriarchal societies. Feminists argued that the fundamental value structure of such societies rests on a series of conceptual dichotomies: reason vs. emotion; culture vs. nature; mind vs. body; and public vs. private. In patriarchal societies, they argued, these oppositions are not merely distinctions — they are implicit hierarchies, with reason valued over emotion, culture over nature, and so on. And in all cases, the valorized terms of these hierarchies are associated with masculinity and the devalued terms with femininity.
One response was then to claim the values traditionally associated with masculinity for women: women should become kick-ass, cold-blooded rational professionals – they should be more like men. At the other end, another response was to contest the implicit hierarchies valuing the ‘masculine’ values over the ‘feminine’ ones and to emphasize the beauty and superiority of ‘all womanly things’ such as pregnancy, childbirth, breastfeeding etc. And thus emerged these diametrically opposed positions, both claiming to embody the true spirit of feminism, and at war with one another. Allen then goes on to describe third-wave feminism as questioning precisely the very binary distinctions which both positions just sketched leave intact.
However, while she rightly emphasizes the need to dissolve these dichotomies and the false dilemma of modern motherhood – having to choose between being a cold-blooded professional or a soft, warm mother practicing attachment parenting – to my mind Allen then goes on to reinforce another false dichotomy. It is true that the 'Mommy Wars' to a large extent pertain to middle- and upper-class (predominantly white) women for whom the choice between pursuing a career or staying at home with the kids is exactly that, a choice: for the vast majority of women, it simply isn't. So Allen thinks we should focus on
another kind of conflict, one that isn’t primarily internal and psychological but is rather structural. This is the conflict between economic policies and social institutions that set up systematic obstacles to women working outside of the home — in the United States, the lack of affordable, high quality day care, paid parental leave, flex time and so on — and the ideologies that support those policies and institutions, on the one hand, and equality for women, on the other hand. This is the conflict that we should be talking about.
As a fairly recent convert to feminism, I have always been struck by the way feminist debates tend to focus on the public arena of economic and social structures at the expense of, to my mind, a much-needed private, domestic counterpart. Again, to view these two approaches as mutually exclusive is a false dichotomy. It seems fairly uncontroversial that these external structures become internalized, both in the domestic sphere of negotiations within the family and in the very psychological makeup of women themselves; this follows quite straightforwardly if one is willing to grant even a modicum of Vygotskianism. The phenomenon of the 'internal glass ceiling', for example, is well known: women themselves tend to internalize the idea that they can only go so far as professionals, and at some point bump against their own internal glass ceiling.
In other words, as feminists (both women and men), we should be talking about all these different kinds of conflicts: the economic and social ones, but also the internal conflicts of women who are often at a loss as to how to position themselves in the world so as to better fulfill their potential and be happier. For many of us, real happiness will consist in being able to combine a successful professional life with a fulfilling domestic life, often (though of course not always) involving motherhood. That this combination is a viable option must become evident, and this requires both the structural measures mentioned by Allen and a more equitable distribution of domestic responsibilities among partners in heterosexual relationships; and, last but not least, the realization that a woman need not pick sides in the 'Mommy Wars'. | https://www.newappsblog.com/2012/06/the-many-false-dilemmas-of-feminism.html |
As the COVID-19 pandemic marches on in India, with over 800 positive cases and 19 deaths at the time of this article, there is an urgent need to deploy out-of-the-box innovations that can help curb this menace.
A computer based model of neurons in the urinary bladder
In 2008, the Indian Space Research Organisation (ISRO) announced Aditya-1, India's first solar mission to study the Sun. This ambitious endeavour, with various indigenously-developed instruments onboard, holds much promise for our scientific community as they expect to unravel the mysteries of our closest star, the Sun. Now renamed as Aditya L1, the data from the instruments onboard are expected to be a treasure-trove of information on the dynamic processes on the Sun's surface and its atmosphere.
Nature is an enigma; an ensemble of complex structures and functions comes together to form a variety of mesmerising artefacts, including life. Richard Feynman, the well-known American Nobel Laureate and physicist, famously said: "Nature isn't classical, dammit, and if you want to make a simulation of nature, you'd better make it quantum mechanical, and by golly it's a wonderful problem, because it doesn't look so easy".
Among all the calamities caused by climate change, an increase in the salinity of the soil is one. It is projected that, by 2050, about half of today’s arable land across the world will be affected by salinity. This increase would also hit India’s rice bowl, the Indo-Gangetic plains, which is projected to lose about 45% of the crop yield. When salinity increases, plants respond by absorbing less water, which affects their growth. How then do we help agriculturally vital crops cope with high salinity?
Today is National Science Day—a day to celebrate the spirit of science and scientific temper across the county. It is a day to commemorate Sir C V Raman’s discovery of the Raman effect. This year, the theme of National Science Day is ‘Women in Science’, celebrating the contributions of women scientists to the field of science in India.
Researchers show that the shape of dried paint or ink deposit is related to the concentration and size of particles in these colloids.
Ever wondered why we use only specific inks for the inkjet printer? Why not any random dye? The wrong ink may result in non-uniform and patchy printing. Printing inks are colloids—tiny solid particles suspended in a liquid. The size and the concentration of the solid particles in ink specified for printers are designed to deposit uniformly on paper.
In a recent study, researchers at the Indian Institute of Science Education and Research (IISER) Mohali, have studied the mechanism behind the phase separation of the tau protein fragment that forms characteristic aggregates associated with Alzheimer's disease.
Nanomaterials are revolutionising the way we do things with applications in medicine, electronics and biocompatible materials, to name a few. Scientists are studying various nanoforms of carbon—nanotubes, nanocones, nanohorns, two-dimensional graphene and even carbon onions! Now, researchers from the Indian Institute of Technology Bombay have added a new form to this list called nano carbon florets. These nano-sized florets, shaped like marigold flowers, have much more than just good looks to flaunt; they can help keep the environment clean by removing harmful heavy metal pollutants from industrial effluents. In a study published in the journal ACS Applied Nano Materials, Prof C Subramaniam and his team from the Department of Chemistry have designed nanocarbon florets that can remove up to 90% of pollutants containing arsenic, chromium, cadmium and mercury.
Artificial intelligence (AI) and machine-learning (ML) approaches are the present-day buzzwords finding applications in a host of domains affecting our lives. These approaches use known datasets to train and build models that can predict, or sometimes, make decisions about a task. In one such case, researchers at the Indian Institute of Technology Bombay (IIT Bombay), Mumbai, have in a recent study, developed ML approaches using molecular descriptors for certain types of catalysis that could find use in several therapeutic applications. | https://researchmatters.in/sections/technology |
Skip Gallop Game | Part 1
Objective: To incorporate math into a dance game with simple addition problems. To practice traveling and transitioning between skipping and galloping. To introduce choreography and build memorization skills.
Age: Most appropriate for LNL 6 and up. LNL 5 will enjoy the game but must be skipping and galloping well for most success.
Props: Skip and gallop cards
Music: Galloping or skipping tracks from KIDS! or KIDS!2
Instructions
Ask four dancers to come forward and signify each dancer as 1, 2, 3, or 4. Have each one choose one card. The card that dancer number 1 chooses is step one, and so on down the line. Ask the dancers to stand in a line facing the other students. Have all students repeat the pattern read on the cards in the correct order. For example: gallop, skip, skip, gallop. If necessary, place the cards in the proper order in front of the room where all dancers can see them.
Movement
Ask dancers to practice and to keep repeating the sequence across the dance space. Hands can be placed on hips, in demi-seconde, or swing naturally. Ask dancers to try the sequence along with music.
Variation
Choose another four cards and create another sequence. For added difficulty, add the first card sequence to the second card sequence to increase memory challenge. | http://leapnlearn.com/members/fresh-ideas/2016/9/25/skip-gallop-part-1 |
As part of measures to step up awareness creation and reduce the spread of coronavirus disease, 50 voluntary peer educators have been trained on COVID-19 and its prevention in the Wassa East District of the Western Region.
The volunteers, drawn from 25 communities, are expected to complement the work of other mandated agencies such as the Ministry of Health, the Information Services Department and the NCCE.
The programme was organized by World University Service of Canada (WUSC), an NGO and facilitated by the District Health Directorate, Information Services Department and the Department of Social Welfare and community Development under the West African Governance and Economic Sustainability in Extractive Areas (WAGES) project.
Speaking at the opening ceremony, the Field Coordinator of WAGES, Mr. Rusmond Anyinah, said that although WAGES' mandate is in mining and sustainable local economic development in Wassa East and Prestea Huni Valley, they are compelled to assist the Government's efforts in achieving health for all by containing the spread of the virus, adding that one way of doing that is by educating people about the COVID-19 pandemic.
The District Director of Health, Mr. Emmanuel AffelKum, praised the volunteers for their selflessness in accepting the responsibility to educate people within their communities on COVID-19. He explained that some people usually look out for financial rewards before putting themselves forward for any engagement.
He, however, advised the volunteers to conduct their activities based on information given to them by the facilitators and stay away from opinionated information.
A Health Promotion Officer, Mr. Prince Atta Amanfo, and an Assistant Social Development Officer, Jewel Archer, schooled participants on topics such as how COVID-19 can be prevented, its symptoms, transmission, techniques for hand washing, and stigmatization. Other topics included domestic violence, child labour and child trafficking during the COVID-19 period.
For his part, the Wassa East District Information Officer, Mr. Frank Kwabena Danso, underscored the need for the COVID-19 education to be delivered in a dialect or language the people can comprehend, explaining that the education will yield the needed impact if volunteers communicate in the language of the people.
He further tasked them to consider their target audience, the medium by which the education will be delivered so as to reach a larger audience, as well as the timing of the education.
According to the Information Officer, the COVID-19 education should target each and every member of the community, and he advised the participants to use the Community Information Centers, churches and mosques, among other far-reaching mediums.
He also said time is of the essence in every endeavour and urged the volunteers to be strategic in choosing the appropriate time for the education, stressing that the majority of people in the district are farmers, hence the need to conduct the education either at dawn or in the early hours of the evening, when most people will be at home.
The volunteers were provided with educational materials such as posters and leaflets. They also received two branded shirts, two face masks and hand sanitizer. | https://myafricatoday.net/50-voluntary-peer-educators-trained-on-covid-19-in-wassa-east-district/ |
The German Conference on Bioinformatics (GCB) is an annual, international conference devoted to all areas of bioinformatics and meant as a platform for the whole bioinformatics community.
Computomics is a highly innovative bioinformatics company focussing on agricultural challenges in plant breeding. Intense pressure is put on farmers, growers and agricultural companies to produce new plant varieties quickly while meeting global sustainability needs. Computomics has developed ⨉SeedScore®, a disruptive machine learning technology which transforms traditional plant breeding processes. ⨉SeedScore® supports traditional plant breeding methods and integrates additional variables such as information on the genotype, the cultivation environment, climate, soil microbiome, and the field management used, to identify and predict the best-performing plants for any specific location. Applying Computomics' machine learning technology ⨉SeedScore® to a plant breeding program can help move the program well beyond current limitations. It increases the probability of success by identifying up to 10x more candidates to escalate into the commercial pipeline, and predicts best performers, both for today's and future climates.
On Tuesday, 6 September 2022, Patrizia Ricca will present a poster on how we can gain insight into biological mechanisms using machine learning.
Poster Session
Title: Peek into the black box: How we can gain insight into biological mechanisms using machine learning
When: Tuesday 6 September 2022, 17:30 - 19:30
Patrizia Ricca, Scientific Product Manager at Computomics
Abstract:
Plant breeding must be accelerated to supply new varieties for a growing population and a rapidly changing climate. New breeding technologies like gene editing and genomic prediction help bring about this acceleration, but are often used independently without sharing useful existing knowledge. Here, we present a method for discovering both new gene editing targets and higher-accuracy predictions. By using interpretable machine learning models specifically developed for genomic data, complex genetic mechanisms can be rapidly understood and visualized. Multigenic traits show up in the visualization of feature importance and positional genomic importance. We apply this method to a dataset derived from shelf-life experiments for 200 Capsicum varieties. Genotypes, manual scoring and plant image data are correlated to train a regression machine learning algorithm that identifies an ethylene-linked gene cluster responsible for shelf life and plant senescence. New breeding technologies require these kinds of insights into biological regulation to identify new editing targets quickly and reliably.
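As a rough illustration of the general idea (and not Computomics' proprietary ⨉SeedScore® models or the interpretable architecture described in the abstract), the sketch below trains an off-the-shelf random-forest regressor on a simulated genotype matrix and ranks marker importance, which is the kind of feature-importance view used to flag trait-linked genomic regions. All data and marker positions here are made up.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Simulated data: 200 varieties x 500 biallelic markers coded 0/1/2 (illustration only)
n_varieties, n_markers = 200, 500
X = rng.integers(0, 3, size=(n_varieties, n_markers)).astype(float)

# Toy shelf-life phenotype driven by a small marker cluster (indices 40-44) plus noise
causal = np.arange(40, 45)
y = X[:, causal].sum(axis=1) * 2.0 + rng.normal(0.0, 1.0, n_varieties)

model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(X, y)

# Rank markers by importance; the simulated causal cluster should rise to the top
top = np.argsort(model.feature_importances_)[::-1][:10]
for idx in top:
    print(f"marker_{idx:03d}  importance={model.feature_importances_[idx]:.3f}")
```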
Meet and connect with Patrizia in Halle at GCB. In case of questions regarding the poster or to discuss your plant breeding challenges please feel free to contact Patrizia directly. | https://computomics.com/events-reader/GCB2022.html |
Editor's Note: I was speaking to a very senior person at a major CPG company yesterday about trends in the industry from his perspective. After rattling off the usual list of "in-the-moment research, big data, new tracker models, etc.", he said something to the effect of "That is all just process though, just how we get data. The biggest area of interest for us is understanding behavior. Behavioral science in all its forms is much more important for us. And it's not about statistical rigor: if we can understand the core principles of behavior and apply them across the board, we know success will come." This falls very much in line with what I've been hearing from clients for quite some time, and it's backed up by what we have seen in the levels of interest from clients, investors, and others in companies related to automating and scaling neuroscience, behavioral tracking, behavioral economics and anything else that helps fill in the gaps in understanding why we do what we do.
In today's post, Kevin Lonnie dissects the current dialogue in MR and zeros in on the same conclusion: although there is certainly value in the "who, what, when, and where" of data, for clients to get excited about the value of insights we need to deliver on the "why".
By Kevin Lonnie
As an industry, we are shaken! I’ve read time and time again that all the new tools & techniques available to market researchers have shaken us to the core.
No point reviewing all those new tools here. Suffice it to say that big data, eye tracking, neuromarketing and behavioral economics are ready to make sushi out of traditional MR techniques.
Heck, we’ve been predicting Armageddon for traditional research so long, we might need to conduct a survey to see if anyone can remember our first “Evolve or Die” conference.
Which brings me to my point, while we as an industry might be shaken, exactly who are we stirring? It’s logical to assume that disruptive new technology would greatly enhance our standing in the business community. However, if we were to run a simple regression on this, I don’t think we would find the slightest correlation between all our shaking and our ability to stir.
As long as 90% of consumer based decisions are done without the input of marketing research, there’s (to quote Jerry Lee Lewis) just “a whole lotta shakin’ going on”.
I have a theory for why we’re failing to stir despite all our vigorous shaking. I would argue that our innovations have been sustainable and not disruptive. We are substituting A for B, typically because A is cheaper and easier. It’s an obvious improvement but not disruptive.
Changes in methodology may seem weighty to us, but they aren't changing the outcomes. Insights aren't becoming more valuable just because we employ new methods, especially ones that fail to explain the ever-elusive "why".
As a result, we’re not moving up the food chain. The amount spent on consultants is 10X the amount that goes to our industry (roughly $15B US versus $150B for Management Consulting). As John Kearon of BrainJuicer pointed out “We do one-tenth the work because we present it badly; we charge one tenth of the price.”
New methods for capturing “Who, What & Where” are shaking our industry, but to stir the passions of our clients we need to answer “Why”. Unraveling the underpinnings of consumer decision making is the only clear path to better, bolder decisions (i.e. Profits!). | https://greenbookblog.org/2014/05/13/shaken-not-stirred-the-current-state-of-the-mr-community/ |
Fizzle or Finale: The Final Day of Class
Many courses end with a fizzle. Frank Heppner (2007) aptly says, “In most classes, The Last Lecture was about as memorable as the rest of the class had been – that is, not very.” The final class should bring the course to an appropriate conclusion or finale.
“For many..., the last day of class comes and goes without ceremony, yet it provides an opportunity to bring the student-teacher experience to a close in a way that students appreciate and enjoy” (Lucas and Bernstein, 2008).
How can you make the final day into a finale?
Summarize the course content
“Ask students to create concept maps illustrating major aspects of course content and showing how they are interrelated” (Lucas and Bernstein, 2008).
Give a Memento
Mementos do not need to be expensive to be meaningful. An instructor of Ecclesiastical Latin distributed postcard-size copies of da Vinci’s Last Supper to her students. I still have the memento on a bookshelf in my home.
Pass the Torch
Invite your current students to pass on advice about the course by writing brief letters to students who will take the course in the future. Instructors can use the letters to improve their teaching or excerpt the best advice into a section for future syllabi titled "Succeeding in the Course: Advice from Former Students."
Make Emotional Connections
Christopher Uhl (2005) ends his large (400 students) Environmental Science course by inviting students to explore the emotions that they have encountered over the semester. He organizes reflection around four ideas: acceptance, gratitude, integrity, and hope. In exploring acceptance, Uhl asks his students to be truthful about their performance during the semester and to think about how they will change their study habits for future classes. “I invite my students to reflect on their disappointments. Specifically, I ask: How did you let yourself down? When did you hand in ‘BS’ instead of honest work? In what ways did you fail to honor your own potential?” Uhl then asks students to reflect on “what new action they might take in future courses to enhance their learning, given what they acknowledge as their shortfall in my class.” Next, Uhl asks students to explore their feelings of gratitude. He invites students to talk about what they might be thankful for because of the class. After exploring acceptance and gratitude, Uhl invites his students to explore integrity. He asks students to consider how taking the class will impact their future thinking, actions, and behaviors. Finally, Uhl concludes class by expressing his hopes for the students and asking them to share “their hopes for themselves and for each other.”
Encourage and Inspire
Frank Heppner (2007) describes Richard Eakin’s final lecture for a course in embryology: “Eakin’s Last Lecture was legendary, and students who had taken his course in previous years would come back to hear it again and be inspired. The lecture was a reminiscence of a life in science and the joy and thrill of having the opportunity, as a young man, to work in laboratories where discoveries about the fundamental nature of life were being made. He made a point of the fact that he had not been some sort of geeky super-genius as a youngster, but had instead been blessed with a strong sense of curiosity. I can still recall being amazed by that – surely such a man must have been an exceptional student? Why, that might mean that I might do such things some day.”
Celebrate Students’ Work
In writing-intensive courses, end the semester by celebrating the writing of your students. Before the last day, assign students to select a piece of their work to read aloud in 2-3 minutes. On the final day of class, each student reads the selection, and the class responds to each reading with applause. (https://teaching.berkeley.edu/last-day-class)
Resources:
- Heppner, F. (2007). Teaching the Large Class: A Guidebook for Instructors with Multitudes. San Francisco: Jossey-Bass.
- Lucas, S. and Bernstein, D. (2005). Teaching Psychology: A Step-by-Step Guide. Mahwah, NJ: Lawrence Erlbaum.
- Uhl, C. (2005). The last class. College Teaching 53(4): 165-166. | https://www.duq.edu/about/centers-and-institutes/center-for-teaching-excellence/teaching-and-learning-at-duquesne/fizzle-or-finale-on-the-final-day-of-class |
“Color is your friend,” proclaims the first chapter of Bright Bazaar: Embracing Color for Make-You-Smile Style by Will Taylor.
Maybe, but in a world where many folks play it safe with white or neutral color schemes, color can be scary. We know. We recently decided our plain white walls in our Children's Department needed an upgrade. Knowing how much kids love color, we chose yellow, orange, and red for the walls. Gulp.
Well, the results were stunning.
The Children’s Room is now a vibrant space filled with energy. And all for the price of a few cans of paint and some risk-taking in our color selections. We are now eyeing other areas of the library, deciding what we can “colorize” next.
If you’d like to take your decorating to a more colorful level but need a little help, try these titles. | https://cheshirelibraryblog.com/2016/06/27/dont-be-afraid-of-color/ |
When a student’s behavior gets in the way of their learning and/or the learning of others, the school is responsible to figure out how to support behavioral expectations. One way to do that is to assess why the student might be acting out and use that information to consider how positive behavioral interventions might teach the student what to do instead.
The end of this article includes a sample letter to ask the school to begin a specific evaluation called a Functional Behavioral Assessment (FBA). Data from the FBA is used to build a Behavior Intervention Plan (BIP).
PAVE provides a video training called Behavior and School: How to Participate in the FBA/BIP Process.
Ideally a school will notice if a student’s behavior has patterns of disruption and begin the FBA/BIP process before a student with disabilities is disciplined. PAVE provides an article: What Parents Need to Know when Disability Impacts Behavior and Discipline at School.
A teacher or school administrator might alert parents and request consent to begin an FBA. The Office of Superintendent of Public Instruction (OSPI) is the state agency for Washington schools. OSPI provides guidance about discipline in a Technical Assistance Paper (TAP #2). Included are best practices for schools to follow when there are persistent behavioral concerns:
- Develop behavioral goals in the Individualized Education Program (IEP)
- Provide related services needed to achieve those behavioral IEP goals (specific therapies or counseling, for example)
- Provide classroom accommodations, modifications and/or supplementary aids and supports (a 1:1 paraeducator, for example)
- Provide support to the student’s teachers and service providers (staff training)
- Conduct a reevaluation that includes a Functional Behavioral Assessment (FBA)
- Develop a Behavioral Intervention Plan (BIP), as defined in the Washington Administrative Code (WAC 392-172A-01031)
Manifestation Determination
If an FBA process begins after a student has been excluded from school through a disciplinary removal (suspension, expulsion, or emergency expulsion), families can review their procedural safeguards to understand rules related to a special education process called Manifestation Determination.
Here are the basics: When a behavior “manifests” (is directly caused by) a disability condition, then there is recognition that the student has limited fault for violating the student code of conduct. Management of behavior is part of the special education process. A Manifestation Determination meeting is to talk about how a student’s services can better serve their needs to prevent future behavioral episodes that are getting in the way of education.
Students with Individualized Education Programs (IEPs) may not be excluded from their regular educational placement, due to discipline, for more than 10 days in a school year without the school and family holding a Manifestation Determination meeting. According to the Washington Administrative Code (WAC 392-172A-05146), the student's behavior is considered a manifestation of disability if the conduct was:
- Caused by, or had a direct and substantial relationship to, the student’s disability
- The direct result of the school district’s failure to implement student’s IEP
When these criteria are met, the school is responsible to review and amend the student’s services to ensure that the behaviors are addressed to prevent future escalations. If there isn’t a BIP, the school is required to develop one by initiating an FBA. If there is a BIP, the school is required to review and amend it to better serve the student’s needs.
Request FBA formally, in writing
Family caregivers can request an FBA/BIP process any time there are concerns that a student’s behavior is a barrier to their education. Families have the right to participate in all educational decision making for their students. See PAVE’s article: Parent Participation in Special Education Process is a Priority Under Federal Law.
Make any request for an evaluation in writing. This is important because:
- There will be no confusion about how/when/why request was made.
- The letter provides critical initial information about what is going on with the student.
- The letter supports a written record of family/school interactions.
If the family wishes, they can attach information from outside providers with their request. For example, if an outside therapist or counselor has recommendations for behavioral interventions at school, the family has the option to share those. The school district is responsible to review all documents and respond with written rationale about how the information is incorporated into recommendations. Families may choose to disclose all, a portion, or none of a student’s medical information. Schools may not require disclosure of medical records.
Family caregivers/guardians must sign consent for any school evaluation to begin.
The FBA/BIP might prevent a shortened school day
According to OSPI, serving a student through a Behavior Intervention Plan (BIP) is a priority. OSPI discourages schools from reducing the student’s schedule because of behaviors:
“District authorities should not use a shortened school day as an automatic response to students with challenging behaviors at school or use a shortened day as a form of punishment or as a substitute for a BIP. An IEP team should consider developing an IEP that includes a BIP describing the use of positive behavioral interventions, supports, and strategies reasonably calculated to address the student’s behavioral needs and enable the student to participate in the full school day.”
Special Education is a service, not a location within the school
Please note that a request for behavioral support is NOT a recommendation to remove a student from the regular classroom and move them into an exclusive learning environment. Federal and state laws require that students eligible for special education services receive their education in the Least Restrictive Environment (LRE) to the maximum extent appropriate.
Special Education is a service, while LRE refers to placement. PAVE’s article provides further information: Special Education is a Service, Not a Place.
General education classrooms and spaces are the least restrictive. A child may be placed in a more restrictive setting if an IEP team, which includes family participants, determines that FAPE is not accessible even with specially designed instruction, accommodations, modifications, ancillary aids, behavioral interventions and supports, and other documented attempts to support a Free Appropriate Public Education (FAPE) within the general education environment.
If the student was removed from their previous placement prior to a manifestation determination meeting, the school district is responsible to return the student to their placement unless the parent and school district agree to a different placement as part of the modification of the student’s services on their IEP and BIP.
Sample letter to request an FBA
Below is a sample letter family caregivers can use when requesting a Functional Behavioral Assessment (FBA). You can cut and paste the text into your choice of word processing program to help you start a letter that you can print and mail or attach to an email. Or you can build your letter directly into an email format. Be sure to keep a record of all requests and correspondence with the school.
Your Name
Street Address
City, State, Zip
Date
Name (if known, otherwise use title)
Title/Director of Special Education/Special Services Program Coordinator
School District
Street Address
City, State, Zip
Dear Name (if known, otherwise use district person’s title):
I am requesting a Functional Behavioral Assessment (FBA) for my [child, son, daughter], NAME, (BD: 00-00-0000).
I have concerns that (NAME) is not receiving full educational benefit from school because of their struggles to meet behavioral expectations due to their disability circumstances. Their condition includes [brief summary of any diagnoses], which makes it difficult to [brief summary of the challenges]. I believe this has become a pattern of behavior that needs to be addressed with a positive behavioral support plan so my child with special educational needs can receive a Free Appropriate Public Education (FAPE).
I understand that the FBA will look for triggers and seek to understand what is happening in the environment when my child’s behaviors become problematic. I have learned that these are “antecedents” that the school can identify through data tracking. I hope we can begin to understand how [name] may be trying to communicate their needs through these behaviors. Here are some of my thoughts about what might be going on:
- Use bullet points if the list is long.
- Use bullet points if the list is long.
- Use bullet points if the list is long.
I look forward to discussing the results of the FBA and working with school staff on development of a Behavioral Intervention Plan (BIP). I hope we can choose a small number of target behaviors to focus on in the BIP. I understand that we will work together to identify replacement behaviors that the school can teach [name] to do instead. I hope these will be skills we can work on at home also. I look forward to learning how we can partner to encourage the learning that I know [name] is capable of.
I have attached documentation from [any outside providers/therapists/counselors who may have provided letters or reports or shared behavioral recommendations].
I understand that I am an equal member of the team for development of educational services and that I will be involved in any meetings where decisions are made regarding my child’s access to a Free Appropriate Public Education (FAPE). I will also expect a copy of the FBA and a draft of the BIP before our meeting.
I understand you must have my written permission for this assessment to be administered, and I will be happy to provide that upon receipt of the proper forms.
I appreciate your help in behalf of [child’s name]. If you have any questions please call me at [telephone number] or email me at [email address, optional].
Sincerely,
Your Name
CC: (Names and titles of anyone else you give copies to)
You can email this letter or send it by certified mail (keep your receipt), or hand carry it to the district office and get a date/time receipt. Remember to keep a copy of this letter and all school-related correspondence for your records. Get organized with a binder or a filing system that will help you keep track of all letters, meetings, conversations, etc. These documents will be important for you and your child for many years to come, including when your child transitions out of school.
Please Note: PAVE is a nonprofit organization that provides information, training, individual assistance, and resources. PAVE is not a legal firm or legal service agency, and the information contained in this handout is provided for informing the reviewer and should not be considered as a means of taking the place of legal advice that must be obtained through an attorney. PAVE may be able to assist you in identifying an attorney in your area but cannot provide direct referrals. The contents of this handout were developed under a grant from the US Department of Education. The contents do not represent the policy of the US Department of Education and you should not assume endorsement by the Government. | https://wapave.org/tag/request/ |
Unequal chances? Inequalities in mortality in Ireland
This report has been peer reviewed prior to publication. The authors are solely responsible for the content and the views expressed.
Attachment: Download PDF (2.98 MB)
Life expectancy and mortality are some of the most widely available indicators of population health and are commonly used by governments and international organisations as key indicators of social progress. In addition to being unfair, inequalities in mortality and life expectancy across population groups are a key policy concern as they are potentially avoidable. In this report, data from a variety of sources are used to examine inequalities in mortality in Ireland over the period since 2000, focusing on two broad dimensions of inequality: socio-economic status (SES), proxied by socio-economic group, which is derived from occupation, and ethnicity/country of birth/nationality. Due to data availability, the analyses of inequalities focus on two key population groups (young infants and adults). An analysis of emerging patterns in relation to COVID-19 mortality is also undertaken.
This report was commissioned by the Institute of Public Health (IPH). | https://www.esri.ie/publications/unequal-chances-inequalities-in-mortality-in-ireland |
Mosaic in Northern Colorado provides the following supports:
Group Homes – Thirteen homes in Loveland and Fort Collins. Staff rotate in and out of each home to assist individuals with all daily living needs.
Supported Living Services – Minimal amount of support is provided to assist individuals living with their families or in their own residences.
Children's Extensive Services – Focuses on supporting families by providing respite care to children living within the family home.
Host Homes – Mosaic contracts with Independent Contractors to provide services in their home.
Apartment Living – Staff are available to individuals living on their own to provide support with activities of daily living. | https://cpfamilynetwork.org/resources/resources-guide/mosaic-northern-colorado-loveland-colorado/ |
Cataloged in full.
Description
Summary
The collection contains 1 letter to Sophie L. Goldsmith in 1930, and 10 letters to Mr. and Mrs. Jacob Bleibtreu (Helen Reinthaler Bleibtreu), 1955-1973. All are of a personal nature.
Using the Collection
Rare Book and Manuscript Library
Restrictions on Access
You will need to make an appointment in advance to use this collection material in the Rare Book and Manuscript Library reading room. You can schedule an appointment once you've submitted your request through your Special Collections Research Account.
This collection is located on-site.
This collection has no restrictions.
Terms Governing Use and Reproduction
Single photocopies may be made for research purposes. The RBML maintains ownership of the physical material only. Copyright remains with the creator and his/her heirs. The responsibility to secure copyright permission rests with the patron.
Preferred Citation
Identification of specific item; Date (if known); Thornton Wilder letters; Box and Folder; Rare Book and Manuscript Library, Columbia University Library.
Accruals
Materials may have been added to the collection since this finding aid was prepared. Contact [email protected] for more information.
Ownership and Custodial History
Gift of The Jacob Bleibtreu Foundation and John Bleibtreu, 1986.
Gift of The Estate of Sophie Goldsmith, 1963.
Immediate Source of Acquisition
Source of acquisition--Bleibtreu Foundation and John Bleibtreu, 1986; the estate of Sophie Goldsmith, 1963. Method of acquisition--Gift; Date of acquisition--12/03/86. Accession number--M-88-01-29.
About the Finding Aid / Processing Information
Columbia University Libraries, Rare Book and Manuscript Library
Processing Information
Processed J.L-W 01/29/88.
Subject Headings
Genre/Form
Subject
History / Biographical Note
Biographical sketch
Thornton Niven Wilder, American playwright and novelist. | https://findingaids.library.columbia.edu/ead/nnc-rb/ldpd_4078444 |
#VotePlanet: Towns and cities need a ‘refreshed vision’ to cope with future challenges
11 December 2019
Increasing populations and a changing climate pose challenges for our future towns and cities, but also many opportunities.
More than half of the world’s population currently live in cities, and this is set to increase to 70% by the middle of the century. Yet, people living in urban environments are far more exposed to air pollution as well as higher temperatures due to trapped heat.
Reading researchers are helping to predict how humans will interact with city climates in future, as well as how exposed populations are to hazards like air pollution.
The Reading 2050 project, co-created as a partnership between the University and planning and design consultancy Barton Willmore, is also outlining a vision for a more sustainable town of Reading in 30 years’ time. Recommendations include making better use of renewable energy and harnessing features such as rivers and parks for the good of the environment.
Professor Tim Dixon, Professor of Sustainable Futures in the Built Environment at the University of Reading, said: “The Reading 2050 project aims to bring together key stakeholders in a co-created city vision. We have brought together University expertise and linked this with the important voices of business, local government and civil society.
“The ambition is to develop a vision for Reading which will enable everyone to better understand how Reading could transition to a smart and sustainable net zero carbon future, drawing on Reading's strengths as a centre for digital innovation, green thinking and heritage and culture. But no vision is ever complete, so we are continuing to work with key stakeholders to develop and refresh the vision.”
A project led by Reading’s Professor Sue Grimmond was awarded €3m to investigate how human behaviour will impact urban climates in future, and how cities and people will need to adapt to cope with new threats. Possible adaptations include shifting working hours or placing tall buildings next to shorter ones to improve air circulation.
The University of Reading has teamed up with architecture experts in Santo Tomas in Manila, in the Philippines, to study urban design, including green infrastructure and how small cities can retain an identity.
The Whitley Researchers project, led by Dr Sally Lloyd-Evans, aims to make the Whitley area of Reading a more inclusive and sustainable place to live. Its work led to the introduction of a new bus route to serve residents, which also supports Reading’s ambition to reduce car use in the borough.
#VotePlanet
The University of Reading’s #VotePlanet campaign is highlighting the biggest threats facing the UK and global environment, as well as how research and action can combat them, in the lead-up to the General Election on December 12.
Public concern for the environment has increased significantly in recent years, with political parties unveiling various eco-friendly pledges in their election manifestos.
The #VotePlanet campaign therefore aims to inform voters about the science behind these issues and to highlight the sustainability action the University is taking as an institution.
Follow the campaign on Twitter, Facebook and Instagram, and on the University news page. Get involved and share your examples of sustainable action using the hashtag #VotePlanet. | https://archive.reading.ac.uk/news-events/2019/December/pr832406.html |