The Rasputin Chamber Puzzle ARG (alternate reality game) was discovered back in 2018, and one part of it stumped players so badly that they had to brute-force the solution. The puzzle was cracked in just a few days, but the pieces were not fully understood until last week.
The puzzle was introduced in Destiny 2's Warmind expansion and required players to find a series of ciphers that could be used to decode a message. There were six ciphers in all, but the third one, related to a series of diamond clues in the bunker, was never quite figured out. Instead, players simply performed cipher analysis to reverse-engineer the phrase after finding the rest of the clues. They were then able to decode the special message, resulting in a set of coordinates leading players to a real-world cache hidden out in the woods, containing a spear and Warmind coins.
A 2-year-old Destiny puzzle was JUST solved with the help of @bachmanetti.
A single puzzle from the Warmind ARG had its solution ("Mechanized") bruteforced, but the puzzle itself was never properly solved… until today.
As IGN spotted, last Friday players Javano and Bachmanetti were finally able to piece together the "MECHANISED" cipher the correct way. Players had long speculated that the diamond clues for the cipher might be connected to a weapon, and Javano already had a hunch that the weapon had to be Sleeper Simulant, the popular exotic fusion rifle added in Warmind. It didn't take Javano and Bachmanetti long to realise that the Sleeper Simulant reticle is a perfect match for the diamond shape.
“Wait, I have an idea,” wrote Bachmanetti over chat. “Someone has to have tried it.” They loaded into the game and two minutes later reported back. “Solved it,” they wrote. “I am not fucking kidding either.”
Sleeper Simulant’s reticle, in addition to being a diamond shape, has a series of dashes on the left and right. When the player looks down its sights at the Clovis Bay warning sign, those dashes end up underlining a series of letters that go on to form the cipher phrase “MECHANISED.”
While the treasure it ultimately led to is long gone, getting to correctly solve a piece of one of Destiny 2's most complicated puzzles years after the fact has to feel like its own special sort of reward.
Preparation and Planning
We recommend that you:
- Ensure that your administrator has uploaded your class from the SMS into your site on www.nzcermarking.org.nz
- Before administering a test, become familiar with the online procedure: log in to your site, click on the subject you are testing, and activate the Demo button to familiarise yourself with the test procedure.
- Find out how long each test is so that you can adhere to the correct timings. The following timings are for the test only - remember that you will need approximately 10 minutes of administration in addition to these test times.
- Mathematics - 45 minutes
- Listening - 40 minutes (students will need their own set of headphones)
- Reading Comprehension - 45 minutes
- Reading Vocabulary - 25 minutes
- Punctuation and Grammar - 30 minutes
- Science, Thinking with Evidence - 45 minutes
- STAR - timings differ between Years 3-6 and Years 7-9, and between the different sub-tests; please check your manual for these
- Print Student Tokens before you set up testing.
- For listening comprehension, each student will need their own set of earphones.
Administering Online Testing
As students are very familiar with using devices and moving quickly through logins and websites, it is recommended that classes follow a step-by-step process to ensure that everyone understands the online procedure.
- Give students the URL: www.nzceronline.org.nz and ensure everyone has been able to locate the site.
- Distribute the Student Tokens face down. When you are ready, invite the students to log in and wait for your instructions.
- Online Screen Options: Students can choose Online Options on the examples page. From here, they can choose options for:
- Font – including dyslexic font
- Text size
- Background
- Question Preview: At any time, students can see all of the questions at a glance and monitor their progress.
- Yellow Progress Slide Bar: Once the student has begun the test, they can monitor their progress by following the Yellow Slide Bar at the top of the screen.
- Timing: There is an Online Timer to support students; however, it will not stop the test when the Time Allowance is reached.
- Following the Time Allowance for each test is crucial for the reliability of the scoring. Teachers must monitor and control the timing, including each Subtest for STAR.
- Finishing: Students must click the Finish button at the end of the test to ensure the data is saved to the online analysis reports. It is important to collect the Student Tokens and dispose of them to protect the data.
What will happen if a student isn't able to complete their test in one session?
The student's token will remain active until the 'Finish' button is clicked. This means that students can log in again with their token and they will be able to continue their test where they left off.
Known as the Hardy Kiwi because it is frost hardy (although new growth is frost tender). It produces very sweet, grape-sized fruit with a smooth skin, and flesh similar to the better-known kiwifruit in appearance and taste. It needs male and female plants to produce fruit, and will not bear fruit until mature (5 years minimum). The vines are very vigorous, growing several metres in a year. Best in well-drained soil to avoid root rot.
Bearing fruit in autumn, each individual vine can produce over 25kg of fruit. Fruits can be eaten whole, no need to peel. The fruits are aromatic with a cocktail of flavours (kiwi, strawberry, banana and pear) wrapped up in one delightful package.
A lot of work is currently being done to produce new cultivars; NZ has three commercial cultivars (Takaka Green, K2D4 and Marju Red), and no doubt more will be developed. Currently the fruit do not have a long shelf life, so they are not easy to find at a fruiterer or grocer (best chance is mid-February to mid-April); it is much better to grow your own.
---
abstract: 'We study the interacting Fermi-Hubbard model in two spatial dimensions with synthetic gauge coupling of the spin orbit Rashba type, at half-filling. Using real space mean field theory, we numerically determine the phase as a function of the interaction strength for different values of the gauge field parameters. For a fixed value of the gauge field, we observe that when the strength of the repulsive interaction is increased, the system enters into an antiferromagnetic phase, then undergoes a first order phase transition to a non-collinear magnetic phase. Depending on the gauge field parameter, this phase further evolves to the one predicted from the effective Heisenberg model obtained in the limit of large interaction strength. We explain the presence of the antiferromagnetic phase at small interaction from the computation of the spin-spin susceptibility, which displays a divergence at low temperatures for the antiferromagnetic ordering. We discuss how the divergence is related to the nature of the underlying Fermi surfaces. Finally, the fact that the first order phase transitions for different gauge field parameters occur at unrelated critical interaction strengths arises from a Hofstadter-like situation, i.e. for different magnetic phases, the mean-field Hamiltonians have different translational symmetries.'
author:
- Jiří Minář
- Benoît Grémaud
title: From antiferromagnetic order to magnetic textures in the two dimensional Fermi Hubbard model with synthetic spin orbit interaction
---
Introduction
============
The recent progress of experiments using cold atomic gases[@Lewenstein07; @Blochreview08; @Ketterle2], in particular in implementing artificial gauge fields [@Spielman09a; @Spielman09b; @Spielman11; @Zhang12; @Zwierlein12; @Windpassinger12; @Spielman12], has opened the door to the study of a whole class of model Hamiltonians, some directly inherited from condensed matter physics (quantum Hall effects), but, more saliently, some genuinely generating new physical situations, allowing physicists to further develop and test theoretical ideas, like topological phases [@Mottonen09; @Bercioux11], non-abelian particles [@Burrello10] or mixed dimensional systems [@Nishida_2008; @Nishida_2010; @Lamporesi_2010; @Huang_2013; @Iskin_2010]. In particular, the experiments involving spinors, made of either bosons or fermions in different Zeeman sub-levels, are now able to produce non-abelian gauge fields, leading to a kinetic term allowing for a modification of the internal degrees of freedom along the propagation of the particle [@Dalibard11]. In two-dimensional lattices, the non-abelian gauge fields result in tight-binding Hubbard models with spin-flip hopping terms, i.e. the hopping matrices are no longer diagonal in the spin degrees of freedom. Among all the possibilities, an artificial gauge field mimicking a spin-orbit coupling term [@Dalibard11] (see below) is probably the best-known and most studied situation, for two reasons: (i) it corresponds to a spatially independent vector potential leading to a relatively simple analytical treatment, especially in the bulk situation; (ii) it leads to highly non-trivial features like broken time-reversal ground states and/or magnetic textures with topological properties, like skyrmion crystals [@Cole_2012; @Hofstetter12].
The cases of two-component bosons or fermions in the presence of a spin-orbit coupling have been the object of recent analytical and numerical studies [@Lewenstein09; @Cole_2012; @Cai12; @Galitski12; @Hofstetter12; @Fujimoto09; @Goldman12]. In particular, in the case of repulsive interactions, these have emphasized the various magnetic orderings and textures formed by the effective spins. One must note that in the large-interaction-strength limit, and close to half-filling, both the bosonic and fermionic situations can be described by effective and quite similar Heisenberg models involving the spin degrees of freedom only (see below). The case of fermions with attractive interaction has also been studied, emphasizing the impact of the spin-flip terms on the pairing states, like the BCS-BEC crossover or the Fulde-Ferrel-Larkin-Ovchinnikov (FFLO) phase [@Shenoy11; @Sademelo12; @Lewenstein10; @Iskin11; @Iskin12; @Iskin13]. The instabilities of attracting bosons with such a spin-orbit coupling have also been considered [@Riedl13].
Even though the Bose-Hubbard model and the Fermi-Hubbard model with a spin-orbit coupling display similar phases in the strong (repulsive) interaction limit, the behavior at small coupling is obviously different: the Mott regime of the bosonic models turns into a complex superfluid regime, whereas the fermionic models are expected to be in a Mott insulator state. Still, in the latter case, the evolution of the magnetic ordering from the non-interacting situation towards the Heisenberg model regime remains largely unexplored. In the present paper, using both a linear response approach and real space mean-field theory, we emphasize that the Fermi-Hubbard model at half-filling, in the presence of a spin-orbit coupling, still displays, at low interaction, an antiferromagnetic phase, corresponding to the Fermi-Hubbard model *without* spin-orbit coupling. The impact of the gauge field then results, at an intermediate interaction depending on the strength of the spin-orbit coupling, in a first order phase transition towards a non-collinear magnetic order, which further evolves at large interaction to the magnetic texture predicted by the effective Heisenberg model.
The paper consists of two main parts: in Section \[method\], we describe the theoretical framework (Fermi-Hubbard model, linear response theory, effective Heisenberg model). In Section \[result\] (i) we show that the non-interacting energies (band structure) always exhibit nesting at the antiferromagnetic ordering vector, resulting in a diverging spin-spin susceptibility at low temperature and, thereby, an instability towards an antiferromagnetic phase at small interaction; (ii) we provide the numerical mean-field results for different values of the spin-orbit coupling and the interaction, showing evidence for the transition from the antiferromagnetic order to a non-collinear magnetic order. In particular, the skyrmionic phase is shown to already exist for moderate values of the interaction.
Methods {#method}
=======
model
-----
In this paper we consider a system of spin-1/2 fermionic particles on a 2D square lattice, subjected to a synthetic gauge field (more will be said about the specific form of the gauge field later). The lattice spacing is set equal to unity, fixing both the spatial and momentum scales. The system Hamiltonian reads $$H_{\rm tot} = H_{\rm kin} + H_\mu + H_{\rm int},$$ where
$$\begin{aligned}
H_{\rm kin} &= - \sum_{\bf j,j'} c^\dag_{{\bf j},s} T^{s, s'}_{\bf j, j'} c_{{\bf j'},s'} \label{eq H parts kin} \\
H_\mu &= - \sum_{\bf j} \left( \mu_1 n_{{\bf j},1} + \mu_2 n_{{\bf j},2} \right) \label{eq H parts mu} \\
H_{\rm int} &= U \sum_{\bf j} \left(n_{{\bf j},1}-\frac{1}{2}\right)
\left( n_{{\bf j},2}-\frac{1}{2}\right), \label{eq H parts int}
\end{aligned}$$
where $T$ are the tunneling matrices, taken to be time independent in the following. $c^\dag_{{\bf j},s}, c_{{\bf j}',s'}$ are the usual fermionic creation and annihilation operators satisfying $\{ c_{{\bf j},s}, c^\dag_{{\bf j}',s'} \} = \delta_{\bf j,j'} \delta_{s,s'}$, $n_{{\bf j},s} = c^\dag_{{\bf j},s} c_{{\bf j},s}$ is the density operator, $\mu_s$ the chemical potential, $U$ the interaction strength and $s=1,2$ labels the spin degree of freedom. The interaction Hamiltonian can also be written, up to a constant, as $$H_{\rm int} = -\frac{2U}{3} \sum_{\bf j} {\bf S}_{\bf j} \cdot {\bf S}_{\bf j}, \[eq H\_int\_S\]$$ where $S^a_{\bf j} = \frac{1}{2} c^\dag_{{\bf j},s} \sigma^a_{s,s'} c_{{\bf j},s'}$ are the usual spin operators.
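The rewriting of the on-site interaction in terms of spin operators can be checked directly on the four-dimensional single-site Fock space. The following is a minimal numpy sketch (ours, not part of the paper): it builds the two fermionic modes by a Jordan-Wigner construction and verifies that $U(n_1-\tfrac12)(n_2-\tfrac12) = -\tfrac{2U}{3}\,\mathbf{S}\cdot\mathbf{S} + \tfrac{U}{4}$, i.e. that the spin form holds up to the dropped constant $U/4$:

```python
import numpy as np

# Two fermionic modes (spin s = 1, 2) on one site, via Jordan-Wigner.
a = np.array([[0., 1.], [0., 0.]])        # single-mode annihilation
Z = np.diag([1., -1.])                    # fermionic parity string
c1 = np.kron(a, np.eye(2))                # annihilate spin 1
c2 = np.kron(Z, a)                        # annihilate spin 2
n1 = c1.conj().T @ c1
n2 = c2.conj().T @ c2

# Spin operators S^a = (1/2) sum_{s,s'} sigma^a_{s,s'} c^dag_s c_{s'}
sigma = [np.array([[0, 1], [1, 0]], complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], complex)]
c = [c1, c2]
S = [0.5 * sum(sigma[a][s, sp] * c[s].conj().T @ c[sp]
               for s in range(2) for sp in range(2)) for a in range(3)]
SdotS = sum(Sa @ Sa for Sa in S)

U = 1.7                                   # arbitrary interaction strength
I4 = np.eye(4)
H_int = U * (n1 - 0.5 * I4) @ (n2 - 0.5 * I4)

# H_int equals -(2U/3) S.S up to the constant U/4
assert np.allclose(H_int, -(2 * U / 3) * SdotS + (U / 4) * I4)
```

The check also makes the mechanism transparent: $\mathbf{S}\cdot\mathbf{S}$ is $3/4$ on singly occupied sites and $0$ on empty or doubly occupied ones, so it penalizes double occupancy exactly as the original quartic term does.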
To be more specific, when needed, we will consider the case of the gauge fields corresponding to the spin-orbit coupling of the Rashba type, corresponding to the position independent tunnellings in the $x$ and $y$ directions:
\[soc\] $$\begin{aligned}
T_x &= t{\rm e}^{-i \alpha \sigma_y} =
t\left(
\begin{array}{cc}
\cos (\alpha ) & - \sin (\alpha ) \\
\sin (\alpha ) & \cos (\alpha )
\end{array}
\right) \\
T_y &= t{\rm e}^{i \alpha \sigma_x} =
t\left(
\begin{array}{cc}
\cos (\alpha ) & i\sin (\alpha ) \\
i \sin (\alpha ) & \cos (\alpha )
\end{array}
\right).
\end{aligned}$$
where $t$ denotes the global strength of the hopping amplitudes and we have used the labelling $T_x = T_{{\bf j,j}+1_x}$ and $T_y = T_{{\bf j,j}+1_y}$.
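The closed forms above follow from the identity ${\rm e}^{i\theta\,\hat n\cdot\boldsymbol\sigma} = \cos\theta + i\sin\theta\,\hat n\cdot\boldsymbol\sigma$. A small numerical sketch (ours, not the paper's code) compares a truncated-series matrix exponential against the explicit matrices, and checks that the hoppings are $t$ times a unitary:

```python
import numpy as np

def expm_series(M, terms=40):
    """Matrix exponential via truncated Taylor series (plenty for 2x2)."""
    out = np.eye(M.shape[0], dtype=complex)
    P = np.eye(M.shape[0], dtype=complex)
    for n in range(1, terms):
        P = P @ M / n
        out = out + P
    return out

t, alpha = 1.0, 0.3
sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]])

Tx = t * expm_series(-1j * alpha * sy)     # T_x = t e^{-i alpha sigma_y}
Ty = t * expm_series(1j * alpha * sx)      # T_y = t e^{+i alpha sigma_x}

Tx_closed = t * np.array([[np.cos(alpha), -np.sin(alpha)],
                          [np.sin(alpha),  np.cos(alpha)]])
Ty_closed = t * np.array([[np.cos(alpha), 1j * np.sin(alpha)],
                          [1j * np.sin(alpha), np.cos(alpha)]])
assert np.allclose(Tx, Tx_closed) and np.allclose(Ty, Ty_closed)

# the hoppings are t times a unitary: T^dag T = t^2 * identity
assert np.allclose(Tx.conj().T @ Tx, t**2 * np.eye(2))
assert np.allclose(Ty.conj().T @ Ty, t**2 * np.eye(2))
```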
Non-interacting case - $U=0$
----------------------------
In the case where the matrices $T_{\bf j,j'}$ are position independent, the Hamiltonian [Eq.(\[eq H parts kin\])]{}+[Eq.(\[eq H parts mu\])]{} is diagonalized in momentum space: $$H_0 = -\sum_{\bf k} C^\dagger_{\bf k}\, \mathcal{T}_{\bf k}\, C_{\bf k}, \[eq H diag\]$$ where $C^{\dagger}_{\bf k} = (c^{\dagger}_{{\bf k},1}, c^{\dagger}_{{\bf k},2})$ and $$\mathcal{T}_{\bf k} = \left(T_x {\rm e}^{-ik_x} + T^\dagger_x {\rm e}^{ik_x} + T_y {\rm e}^{-ik_y} + T^\dagger_y {\rm e}^{ik_y}\right) + \left(
\begin{array}{cc}
\mu_1 & 0 \\
0 & \mu_2
\end{array}
\right). \[eq Tk\]
Finally, the matrix $\mathcal{T}_{\bf k}$ is diagonalized with a unitary matrix $U_{\bf k}$, such that $$H_0 = \sum_{{\bf k},s} \epsilon_{{\bf k},s}\, d^\dagger_{{\bf k},s} d_{{\bf k},s}, \[eq H\_0 diag\]$$ where $(d^{\dagger}_{{\bf k},1}, d^{\dagger}_{{\bf k},2})=(c^{\dagger}_{{\bf k},1}, c^{\dagger}_{{\bf k},2})U^\dag_{\bf k}$.
In the specific case of the spin-orbit coupling [Eq.(\[soc\])]{}, one obtains the following eigenenergies: $$\epsilon_{{\bf k},1,2} = -\mu - 2t\cos\alpha\left(\cos k_x + \cos k_y\right) \mp \sqrt{4t^2\sin^2\alpha\left(\sin^2 k_x + \sin^2 k_y\right) + h^2}, \[eq eigenergies\]$$ where $\mu$ is the chemical potential and $h$ is the spin imbalance defined through $\mu_{1,2} = \mu \pm h$ (see [Eq.(\[eq Tk\])]{}). The situation gets simpler in the limit $\mu = h = 0$, which corresponds to half filling for all $\alpha$.
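At half filling ($\mu=h=0$) the dispersion can be cross-checked by diagonalizing $\mathcal{T}_{\bf k}$ numerically. The sketch below (ours, not the paper's code) assembles the Bloch matrix from its Pauli decomposition, an assumption consistent with the hoppings of Eq. (\[soc\]), and also verifies the nesting relation $\epsilon_{{\bf k}+{\bf q}_\pi,s} = -\epsilon_{{\bf k},\bar s}$ at ${\bf q}_\pi=(\pi,\pi)$, used later for the susceptibility:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]])
t, alpha = 1.0, np.pi / 3

def bands(kx, ky):
    """Eigenvalues of -T_k at mu = h = 0 (half filling), sorted ascending."""
    Tk = (2 * t * np.cos(alpha) * (np.cos(kx) + np.cos(ky)) * np.eye(2)
          + 2 * t * np.sin(alpha) * (np.sin(ky) * sx - np.sin(kx) * sy))
    return np.sort(np.linalg.eigvalsh(-Tk))

def bands_closed(kx, ky):
    """Closed-form dispersion at mu = h = 0."""
    common = -2 * t * np.cos(alpha) * (np.cos(kx) + np.cos(ky))
    split = 2 * t * np.sin(alpha) * np.sqrt(np.sin(kx)**2 + np.sin(ky)**2)
    return np.array([common - split, common + split])

kx, ky = 0.7, -1.2
assert np.allclose(bands(kx, ky), bands_closed(kx, ky))

# nesting at q_pi = (pi, pi): eps_{k+q, s} = -eps_{k, sbar}
e = bands(kx, ky)
eq = bands(kx + np.pi, ky + np.pi)
assert np.allclose(eq, -e[::-1])
```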
Linear Response in small $U$ limit
----------------------------------
Turning on the interaction, one expects the Fermi liquid phase to be unstable towards a magnetically ordered phase. This instability of the system can be captured using standard linear response theory (see Appendix \[sec App susc\] for details), more precisely from the spin-spin susceptibility. Let us start with the non-interacting Hamiltonian of the form $$H = H_0 + H_{\rm ext},$$ where $H_0$ is given by [Eq.(\[eq H diag\])]{} and $$H_{\rm ext} = \sum_{\bf j} S_{\bf j}^a B_{\bf j}^a (t) = \frac{1}{N}\sum_{\bf q} S_{\bf -q}^a B_{\bf q}^a (t),$$ where $B$ is the external driving force. Using the Fourier transform of the fermionic operators $c_{{\bf k},s} = \frac{1}{\sqrt{N}} \sum {\rm e}^{-i{\bf k \cdot j}} c_{{\bf j},s}$, where $N$ is the number of sites, one gets $S_{\bf j}^a = \frac{1}{N} \sum_{\bf q} {\rm e}^{i{\bf q \cdot j}} S^a_{\bf q}$ and $S^a_{\bf q} = \frac{1}{2}\sum_{\bf k} c^\dag_{{\bf k},s} \sigma^a_{s,s'} c_{{\bf k+q},s'}$. The spin-spin susceptibility is diagonal in momentum space and reads $$\chi_{\bf q}^{a,b}(\omega) = -i \int_{0}^{+\infty} {\rm d}t\, {\rm e}^{i\omega t}\, \left\langle \left[ S^a_{\bf q}(t), S^b_{-{\bf q}}(0) \right] \right\rangle,$$ where the thermal average and the time evolution are performed with the unperturbed Hamiltonian $H_0$. Therefore, the analytic expression for the susceptibility is easily obtained by diagonalizing $H_0$. After some manipulation, we find that
$$\begin{aligned}
\chi_{\bf q}^{a,b}(\omega) &= \frac{1}{4N} \sum_{{\bf k},s,s'} (\Sigma^a_{\bf k,k+q})_{s,s'}\, (\Sigma^b_{\bf k+q,k})_{s',s}\, F_{s,s'}(\omega,{\bf k,q}) \\
&\simeq \frac{1}{4}\int \frac{{\rm d}^2 k}{(2\pi)^2} \sum_{s,s'} (\Sigma^a_{\bf k,k+q})_{s,s'}\, (\Sigma^b_{\bf k+q,k})_{s',s}\, F_{s,s'}(\omega,{\bf k,q}), \end{aligned} \[eq chi q\_omega Main\]$$
where $n(\epsilon) = (1+{\rm e}^{\beta \epsilon})^{-1}$ is the Fermi function. We have introduced $$F_{s,s'}(\omega,{\bf k,q}) = \frac{n(\epsilon_{{\bf k},s}) - n(\epsilon_{{\bf k+q},s'})}{\omega + \epsilon_{{\bf k},s} - \epsilon_{{\bf k+q},s'} + i\eta} \[eq integrand F\]$$ and $$\Sigma^a_{\bf k,k+q} = U_{\bf k}\, \sigma^a\, U^\dagger_{\bf k+q}, \[eq S def\]$$ where $\eta$ is an infinitesimal convergence factor. In the usual situation of a Hamiltonian diagonal in the original spin space, the matrices $U_{\mathbf{k}}$ are simply the identity and one recovers the standard spin-spin susceptibility.
In what precedes, we have derived the susceptibility [Eq.(\[eq chi q\_omega Main\])]{} for the non-interacting system. However, the interaction among the fermions affects the spin-spin susceptibility. This can be captured, in the random phase approximation (RPA) framework, by deriving, in a self-consistent way, the effective propagator for the spin fluctuations. This is equivalent to performing a mean-field approximation on the interacting Hamiltonian [@Jensen_1991; @Altland_n_Simons_2010; @Demler11].
Let us recall the main steps. Starting from the interaction Hamiltonian [Eq.(\[eq H\_int\_S\])]{}, the effect of the interaction amounts to the introduction of an effective driving force given by $$H^{\rm eff}_{\rm ext} = H_{\rm ext} + H^{\rm MF}_{\rm int} = \frac{1}{N}\sum_{\bf q} S^b_{-{\bf q}}(t)\, (B^b_{\bf q})^{\rm eff}(t),$$ where $(B^b_{\bf q})^{\rm eff}(t) = \left[ B^b_{\bf q}(t) - 2g\langle S^b_{\bf q}\rangle(t) \right]$ and we have introduced $g=2U/3$ (see Appendix \[sec App susc\]). After the manipulation described in Appendix \[sec App susc\], one finds the expression for the average values of the spin operators $$\langle {\bf S}_{\bf q}\rangle(\omega) = M^{-1}_{\bf q}(\omega)\, \chi_{\bf q}(\omega)\, {\bf B}_{\bf q}(\omega), \[rpachi\]$$ where $M^{ab}_{\bf q}(\omega)=\delta^{a,b} + 2g \chi^{a,b}_{\bf q}(\omega)$. $M^{-1}_{\bf q}(\omega) \chi_{\bf q}(\omega)$ is therefore the RPA susceptibility, whose singularities in the complex $\omega$ plane correspond to the vanishing eigenvalues of $M_{\bf q}(\omega)$.
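A minimal numerical illustration of the resulting instability condition (our sketch, with stated assumptions): for $\alpha=0$ the matrices $U_{\bf k}$ are trivial and, at ${\bf q}_\pi=(\pi,\pi)$ and $\omega=0$, the nesting $\epsilon_{{\bf k}+{\bf q}_\pi}=-\epsilon_{\bf k}$ reduces the Lindhard fraction to $-\tanh(\beta\epsilon/2)/(2\epsilon)$. With the prefactor $1/(4N)$ assumed here (consistent with $S^a = \tfrac12\sigma^a$), $\chi^{zz}$ is negative, grows in magnitude as the temperature is lowered, and yields a finite RPA critical interaction $U_c = -\tfrac{3}{4\chi}$ at any fixed $\beta$:

```python
import numpy as np

def chi_zz_qpi(beta, L=200, t=1.0):
    """chi^zz at q = (pi, pi), omega = 0, alpha = 0 (plain square lattice).

    Nesting (eps_{k+q} = -eps_k) turns the Lindhard fraction into
    -tanh(beta * eps / 2) / (2 * eps); the spin trace contributes a factor
    of 2 and the assumed overall prefactor is 1/(4N)."""
    k = (np.arange(L) + 0.5) * 2 * np.pi / L      # offset momentum grid
    kx, ky = np.meshgrid(k, k)
    eps = -2 * t * (np.cos(kx) + np.cos(ky))
    safe = np.where(np.abs(eps) < 1e-12, 1.0, eps)
    frac = np.where(np.abs(eps) < 1e-12, beta / 4.0,       # eps -> 0 limit
                    np.tanh(beta * safe / 2.0) / (2.0 * safe))
    return -0.5 * frac.mean()

chi_lo, chi_hi = chi_zz_qpi(beta=4.0), chi_zz_qpi(beta=12.0)
assert chi_lo < 0 and chi_hi < chi_lo      # negative, diverging with beta
Uc = -0.75 / chi_hi                        # RPA estimate U_c = -3/(4 chi)
assert Uc > 0
```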
Large $U$ limit - the Heisenberg Hamiltonian
--------------------------------------------
Following the method of [@MacDonald_1988], one obtains, in the large (repulsive) interaction $U$ limit and at half-filling, the following effective Heisenberg Hamiltonian, up to the second order in the $t/U$ expansion:
$$H = H_c + \sum_{\delta=x,y}\ \sum_{\langle i,i+\delta\rangle} \left[ \sum_{a=x,y,z} J^a_\delta\, S^a_i S^a_{i+\delta} + {\bf D}_{\delta,+}\cdot ({\bf S}_i \times {\bf S}_{i+\delta})_+ + {\bf D}_{\delta,-}\cdot ({\bf S}_i \times {\bf S}_{i+\delta})_- \right], \[eq H Heis gen\]$$
where $$\begin{aligned}
({\bf S}_i \times {\bf S}_{i+\delta})_+ &= (S^y_1 S^z_2,\; S^z_1 S^x_2,\; S^x_1 S^y_2) \\
({\bf S}_i \times {\bf S}_{i+\delta})_- &= -(S^z_1 S^y_2,\; S^x_1 S^z_2,\; S^y_1 S^x_2)
\end{aligned}$$ are the "positive" and "negative" parts of the vector product, the subscripts $1,2$ referring to the sites $i$ and $i+\delta$. In the most general case, the coefficients $J^a_\delta, D^p_\delta$ and $H_c$, $\delta=x,y$, $a=x,y,z$, $p=1,2,3$, are quadratic functions of the elements of the tunnelling matrices $T$. The general expression can be found in Appendix \[appheis\]. However, the situation simplifies considerably for the specific case of the spin-orbit coupling of Rashba type, see Eq. (\[soc\]):
$$\begin{aligned}
J^x_\delta &= 4\lambda\left(1 - 2\sin^2\alpha\,\delta_{\delta,x}\right) \\
J^y_\delta &= 4\lambda\left(1 - 2\sin^2\alpha\,\delta_{\delta,y}\right) \\
J^z_\delta &= 4\lambda\cos 2\alpha
\end{aligned}$$
for both spatial directions $\delta=x,y$,
$$\begin{aligned}
D^x_{+} &= D^x_{-} = 0 \\
D^y_{+} &= D^y_{-} = -4\lambda\sin 2\alpha \\
D^z_{+} &= D^z_{-} = 0
\end{aligned}$$
for $\delta=x$ (tunnelling $T_x$) and
$$\begin{aligned}
D^x_{+} &= D^x_{-} = -4\lambda\sin 2\alpha \\
D^y_{+} &= D^y_{-} = 0 \\
D^z_{+} &= D^z_{-} = 0
\end{aligned}$$
for $\delta=y$ (tunnelling $T_y$),
$$H_c = -4 \lambda \frac{\mathds{1}}{4},$$
with $\lambda = t^2/U$. This Hamiltonian is identical to Eq. (2) of [@Cole_2012], where one has to set their parameter $\lambda=1$. In the following, we take $t$ equal to unity to set the energy scale.
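The energy scale $\lambda = t^2/U$ of the effective model can be illustrated on a two-site Hubbard dimer. The sketch below (ours, not the paper's; exact diagonalization with a Jordan-Wigner construction, and $\alpha=0$ for simplicity) checks that the singlet-triplet splitting at half filling approaches $4\lambda = 4t^2/U$ at large $U$:

```python
import numpy as np
from functools import reduce

def fermion_ops(n_modes):
    """Jordan-Wigner annihilation operators for n_modes fermionic modes."""
    a = np.array([[0., 1.], [0., 0.]])
    Z = np.diag([1., -1.])
    I = np.eye(2)
    return [reduce(np.kron, [Z] * i + [a] + [I] * (n_modes - 1 - i))
            for i in range(n_modes)]

t, U = 1.0, 20.0
c = fermion_ops(4)     # modes: 0:(site1,up) 1:(site1,dn) 2:(site2,up) 3:(site2,dn)
n = [ci.T @ ci for ci in c]
I16 = np.eye(16)

# alpha = 0: spin-independent hopping between the two sites
H = -t * sum(c[s].T @ c[s + 2] + c[s + 2].T @ c[s] for s in (0, 1))
H += U * ((n[0] - 0.5 * I16) @ (n[1] - 0.5 * I16)
          + (n[2] - 0.5 * I16) @ (n[3] - 0.5 * I16))

# project onto the half-filled (two-particle) sector: 6 states
idx = np.where(np.isclose(np.diag(sum(n)), 2.0))[0]
E = np.linalg.eigvalsh(H[np.ix_(idx, idx)])

J_num = E[1] - E[0]                     # singlet-triplet splitting
assert E.size == 6
assert np.allclose(E[1:4], E[1])        # threefold-degenerate triplet
assert np.isclose(J_num, 4 * t**2 / U, rtol=0.05)
```

The exact dimer splitting is $\sqrt{U^2/4+4t^2}-U/2 \to 4t^2/U$, so the 5% tolerance at $U=20t$ reflects the $O(t^4/U^3)$ corrections beyond second order.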
Results & Discussion {#result}
====================
Small U limit
-------------
Here, we study the small-interaction behaviour of the lattice gas at half-filling, $\mu=h=0$, by means of the susceptibility, given by [Eq.(\[eq chi q\_omega Main\])]{}. The instability of the non-interacting ground state is signalled by a divergence of the DC ($\omega=0$) RPA susceptibility [Eq.(\[rpachi\])]{}, corresponding to a vanishing eigenvalue of $M_{\mathbf{q}}(0)$. The latter corresponds to an eigenvalue of $\chi_{\mathbf{q}}(0)$ reaching the value $-1/2g$. In the following, we evaluate the susceptibility by numerical integration of [Eq.(\[eq chi q\_omega Main\])]{} over the first Brillouin zone. We will discuss general $\mathbf{q}$ values below, but it turns out that the mean-field simulations indicate the onset of the AF phase at small couplings $U$, corresponding to ${\bf q}={\bf q}_\pi = (\pm \pi,\pm \pi)$. For this specific value ${\bf q = q}_\pi$, one can see, using [Eq.(\[eq eigenergies\])]{}, that $\epsilon_{{\bf k +q}_\pi,s} = -\epsilon_{{\bf k},\bar{s}}$, where $\bar{s}$ denotes the spin opposite to $s$. Moreover, in this case, the Fermi energy is $E_F=0$ and one gets nested Fermi surfaces for any value of the gauge field parameter $\alpha$; the Fermi surfaces, together with the susceptibility integrand function $F_{s,s'}(0,{\bf k},(\pi,\pi))$, [Eq.(\[eq integrand F\])]{}, are plotted in [Fig.(\[fig Fermi surf\])]{} for different values of $\alpha$. The coincidence of the maximum of the integrand function $F$ with the Fermi surfaces is a specific feature of ${\bf q}_\pi$ and is responsible for the onset of the AF phase at sufficiently low temperature, i.e. large $\beta$ values. Indeed, we note that, in the case of nested Fermi surfaces, the susceptibility possesses a ${\rm ln}\,\beta$ divergence.
![(Color online) Plots of Fermi surfaces and susceptibility integrand $F$ (see [Eq.(\[eq integrand F\])]{}) in the first BZ for ${\bf q} = (\pi, \pi)$. Rows: successive values of $\alpha=0, \pi/3, \pi/2$; Columns: (i) Fermi surfaces. The nesting of Fermi surfaces is indicated explicitly for $\alpha=\pi/3$. For $\alpha=\pi/2$ the Fermi surface becomes a set of isolated Dirac points. (ii) $F$ for spins ($s,s’$)=(2,1), (iii) $F$ for spins ($s,s’$)=(1,2). The green and red colours in (ii) and (iii) represent the maximum of $F$. The Fermi surfaces for given $(s,s')$ in (ii) and (iii) are indicated by white lines. The coincidence of Fermi surfaces and the maximum of the integrand $F$ is not generic, but is specific for ${\bf q}_\pi = (\pm \pi, \pm \pi)$. []{data-label="fig Fermi surf"}](FermiSurf.png){width="9cm"}
Moreover, for $\alpha=0$, the presence of Van Hove singularities at the Fermi energy adds a leading ${\rm ln}^2\beta$ divergence. Therefore, we fit the susceptibility with the function $$\chi_{\rm fit} = a\, {\rm ln}^2\beta + b\, {\rm ln}\,\beta + c. \[eq fitting fction\]$$
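The fit is linear in the coefficients, so it reduces to a least-squares solve in the variables $({\rm ln}^2\beta,\, {\rm ln}\,\beta,\, 1)$. A sketch on synthetic data (ours; the coefficient values are illustrative, chosen near the $\alpha=0$ row of Table \[tab fit\]):

```python
import numpy as np

rng = np.random.default_rng(0)
beta = np.linspace(2.0, 40.0, 14)   # n = 14 inverse-temperature points, as in the text
x = np.log(beta)

# synthetic "data": illustrative coefficients close to the alpha = 0 fit
a0, b0, c0 = -0.013, -0.073, -0.1
chi = a0 * x**2 + b0 * x + c0 + 1e-4 * rng.standard_normal(x.size)

# least-squares fit of chi_fit = a ln^2(beta) + b ln(beta) + c
A = np.column_stack([x**2, x, np.ones_like(x)])
coef, *_ = np.linalg.lstsq(A, chi, rcond=None)
sigma2 = np.mean(np.abs(A @ coef - chi) ** 2)   # data variance as defined in the text

assert np.allclose(coef, [a0, b0, c0], atol=5e-3)
assert sigma2 < 1e-6
```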
$\alpha$ $a\times 10^3$ $b$ $c$ $\sigma^2\times 10^6$
----------- ---------------- --------- -------- -----------------------
0 (theor) -13. -0.073 -0.098 120.
0 (fit) -13. -0.073 -0.1 0.73
0.05 3. -0.13 -0.057 6.6
$\pi/12$ 0.9 -0.065 -0.11 2.
$\pi/6$ 0.38 -0.043 -0.1 2.7
$\pi/4$ -0.76 -0.025 -0.1 1.6
$\pi/3$ -0.68 -0.016 -0.088 0.032
$5\pi/12$ -0.85 -0.0052 -0.082 1.6
$\pi/2$ -0.95 0.0047 -0.084 1.4
: Values of the fitting parameters $a,b,c$ of [Eq.(\[eq fitting fction\])]{} for different values of the gauge field parameter $\alpha$. $\sigma^2$ is the usual data variance, defined in the text. [[ The first line corresponds to the theoretical prediction [Eq.(\[eq chi theor\])]{} and agrees well with the numerical computation of the susceptibility [Eq.(\[eq chi q\_omega Main\])]{} shown in the second line. ]{}]{} []{data-label="tab fit"}
![(Color online) a) Plot of the minimal eigenvalue of $\chi$ against $\beta$ for ${\bf q}=(\pi,\pi)$. The data points were calculated using [Eq.(\[eq chi q\_omega Main\])]{} and the lines are the fits [Eq.(\[eq fitting fction\])]{}. The lowest curve corresponds to $\alpha=0$ and the highest to $\alpha=\pi/2$ in monotonically increasing order corresponding to Table \[tab fit\]. b) [[ Fitting parameter $b$ against the gauge field parameter $\alpha>0$ (only non zero values of $\alpha$). The plot shows a monotonic decrease of $|b|$ - see text for details. ]{}]{} []{data-label="fig Susc vs. beta"}](ChiFit_corr.png){width="8cm"}
The data variance in Table \[tab fit\] reads $\sigma^2 = \frac{1}{n}\sum_{i=1}^n |\chi_{{\rm fit},i} - \chi_i|^2$, where $\chi_i$ stands for the minimal eigenvalue at (given) ${\bf q}_\pi$ and $n=14$ is the number of simulated data points. As shown in [Fig.(\[fig Susc vs. beta\])]{}, the coefficient $b$ of the ${\rm ln}\,\beta$ divergence increases monotonically with $\alpha$ (for $\alpha>0$). This comes from the fact that the contribution to the susceptibility [Eq.(\[eq chi q\_omega Main\])]{} is proportional to the area of the Fermi surface (the length of the 1D surface in our case), which decreases with increasing $\alpha$ and shrinks to the Dirac points for $\alpha = \pi/2$, as shown in [Fig.(\[fig Fermi surf\])]{}.
Even though the AF order is expected at $T=0$, at higher temperature the susceptibility develops minima at ${\bf q} \neq {\bf q}_\pi$. An example for a diagonal ${\bf q}$, ${\bf q} = (q_d, q_d)$ with $q_d \in \left[0, \pi \right]$, is shown in [Fig.(\[fig Susc vs. q\])]{}. Therefore, at this temperature, the linear response analysis predicts an instability towards a different magnetic order. In addition, since these minima correspond to a finite value of the susceptibility $\chi_m$, the phase transition is predicted to occur at a finite interaction $U=-\frac{3}{4}\frac{1}{\chi_m}$. However, this prediction assumes a second order phase transition, and at this (large) value of $U$ another magnetic order might already have appeared. In addition, since the value of the minimum of the susceptibility is not much lower than that of the AF order, this might explain why, from a numerical point of view (finite size...), the onset of those non-AF phases at finite temperature and finite interaction has not been observed yet.
![(Color online) Susceptibility eigenvalues vs. $\bf q$ on the diagonal of the Brillouin zone, ${\bf q} = (q_d,q_d)$. An example for $\alpha=\pi/4$ and $\beta=8.7$ is shown and demonstrates a minimum of the susceptibility occurring for ${\bf q} \neq {\bf q}_\pi$. []{data-label="fig Susc vs. q"}](ChiEigVals_3_3.png){width="8cm"}
Mean Field numerical simulation
-------------------------------
To further investigate the properties of the system, we study the properties of the mean-field Hamiltonian ground state. More precisely, at finite temperature, we minimize the mean-field free energy $F_{\mathrm{MF}}=-\frac{1}{\beta}\ln{Z}$, where $Z$ is the partition function associated to the mean-field Hamiltonian $H_{\mathrm{MF}}$ (See Appendix \[sec App susc\] for details): $$\label{meanfieldh}
\begin{aligned}
H_{\mathrm{MF}}= &- \sum_{\bf j,j'} c^\dag_{{\bf j},s} T^{s, s'}_{\bf j, j'} c_{{\bf j'},s'} -\frac{4U}{3} \sum_{\bf j} {\langle\mathbf{S_j}\rangle \cdot \mathbf{S_j}} \\
&+\frac{2U}{3}\sum_{\bf j} {\langle\mathbf{S_j}\rangle \cdot \langle \mathbf{S_j}\rangle}.
\end{aligned}$$ At half-filling, with a repulsive interaction, the total average density is expected to remain fixed at unity, the relevant degrees of freedom being the average values of the spin operators $S_j^a$. The present mean-field decoupling thus respects the $SU(2)$ invariance of the interaction and allows for all possible magnetic orderings, in particular those obtained in the large $U$ limit from the effective Heisenberg model [@Cole_2012; @Hofstetter12]. From that point of view, even though other mean-field decoupling schemes are possible [@schultz90], the present one is expected to capture qualitatively the properties of the magnetic phases for different values of $U$ and $\alpha$.
The numerical calculation has been performed on a $36 \times 36$ square lattice with periodic boundary conditions for different values of the parameters $\alpha, U, \beta$. On each lattice site, the three components of the spin $\langle\mathbf{S_j}\rangle$ are independent mean-field parameters. Since the mean-field Hamiltonian is quadratic in the fermionic operators, the free energy can be obtained by diagonalizing a $2N\times 2N$ matrix, where $N$ is the number of lattice sites. Even though the exact structure of the matrix is slightly different from the one obtained in the BCS case [@Dubi_07; @Chen09; @Iskin13], there is a one-to-one mapping between the two situations, namely a particle-hole transformation on one of the species. From a numerical point of view, the free energy is minimized using a mixed quasi-Newton and conjugate-gradient method; additional checks (e.g. different initial values of the spins) were performed to ensure that the global minimum has been reached. Finally, even though the mean-field calculation captures the temperature dependence of the spin degrees of freedom, it only describes the Mott transition; the determination of the true critical temperature towards quasi-long-range magnetic order, especially in the large interaction limit, requires taking into account effects beyond mean field [@Sademelo97].
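The structure of the calculation can be sketched in a few dozen lines. The following is a minimal illustration (ours, not the paper's solver), with simplifying assumptions: zero temperature, a small $6\times6$ lattice, simple self-consistent iteration with linear mixing instead of the quasi-Newton/conjugate-gradient minimization, and $\alpha=0$, where the expected mean-field solution is the antiferromagnet:

```python
import numpy as np

L, t, alpha, U = 6, 1.0, 0.0, 4.0          # alpha = 0: AF order expected
N = L * L
sig = [np.array([[0, 1], [1, 0]], complex),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], complex)]

# Rashba-type hoppings, Eq. (soc)
Tx = t * np.array([[np.cos(alpha), -np.sin(alpha)],
                   [np.sin(alpha), np.cos(alpha)]], complex)
Ty = t * np.array([[np.cos(alpha), 1j * np.sin(alpha)],
                   [1j * np.sin(alpha), np.cos(alpha)]])

def site(ix, iy):                          # periodic boundary conditions
    return (ix % L) * L + (iy % L)

Hkin = np.zeros((2 * N, 2 * N), complex)   # basis index: 2*site + spin
for ix in range(L):
    for iy in range(L):
        j = site(ix, iy)
        for T, jp in ((Tx, site(ix + 1, iy)), (Ty, site(ix, iy + 1))):
            Hkin[2 * j:2 * j + 2, 2 * jp:2 * jp + 2] -= T
            Hkin[2 * jp:2 * jp + 2, 2 * j:2 * j + 2] -= T.conj().T

# staggered initial guess for <S_j>, along z
S = np.zeros((N, 3))
for ix in range(L):
    for iy in range(L):
        S[site(ix, iy), 2] = 0.3 * (-1) ** (ix + iy)

for _ in range(300):
    H = Hkin.copy()
    for j in range(N):    # on-site term -(4U/3) <S_j>.S_j, with S = sigma/2
        H[2 * j:2 * j + 2, 2 * j:2 * j + 2] -= (2 * U / 3) * sum(
            S[j, a] * sig[a] for a in range(3))
    _, V = np.linalg.eigh(H)
    occ = V[:, :N]                         # T = 0, half filling: N lowest states
    Snew = np.empty_like(S)
    for j in range(N):
        rho = occ[2 * j:2 * j + 2, :] @ occ[2 * j:2 * j + 2, :].conj().T
        Snew[j] = [0.5 * np.real(np.trace(rho @ sig[a])) for a in range(3)]
    if np.max(np.abs(Snew - S)) < 1e-9:
        break
    S = 0.5 * S + 0.5 * Snew               # linear mixing for stability

m_stag = np.linalg.norm(
    sum(S[site(ix, iy)] * (-1) ** (ix + iy)
        for ix in range(L) for iy in range(L))) / N
assert m_stag > 0.05                       # staggered (AF) order survives
assert np.abs(S).max() <= 0.5 + 1e-9       # spin-1/2 bound |<S>| <= 1/2
```

Replacing the hard $T=0$ filling by Fermi factors and adding the constant $+\frac{2U}{3}\sum_j \langle\mathbf{S_j}\rangle^2$ turns the converged energy into the mean-field free energy of Eq. (\[meanfieldh\]).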
The results are summarized in Table \[tab Summary\].
-------------------------------------------
{width="8cm"}
-------------------------------------------
: (Color online) Summary of the MF results. *Legend:* AF (blue) - antiferromagnetic phase, SI (dark red), SI2 (orange) - spiral phases, SkX (yellow) - Skyrmion crystal, see text and Table \[tab phase orderings\] for definitions; For fixed values of $\alpha$, we show phases which occur as a function of $\beta$, the inverse temperature, and $U$, the interaction strength. For small values of the interaction, AF phase occurs for all $\alpha$ and for sufficiently high values of $\beta$ (low temperatures), in agreement with the results predicted by the linear response theory. For increasing values of $U$, a phase transition occurs towards different phases (SI, SI2, SkX), depending on $\alpha$. For other values of $\alpha$, not shown here, we have found a similar scenario. For $\alpha\geq0.4\pi$, the logarithmic divergence of the susceptibility with the temperature is very slow, such that the AF phase only appears for very low values of the temperature and, from a numerical point of view, is very difficult to observe with our method. Coexistence of different phases is indicated (either of two well defined phases or of a well defined phase and an unknown phase (coex)).[]{data-label="tab Summary"}
Only a few values of the gauge field and of the interaction are presented, since the focus of the paper is on the generic evolution of the phases of the system from the non-interacting situation to the large-interaction limit. For the parameters under consideration, we have identified the following phases:

- Antiferromagnet (AF), ${\bf q}=(\pi, \pi)$;
- Spiral (SI), ${\bf q} = (q, q')$;
- Spiral 2 (SI2), ${\bf q} = (q, \pi)$;
- Skyrmion crystal (SkX), ${\bf q} = (\pi, \pm \pi/3)$;

where ${\bf q}$ is defined modulo the periodicity of the BZ. As one can see from Table \[tab Summary\], an AF phase always shows up first at low interaction; when the normal-AF transition occurs at low interaction, the AF phase only exists at very low temperature, a behaviour similar to the BCS situation. For other values of $\alpha$, not shown here, we have found a similar scenario. For $\alpha\geq0.4\pi$, the logarithmic divergence of the susceptibility with the temperature is very slow, such that the AF phase only appears at very low temperatures and, from a numerical point of view, is very difficult to observe with our method.
For larger interaction values, the AF order evolves into a phase displaying a magnetic texture (spiral, ...), corresponding to peaks in $|\mathbf{S(q)}|^2$ away from the corner of the BZ. The transition is of first order, since the AF order parameter does not vanish at the transition point, at which the magnetic texture also appears.
Then, for larger interaction strength and depending on the gauge-field parameter, the magnetic texture can evolve further into another phase, finally reaching the spin configuration predicted by the Heisenberg model. It is more difficult to determine the type of transition from one phase to the other, but the numerical data suggest a smooth change in the location of the peaks of $|\mathbf{S(q)}|^2$, i.e. a crossover within the numerical resolution of the simulation.
Finally, one should note that the critical value $U_c(\alpha)$ of the transition from the AF order is not expected to trace a smooth curve in the $U - \alpha$ plane since, in the magnetic texture phase, the different mean-field Hamiltonians $H_{\mathrm{MF}}(\alpha)$ have different translational symmetries, a situation similar to the Hofstadter-Hubbard model describing fermions (or bosons) in an external magnetic field. Nevertheless, our numerics suggest that the critical interaction of the transition from the AF order to a magnetic texture phase increases with the gauge-field parameter $\alpha$, and that the AF order on the $\alpha=0$ line is unstable towards a magnetic texture phase in the presence of an arbitrarily small spin-orbit coupling.
In [Fig.(\[fig Sk sq\])]{}, an example of a real-space spin configuration $\braket{S^a_{\bf j}}$ together with its Fourier transform $|S({\bf q})|^2$ is shown for $\alpha = 0.3\pi$ and $\beta=50$. The different phases (AF, SI, SkX for $U$=4.5, 4.75, 10) or their coexistence (SI + SkX for $U$ = 6.5) are clearly indicated by the peaks of $|S({\bf q})|^2$. As one can see, in the large-$U$ limit one recovers the spin configuration expected from the effective Heisenberg model, i.e. a $3\times3$ Skyrmion crystal, which is a non-planar magnetic order with a non-vanishing Skyrmion density $\mathbf{S_j}\cdot\left(\mathbf{S_{j+1_x}}\times\mathbf{S_{j+1_y}}\right)$. The magnetic orders can be parametrized by the peak values of $|{\bf S(q)}|^2$. More specifically, each peak ${\bf q}$ gives rise to a spin wave, which can be described as [@Sachdev_2003] $$\braket{S^a_{\bf j}} = N_1^a({\bf q})\cos({\bf q}\cdot{\bf j}) + N_2^a({\bf q})\sin({\bf q}\cdot{\bf j}),$$ \[eq mag order\] with a further distinction between collinear, ${\bf N}_1 ({\bf q}) \times {\bf N}_2 ({\bf q}) = 0$, and non-collinear, ${\bf N}_1 ({\bf q}) \times {\bf N}_2 ({\bf q}) \neq 0$, orders. The values of ${\bf q}, {\bf N}_1({\bf q})$ and ${\bf N}_2({\bf q})$ are summarized in Appendix \[sec App mag orders\].
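The identification of a magnetic order from the peaks of $|S({\bf q})|^2$ can be sketched in a few lines: build a spin texture with a known wave vector, Fourier transform it, and locate the dominant peak. The example below uses an assumed planar spiral at one of the SkX wave vectors quoted in the text; it is a toy construction, not the paper's data.

```python
import numpy as np

L = 36
q = np.array([np.pi, np.pi / 3])   # one of the SkX wave vectors above
jx, jy = np.meshgrid(np.arange(L), np.arange(L), indexing="ij")
phase = q[0] * jx + q[1] * jy
# S_j = N1 cos(q.j) + N2 sin(q.j) with orthogonal N1, N2 (toy choice)
S = np.stack([np.cos(phase), np.sin(phase), np.zeros_like(phase)])

Sq = np.fft.fft2(S, axes=(1, 2)) / L**2
inten = (np.abs(Sq) ** 2).sum(axis=0)   # |S(q)|^2, summed over a = x,y,z
half = inten[:, : L // 2 + 1]           # restrict to q_y in [0, pi]
peak = np.unravel_index(np.argmax(half), half.shape)
qx_peak = 2 * np.pi * peak[0] / L
qy_peak = 2 * np.pi * peak[1] / L
print(qx_peak / np.pi, qy_peak / np.pi)  # -> 1.0 0.3333333333333333
```

Restricting the search to $q_y \in [0, \pi]$ mirrors the convention used in Appendix \[sec App mag orders\], where only the $q_y>0$ peaks are tabulated.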
        ${\bf q}$            ${\bf N}_1({\bf q}) \times {\bf N}_2({\bf q})$   ${\bf N}_1({\bf q}) \cdot {\bf N}_2({\bf q})$
  ----- -------------------- ------------------------------------------------ -----------------------------------------------
  AF    $(\pi, \pi)$         0                                                0
  SI    $(q, q')$            $\neq 0$                                         0
  SI2   $(q, \pi)$           $\neq 0$                                         0
  SkX   $(\pi, \pm \pi/3)$   non-planar                                       $\neq 0$

  : Summary of the magnetic phases. The different phases are defined by the value of ${\bf q}$ at which $|{\bf S(q)}|^2$ peaks; a further distinction of the magnetic orders is given by the values of ${\bf N_1 \cdot N_2}$ and ${\bf N_1 \times N_2}$. The Skyrmion phase (SkX) corresponds to a pair of parameters $({\bf N}_1({\bf q_{\pm}}),{\bf N}_2({\bf q_{\pm}}))$ at inequivalent positions ${\bf q_{\pm}}$ in the Brillouin zone, with, in addition, non-collinear ${\bf N}_1({\bf q_{\pm}}) \times {\bf N}_2({\bf q_{\pm}})$ vectors. See also [Eq.(\[eq mag order\])]{} and the text for details. []{data-label="tab phase orderings"}
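The classification rules of the table can be turned into a small helper function. The sketch below inspects a single peak only, whereas the SkX identification in the text requires a pair of inequivalent peaks, so only AF/SI/SI2 are distinguished here; the tolerance is an arbitrary assumption.

```python
import numpy as np

def classify(q, N1, N2, tol=1e-8):
    # Single-peak version of the table: AF for a collinear (pi, pi) peak,
    # SI2 for a non-collinear (q, pi) peak, SI for a generic
    # non-collinear peak. SkX (pair of peaks) is outside this sketch.
    qx, qy = q
    cross = np.linalg.norm(np.cross(N1, N2))
    if np.allclose(np.abs([qx, qy]), np.pi, atol=tol) and cross < tol:
        return "AF"
    if np.isclose(abs(qy), np.pi, atol=tol) and cross > tol:
        return "SI2"
    if cross > tol:
        return "SI"
    return "unknown"

print(classify((np.pi, np.pi), np.array([1.0, -1.0, 0.0]),
               np.zeros(3)))                                   # -> AF
print(classify((0.5, 0.8), np.array([1.0, 0.0, 0.0]),
               np.array([0.0, 1.0, 0.0])))                     # -> SI
```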
Conclusion
==========
In summary, we have studied the quantum phase transitions of the Fermi-Hubbard model on a square lattice at half-filling in the presence of an effective spin-orbit coupling. We have shown that at small interaction the system always enters an AF order, then undergoes a first-order phase transition to a phase displaying a magnetic texture and, eventually, reaches at large interaction strength the magnetic texture predicted by the associated Heisenberg model.
In addition to the half-filling situation presented here, a possible extension concerns the doped case or the population-imbalanced case, where more exotic magnetic phases are expected to occur, possibly in competition with unconventional superconductivity. One should also take into account terms beyond mean field to determine properly the critical temperature of the transition and to estimate the strength of the quantum fluctuations, thus allowing for a better comparison with possible experimental results.
Apart from the (magnetic) properties of the ground state, it would be interesting to study the excitations above it, in particular the dynamical response of the system to external perturbations, such as (sudden) quenches of the interaction or of the gauge field, which can be achieved efficiently in cold-atom experiments.
Linear response of the spin systems {#sec App susc}
===================================
From linear response theory [@Jensen_1991; @Altland_n_Simons_2010], the expression for the spin-spin susceptibility $\chi_{\bf q}^{a,b}(\tau)$ reads: $$\chi_{\bf q}^{a,b}(\tau) = -i\,\theta(\tau)\braket{\left[S^a_{\bf q}(\tau),\, S^b_{\bf -q}(0)\right]}.$$ \[eq chi q\_omega def\] The frequency-domain susceptibility is given by the Fourier transform $\chi_{\bf q}^{a,b}(\omega)=\int d\tau\,e^{i\omega \tau}\chi_{\bf q}^{a,b}(\tau)$.

We can now find an explicit analytical expression for $\chi^{a,b}_{\bf q}(\omega)$, given the Hamiltonian [Eq.(\[eq H parts kin\] - \[eq H parts int\])]{}. Namely, we need to evaluate the thermal average $$\braket{\left[S^a_{\bf q}(\tau), S^b_{\bf -q}(0)\right]} = Z^{-1}\, {\rm Tr}\left(\left[S^a_{\bf q}(\tau), S^b_{\bf -q}(0)\right] {\rm e}^{-\beta H_0}\right).$$ \[eq Tr comm\] In order to evaluate the trace and the time dependence, one needs to diagonalize $H_0$. As explained in the main text, this is achieved by going to momentum space and finding the $2\times2$ unitary transform $U_{\mathbf{k}}$ ($d_{\bf k} = U_{\bf k} c_{\bf k}$) such that $$H_0 = \sum_{{\bf k},s} \epsilon_{{\bf k},s}\, d^\dagger_{{\bf k},s} d_{{\bf k},s}.$$ In the diagonal basis, the time evolution of the operators $d_{{\bf k},s}$ is simple, such that one readily obtains the time evolution of the spin operators: $$S^a_{\bf q}(\tau) = \sum_{{\bf k},s,s'} d^\dagger_{{\bf k},s}\, (\Lambda^a_{\bf k,k+q})_{s,s'}\, d_{{\bf k+q},s'}\, {\rm e}^{i\tau(\epsilon_{{\bf k},s} - \epsilon_{{\bf k+q},s'})},$$ \[eq St\] where $\Lambda^a_{\bf k,k+q} = U_{\bf k}\, \sigma^a\, U^\dagger_{\bf k+q}$.

We proceed with the evaluation of the trace [Eq.(\[eq Tr comm\])]{}. Let us start with the first part of the commutator: $$\begin{aligned}
{\rm Tr}\left( S^a_{\bf q}(\tau)\, S^b_{\bf -q}(0)\, {\rm e}^{-\beta H_0} \right) &=& \sum_{n_1,n_2} \braket{n_1|S^a_{\bf q}(\tau)|n_2} \braket{n_2|S^b_{\bf -q}(0)|n_1}\, \left({\rm e}^{-\beta H_0}\right)_{n_1,n_1},\end{aligned}$$ \[eq Tr of SS\] where the sum runs over a complete basis (i.e. momentum and spin) $\ket{n_i}$ and we have used the fact that $H_0$ is already diagonal. Plugging the expression [Eq.(\[eq St\])]{} into [Eq.(\[eq Tr of SS\])]{}, and setting ${\bf k' = k+q}$ together with the spin identifications $\sigma = s'$, $\sigma' = s$ for the second spin operator (with implicit sums over momenta and spins), one finds $$\begin{aligned}
{\rm Tr}\left( S^a_{\bf q}(\tau)\, S^b_{\bf -q}(0)\, {\rm e}^{-\beta H_0} \right) &=& {\rm Tr}\Big( d^\dagger_{{\bf k},s}\, (\Lambda^a_{\bf k,k+q})_{s,s'}\, d_{{\bf k+q},s'}\, d^\dagger_{{\bf k+q},s'}\, (\Lambda^b_{\bf k+q,k})_{s',s}\, d_{{\bf k},s}\, {\rm e}^{-i\tau(\epsilon_{{\bf k+q},s'} - \epsilon_{{\bf k},s})}\, {\rm e}^{-\beta \sum_{{\bf p},\nu} \epsilon_{{\bf p},\nu} d^\dagger_{{\bf p},\nu} d_{{\bf p},\nu}} \Big).\end{aligned}$$ \[eq Tr SqSmq\] One obtains $$\begin{aligned}
{\rm Tr}\left( S^a_{\bf q}(\tau)\, S^b_{\bf -q}(0)\, {\rm e}^{-\beta H_0} \right) &=& Z \sum_{{\bf k},s,s'} (\Lambda^a_{\bf k,k+q})_{s,s'} (\Lambda^b_{\bf k+q,k})_{s',s}\, {\rm e}^{-i\tau(\epsilon_{{\bf k+q},s'}-\epsilon_{{\bf k},s})}\, n(\epsilon_{{\bf k},s})\left[1-n(\epsilon_{{\bf k+q},s'})\right], \quad {\bf q} \neq 0, \\
&=& Z \Big\{ \sum_{{\bf k},s \neq s'} (\Lambda^a_{\bf k,k})_{s,s'} (\Lambda^b_{\bf k,k})_{s',s}\, {\rm e}^{-i\tau(\epsilon_{{\bf k},s'}-\epsilon_{{\bf k},s})}\, n(\epsilon_{{\bf k},s})\left[1-n(\epsilon_{{\bf k},s'})\right] \\
&& \quad + \sum_{{\bf k},s} (\Lambda^a_{\bf k,k})_{s,s} (\Lambda^b_{\bf k,k})_{s,s}\, n(\epsilon_{{\bf k},s}) \Big\}, \quad {\bf q} = 0,\end{aligned}$$ \[eq Tr eval\] where $n(\epsilon) = ({\rm e}^{\beta\epsilon}+1)^{-1}$ denotes the Fermi function. The trace of $S^b_{\bf -q}(0)\, S^a_{\bf q}(\tau)\, {\rm e}^{-\beta H_0}$ is obtained in a similar way and yields a result identical to [Eq.(\[eq Tr eval\])]{} with the exchange $\beta \epsilon_{{\bf k},s} \leftrightarrow \beta \epsilon_{{\bf k+q},s'}$ (i.e. the energies are exchanged only in the thermal factors containing $\beta$). We next notice that $$n(\epsilon_a)\left[1-n(\epsilon_b)\right] - n(\epsilon_b)\left[1-n(\epsilon_a)\right] = n(\epsilon_a) - n(\epsilon_b).$$ This also holds in the degenerate case (${\bf q}=0$, last line of [Eq.(\[eq Tr eval\])]{}), in which one directly has $n(\epsilon_a) - n(\epsilon_b)$. We obtain the result for the trace of the commutator $$Z^{-1}\,{\rm Tr}\left(\left[S^a_{\bf q}(\tau), S^b_{\bf -q}(0)\right] {\rm e}^{-\beta H_0}\right) = \sum_{{\bf k},s,s'} (\Lambda^a_{\bf k,k+q})_{s,s'} (\Lambda^b_{\bf k+q,k})_{s',s} \left[n(\epsilon_{{\bf k},s}) - n(\epsilon_{{\bf k+q},s'})\right] {\rm e}^{-i\tau(\epsilon_{{\bf k+q},s'}-\epsilon_{{\bf k},s})}.$$ The only time-dependent factor is the oscillating exponential, and we can thus directly compute the time integral in the definition of the susceptibility [Eq.(\[eq chi q\_omega def\])]{}: $$-i\int_{-\infty}^{\infty} {\rm d}\tau\, {\rm e}^{i\omega\tau}\, \theta(\tau)\, {\rm e}^{-i\tau(\epsilon_{{\bf k+q},s'}-\epsilon_{{\bf k},s})} = -i\int_{0}^{\infty} {\rm d}\tau\, {\rm e}^{i\omega\tau - \eta\tau}\, {\rm e}^{-i\tau(\epsilon_{{\bf k+q},s'}-\epsilon_{{\bf k},s})} = \frac{1}{\omega + i\eta - (\epsilon_{{\bf k+q},s'}-\epsilon_{{\bf k},s})},$$ where we have added the infinitesimal convergence factor $\eta$. Plugging this back into the defining relation for the susceptibility [Eq.(\[eq chi q\_omega def\])]{}, we obtain the final result for the susceptibility of the non-interacting system $$\chi_{\bf q}^{a,b}(\omega) = \sum_{{\bf k},s,s'} (\Lambda^a_{\bf k,k+q})_{s,s'} (\Lambda^b_{\bf k+q,k})_{s',s}\, \frac{n(\epsilon_{{\bf k},s}) - n(\epsilon_{{\bf k+q},s'})}{\omega + i\eta - (\epsilon_{{\bf k+q},s'}-\epsilon_{{\bf k},s})}.$$ \[eq chi q\_omega\]
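The susceptibility sum lends itself to direct numerical evaluation. The sketch below does this for an assumed toy model, a 1D spin-degenerate tight-binding chain where $U_{\bf k}$ is the identity and $\Lambda^a$ reduces to the Pauli matrix $\sigma^a$, evaluating the static $\chi^{zz}$ at $q=\pi$; all parameter values are illustrative, not the paper's.

```python
import numpy as np

# Toy evaluation of the non-interacting susceptibility sum:
# eps_k = -2 t cos k on both spin species, Lambda^a = sigma^a.
t, beta, eta, L = 1.0, 10.0, 1e-3, 200
ks = 2 * np.pi * np.arange(L) / L
eps = -2 * t * np.cos(ks)                  # epsilon_{k,s}, spin degenerate

def fermi(e):
    return 1.0 / (1.0 + np.exp(beta * e))

sig = [np.array([[0, 1], [1, 0]], complex),
       np.array([[0, -1j], [1j, 0]], complex),
       np.array([[1, 0], [0, -1]], complex)]

def chi(a, b, q_idx, omega):
    acc = 0.0 + 0.0j
    for ik in range(L):
        ikq = (ik + q_idx) % L
        for s in range(2):
            for sp in range(2):
                lam = sig[a][s, sp] * sig[b][sp, s]
                de = eps[ikq] - eps[ik]
                acc += lam * (fermi(eps[ik]) - fermi(eps[ikq])) \
                    / (omega + 1j * eta - de)
    return acc / L

val = chi(2, 2, L // 2, 0.0)   # chi^{zz} at q = pi, omega = 0
print(val.real < 0)            # static response has a negative real part
```

With this sign convention the static susceptibility is negative, which is what makes the matrix $M$ of the interacting treatment below singular at a finite coupling.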
*Susceptibility in the interacting MF model*\
We have derived the susceptibility [Eq.(\[eq chi q\_omega\])]{} for the non-interacting system subjected to a small external driving force. We now use this result to find the susceptibility of the interacting system described by the mean-field theory. Let us recall the interaction Hamiltonian [Eq.(\[eq H\_int\_S\])]{}: $$\begin{aligned}
H_{\rm int} &=& -g \sum_{\bf j} S^b_{\bf j}(t)\, S^b_{\bf j}(t) = -g \sum_{\bf q} S^b_{\bf -q}(t)\, S^b_{\bf q}(t) \\
&\approx& -g \sum_{\bf q} \left[ \braket{S}_{\bf -q}\braket{S}_{\bf q} + \braket{S}_{\bf -q}\left(S_{\bf q}-\braket{S}_{\bf q}\right) + \left(S_{\bf -q}-\braket{S}_{\bf -q}\right) \braket{S}_{\bf q} + O((\delta S)^2) \right] \\
&=& -g \sum_{\bf q} \left[ 2\, S_{\bf -q} \braket{S}_{\bf q} - \braket{S}_{\bf -q} \braket{S}_{\bf q} \right] = H^{\rm MF}_{\rm int},\end{aligned}$$ where we have introduced the coupling strength $g = 2U/3$ (a sum over the spin index $b$ is implied). In the last line of the preceding equation, the last term does not contribute to the spin dynamics and is normalized out in the computation of the spin average values; we thus drop it. We obtain the effective external driving Hamiltonian $$H^{\rm eff}_{\rm ext} = H_{\rm ext} + H^{\rm MF}_{\rm int} = \sum_{\bf q} S^b_{\bf -q}(t) \left( B^b_{\bf q}(t) - 2g \braket{S}^b_{\bf q}(t) \right) = \sum_{\bf q} S^b_{\bf -q}(t)\, (B^b_{\bf q})^{\rm eff}(t).$$ We thus see that the inclusion of the MF interaction amounts to replacing the external driving $B$ by the new effective driving $B^{\rm eff}$. One then follows the same procedure as in the non-interacting case. Since we are mainly interested in the frequency response of the system, we use the defining formula connecting the frequency components of the spins to the driving through the susceptibility: $$\braket{S^a_{\bf q}(\omega)} = \chi^{a,b}_{\bf q}(\omega)\, (B^b_{\bf q})^{\rm eff}(\omega).$$ \[eq delta\_S avg\] One can easily check that the frequency components of the effective driving are given by the Fourier transforms of its parts, $$(B^b_{\bf q})^{\rm eff}(\omega) = \int {\rm d}t\, {\rm e}^{i\omega t} \left(B^b_{\bf q}(t) - 2g \braket{S}^b_{\bf q}(t)\right) = B^b_{\bf q}(\omega) - 2g \braket{S}^b_{\bf q}(\omega).$$ Therefore, one obtains $$\braket{S}^a_{\bf q}(\omega) = \sum_b \chi^{a,b}_{\bf q}(\omega) \left[ B^b_{\bf q}(\omega) - 2g \braket{S}^b_{\bf q}(\omega) \right],$$ \[eq S avg int\] which after a straightforward manipulation yields the equation for the average value of the spin operators $$\begin{aligned}
\braket{S}_{\bf q}(\omega) &=& M^{-1}\, \chi_{\bf q}(\omega)\, B_{\bf q}(\omega), \\
M^{a,b} &=& \delta^{a,b} + 2g\, \chi^{a,b}_{\bf q}(\omega),\end{aligned}$$ where we have merely rewritten [Eq.(\[eq S avg int\])]{} in symbolic matrix notation. From here, one can obtain the critical value of the coupling strength $g$ (and thus of $U$) from the divergence of the average value of the spin operators, i.e. when the matrix $M$ becomes singular.
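The singularity criterion for $M$ can be sketched numerically: scan the coupling and detect the first point where $M = \mathds{1} + 2g\,\chi_{\bf q}(0)$ acquires a non-positive eigenvalue. The static susceptibility matrix below is an assumed illustrative input, not a value computed in the paper.

```python
import numpy as np

# Toy critical-coupling search: M = 1 + 2 g chi becomes singular when
# the smallest eigenvalue crosses zero, i.e. at g_c = -1 / (2 chi_min).
chi0 = np.diag([-0.8, -0.8, -0.5])      # assumed chi^{a,b}_q(omega = 0)
gs = np.linspace(0.0, 2.0, 2001)

def min_eig_M(g):
    return np.linalg.eigvalsh(np.eye(3) + 2 * g * chi0).min()

g_c = next(g for g in gs if min_eig_M(g) <= 1e-6)
print(round(g_c, 3))  # -> 0.625, i.e. -1 / (2 * (-0.8))
```

With $g = 2U/3$, such a $g_c$ translates directly into the critical interaction $U_c = 3g_c/2$ discussed in the main text.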
Effective Heisenberg model in the large $U$ limit {#appheis}
=================================================
We now restrict our interest to the regime of strong repulsion, large $U$. In this regime, the ground state of the grand canonical ensemble has single occupancy on each site, and it would cost an energy of order $U$ to increase the double occupancy by one. This regime can be described by the method of effective Hamiltonians, which is suitable for systems with well separated energy manifolds [@Tannoudji_1998]. Here the energy manifolds are separated by $U$, and we would like to evaluate the effect of the coupling between the ground state manifold and the higher lying manifolds. This coupling perturbs the bare energy levels of the ground state manifold. In this section, we present the treatment used in [@MacDonald_1988] and [@Tannoudji_1998 page 38].
In the following we consider the half-filling situation $\mu_1 = \mu_2 = 0$. Multiplying the kinetic Hamiltonian [Eq.(\[eq H parts kin\])]{} by $n_{{\bf j},\bar{s}} + h_{{\bf j},\bar{s}} = 1$ from the left and by $n_{{\bf j'},\bar{s'}} + h_{{\bf j'},\bar{s'}} = 1$ from the right, we obtain $$H_{\rm kin} \equiv T = T_0 + T_{-1} + T_1,$$ where $$\begin{aligned}
T_0 &=& -\sum \left[ n_{{\bf j},\bar{s}}\, c^\dagger_{{\bf j},s}\, T_{\bf j,j'}^{s,s'}\, c_{{\bf j'},s'}\, n_{{\bf j'}, \bar{s'}} + h_{{\bf j},\bar{s}}\, c^\dagger_{{\bf j},s}\, T_{\bf j,j'}^{s,s'}\, c_{{\bf j'},s'}\, h_{{\bf j'}, \bar{s'}} \right] \\
T_{-1} &=& -\sum h_{{\bf j},\bar{s}}\, c^\dagger_{{\bf j},s}\, T_{\bf j,j'}^{s,s'}\, c_{{\bf j'},s'}\, n_{{\bf j'}, \bar{s'}} \\
T_1 &=& -\sum n_{{\bf j},\bar{s}}\, c^\dagger_{{\bf j},s}\, T_{\bf j,j'}^{s,s'}\, c_{{\bf j'},s'}\, h_{{\bf j'}, \bar{s'}},\end{aligned}$$ \[eq T\] where $n$ and $h$ denote the particle and hole number operators respectively, and $\bar{s}, \bar{s'}$ denote the spins orthogonal to $s, s'$, e.g. $\bar{s}$ is spin up for $s$ spin down and vice versa. The sums in [Eq.(\[eq T\])]{} run over nearest neighbours $\braket{{\bf j j'}}$ and spins $s,s'$. Denoting the interaction Hamiltonian [Eq.(\[eq H parts int\])]{} as $V$, one can easily verify the following commutation relations $$\begin{aligned}
\left[V, T_0\right] &=& 0 \\
\left[V, T_{-1}\right] &=& -U\, T_{-1} \\
\left[V, T_1\right] &=& U\, T_1,\end{aligned}$$ which can be summarized as $$\left[V, T_m\right] = m U\, T_m.$$ \[eq comm VT\] We are now ready to proceed with the derivation of the effective Hamiltonian. We wish to rewrite the current Hamiltonian $H = V + T$ as $$H_{\rm eff} = {\rm e}^{iS} H\, {\rm e}^{-iS} = H + \left[iS,H\right] + \frac{1}{2}\left[iS,\left[iS,H\right]\right] + \ldots = \sum_{n=0}^\infty \frac{1}{n!}\left[iS,H\right]^{(n)},$$ \[eq H\_eff\] where $S$ is some Hermitian matrix (we require the transformation to be unitary) and $\left[iS,H\right]^{(n)}$ denotes the $n$-times nested commutator, with $\left[iS,H\right]^{(0)} = H$. The matrix $S$ can be determined in the following way. Let us write $S$ as $$S = \lambda S_1 + \lambda^2 S_2 + \ldots = \sum_{n=1}^\infty \lambda^n S_n,$$ where $\lambda = 1/U$ is the small parameter of our perturbative expansion. The elements of the matrix $S_k$ can be determined iteratively by requiring that, after the unitary transformation up to order $k$ in the expansion [Eq.(\[eq H\_eff\])]{}, all terms bringing the energy out of the ground state manifold vanish. We also denote $$S^{(k)} = \sum_{n=1}^k \lambda^n S_n.$$ An important point is that the energy ratio between $V$ and $T$ is of order $\lambda^{-1}$. It is thus more transparent to rewrite the Hamiltonian as $H = V + \lambda \tilde{T}$, where $V$ and $\tilde{T} = \lambda^{-1} T$ are now contributions of the same order. With this notation, the commutator [Eq.(\[eq comm VT\])]{} can be written as $$\left[V, \tilde{T}_m\right] = m\, \lambda^{-1}\, \tilde{T}_m.$$ To first order in $\lambda$, we have from [Eq.(\[eq H\_eff\])]{} $$H_{\rm eff} = H + \lambda\left[iS_1, V\right] + \ldots = V + \lambda\left(\tilde{T}_0 + \tilde{T}_{-1} + \tilde{T}_1\right) + \lambda\left[iS_1, V\right] + \ldots.$$ The terms which bring one out of the ground state manifold are the terms changing the double occupancy, i.e. $\tilde{T}_{-1}$ and $\tilde{T}_1$. Requiring that these terms be cancelled by the commutator, one gets $$i S_1 = \lambda\left(\tilde{T}_1 - \tilde{T}_{-1}\right).$$ We now generalize this procedure iteratively to arbitrarily high order in $\lambda$. Let us define the Hamiltonian of order $k+1$ as $$H^{(k+1)} = {\rm e}^{iS^{(k)}} H\, {\rm e}^{-iS^{(k)}} = \sum_{n=0}^\infty \frac{1}{n!}\left[iS^{(k)},H\right]^{(n)}.$$

As an example, let us take $k=1$: $$\begin{aligned}
H^{(2)} &=& H + \left[iS^{(1)}, H\right] + \frac{1}{2}\left[iS^{(1)}, \left[iS^{(1)}, H\right]\right] + O(\lambda^3) \\
&=& H + \lambda\left[iS_1, V\right] + \lambda^2 \left[iS_1, \tilde{T}\right] + \frac{\lambda^2}{2}\left[iS_1, \left[iS_1, V\right]\right] + O(\lambda^3).\end{aligned}$$ The last two terms in the second line are of the same order, $\lambda$, since $S_1 \propto 1$ and $V \propto \lambda^{-1}$. The idea is now to use this Hamiltonian to find the explicit form of the next element in the expansion of $iS$, namely $iS_2$. This can be done as follows. The third-order Hamiltonian can be written as $$H^{(3)} = H^{(2)}_0 + H^{(2)}_{\rm ch} + \lambda^2 \left[iS_2, V\right] + O(\lambda^3),$$ where we have decomposed the second-order Hamiltonian into a part which does not change the double occupancy, $H^{(2)}_0$, and one which does, $H^{(2)}_{\rm ch}$. We require that the lowest-order term of $iS_2$ cancel the double-occupancy-changing term. This is the general procedure for an arbitrary order of the expansion, so that $$H^{(k+1)} = H^{(k)}_0 + H^{(k)}_{\rm ch} + \lambda^k \left[iS_k, V\right] + O(\lambda^{k+1}).$$ \[eq Ham iS\_k\] In order to find the explicit form of $iS_k$, we need the following relations. Let us denote the product of tunnelling operators as $$\tilde{T}^k(m) \equiv \tilde{T}_{m_1}\ldots\tilde{T}_{m_k}.$$ The commutator with $V$ then reads $$\left[V, \tilde{T}^k(m)\right] = M^k(m)\, \lambda^{-1}\, \tilde{T}^k(m),$$ where $M^k(m) = \sum_i m_i$. Looking at [Eq.(\[eq Ham iS\_k\])]{}, one can write the last two terms as $$H^{(k)}_{\rm ch} + \lambda^k \left[iS_k, V\right] = \lambda^{2k-1} \sum_{m} C^k(m)\, \tilde{T}^k(m) + \lambda^k \left[iS_k, V\right] = 0,$$ where the coefficients $C^k(m)$ follow from the commutator expansion. The last equality can be achieved by noting that $$\left[V, \frac{\tilde{T}^k(m)}{M^k(m)}\right] = \lambda^{-1}\, \tilde{T}^k(m),$$ so that we finally obtain $$iS_k = \lambda^k \sum_m \frac{C^k(m)}{M^k(m)}\, \tilde{T}^k(m).$$ Note that since $M^k(m) \neq 0$ for double-occupancy-changing $\tilde{T}^k$, we can safely divide by it.
We have implemented the above iterative procedure in Mathematica for general tunnellings $T_{\bf j, j'}^{s,s'}$. Up to second order in the $1/U$ expansion, one obtains the following effective Heisenberg Hamiltonian ($\delta = x,y$ are the spatial directions of the 2D lattice): $$H = H_c + \sum_{\delta=x,y} \sum_{<i,i+\delta>} \sum_{a=x,y,z} J^a_\delta S^a_i S^a_{i+\delta} +
\mathbf{D}_{\delta +} \cdot (\mathbf{S}_i \times \mathbf{S}_{i+\delta})_+ + \mathbf{D}_{\delta -} \cdot (\mathbf{S}_i \times \mathbf{S}_{i+\delta})_-
,$$ where $$\begin{aligned}
(\mathbf{S}_i \times \mathbf{S}_{i+\delta})_+ &=& \left(S^y_1 S^z_2,\; S^z_1 S^x_2,\; S^x_1 S^y_2\right) \\
(\mathbf{S}_i \times \mathbf{S}_{i+\delta})_- &=& -\left(S^z_1 S^y_2,\; S^x_1 S^z_2,\; S^y_1 S^x_2\right)\end{aligned}$$ are the "positive" and "negative" parts of the vector product (the indices 1, 2 standing for the sites $i$ and $i+\delta$). In the most general case, the coefficients read: $$\begin{aligned}
J^x_\delta &=& 2\lambda \left(T_{\delta}^{2,2} {T_{\delta}^{1,1}}^*+T_{\delta}^{1,2} {T_{\delta}^{2,1}}^*+T_{\delta}^{2,1} {T_{\delta}^{1,2}}^*+T_{\delta}^{1,1} {T_{\delta}^{2,2}}^*\right) \\
J^y_\delta &=& 2\lambda \left(T_{\delta}^{2,2} {T_{\delta}^{1,1}}^*-T_{\delta}^{1,2} {T_{\delta}^{2,1}}^*-T_{\delta}^{2,1} {T_{\delta}^{1,2}}^*+T_{\delta}^{1,1} {T_{\delta}^{2,2}}^*\right) \\
J^z_\delta &=& 2\lambda \left(T_{\delta}^{1,1} {T_{\delta}^{1,1}}^*-T_{\delta}^{2,1} {T_{\delta}^{2,1}}^*-T_{\delta}^{1,2} {T_{\delta}^{1,2}}^*+T_{\delta}^{2,2} {T_{\delta}^{2,2}}^*\right)\end{aligned}$$ $$\begin{aligned}
D^1_{\delta +} &=& 2 i \lambda \left(-T_{\delta}^{2,1} {T_{\delta}^{1,1}}^*+T_{\delta}^{1,1} {T_{\delta}^{2,1}}^*+T_{\delta}^{2,2} {T_{\delta}^{1,2}}^*-T_{\delta}^{1,2} {T_{\delta}^{2,2}}^*\right) \\
D^2_{\delta +} &=& 2 \lambda \left(T_{\delta}^{1,2} {T_{\delta}^{1,1}}^*-T_{\delta}^{2,2} {T_{\delta}^{2,1}}^*+T_{\delta}^{1,1} {T_{\delta}^{1,2}}^*-T_{\delta}^{2,1} {T_{\delta}^{2,2}}^*\right) \\
D^3_{\delta +} &=& 2 i \lambda \left(T_{\delta}^{2,2} {T_{\delta}^{1,1}}^*+T_{\delta}^{1,2} {T_{\delta}^{2,1}}^*-T_{\delta}^{2,1} {T_{\delta}^{1,2}}^*-T_{\delta}^{1,1} {T_{\delta}^{2,2}}^*\right)\end{aligned}$$ $$\begin{aligned}
D^1_{\delta -} &=& 2 i \lambda \left(-T_{\delta}^{1,2} {T_{\delta}^{1,1}}^*+T_{\delta}^{2,2} {T_{\delta}^{2,1}}^*+T_{\delta}^{1,1} {T_{\delta}^{1,2}}^*-T_{\delta}^{2,1} {T_{\delta}^{2,2}}^*\right) \\
D^2_{\delta -} &=& 2 \lambda \left(-T_{\delta}^{2,1} {T_{\delta}^{1,1}}^*-T_{\delta}^{1,1} {T_{\delta}^{2,1}}^*+T_{\delta}^{2,2} {T_{\delta}^{1,2}}^*+T_{\delta}^{1,2} {T_{\delta}^{2,2}}^*\right) \\
D^3_{\delta -} &=& 2 i \lambda \left(T_{\delta}^{2,2} {T_{\delta}^{1,1}}^*-T_{\delta}^{1,2} {T_{\delta}^{2,1}}^*+T_{\delta}^{2,1} {T_{\delta}^{1,2}}^*-T_{\delta}^{1,1} {T_{\delta}^{2,2}}^*\right)\end{aligned}$$ $$H_c = 2 \lambda \left(-T_{\delta}^{1,1} {T_{\delta}^{1,1}}^*-T_{\delta}^{2,1} {T_{\delta}^{2,1}}^*-T_{\delta}^{1,2}
{T_{\delta}^{1,2}}^*-T_{\delta}^{2,2} {T_{\delta}^{2,2}}^*\right) \frac{\mathds{1}}{4}$$
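As a minimal sanity check of this second-order mapping (not part of the paper's Mathematica computation), one can exactly diagonalize the smallest possible Hubbard problem and compare with the perturbative exchange. The example assumes two sites, two electrons and spin-independent hopping $t$ (no spin-orbit), for which the coefficients above reduce to the standard Heisenberg exchange $J = 4t^2/U$; the matrix signs absorb the fermionic ordering phases.

```python
import numpy as np

# Two-site Hubbard dimer at half filling, S_z = 0 sector.
# Basis: |ud,0>, |0,ud>, |u,d>, |d,u>.
t, U = 1.0, 20.0
H = np.array([[U, 0, -t, -t],
              [0, U, -t, -t],
              [-t, -t, 0, 0],
              [-t, -t, 0, 0]], dtype=float)
E = np.linalg.eigvalsh(H)
gap = 0.0 - E[0]           # triplet (E = 0) minus singlet ground state
J = 4 * t**2 / U           # second-order perturbative exchange
print(abs(gap - J) / J < 0.02)  # -> True
```

The exact singlet-triplet gap is $(\sqrt{U^2+16t^2}-U)/2$, which approaches $4t^2/U$ as $U/t \to \infty$; at $U/t = 20$ the two already agree to about one percent.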
Magnetic orders {#sec App mag orders}
===============
In Table \[tab order pars\] we list the details of the magnetic orders parametrized by [Eq.(\[eq mag order\])]{} and summarized in Table \[tab phase orderings\]. The tables have the following format
$$\begin{array}{l}
\begin{array}{c}
\text{$\{\alpha, U, \beta \}$}, {\rm Phase}
\end{array}
\\
\begin{array}{|c|c|c|c|c|c|c|c|}
\hline
q_x/\pi & q_y/\pi & N_1^x & N_2^x & N_1^y & N_2^y & N_1^z & N_2^z \\
\hline
\end{array}
\\
\end{array}$$
$$\begin{array}{c}
\{\alpha, U, \beta \} = \{0.1,\, 2.0,\, 10000\},\ {\rm AF} \\
\begin{array}{|c|c|c|c|c|c|c|c|}
\hline
q_x/\pi & q_y/\pi & N_1^x & N_2^x & N_1^y & N_2^y & N_1^z & N_2^z \\ \hline
-1 & 1 & 1.8\times 10^{-2} & 0 & -1.8\times 10^{-2} & 0 & 0 & 0 \\ \hline
1 & 1 & 1.8\times 10^{-2} & 0 & -1.8\times 10^{-2} & 0 & 0 & 0 \\ \hline
\end{array}
\end{array}$$

$$\begin{array}{c}
\{\alpha, U, \beta \} = \{0.1,\, 2.5,\, 10000\},\ {\rm SI2} \\
\begin{array}{|c|c|c|c|c|c|c|c|}
\hline
q_x/\pi & q_y/\pi & N_1^x & N_2^x & N_1^y & N_2^y & N_1^z & N_2^z \\ \hline
-1 & \frac{13}{18} & 0 & 0 & -5.3\times 10^{-2} & 8.5\times 10^{-2} & -9.4\times 10^{-2} & -5.8\times 10^{-2} \\ \hline
1 & \frac{13}{18} & 0 & 0 & -5.3\times 10^{-2} & 8.5\times 10^{-2} & -9.4\times 10^{-2} & -5.8\times 10^{-2} \\ \hline
\end{array}
\end{array}$$

$$\begin{array}{c}
\{\alpha, U, \beta \} = \{0.2,\, 3.0,\, 1000\},\ {\rm AF} \\
\begin{array}{|c|c|c|c|c|c|c|c|}
\hline
q_x/\pi & q_y/\pi & N_1^x & N_2^x & N_1^y & N_2^y & N_1^z & N_2^z \\ \hline
-1 & 1 & 2.1\times 10^{-2} & 0 & -2.1\times 10^{-2} & 0 & 0 & 0 \\ \hline
1 & 1 & 2.1\times 10^{-2} & 0 & -2.1\times 10^{-2} & 0 & 0 & 0 \\ \hline
\end{array}
\end{array}$$

$$\begin{array}{c}
\{\alpha, U, \beta \} = \{0.2,\, 3.5,\, 50\},\ {\rm SI} \\
\begin{array}{|c|c|c|c|c|c|c|c|}
\hline
q_x/\pi & q_y/\pi & N_1^x & N_2^x & N_1^y & N_2^y & N_1^z & N_2^z \\ \hline
-\frac{5}{6} & \frac{5}{9} & -4.4\times 10^{-3} & -2.1\times 10^{-2} & 9.3\times 10^{-3} & 4.5\times 10^{-2} & -4.2\times 10^{-2} & 8.8\times 10^{-3} \\ \hline
\frac{5}{6} & \frac{5}{9} & -2.2\times 10^{-2} & 4.5\times 10^{-4} & -4.6\times 10^{-2} & 9.4\times 10^{-4} & -8.9\times 10^{-4} & -4.3\times 10^{-2} \\ \hline
\end{array}
\end{array}$$

$$\begin{array}{c}
\{\alpha, U, \beta \} = \{0.2,\, 4.0,\, 10000\},\ {\rm SI} \\
\begin{array}{|c|c|c|c|c|c|c|c|}
\hline
q_x/\pi & q_y/\pi & N_1^x & N_2^x & N_1^y & N_2^y & N_1^z & N_2^z \\ \hline
-\frac{1}{2} & \frac{8}{9} & 1.1\times 10^{-1} & 2.9\times 10^{-2} & -3.8\times 10^{-2} & -1.0\times 10^{-2} & 2.7\times 10^{-2} & -1.0\times 10^{-1} \\ \hline
\frac{1}{2} & \frac{8}{9} & 2.9\times 10^{-2} & -1.1\times 10^{-1} & 1.0\times 10^{-2} & -3.8\times 10^{-2} & 1.0\times 10^{-1} & 2.7\times 10^{-2} \\ \hline
\end{array}
\end{array}$$

$$\{\alpha, U, \beta \} = \{0.2,\, 4.0,\, 50\},\ {\rm SI}$$

$$\begin{array}{c}
\{\alpha, U, \beta \} = \{0.25,\, 3.5,\, 1000\},\ {\rm AF} \\
\begin{array}{|c|c|c|c|c|c|c|c|}
\hline
q_x/\pi & q_y/\pi & N_1^x & N_2^x & N_1^y & N_2^y & N_1^z & N_2^z \\ \hline
-1 & 1 & 1.6\times 10^{-2} & 0 & -4.4\times 10^{-2} & 0 & 0 & 0 \\ \hline
1 & 1 & 1.6\times 10^{-2} & 0 & -4.4\times 10^{-2} & 0 & 0 & 0 \\ \hline
\end{array}
\end{array}$$

$$\begin{array}{c}
\{\alpha, U, \beta \} = \{0.25,\, 4.0,\, 50\},\ {\rm SI} \\
\begin{array}{|c|c|c|c|c|c|c|c|}
\hline
q_x/\pi & q_y/\pi & N_1^x & N_2^x & N_1^y & N_2^y & N_1^z & N_2^z \\ \hline
-\frac{7}{18} & \frac{7}{9} & -4.0\times 10^{-2} & 1.2\times 10^{-2} & 1.8\times 10^{-2} & -5.5\times 10^{-3} & 1.1\times 10^{-2} & 3.8\times 10^{-2} \\ \hline
\frac{7}{18} & \frac{7}{9} & -2.8\times 10^{-2} & 3.1\times 10^{-2} & -1.3\times 10^{-2} & 1.4\times 10^{-2} & -2.9\times 10^{-2} & -2.6\times 10^{-2} \\ \hline
\end{array}
\end{array}$$

$$\begin{array}{c}
\{\alpha, U, \beta \} = \{0.3,\, 4.5,\, 50\},\ {\rm AF} \\
\begin{array}{|c|c|c|c|c|c|c|c|}
\hline
q_x/\pi & q_y/\pi & N_1^x & N_2^x & N_1^y & N_2^y & N_1^z & N_2^z \\ \hline
-1 & 1 & 0 & 0 & -6.1\times 10^{-2} & 0 & 0 & 0 \\ \hline
1 & 1 & 0 & 0 & -6.1\times 10^{-2} & 0 & 0 & 0 \\ \hline
\end{array}
\end{array}$$

$$\begin{array}{c}
\{\alpha, U, \beta \} = \{0.3,\, 4.75,\, 50\},\ {\rm SI} \\
\begin{array}{|c|c|c|c|c|c|c|c|}
\hline
q_x/\pi & q_y/\pi & N_1^x & N_2^x & N_1^y & N_2^y & N_1^z & N_2^z \\ \hline
\frac{2}{9} & \frac{13}{18} & -5.9\times 10^{-2} & -3.2\times 10^{-2} & -1.9\times 10^{-2} & -1.0\times 10^{-2} & -2.5\times 10^{-2} & 4.6\times 10^{-2} \\ \hline
\frac{13}{18} & \frac{2}{9} & -2.1\times 10^{-2} & 6.2\times 10^{-3} & -6.4\times 10^{-2} & 1.9\times 10^{-2} & 1.5\times 10^{-2} & 5.0\times 10^{-2} \\ \hline
\end{array}
\end{array}$$

$$\begin{array}{c}
\{\alpha, U, \beta \} = \{0.3,\, 12.0,\, 50\},\ {\rm SkX} \\
\begin{array}{|c|c|c|c|c|c|c|c|}
\hline
q_x/\pi & q_y/\pi & N_1^x & N_2^x & N_1^y & N_2^y & N_1^z & N_2^z \\ \hline
-1 & \frac{1}{3} & 0 & 0 & -2.8\times 10^{-1} & 1.6\times 10^{-1} & 6.9\times 10^{-2} & 1.2\times 10^{-1} \\ \hline
-\frac{1}{3} & 1 & -2.8\times 10^{-1} & -1.6\times 10^{-1} & 0 & 0 & 6.9\times 10^{-2} & -1.2\times 10^{-1} \\ \hline
\frac{1}{3} & 1 & -2.8\times 10^{-1} & 1.6\times 10^{-1} & 0 & 0 & 6.9\times 10^{-2} & 1.2\times 10^{-1} \\ \hline
1 & \frac{1}{3} & 0 & 0 & -2.8\times 10^{-1} & 1.6\times 10^{-1} & 6.9\times 10^{-2} & 1.2\times 10^{-1} \\ \hline
\end{array}
\end{array}$$
: Magnetic order parameters ${\bf N_i} = (N_i^x,N_i^y,N_i^z)$, $i=1,2$ introduced in [Eq.(\[eq mag order\])]{} and the peak values of ${\bf q} = (q_x,q_y)$, together with the parameters $\{\alpha, U, \beta \}$ for phases given in Table \[tab Summary\]. In the table, we provide the data for magnetic order parameters only for $q_y>0$, since the values for $q_y<0$ are related by the inversion symmetry ${\bf{q}}\rightarrow{-\bf{q}}$.[]{data-label="tab order pars"}
Haverfordwest Town Council grant leads to 5 robot pets being placed.
In October we applied for, and were kindly awarded, a tier 1 small grant from Haverfordwest Town Council to purchase and find homes for 5 robot companion therapy pets within the Haverfordwest ward.
We have found welcoming homes for the 5 pets and have enjoyed hearing the positive feedback from the recipients, especially during these difficult times.
- Montrose Residential Home
- Elliotshill
- Portfield School, Haverfordwest
- Highgrove Residential Home
- Fenton Primary School
Thank you again for the kindness of Haverfordwest Town Council.
Journal of Mechanical Engineering, the first journal in the field of mechanical engineering, is supervised by the China Association for Science and Technology (CAST) and sponsored by the Chinese Mechanical Engineering Society (CMES). The journal aims to become an international academic journal of mechanical engineering. Its scope covers mechanics, manufacturing science and technology, instrument science and technology, materials science and engineering, carrying engineering, renewable energy and engineering thermophysics, fluid transmission and control, deep-sea equipment technology, and automation control. The journal is indexed in CA, JST, Pж(AJ), EI, and CSCD.
Editor-in-Chief: Zhong Qunpeng

Executive Editor-in-Chief: Wang Buxuan, Lu Yongxiang, Pan Jiluan

Deputy Editor-in-Chief: Wang Wenbin, Wang Guobiao, Chen Xuedong, Chen Chaozhi, Huang Tian, Luo Jianbin, J. Michael McCarthy
Permanent Magnets Based Nonlinear Vibration Isolator Subjected to Large-Acceleration Excitations
Journal of Mechanical Engineering, 2019, Vol. 55, No. 11

Large-acceleration excitations, such as the vibrations experienced during launch, strongly affect the performance of host structures. A novel nonlinear isolator based on permanent magnets (PMs) is proposed, which uses a nonlinear force to construct an equivalent negative stiffness and a softening spring to achieve high isolation performance. The nonlinear vibration isolator mainly comprises an axial PM and three radial PMs uniformly distributed along the circumference, all magnetized along their axes. The governing equation of the isolator is established, and the displacement transmissibility is obtained with the harmonic balance method. Isolation performance tests under base accelerations with amplitudes of 0.2 g, 0.3 g, 5 g and 7 g are carried out. The results demonstrate that the peak frequency and the corresponding transmissibility decrease by 47% and 70% with the nonlinear isolator. For the asymmetric configuration of the radial PMs, the peak frequency and the corresponding transmissibility decrease by 51% and 86%.
Hybrid Global Optimization Method Based on Dynamic Kriging Metamodel and Gradient Projection Method for Optimal Design of Robot
Journal of Mechanical Engineering, 2019, Vol. 55, No. 11

In order to solve the black-box problem that commonly arises in engineering applications such as robotics, we propose an efficient and stable hybrid global optimization (HGO) algorithm based on the genetic algorithm, the non-uniform Kriging metamodel, and the gradient projection method. In the proposed algorithm, the non-uniform Kriging metamodel is used to evaluate the objective function, which ensures the accuracy of the optimization process without demanding global accuracy of the approximate model, saving much computation. Moreover, the gradient projection method is used to mutate the population of the genetic algorithm, which improves the convergence efficiency and enforces the optimization constraints, avoiding the non-strict penalty function method. To validate its effectiveness and superiority, we apply the proposed algorithm to two mathematical test examples and a modular manipulator optimization example, and compare it with other optimization algorithms. The results show that the proposed algorithm balances the accuracy of the results, the optimization efficiency, and the stability of the algorithm to achieve better comprehensive performance, and thus a global optimization design for engineering problems.
Microsoft Excel has a handful of useful functions that can count nearly everything: the COUNT function to count cells with numbers, COUNTA to count non-blank cells, COUNTIF and COUNTIFS to conditionally count cells, and LEN to calculate the length of a text string.
Unfortunately, Excel doesn't provide any built-in tool for counting the number of words. Luckily, by combining several functions you can build more complex formulas to accomplish almost any task, and we will use this approach to count words in Excel.
The generic formula to get the total word count is =IF(cell="", 0, LEN(TRIM(cell)) - LEN(SUBSTITUTE(cell, " ", "")) + 1), where cell is the address of the cell where you want to count words.
After that, you subtract the length of the string without spaces from the total length of the string, and add 1 to the final word count, since the number of words in a cell equals the number of spaces plus 1.
Additionally, you use the TRIM function to eliminate extra spaces in the cell, if any. Sometimes a worksheet may contain a lot of invisible spaces, for example two or more spaces between words, or space characters accidentally typed at the beginning or end of the text (i.e. leading and trailing spaces). And all those extra spaces can throw your word count off. To guard against this, before calculating the total length of the string, we use the TRIM function to remove all excess spaces except for single spaces between words.
As you can see in the screenshot above, the formula returns zero for blank cells, and the correct word count for non-empty cells.
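As a cross-check, the same TRIM / LEN / SUBSTITUTE logic can be mirrored in a few lines of Python. This is an illustrative translation, not part of the original tutorial; note that Python's split() also collapses tabs and newlines, which Excel's TRIM does not.

```python
def count_words(text: str) -> int:
    # Mirrors =IF(A2="", 0, LEN(TRIM(A2)) - LEN(SUBSTITUTE(A2, " ", "")) + 1)
    trimmed = " ".join(text.split())          # TRIM: drop leading/trailing/extra spaces
    if trimmed == "":
        return 0                              # IF(A2="", 0, ...) — blank cells give 0
    no_spaces = trimmed.replace(" ", "")      # SUBSTITUTE(A2, " ", "")
    return len(trimmed) - len(no_spaces) + 1  # spaces + 1 = words

print(count_words("  The moon   is bright  "))  # → 4
```

As in the Excel formula, the extra spaces are collapsed first, so they do not inflate the count, and an empty string correctly yields zero.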
Instead of entering the word to be counted directly in the formula, you can type it in some cell, and reference that cell in your formula. As a result, you will get a more versatile formula to count words in Excel.
The SUBSTITUTE function removes the specified word from the original text.
Then, the LEN function calculates the length of the text string without the specified word.
In this example, LEN(SUBSTITUTE(A2, $B$1,"")) returns the length of the text in cell A2 after removing all characters contained in all occurrences of the word "moon".
Subtracting this from the length of the original text gives the number of characters contained in all occurrences of the target word, which is 12 in this example (3 occurrences of the word "moon", 4 characters each).
Finally, the above number is divided by the length of the word. In other words, you divide the number of characters contained in all occurrences of the target word by the number of characters contained in a single occurrence of that word. In this example, 12 is divided by 4, and we get 3 as the result.
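The same LEN / SUBSTITUTE arithmetic can be mirrored in Python to verify the 12 / 4 = 3 example. This is an illustrative sketch (the sample sentence is made up for the demo); like Excel's SUBSTITUTE, Python's str.replace counts substrings, so "moonlit" also contains one "moon".

```python
def count_occurrences(text: str, word: str, case_sensitive: bool = True) -> int:
    # Mirrors =(LEN(A2) - LEN(SUBSTITUTE(A2, $B$1, ""))) / LEN($B$1)
    if not case_sensitive:
        text, word = text.upper(), word.upper()  # UPPER(...) applied to both arguments
    removed = text.replace(word, "")             # SUBSTITUTE(A2, $B$1, "")
    # Characters removed, divided by characters per occurrence:
    return (len(text) - len(removed)) // len(word)

text = "The moon... a moonlit night under the moon"
print(count_occurrences(text, "moon"))  # → 3  (12 removed characters / 4 per occurrence)
```

Passing case_sensitive=False reproduces the UPPER/LOWER trick from the tutorial: both the text and the target word are converted to the same case before substituting.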
If you need to count both uppercase and lowercase occurrences of a given word, use either the UPPER or LOWER function inside SUBSTITUTE to convert the original text and the text you want to count to the same case.
SUMPRODUCT is one of the few Excel functions that can handle arrays, so you complete the formula in the usual way by pressing the Enter key.
For the SUM function to calculate arrays, it should be used in an array formula, which is completed by pressing Ctrl+Shift+Enter instead of the usual Enter stroke.
Please remember to press Ctrl+Shift+Enter to correctly complete the array SUM formula.
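The range version of the word count can be mirrored by summing the per-cell count over a list of strings. This is a sketch only: the exact SUMPRODUCT formula is not shown in this excerpt, so the assumed Excel form is =SUMPRODUCT(LEN(TRIM(range)) - LEN(SUBSTITUTE(range, " ", "")) + 1), with the blank-cell correction applied per cell here.

```python
def count_words_in_range(cells) -> int:
    # Sums the single-cell word-count formula over every "cell" in the range,
    # the way SUMPRODUCT (or an array-entered SUM) evaluates it element-wise.
    total = 0
    for text in cells:
        trimmed = " ".join(text.split())      # TRIM each cell
        if trimmed:                           # skip blank cells (count 0)
            total += len(trimmed) - len(trimmed.replace(" ", "")) + 1
    return total

print(count_words_in_range(["one two", "", "  three four five "]))  # → 5
```

Because Python loops explicitly, there is no Ctrl+Shift+Enter equivalent to worry about — the array semantics that SUMPRODUCT provides implicitly are just the for loop here.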
This is how you count words in Excel. To better understand and probably reverse-engineer the formulas, you are welcome to download the sample Excel Count Words workbook.
If none of the formulas discussed in this tutorial has solved your task, please check out the following list of resources that demonstrate other solutions to count cells, text and individual characters in Excel.
How to count characters in Excel - formulas to count all or specific characters in a cell or range.
20 Responses to "How to count words in Excel"
Can someone show me the formula needed to count how many times certain text appears in one column (the text can appear multiple times in one cell or just once) while meeting criteria from another cell.
I need a formula that will count the number of times in 2010, No Action was taken in cases. The answer is 3.
Gets me halfway there. Now I need to figure out how to only count the words when the YEAR row is 2010.
I need total word count in a cell, done line by line. I pasted this formula and nothing calculates, just the formula in the cell. Can someone help?
I need to compare the characters & numbers with TRUE or FALSE. Please help.
Wanted to count the words in a huge Excel sheet. Succeeded through this page.
Thanks for the help. Couldn't find it elsewhere on the Internet.
Thank you so much! I was trying to use "split" with array formula and it wasn't working, but your first example with the len(trim( worked very well with arrayformula and saved my project :).
Counts the number of cells within a range that meet the given condition, here "Yes".
▪ range: is the range of cells from which you want to count nonblank cells.
▪ criteria: is the condition in the form of a number, expression, or text that defines which cells will be counted.
Sorry, this reply was posted by mistake. I have no clue how I can delete it.
The semicolon vs comma is a regional setting in Office. The formulas will provide the result desired.
Hi, I am trying to count the words in a cell after a specific character. I am having trouble nesting the SUBSTITUTE function into the character count after ")" to count the words.
How can I combine those two?
I just saw a mistake in the first LEN function. I removed the second bracket in the SEARCH function. But I still need help nesting the SUBSTITUTE function!
Where A1 is the cell that contains your text string. After that you can copy this formula down along the column if necessary.
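One way to approach the commenter's question — counting only the words after a ")" — is to locate the marker first (SEARCH in Excel) and then apply the word-count pattern to the remainder. The following is a hypothetical Python sketch of that idea, not a tested Excel answer; the function name and the sample strings are invented for illustration.

```python
def count_words_after(text: str, marker: str = ")") -> int:
    # Find the marker, as SEARCH(")", A1) would in Excel.
    pos = text.find(marker)
    if pos == -1:
        return 0                              # marker absent: nothing to count
    # Take everything after the marker (MID in Excel), then TRIM it.
    tail = " ".join(text[pos + 1:].split())
    if not tail:
        return 0                              # nothing but spaces after the marker
    # Standard word-count pattern: spaces + 1.
    return len(tail) - len(tail.replace(" ", "")) + 1

print(count_words_after("(step 1) two words"))  # → 2
```

In Excel the same idea would nest MID/SEARCH inside the TRIM/LEN/SUBSTITUTE formula; the Python version just makes each step explicit.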
On disinformation, on the margins of the Warsaw Security Forum
Discussions about foreign propaganda, disinformation and misinformation remain at the heart of the political debate in the European Union (EU) and across the transatlantic area. In the context of on-going disinformation campaigns targeting our democratic societies, National Endowment for Democracy, in collaboration with the Beacon Project of the International Republican Institute (IRI), the Centre for Propaganda and Disinformation Analysis and the Casimir Pulaski Foundation held a conference dedicated to strengthening a networked response, addressing disinformation and its causes on both the transatlantic and the national level. The conference was a side event of the Warsaw Security Forum 2018.
The conference started with a panel entitled “A Networked Response to Disinformation”, whose goal was to share experiences and best practices in building political and civil society networks in order to address disinformation and its causes. The panel was led by Brady Hills, Programme Officer (Beacon Project) of the International Republican Institute, and was attended by: Miriam Lexmann – EU Regional Programmes Director (Beacon Project) of the International Republican Institute, Borislav Spasojevic – Regional Programmes Director (Beacon Project) of the International Republican Institute, Mikael Tofvesson – Deputy Head of Department (Coordination and Operations) of the Swedish Civil Contingencies Agency, and Neville Bolt – Director of the King’s Centre for Strategic Communications at King’s College London.
The aim of the second panel, entitled “Research, Analytics and Cooperation”, was to discuss how research, analytics, the activities of NGOs and the media, but also scrutiny of laws and regulations, can support actions taken by state institutions. Among the contributors were: Michał Potocki – Op-ed Editor of Dziennik Gazeta Prawna, Wojciech Jakóbik – Editor-in-Chief of BiznesAlert.pl, Robert Kośla – Director of the Cybersecurity Department at the Ministry of Digital Affairs of Poland, and Anna Kozłowska-Słupek – First Secretary at the Eastern Department of the Ministry of Foreign Affairs of Poland. The panel was led by Adam Lelonek PhD of the Center for Propaganda and Disinformation Analysis.
During this panel, Anna Kozłowska-Słupek emphasized that public administration cannot influence NGOs in terms of fighting disinformation, but it can use training and grants as positive stimulus. In turn, Wojciech Jakóbik claimed: “as the media, we cooperate very closely with analysts working at think tanks verifying information”. On the other hand, Jakóbik stressed that “the public administration works slowly in this dimension”. This opinion was shared by another media representative, Michał Potocki, who said that “governmental press services sometimes tend to act as guardians protecting officials from journalists”. He also added that “Blocking of some journalists is problematic as well”. Representing the governmental side, Robert Kośla indicated in this context that: “the will is necessary in order to enhance cooperation between public administration, media and NGOs. Without it, no law or regulations will help”.
The third panel, entitled “Strategic Communication as the Key Tool to Fight Disinformation”, aimed at discussing contemporary challenges posed by disinformation and at sharing the experiences of government and private entities in fighting fake news by formulating strategic communication. Invited experts tried to advise on how governmental representatives and institutions can use technology to their advantage and create efficient strategic communication via social media outlets. The aim of the panel was also to inspire representatives of different stakeholders to plan their Internet communication, and to consider how their actions influence public trust in governmental institutions. The panel, led by Katarzyna Pisarska – Director of the European Academy of Diplomacy – was attended by, among other panelists, Agnieszka Romaszewska-Guzy – Director of Biełsat TV, Beata Biały – fmr. Director of Social Communication of the Ministry of Defense of Poland, Antoni Wierzejski – expert of the Center for International Relations, and Piotr Świtalski – Head of Press of the European Commission Representation in Poland.
Piotr Świtalski emphasized that “rather than EU as a whole, its member states have resources and tools to counter disinformation”. He also posed a question: “Can we use bots to fight other bots in social media?”. “It is an ethical question”, he stated. Antoni Wierzejski stressed that “in spite of declarations, as Poland we still don’t have a strategic communication at the level of government”. In turn, Director Romaszewska-Guzy and Beata Biały covered their experience of fighting disinformation in an unfriendly environment such as Belarus (the case of Biełsat).
How Fungi Made All Life on Land Possible
“Mycology” is a branch of biology concerned with the study of fungi. It might not sound like the most interesting thing to make a video about, but fungi are some of the most incredible and important organisms on the planet… and have been throughout the history of earth, possibly having been around for up to one billion years. Distinct from both plants and animals, fungi are their own separate ‘kingdom’, with literally millions of species, coming in all shapes and sizes. One fungus to the next can be as different as… a snake and a giraffe, or a chicken and a dung beetle.
Although we might tend to think of mushrooms when we think of fungi, this is actually only the fruiting body of the fungus, with much more going on below the surface. Some fungi are microscopic, invisible to the naked eye… while one fungus (the “Humongous Fungus”) is believed to be the single largest organism that exists by area, covering 2,200 acres in the US state of Oregon. For some people, the word “fungus” might stir up negative connotations, and of course some fungi can cause infections or diseases, while others are poisonous… and that furry stuff growing on your strawberries, that’s a fungus too.
However, while the bad is often visible and obvious, fungi are very much a force for good as well. We just don’t notice it. Fungi are not something that we think about often, but they are (and, have been) immensely important to the ecosystem of this planet. In fact, it’s no understatement to say, that life as we know it, would not be possible without fungi. To understand how, we need to go back a few years… well, a few billion years. Life in the form of simple single-celled organisms had already been around for a long time in the oceans, but, as far as we know, the land was largely rocky and barren of life.
Things started to change when some early bacteria developed the ability of ‘photosynthesis’ – the process of converting the sun’s light into nutrients. Over time, the byproduct of this, more oxygen in the atmosphere, led to the acceleration of more complex organisms in what is known as the ‘Cambrian explosion’, about 540 million years ago, but again, this was effectively confined to the water.
The transition of more complex life forms to land was possible because of fungi and their unique ability – fungi can eat rocks, breaking them down and turning them into soil. This is achieved by secreting digestive enzymes, as well as through mechanical pressure. Fungi were able to access nutrients that were otherwise unavailable to any other organism at the time.
It was commonly thought that a short 60 million years or so after the Cambrian explosion, fungi began to move onto land; they would have most likely benefited from access to more than two billion years of bacteria on the shore to feed on. Of course, the fossil record from this period has many gaps, so there’s plenty of uncertainty, with the possibility that fungi had already been on land a full 500 million years earlier.
However long fungi had been on land beforehand, we are fairly certain that they were eventually followed by small proto-plants, simple organisms that could photosynthesise. Fungi had minerals and plants had photosynthesis, but each needed what the other had to survive. Fungi and plants began to cooperate in a process known as “symbiosis”, forming a mutually beneficial relationship.
As the fungi and plants began to spread, colonising the land, they began to turn the Earth green. The soil became more suitable for many other types of plants that had yet to evolve, eventually allowing some to become independent of the fungi. With ecosystems becoming more complex, new, dynamic balances were established. The increase in oxygen produced by plants was balanced out by a growing population of organisms that needed the oxygen to live.
Likewise, the organic matter that started to build up after things would die needed to be recycled so that it could continue to be used. This is where fungi come in. To put it simply, fungi “eat death”; by breaking down dead things, they allow the nutrients to be used again by living things. This set up another important cycle that is fundamental to sustaining all life on Earth, with fungi serving as one of the final building blocks for the world we know and love today.
Of course, while new cycles were being established and developed into what would guide our modern ecosystems, older relationships continued to thrive. In particular, the symbiosis between plants and fungi, called “Mycorrhiza”, has continued to change and evolve, even to this day, allowing more complex partnerships to form. There are two types of Mycorrhizae, ecto-mycorrhiza, in which the fungus wraps itself around the roots of the plant, and endo-mycorrhiza, in which the fungus will actually penetrate the cell wall of the plant, entwining itself around the cell membrane.
But as invasive as this sounds, this can actually make it even easier for the plant to benefit. The plant will happily let another organism live literally inside it, because the fungus helps the plant derive more nutrients. Today, the vast majority of plants benefit from a symbiotic relationship with various different species of fungi. Some numbers have suggested as high as 90% of plants in the world. Some plants, after millions of years of evolution alongside fungi, still rely entirely on them for survival.
The orchid plant family, for example, has virtually no independent energy reserve during its germination stage – that is, while it’s growing from a seed into… well, a plant. Now, many orchids engage in a symbiotic relationship with fungi that is not mutually beneficial. Many of these species are actually a parasite to certain fungi, in which the plant will effectively suck the energy out of the fungus, in this case referred to as the “host”.
At the same time, the fungus may also be involved in a symbiotic relationship with another plant, so the orchid indirectly gets its energy through photosynthesis, even while many of these species have actually lost the ability to photosynthesise themselves. Parasitism can also work the other way around – with a fungus that is a parasite to a plant. As well as the exchange of nutrients between fungi and plants, fungi can actually help the plants exchange nutrients amongst themselves.
For example, a small tree in a forest with limited access to sunlight, could be ‘fed’ more nutrients to help it, thus growing tall enough to be able to photosynthesise on its own. These more complex types of interactions between a fungus and potentially many different plants, from the smallest flowers to the tallest trees, are known as “common mycorrhizal networks”. Fungi are capable of connecting entire forests, which can sort of be thought of as “Nature’s Internet”, or somewhat more comically, the “Wood Wide Web”. Fungi are able to facilitate communication between plants, which can be especially strong if the plants are of the same species.
Now, obviously plants aren’t literally having a conversation with each other, they’re not sentient. However, the communication they are capable of is still pretty impressive. While plants are able to communicate via the air as well, via the fungi is much more effective. Signals and cues are transferred between plants which can influence behaviour. For example, fungi can mediate the transfer of chemicals that plants produce to stunt the growth of their neighbouring, rival plants, such as by depriving them of nutrients or inhibiting their photosynthesis.
These so-called “allelochemicals” may also be used against herbivores, such as insects, that might want to eat the plant. Alternatively, fungi can also facilitate the warning signals that affected plants send to unaffected plants, triggering the plants’ defensive response. Such a response could come in the form of a chemical that acts as a repellent to the attacker, which could be a pathogen—something that can give plants diseases—or, again, a herbivore.
But as well as merely helping the plants communicate, fungi can actually directly protect the plants. For example, a fungus can secrete a compound that both kills pathogens and strengthens the plant’s immune system. Of course, this isn’t just the fungus being altruistic. It’s just in the best interest of the fungus that the plant survives, allowing the continuation of their mutually beneficial symbiotic relationship.
Fungi have played a key role in the development of the world as we know it. They may not always be visible, and they’re not usually something we think of as being all that important… but fungi are an unseen cornerstone of their ecosystems. Silently pulling strings ‘behind the scenes’, forging relationships with many other organisms, both alive and dead.
Of course, it’s not just plants that fungi interact with, us humans also have a long history with fungi as well. Whether it be as a food source, or the yeast we need for our beer and bread… their medicinal purposes, pest control, and so many other uses, fungi have always been there… always there as an influential part of not just the history of our planet, but of human history as well – both for good and bad. The topic of fungi has proven to be one of the most surprisingly interesting topics I’ve made a video about.
If you ask for a holistic definition of health and social care, you would probably get some vague answers. You see, a holistic definition of health and social care can be used to describe any non-physician approach to health and social care. It can also explain any program that aims to use holistic health or natural health in its efforts to achieve a healthy society.
In fact, this approach is often used to explain programs such as the MhwAC (Medical Humanities and Behavioural Health) programme in Canada. The term holistic definition of health and social care is now gaining more acceptance. This is probably because of the many people who are convinced that medical practices such as modern medication or surgical operations are no longer efficient in dealing with health concerns.
There are some people who even go as far as to say that modern medicine, or the medical procedures and equipment we have today, is inefficient in its handling of health problems. They feel that modern medicines and techniques do not have the power to effect fundamental changes in the way they treat people. This means that they consider holistic health and social care the best approach.
The holistic health and social care approach to health is a natural one. This means that it incorporates the use of nature in its design and the prevention of any unnatural treatment. It also believes in the recovery of the body, mind, and soul. What exactly is the holistic definition of health and social care?
It is a form of alternative medicine that was founded in the early twentieth century by B.J. Fuller. The focus is on health maintenance and treatment through a holistic approach that looks beyond the conventional means of diagnosing, treating and preventing disease and illness. It also believes that healing comes from within and that allopathic methods are simply part of the problem.
The holistic definition health and social care approach believe that the root causes of illness are our attitudes and responses to life. That is why, for example, it believes that poverty is a result of ignorance and that knowledge is the key to overcoming poverty. This is because knowledge allows us to understand and deal with certain situations more effectively.
If we understand that sickness is a result of ill-health and disregard its underlying causes, we will be able to eliminate it. Eliminating illness means eliminating poverty. There are various holistic definitions of health and social care practices today. Holistic practitioners believe that health is a complex organism that can be understood and healed.
It is a system of interrelated factors that work together to determine whether the organism is functioning well or not. The holistic definition of health and social care emphasizes prevention and early detection, with treatment as a supportive approach. The holistic approach to health and the traditional medicine approach help deal with common illnesses like the flu.
However, they differ in their outlook on the illness. While traditional medicine focuses on the symptoms and treatments, holistic medicine concentrates more on the cause. Holistic practitioners think that if we can understand the illness better, we can find the exact reason that has caused the illness.
By understanding the cause better, the cause can then be treated using holistic methods. Holistic health and social care often differs from traditional health and medical care. The holistic and traditional definitions of health and social care often conflict because traditional medicine focuses more on the symptoms of the illness than on the causes.
That is why holistic medicine and health care often require the use of traditional medicines like vitamins and herbs. By using traditional medicines, holistic medicine and health care can avoid taking the wrong medicinal action that may only worsen the illness. This is where holistic definitions of health and social care find common ground.
Discrimination based on the style or texture of a person’s hair would be forbidden in New Jersey under a bill introduced in the state Legislature.
The bill (S3945) revises the state’s Law Against Discrimination, which protects people from discrimination on the basis of gender, sexual orientation, race or other categories, to also include “traits historically associated with race, including, but not limited to, hair texture, hair type, and protective hairstyles,” like twists or braids.
Assemblywoman Angela McKnight, D-Hudson, said she was inspired to propose the ban after Andrew Johnson, a Buena Regional High School wrestler, was forced by a referee to choose between forfeiting his match and cutting his dreadlocks to compete.
McKnight cried, she said, at the images of the Atlantic County wrestler’s hair being sheared off.
“He had to choose between, do I move forward with something that I love to do versus with my hair that I love. He had to choose which one. That should not have been a choice,” she said.
McKnight said black men and women too often face discrimination at school, in the workplace and in extra-curricular activities — as was the case in Atlantic County — based on their natural hairstyles or protective hairstyles. They should not, she said, have to conform to “Eurocentric” standards and norms.
“This is a movement to protect black citizens from systematic discrimination because of a hairstyle,” McKnight said. “We’re more than that. This is a civil rights issue.”
McKnight said she believes more men and women, including herself, are embracing their natural hair, and she doesn’t want people to have to second-guess if they’re going to be accepted or get a job because of how they choose to wear their hair.
“Whether I’m at a board meeting or I’m in Trenton or at a park with my family, I embrace it. This is who I am,” said McKnight, who said she likes to adorn her hair with a signature flower.
“I no longer have to worry about do I have the right hairstyle for the outward people to accept me. I no longer have to worry about that, and I want every man or woman to feel the same way. Instead of staring at my hair I want you to listen to what I’m saying,” she said. “That’s what this piece of legislation is going to do with the state of New Jersey.
“Everyone who wants to wear their natural hair, they no longer have to worry about someone else giving them the approval to proceed.”
In California and New York, similar legislation would ban discrimination on the basis of hair. And the New York City Commission on Human Rights in February unveiled guidelines for employers and housing providers that protects hairstyles that are “an inherent part of black identity.”
“Bans or restrictions on natural hair or hairstyles associated with black people are often rooted in white standards of appearance and perpetuate racist stereotypes that black hairstyles are unprofessional,” the commission said, adding “Such policies exacerbate anti-black bias in employment, at school, while playing sports, and in other areas of daily living.”
State Sen. Sandra Cunningham, D-Hudson, said she was prompted to pursue this legislation after her nephew, a recent college graduate who wears dreadlocks, began his job hunt.
She said she admired his confidence to maintain his hairstyle rather than conform to someone else’s idea of professional appearance.
“He said I like wearing my hair like this. This is a part of who I am,” she said. “He kind of helped me grow up.”
Cunningham said she wants everyone to be confident and secure in their identity, like her nephew, and that’s why she’s sponsored this bill, which she hopes the Legislature will take up in the fall.
“Unfortunately, this is a part of our American experience here. And that is that sometimes African Americans are treated as though they can’t make decisions for themselves, that you have to conform to what present-day society feels is acceptable, and that is just so unfair,” she said.
McKnight also sponsored legislation last year to exempt hair braiders from state licensing requirements and established a new agency to oversee hair braiding businesses. Supporters of the roll back said hair braiders didn’t need the kind of extensive and costly training from cosmetology schools state rules required. And hair braiders who didn’t have a license risked arrest, they said.
"Hair braiders are predominantly African-American and African immigrant women. This is a skill that is often learned at an early age and passed down from one generation to the next,” McKnight said at the time. “We want to make sure that these women are able to use their skills to support themselves and their families, without excessive regulation.”
Gov. Phil Murphy partially vetoed that legislation and agreed to create a limited license, reducing required training hours from 1,200 to 40 for experienced hair braiders and 50 for those without experience.
From the Washington Post
The Pentagon, readying for what it calls a "long war," yesterday laid out a new 20-year defense strategy that envisions U.S. troops deployed, often clandestinely, in dozens of countries at once to fight terrorism and other nontraditional threats.
Major initiatives include a 15 percent boost in the number of elite U.S. troops known as Special Operations Forces, a near-doubling of the capacity of unmanned aerial drones to gather intelligence, a $1.5 billion investment to counter a biological attack, and the creation of special teams to find, track and defuse nuclear bombs and other catastrophic weapons.
China is singled out as having "the greatest potential to compete militarily with the United States," and the strategy in response calls for accelerating the fielding of a new Air Force long-range strike force, as well as for building undersea warfare capabilities.
The latest top-level reassessment of strategy, or Quadrennial Defense Review (QDR), is the first to fully take stock of the starkly expanded missions of the U.S. military — both in fighting wars abroad and defending the homeland — since the Sept. 11, 2001, terrorist attacks.
The review, the third since Congress required the exercise in the 1990s, has been widely anticipated because Donald H. Rumsfeld is the first defense secretary to conduct one with the benefit of four years’ experience in office. Rumsfeld issued the previous QDR in a hastily redrafted form days after the 2001 strikes.
The new strategy, summarized in a 92-page report, is a road map for allocating defense resources. It draws heavily on the lessons learned by the U.S. military since 2001 in Iraq, Afghanistan and counterterrorism operations. The strategy significantly refines the formula — known as the "force planning construct" — for the types of major contingencies the U.S. military must be ready to handle.
Under the 2001 review, the Pentagon planned to be able to "swiftly defeat" two adversaries in overlapping military campaigns, with the option of overthrowing a hostile government in one. In the new strategy, one of those two campaigns can be a large-scale, prolonged "irregular" conflict, such as the counterinsurgency in Iraq.
In the 2001 strategy, the U.S. military was to be capable of conducting operations in four regions abroad — Europe, the Middle East, the "Asian littoral" and Northeast Asia. But the new plan states that the past four years demonstrated the need for U.S. forces to "operate around the globe, and not only in and from the four regions."
Yet, although the Pentagon’s future course is ambitious in directing that U.S. forces become more versatile, agile and capable of tackling a far wider range of missions, it calls for no net increases in troop levels and seeks no dramatic cuts or additions to currently planned weapons systems.
For example, the active-duty Army will revert by 2011 to its pre-2001 manpower of 482,400, with the additional Army Special Operations Forces incorporated in that number, defense officials said. The Air Force will reduce its strength by about 40,000 personnel.
Moreover, the review’s key assumptions betray what Pentagon leaders acknowledge is a certain humility regarding the Defense Department’s uncertainty about what the world will look like over the next five, 10 or 20 years, as well as its realization that the U.S. military cannot attain victory alone.
"U.S. forces in all probability will be engaged somewhere in the world in the next decade where they’re not currently engaged. But I can tell you with no resolution at all where that might be, when that might be or how that might be," Ryan Henry, principal deputy undersecretary of defense for policy, said at a Pentagon news briefing unveiling the QDR.
"Things get very fuzzy past the five-year point," Henry said of the review in a talk last month.
At the same time, Henry stressed yesterday, "we cannot win this long war by ourselves."
When a major crisis, such as a terrorist strike or outbreak of hostilities, occurs — requiring a "surge" in forces — the U.S. military will plan for "somewhat higher level of contributions from international allies and partners, as well as other Federal agencies," the review concludes.
The new strategy marks a clear shift away from the Pentagon’s long-standing emphasis on conventional wars of tanks, fighter jets and destroyers against nation-states. Instead, it concentrates on four new goals: defeating terrorist networks; countering nuclear, biological and chemical weapons; dissuading major powers such as China, India and Russia from becoming adversaries; and creating a more robust homeland defense.
Central to the first two goals is a substantial 15 percent increase in U.S. Special Operations Forces (SOF), now with 52,000 personnel, including secret Delta Force operatives skilled in counterterrorism.
The review calls for a one-third increase in Army Special Forces battalions, whose troops are trained in languages and to work with indigenous fighters; an increase in Navy SEAL teams; and the creation of a new SOF squadron of unmanned aerial vehicles to "locate and target enemy capabilities" in countries where access is difficult.
In addition, civil affairs and psychological operations units will gain 3,500 personnel, a 33 percent increase, while the Marine Corps will establish a 2,600-strong Special Operations force for training foreign militaries, conducting reconnaissance and carrying out strikes.
"SOF will increase their capacity to perform more demanding and specialized tasks, especially long-duration, indirect and clandestine operations in politically sensitive environments and denied areas," the report says. By 2007, SOF will have newly modified Navy submarines, each armed with 150 Tomahawk missiles, for reaching "denied areas" and striking individuals or other targets.
"SOF will have the capacity to operate in dozens of countries simultaneously" and will deploy for longer periods to build relationships with "foreign military and security forces," it says.
To conduct strikes against terrorists and other enemies — work typically assigned to Delta Force members and SEAL teams — these forces will gain "an expanded organic ability to locate, tag and track dangerous individuals and other high-value targets globally," the report says.
The growth will also allow for the creation of small teams of operatives assigned to "detect, locate, and render safe" nuclear, chemical and biological weapons — as well as to prevent their transfer from states such as North Korea to terrorist groups.
To strengthen homeland defense, the report calls for improving communications and command systems so that military efforts can be better coordinated with state and local governments.
The goal of this study is to discover, optimize, standardize and validate clinical trial measures and biomarkers used in ongoing AD research. All of the data from all the ADNI sites is collected in a secure database so that scientists studying Alzheimer’s disease can access it for scientific investigation, teaching or planning clinical research studies (see ADNI Data Access for details).
Inclusion criteria: Participants must be between the ages of 55 and 90 and be in good general health, with or without memory problems or concerns. Participants will need a reliable study partner who has frequent contact with the participant, is available to provide information about the participant, and who can accompany the participant to research visits as needed. All participants must be willing and able to undergo testing procedures, including neuroimaging, and agree to longitudinal follow up.
Exclusion criteria: Any significant neurological disease other than Alzheimer’s disease including Parkinson’s disease, multi-infarct dementia, Huntington’s disease, normal pressure hydrocephalus, brain tumor, progressive supranuclear palsy (PSP), seizure disorder, subdural hematoma, multiple sclerosis, or history of significant head trauma followed by persistent neurological deficits or known structural brain abnormalities. Any significant systemic illness or unstable medical condition. The presence of pacemakers, aneurysm clips, artificial heart valves, ear implants, metal fragments or foreign objects in the eyes, skin or body. Longstanding (>10 years) history of alcohol or substance abuse with continuous abuse up to and including the time that the symptoms leading to clinical presentation developed. History of major depression, bipolar disorder or schizophrenia that puts assessment of cognitive impairment into question. Clinically significant abnormalities in B12, RPR (rapid plasma reagin) or TFTs (thyroid function tests) that might interfere with the study.
Testing: Neurological and physical examinations; interview with study partner; MRIs; PET scans; cognitive testing; detailed family history; blood and urine specimen collection for cell line generation and biomarker analysis; lumbar punctures (LP) for CSF specimen collection and biomarker analysis; questionnaires for participant and study partner.
Frequency of visits: Participants will be assessed at the UCSF Memory and Aging Center. MRIs will be done at the Neuroscience Imaging Center (NIC), and PET scans will be conducted at the UCSF China Basin campus. How often participants come in depends on the cohort they are in. The typical timeline looks like this: screening, baseline, Y1, Y2, Y3, Y4. Participants can also opt for telephone check-ins rather than in-person clinic visits.
Costs: No costs will be charged for any of the study procedures. Participants will be sent a check for $100 for every clinic visit and $200 for every lumbar puncture.
If you are interested in participating in this study or have any questions, please contact the study coordinator, Elise Ong at [email protected] or 415.514.5753.
After nearly a year of delay, the European Commission released its long-awaited proposal on mandatory due diligence for companies. The new Directive seeks to radically reduce the negative impacts of trade on people and the environment. Here is our take on this legislation and what to expect from it.
No matter how hard we, as consumers, try, it is almost impossible to trace back the full production line of a product. It is unfair to burden consumers in this way and it also removes the responsibility of companies to be accountable for how they run their business.
Following an overwhelming demand from the public, NGOs as well as businesses, for harmonised due diligence rules in the EU, the proposal recently published by the Commission could be a first step in the right direction. It is now in the hands of the European Parliament and our governments, with both sides expected to have their say within this month. While this process is ongoing, we looked closely at the proposal to determine if it is achieving the necessary objectives.
Due diligence for better trade
Trade is a key component of our society and economy, but trade as we know it seems to have spun out of control, and with it its environmental and social impacts, including the prevalence of forced labour, degrading working conditions, environmental pollution and poor resource management. The need for less trade and better trade is undeniable, as we are living beyond the carrying capacity of the planet, with devastating social, climate and environmental injustices.
In the EEB’s latest video, Kevin the squirrel aims to bring wealth to his community through his nutcap business. In this short tale, we witness how easy it is to lose track of what other players in the supply chain are doing, and how the absence of appropriate rules lets a profit-driven approach take over at the expense of society and nature.
Due diligence, but for whom?
To address these risks, the Commission’s proposal suggests mandatory due diligence rules that would apply to companies of all business sectors. With these rules, companies would be considered responsible for what happens in their entire value chain, within and beyond EU borders. That would mean more protection for the environment, workers and communities involved or impacted at every stage of the production and distribution of products, as well as more transparency for consumers.
The Commission’s proposal addresses all EU businesses that have an annual global turnover of over EUR 150 million and employ more than 500 people. For high-risk sectors such as textiles, mining and extractive industries, and agriculture and food production, the threshold would be set at EUR 40 million in annual global turnover with 250 employees, provided that at least 50% of the turnover is generated in the high-risk sector. These conditions would also apply to companies based outside the EU, with an important distinction: the benchmark on their turnover would only relate to what is generated in the EU.
However, there is a gap between the theory and the practice. According to EEB experts, with these proposed thresholds, only about 13,000 EU companies would be covered, and just about 4,000 foreign companies operating in the region. Notably, small and medium enterprises (SMEs) would not be covered by the Directive, even in high-risk sectors such as textiles and mining industries where they are key players.
Besides, the companies covered by the Directive would be considered responsible for partners with whom they have an ‘established business relationship’. However, ‘established business relationship’ is not clearly defined in the proposal, and EEB experts fear this creates a potential legal loophole. As it stands, it could mean that certain one-off, exceptional transactions that may still be substantial or significant for the company do not need to undergo due diligence.
Moreover, limiting companies’ due diligence obligations to ‘established business relationships’ means there may not actually be an obligation to carry out due diligence before a relationship is established, which could be detrimental.
What does this mean for the environment?
The due diligence proposal requires companies to identify, prevent, mitigate and cease causing any adverse environmental impacts in their business. The potential “environmental adverse impacts” that companies need to assess in their due diligence refer to a violation of one of the elements found in international environmental conventions and listed in the Annex of the proposal. However, this definition is weakened for two reasons.
First, it only covers those impacts that are identified in the listed Multilateral Environmental Agreements (MEAs). Second, many adverse environmental impacts, such as plastic pollution, are not yet regulated by any international convention; the Annex does not even reference a climate agreement, so climate is excluded from the definition.
For these reasons, NGOs have been calling for an open-ended definition of adverse environmental impacts in the Directive, one that avoids pre-defining specific environmental impacts.
Where does the liability fall?
The proposal requires Member States to ensure that companies are held accountable if they fail to comply with their due diligence obligations, and includes both civil and administrative liability. This double liability means that there are more ways for companies to be held accountable and more ways for affected people, communities and stakeholders to seek remedies in case of failure.
However, as the proposal stands, companies may have an easy way to limit their liability by requiring their business partners to provide them with contractual assurances that human rights and environmental harm is not occurring on their watch.
Access to justice
The proposal provides three ways for complaints to be made by anyone affected. Firstly, each company must set up a complaint system whereby affected or potentially affected stakeholders can alert the company of the actual or potential harm it causes. Secondly, the national supervisory authority may also receive alerts from the public or stakeholders about actual or potential non-compliance with the Directive, or about actual or potential harm. Thirdly, complainants may seek damages in national courts in the EU thanks to company civil liability.
However, the proposal fails to address the practical barriers for seeking justice and accessing remedies. These include high costs – notably legal fees and court costs – travel needs, language barriers, and access to legal advice. There are also no details on the limitation periods for environmental damages to be eligible for complaints, nor is there a clear obligation for rapid decisions and measures to be taken in potentially grave or life-threatening situations.
In the same way our squirrel Kevin reached out to the League of Extraordinary Animals, civil society in Europe and beyond is relying on the EU institutions to develop effective corporate due diligence rules to ensure more transparency, fairness and accountability for all actors in the value chains.
The Nutville tale comes with two possible endings, and we have a clear favourite. We hope the European Parliament and our national governments will choose the same one.
Analysis of an Argument Essay
1st essay
In the annual report of the Olympic Foods Corporation to its stockholders, it has been argued that soon enough it will be able to minimize the costs of food processing, thus maximizing its profits. This argument is based mainly on the assumption that different products have similar production processes and their costs can be lowered merely by improving the efficiency of these processes. However, in my opinion, this argument is flawed, since there are several additional aspects of the process that haven’t been taken into account.
One major flaw in the line of reasoning is the parallel the company draws between film processing and food processing, which are not necessarily comparable, since the processing of film involves a completely different product and technique than the processing of food. I believe the argument would be more persuasive if the example described an area of processing closer to food processing. Such an example would make the comparison more reasonable, assuming that the processing techniques of similar products are more alike.
Additionally, the argument is flawed because it assumes that the company’s profits would grow based solely on its experience, without taking into account the costs of the materials used in processing, costs that may depend on aspects other than the efficiency of the process itself. The argument could be strengthened if the author showed the stockholders that aspects of food processing such as the price of water, soil and fertilizers, crop abundance or environmental variables have little effect on the production cost of processed food.
The points raised above reflect several major flaws in the line of reasoning presented in the report. For this reason, I do not find the conclusion drawn in the Olympic Foods’ annual report sufficiently reasonable.
Arlington, VA —On Monday, millions of Americans will celebrate the contributions that businesses and individuals have made to benefit the strength and prosperity of America’s economy. But for many other Americans who are locked out of the workforce due to arbitrary licensing, permitting or other unjustified regulations, Labor Day is another reminder that governments big and small continue to deprive them of their right to earn an honest living.
Yesterday, September 1, 2016, the Institute for Justice (IJ) launched its “IJ Asks Why” initiative to encourage entrepreneurs, regulators, and others to question the underlying justification for laws that stand between entrepreneurs of modest means and their ability to climb the economic ladder. Through litigation, activism, and research, IJ Asks Why will hold governments accountable for infringing Americans’ right to economic liberty.
As part of the initiative, IJ is releasing “Open for Business,” a new report outlining seven simple steps cities can take to foster economic growth by unleashing the transformative power of economic liberty. Economic liberty is the simple idea that all Americans have a right to earn an honest living. Unlike the grand plans and regulations that typify government-directed economic development, the report suggests that cities take a different approach. By reducing the barriers to entrepreneurship and eliminating unjustifiable economic regulations, local governments can unleash the creative potential of their citizens and empower individuals to put themselves to work.
The report recommends that cities:
- Streamline business licensing;
- Reduce or remove restrictions on street vendors and food trucks;
- Allow for more competition in transportation markets;
- Liberalize regulation of signage;
- Expand opportunities for home-based businesses;
- Reduce the burden of overly restrictive zoning codes; and
- Remove unnecessary regulations for food businesses.
“These seven suggestions are not meant to be exhaustive; they are instead meant to encourage municipal leaders to critically examine existing regulations by simply asking ‘Why?’ ” said IJ Senior Attorney Sherman. “Why is the process of starting a business so complicated? Why are some businesses insulated from competition in the free market? Why do we have this or that regulation? Often, this simple inquiry will reveal that existing regulations or processes are supported by no good reason and that we would all be better off without them.”
For 25 years, the Institute for Justice has fought to tear down these barriers to entry by persuading state and federal judges to take entrepreneurs’ constitutional rights seriously. Three recent IJ cases highlight the need to question the underlying justifications for regulations limiting economic liberty.
- Why does Baltimore not allow food truck owner Joey Vanoni to sell pizza within 300 feet of another pizzeria?
The city of Baltimore prohibits food trucks from operating within 300 feet of any brick-and-mortar business that sells the same type of food. Baltimore’s law has no basis in protecting the health or safety of residents. Rather, the law only serves to insulate restaurants from competition.
- Why does Louisiana require eyebrow threaders like Lata Jagtiani to get twice as much training as it requires to become an emergency medical technician?
Louisiana requires that eyebrow threaders spend 750 hours to obtain an esthetician’s license that involves no training in threading. That’s more than twice as long as it takes to become a life-saving EMT. The law’s only purpose is to raise the barriers to entry for threaders and thus protect traditional beauty salons from competition.
- Why does Little Rock, Ark., allow a single taxi business to maintain a monopoly?
The city of Little Rock has only one taxi business and it is illegal to start a second one. Rather than doing what’s best for the public, Little Rock’s law only serves to prevent citizens from enjoying the competition, job creation and consumer choice found in most other U.S. cities. | https://palmbeachexaminer.com/2016/09/02/ij-asks-why-initiative-will-hold-governments-accountable-for-infringing-americans-right-to-economic-liberty-new-report-outlines-seven-simple-steps-cities-can-take-to-foster/ |
Morena is currently finishing her PhD in the ARC Centre of Excellence for Coral Reef Studies at James Cook University. Her current work focuses on how to best inform the systematic conservation planning process to facilitate the implementation of effective and cost-efficient conservation initiatives. She has experience working closely with government, conservation NGOs, subsistence and commercial fishermen in marine resource management issues. Morena is supervised by Prof. Bob Pressey, Dr. Natalie Ban, Dr Simon Foale and Dr. Andrew Knight.
Opportunistic conservation actions are taken with the support of communities and an understanding of values and opportunity costs. However, they might not contribute to regional objectives. Regional systematic conservation plans, although seldom fully implemented, help to guide decisions about conservation investments by exploring spatial and temporal options to achieve regional objectives. The marginal benefits of systematic methods over opportunistic establishment of protected areas are rarely measured and likely to be context specific. However, this understanding is crucial to making wise investments with limited conservation resources. We use a prospective approach to assess the contributions of regional conservation planning to Fiji, which is trying to meet national conservation goals while relying heavily on community-based conservation. We use data on established management and key informant interviews to simulate the expansion of community-based MPAs. We then use Marxan with Zones to design a theoretically optimal MPA network for Fiji given constraints equivalent to those used in the simulation of opportunistic action. We highlight differences between our simulated MPA expansion and a theoretically optimal MPA solution, and discuss how each informs a more integrated approach to conservation, leveraging local conservation actions to meet regional objectives.
It's probably something that Lady Rai, nursemaid to Queen Nefertiti, would have preferred to keep under wraps: she had the worst case of arteriosclerosis ever found among ancient Egyptians. When she died around 1530 B.C., Lady Rai was between 30 and 40 years old. Although, to be fair, it is unclear whether arteriosclerosis was the cause of death. What is known is that she was not alone. In fact, a recent study of 20 mummies led by UC Irvine clinical professor of cardiology Dr. Gregory Thomas, found that nine of the 16 mummies with arteries or hearts still identifiable after mummification "had calcification either clearly seen in the wall of the artery or in the path where the artery should have been."
The study was undertaken in February of 2009. It subjected 20 mummies on display at the Museum of Egyptian Antiquities in Cairo to whole body CT scans with special attention to the cardiovascular system. And sure enough, in mummies as old as 3,500 years, there was significant evidence of hardening and clogging of the arteries.
Modern medicine tells us that arteriosclerosis is the result of poor lifestyle choices like drinking, smoking, eating fatty foods, and not exercising. But the study authors are not so sure. According to Dr. Michael Miyamoto of the University of California in San Diego, the appearance of arteriosclerosis in the mummies was identical with what doctors see in their patients today, and they imply this wouldn't occur if lifestyle factors were at root. Randall Thompson, a cardiologist at Mid-America Institute, Kansas City, and one of the researchers, argues that the evidence points to a genetic disposition which, combined with environment, promoted the development of heart disease. Said Dr. Thomas, "The findings suggest that we may have to look beyond modern risk factors to fully understand the disease."
Dr. Miyamoto concurs. "Perhaps the development of arteriosclerosis was a part of being human, as we are observing the footprint of the same disease process in people who lived thousands of years ago," he says, arguing that ancient and modern lifestyles differed so much that another source of causation must be at root. They would probably welcome studies that point to the fact that mummies from other ancient cultures have also yielded evidence of arteriosclerosis.
So does the study indicate that we're doomed to heart disease no matter how much we stick to the Mediterranean diet and how scrupulously we avoid desserts? Can we gorge on fatty foods because it won't matter anyway? Absolutely not! It is important to remember that wealthy Egyptians, the only ones who could afford mummification, ate the richest diet available in their times. Although the "lifestyle factors" in ancient Egypt certainly differ from the modern "grab some Cheetos at the 7-Eleven" lifestyle, there are, nevertheless, similarities.
The scientists acknowledge that the people who got mummified came from the upper strata of ancient Egyptian society. Egyptologist Abdel-Halim Noureddin says that the social strata represented by the mummies used large amounts of salt to preserve their food. Salt, of course, is nowadays considered a key contributor to high blood pressure, a cause of heart disease. According to Noureddin, the mummies, when alive, most likely ate "large amounts of bread, cheese, red meat and poultry, as well as honey and cakes made with butter." Such a diet would lead to the increased blood cholesterol levels that commonly accompany heart disease. But, Noureddin points out, the ancient Egyptians were not sedentary, and that may have reduced the impact of their fatty diets. Then again, is he talking about the wealthy Egyptians who were carried about everywhere in palanquins, a common mode of transport in ancient Egypt, or the poor and middle class who actually were active?
And let's keep in mind the many studies that have pointed out the correlation between lower rates of chronic heart disease among those who eat the Mediterranean diet. In my recent article on the Mediterranean diet, I mentioned that the standout factors in the lower mortality rates were moderate intake of red wine, limited intake of meat and meat products, high intake of vegetables, and use of olive oil in place of other fats (particularly, refined high Omega-6 vegetable oils). The diet also emphasizes local organic produce and limited consumption of dairy (which is organic, raw, and local at that). Not only is this regimen different from the highly processed, red-meat-rich, high Omega-6 vegetable oil infused diet of urban Americans -- it's also radically different (and far healthier) from the probable diet of the ruling class of ancient Egyptians.
It seems clear that if there is a genetic predisposition to arteriosclerosis, it acts more like a switch that lifestyle factors can turn on. Most likely the Egyptian pharaohs and their servants lived in ways that turned that switch on. Just as clearly, the modern American diet is like turning on that switch with a club. So while studies of our ancient forebears may or may not indicate that the potential for ill health is in our genes, you're likely to reduce your chances of developing arteriosclerosis the more you adopt a lifestyle that helps you fit into your old jeans.
- All industries in China will face significant short-term obstacles as they strive to get their businesses up and running.
- The key medium- to long-term challenges are deterioration of market demand, supply chain disruptions and rising operating costs.
- Bain research shows 60% of senior executives in China are skeptical about their ability to overcome the medium- to long-term effects of the coronavirus.
Companies in China are gradually resuming production, but the challenges imposed by the coronavirus pandemic are far from over. Cash-flow problems and parts shortages may slow efforts to get plants up and running. And the spread of the disease, creating a humanitarian crisis around the world, is likely to weaken demand. Senior executives in China expect Covid-19 to pummel their businesses in the short term, followed by a gradual recovery, according to Bain research.
But uncertainties abound. Few leadership teams and boards have managed through a disruption of this magnitude. As they consider actions to reduce the medium- to long-term effects of the crisis, many remain worried about the path ahead. Nearly 60% of the senior executives we surveyed were skeptical that these actions would be successful. The findings are drawn from interviews with 89 senior executives in China at multinationals, private companies, state-owned enterprises and joint ventures in multiple industries.
Short-term headwinds
In the next 3 to 6 months, all industries will encounter obstacles in reestablishing smooth collaboration with suppliers, distributors and customers. The nationwide lockdown on production and consumption disrupted supply chains and logistics, hitting some industries harder than others. The advanced manufacturing, consumer products, retail and healthcare industries face the biggest challenges, given the complexity of their supply chains and surging demand for their products during the outbreak (see Figure 1).
Companies are most worried about Covid-19’s effect on supply chains and damage to business partners
Over the next 6 to 18 months, fallout from the epidemic will include mounting operating costs (labor and raw materials, for example), supply chain disruptions, and possible deterioration in market demand. The challenges will differ from industry to industry. Retail, advanced manufacturing and financial services, for example, are tied more closely to the macroeconomic cycle and as a result, are more vulnerable to a downturn in demand (see Figure 2).
Manufacturing, financial and retail companies expect a long-term deterioration of market demand
The most effective medium- to long-term responses to Covid-19 also vary from industry to industry. Advanced manufacturing, retail, financial services and technology, media and telecom typically put more emphasis on cost reduction and operational efficiency to bolster performance. Retailers, with their extensive supplier networks, tend to focus on optimizing their long-term supply chains, vendor management and inventory (see Figure 3).
Retailers’ Covid-19 recovery plan: reduce costs and improve supply chains
To minimize the impact of the coronavirus in the medium to long term, leading companies are starting to reduce costs and optimize their supply chains. More than 60% of the executives we interviewed pointed to labor and nonlabor costs as a top priority. Companies that focused on supply chains said sales and operations planning, production planning, inventory management and digitalization are critical. Leaders will also diversify their supply chains to avoid excessive reliance on a single supplier or single region.
Two priorities to grow out of the crisis
The coronavirus epidemic has accelerated the pace of structural change and innovation. Leaders are taking two important steps to kick-start sustainable growth: cost reduction and supply chain optimization.
Cost reduction
For many companies, cost-cutting is a one-time reset that delivers short-term results at the expense of employees and customer experience. Our experience shows that costs eliminated by radical cuts often creep back over time. Cost productivity leaders take a more programmatic approach, following five key principles:
Define clear targets using a today-forward, future-back perspective. Cost-reduction plans with clear targets and specific measures are more likely to deliver the expected results. Leaders formulate a plan to improve current operations while developing future productivity targets and long-term strategy.
Focus on best in class vs. best in cost. Instead of cutting costs indiscriminately, successful companies adapt the cuts to their strategy. They invest in low-cost capabilities, but also build a competitive advantage to fuel sustainable growth.
Adopt a zero-based budget approach. Applying cost-reduction tools to an existing business model is likely to produce disappointing results. Leaders start from scratch to rethink processes and unearth new efficiencies. They use zero-based redesign and zero-based budgeting to align spending and investment with the firm’s stage of development.
Improve organizational capabilities. Top-down cost-reduction plans typically are difficult to implement, lack frontline commitment and produce less satisfactory results. Effective leaders give the front line a sense of ownership in change programs. They cultivate an open culture focused on productivity, streamline decision making and offer incentives that sustain results.
Harness digital solutions. Digital tools can significantly improve operations efficiency and strengthen supplier and inventory management. Leading companies deploy digital technologies to create outstanding cost and operating models.
Companies that follow these principles are likely to recover faster and emerge from the crisis stronger. One international fast-food chain in Greater China embarked on a series of cost-reduction efforts, including zero-based redesign and zero-based budgeting. The leadership team built a transparent cost baseline to reevaluate its savings priorities. Using digital tools, it generated a more accurate sales forecast. That, in turn, helped optimize the costs of labor, packaging and raw material, and supported vendor evaluations and negotiations. The company met its profit goal, cut baseline costs 5% to 25%, and invested the savings in projects tied to future priorities.
Supply chain optimization
The coronavirus epidemic has dramatically changed companies’ external environment, upending the competitive landscape and collaboration along the value chain. Leading companies now are reconfiguring their supply chains for the future by focusing on three vital steps.
Align supply chain objectives with business strategy. Supply chain managers typically balance three competing priorities: improving efficiency (cost, lead time and inventory turnover, for example); reducing supply risk; and enhancing customer satisfaction. To make the right trade-offs, leaders review their medium- to long-term business strategies and focus on the factors that sharpen their competitive edge, such as speed to market, supply security, pricing or innovation. Setting strategic priorities guides future supply chain objectives and the overall supply network’s design, as well as daily supply chain operations.
Adopt a global perspective. A global supply network—including suppliers, manufacturing sites, logistics and warehousing—helps lower costs and spread risks across regions. Companies can reconfigure their manufacturing footprint by moving production to lower-cost regions such as inland China or parts of Southeast Asia. To improve supply security for core components, leaders are developing alternative suppliers and sites in other countries. They also are cultivating strategic suppliers for key technologies to avoid potential disruption from international trade disputes.
Embrace digitalization. Digital tools give managers a comprehensive view of their internal and external supply chain and provide real-time performance feedback. In addition, leaders make sure the digital platform is linked to the operating model and includes a mechanism that can rapidly engage senior executives for decisions on critical issues at the right moment.
The coronavirus outbreak has provoked unprecedented economic disruption around the world. As leadership teams in China emerge from the most acute phase of the crisis and restart operations, they are rightly focused on the long-term health of their businesses, employees and customers. Those that optimize their costs and supply chain structure will be best positioned to meet the challenges ahead and succeed in the long run.
Kelly Liu is a partner with Bain & Company and leads the firm’s Performance Improvement practice in Greater China. She is based in Bain’s Beijing office. Kevin Ye is an expert principal in Bain’s Performance Improvement practice and is based in the firm’s Shanghai office.
The authors would like to give special thanks to Peter Guarraia, the leader of Bain’s Global Supply Chain and Cost Management practices; and Raymond Tsang and Vinit Bhatia, the coleaders of Bain’s Asia-Pacific Performance Improvement practice, for their guidance. The authors also thank case team leader Zoe Cai and associate consultant Joyce Li for their contributions.
Who is the greatest devotee of Lord Shiva? If you are looking for the answer to this question, this post describes Lord Shiva's greatest devotee.
The greatest devotee of Lord Shiva - Lakulesh.
Although Lord Shiva has countless devotees, and every one of them is equally important and dear to Mahakaal, this post is about someone who stands apart from everybody else and is regarded as the greatest devotee of Lord Shiva.
Before starting, one thing must be said: reverence does not come in shapes, sizes or quantities, so no one can truly compare the devotion of one devotee with another's.
In this post, we have covered five of Shiva's greatest devotees, but there is someone else, not usually included in such lists, who holds a special place in Shiva's heart.
It is not possible to fully describe the glory of everyone on the list; we have simply compiled a collection of stories of the greatest Shiv bhakts.
Vedic Puranas like the Vishnu Purana and the Shiva Purana fill you with joy. Among the many beautiful incidents featured in those Puranas, Shiva gifting the Sudarshan Chakra to Vishnu is spellbinding.
After bathing at the Manikarnika Ghat, Lord Vishnu decides to worship Shiva with one lakh (100,000) lotus flowers.
After pouring water on the Lingam, Lord Vishnu begins to offer the lotuses on the Lingam one by one.
Meanwhile, Shiva decides to test Lord Vishnu's reverence for him: Mahadev hides one lotus flower from the collection of lotuses.
The collection is now short by exactly one lotus.
Lord Vishnu offers the lotus flowers on the Lingam one by one. On discovering that one lotus is missing, Vishnu searches for the lost flower.
Unable to find it, Lord Vishnu decides to offer his eyes on the Lingam in place of the lost flower; as Vishnu is about to pluck them out, Shiva stops Lord Narayan.
Shiva - "Narayan, you are too close to my heart. You are my Aradhya; therefore, I cannot let that happen. Ask for anything, my Lord, and I will surely grant it to you."
Lord Vishnu - "My great Lord Shiva, I am thankful that you were pleased by my offerings and came here. That is enough for me."
Shiva - "I am delighted, Narayan, but I must give you something today for the reverence you have shown towards me. From today, Kartik Shukla Chaturdashi will be celebrated as Vaikuntha Chaturdashi. Whosoever worships you before me on this auspicious day will attain instant moksha (salvation)."
Lord Vishnu - "You are too kind, Lord."
Shiva was so pleased with Lord Vishnu that he gifted him the powerful Sudarshan Chakra.
Shiva - "Narayan, this Chakra will be invincible in every respect. It will hold strength similar to my trident; in all three realms, no other weapon will be able to counter it. Hence, it will help you defeat every demon."
Since then, the Sudarshan Chakra has been the weapon of Lord Vishnu. The chakra returns to Lord Vishnu after destroying the enemy, just as Shiva's trident returns to Shiva after finishing its task.
Lord Krishna, an avatar of Lord Vishnu, used the Sudarshan Chakra to kill Shishupal in a massive gathering of warriors.
Shiva is the great destroyer of darkness, and destruction requires energy. Everyone knows that Goddess Shakti is the consort of Lord Shiva; therefore, Shiva is Shaktipati.
Shakti manifested herself as Goddess Parvati, daughter of the mountain king Himavan. Together, Shiva and Parvati are acknowledged as the iconic divine couple.
From time to time, Shakti has manifested herself to marry Shiva: first as Goddess Sati, then as Goddess Parvati.
Most of us know how the loss of Goddess Sati turned Shiva towards vairagya.
Goddess Parvati took birth to marry Lord Shiva so that equilibrium could be re-established in the world after the loss of Daksha's daughter, Sati.
Devi Parvati performed the hardest penance to win Shiva as her husband. Hence, Goddess Parvati also belongs on the list of Lord Shiva's devotees.
Shiva is Adipurusha, the primordial male, whereas Goddess Parvati is the iconographic form of nature.
She takes care of Shiva, giving him nourishment, care and a settled environment so that Shiva can take care of the phenomena of the universe.
So far we have described different kinds of Shiva devotees, including Lord Vishnu and Goddess Parvati.
Shiva's white bull, Nandi, is the manifestation of justice, faith, devotion and vairagya; he is the vahana (mount) of Shiva.
Shiva granted Nandi a boon that enabled him to stay on Kailash as his devotee, companion and friend.
Before Shiva and Sati's marriage, Nandi was the only one on Kailash who recognized the hidden agony of Lord Shiva. Shiva never expressed his grief at being separated from his Shakti to anyone, but Nandi sensed it.
Such is the selfless emotional attachment that Nandi, as a devotee of Mahadev, has with Lord Shiva.
When Nandi came to know that Goddess Sati was the incarnation of Shakti, that small piece of happiness filled him with optimism; Nandi was extremely pleased.
Nandi supported and encouraged Goddess Sati in her journey towards Shiva, guiding her on how she could win Shiva as her husband.
Seeing Nandi's efforts to bring Sati and Shiva closer, Shiva once scolded him and even sent him back to the house of his father, Shilada.
After being sent away from Kailash, Nandi stops taking his meals, becomes mute and deaf to the outside world, and loses himself in profound thoughts of Shiva.
Seeing the condition of the broken-hearted Nandi, and at the request of Nandi's father Shilada, Shiva appears before Nandi and praises his devotion.
Before taking Nandi back to Kailash, Shiva grants him a boon: whosoever whispers their prayers into Nandi's ears will have those prayers reach Shiva directly, without any resistance.
Since then, people speak their prayers and wishes into the ears of Nandi, and they reach Shiva without any barrier.
Nandi is often seen sitting in front of Shiva temples as a bull with folded limbs, gazing continually at Shiva, an image that projects his devotion for Shiva.
The great demon Ravana was one of the most famous devotees of Lord Shiva.
He pleased Shiva by offering nine of his heads, and as he was about to sever the tenth, Shiva appeared before him and stopped him from doing so.
Shiva restored all of Ravana's heads, which gave Ravana the name Dashanan ("the ten-headed").
Pleased by Ravana's way of devotion, Shiva blessed him with immense strength and gifted him a weapon known as Chandrahas.
That extreme power helped him conquer all the planets, including Shani. Ravana did not stop there; he even tried to make Shiva his own personal deity.
As we know, Ravana was an astonishing architect, musician, and admirer of Vedic art and literature.
He developed various musical instruments to please Shiva.
Once he tore open his stomach, pulled out his intestines and used them to play the veena, to please the great Lord Shiva.
Shiva was pleased with his devotion and gave him mighty strength, which he later used in the war against Lord Rama.
Ravana's devotion did not stop there; he even tried to lift the whole of Mount Kailash, but Shiva pressed his big toe down on the mountain, and Ravana's hands were trapped beneath it.
For seven days Ravana's hands remained pinned beneath Kailash, making him feel his mistakes along with immense pain. While bearing all that pain, and again to please Shiva, he composed the Shiva Tandava Stotra, a grand apotheosis of Lord Shiva.
Shiva was impressed by Ravana's devotion and lifted his toe from the ground of Kailash. Ravana thanked Shiva for his generosity and returned to Lanka.
The list of Shiva's devotees is incredible. It contains demons, humans, animals, rivers, sages, seekers of knowledge, nature itself, demigods, Lord Vishnu, ghosts, Gandharvas, Yakshas, and others.
There is one particular devotee of Shiva whose devotion is free from the outer formalities of the world.
He even made his Lord, Shiva, a devotee of his. His name is Lakulesh.
Such devotees do not follow the usual rules and customs while worshipping Shiva.
On his side, the Lord understands and admires his devotees' adoration for him.
Once upon a time, a young devotee named Lakulesh takes an oath to worship Shiva until his last breath.
This was in the time before Sati's birth. He performed hard penance upon Shiva, without taking any food, for several kalpas.
Shiva appears in front of him and asks Lakulesh to name a boon.
Lakulesh sees Shiva in the guise of a vairagi. A certain shine comes into his eyes; that form of Shiva becomes fixed in Lakulesh's heart and mind. He falls at Shiva's feet.
Lakulesh - "I did not meditate on you in the hope of receiving something."
Shiva, delighted by his devotion, again encourages Lakulesh to ask for a boon.
Shiva explains to Lakulesh that when a devotee's penance succeeds, he must give the devotee something in return.
Finally, Lakulesh asks a simple boon of Shiva.
Lakulesh - "Oh my Lord, until the last breath of my life, I want to worship you."
Shiva grants Lakulesh's wish and gives him an additional boon: a very long lifespan.
After that, Lakulesh drinks the water of Kailash's Mansarovar lake and chooses a suitable place to worship Shiva once more; such was the depth of Lakulesh's devotion to Shiva.
Lakulesh remains absorbed in meditation for many kalpas. In the meantime, Sati takes birth to marry Shiva but, after the wedding, tragically dies at the yagna of Prajapati Daksha.
Then, for a second time, Shakti incarnates herself, now as Parvati. Parvati performs the hardest austerities upon Shiva to win him as her husband.
Shiva marries Parvati in a grand wedding ceremony and gradually takes on the full responsibility of Mount Kailash.
After many years of chanting Om Namah Shivaya, Lakulesh emerges from his meditation. On Kailash, Shiva realizes that his devotee has come out of his years of meditative practice, and this brings him happiness.
Lakulesh hears the chant of Shiva's mantra and slowly approaches its source. His matted hair touches the soil, and he carries a wooden trident with him.
Lakulesh walks slowly and unsteadily; his nails have grown long.
He reaches a beautiful kingdom where a yagya has just been completed, and respected guests are enjoying the yagya prasad.
The king's men observe his condition and feel that he is undernourished. They request Lakulesh to take some prasad.
Lakulesh accepts the prasad and sits down to eat it. Suddenly, he sees beautiful idols of Shiva and Parvati at the yagya.
He stands up and moves towards the Shiva idol; everyone is shocked by Lakulesh's unusual behavior.
The king's men try to stop him, but he keeps moving toward the idol. The king of the kingdom holds his men back to see what happens.
Lakulesh lifts the Shiva idol and sets it down right next to him. He takes a bite of his prasad and offers the idol a taste.
The king and everyone else grow angry at Lakulesh upon seeing this strange behavior.
Lakulesh again requests the Shiva idol to take a bite of his prasad, but the idol does not consume it.
Seeing that the idol is not consuming his prasad, Lakulesh requests it yet again to do so.
Lakulesh tells the Shiva idol: "If you are stubborn, Shiva, then I am stubborn too. I will not eat before making you eat the prasad."
After a few moments, seeing that the idol is still not consuming the prasad, Lakulesh asks Shiva: "If you will not eat the prasad, then how can I eat it? Because of your stubbornness, I will die hungry here."
"Please accept the prasad, Shiva; otherwise, I will beat you with my wooden stick, and you alone will be responsible for that."
"Have you heard me, Shiva?" Again, the idol does not consume the prasad.
Lakulesh takes his stick in his hands, but the king's men stop him from using it.
The king vents his rage on Lakulesh and punishes him with a hard stick.
The king declares that Lakulesh has broken the rules of worshipping Shiva.
First, he separated Shiva's idol from Goddess Parvati's.
Second, he tried to hit the Shiva idol with his stick.
The next time the king tries to strike Lakulesh, Shiva appears as a mystical Aghori and explains that they are vairagis who do not know how to worship Shiva in the manner of refined people.
He says that Aghoris have no particular style of worshipping Shiva, and he warns everyone to stop punishing Lakulesh.
The king's men ask, "Who are you?"
The Aghori replies, "We were passing through your kingdom. We lost our path, and by mistake my guru arrived here. He must have heard Shiva's name; that is why he came."
"What inauspicious thing were you about to do?"
King - "Do you know what your guru was doing here?"
The Aghori sage replies, "I know; he must have been talking with Shiva. Our guru does not eat food without making Shiva eat first."
King - "So it means Shiva listens to your guru?"
The Aghori helps Lakulesh to his feet. Lakulesh offers the prasad to Shiva, and this time the Shiva idol consumes it.
The king is astonished to see the idol consume the prasad. He falls at Lakulesh's feet and begs his forgiveness.
Lakulesh leaves the king's palace, and the mystic Aghori Shiva goes with him. In the middle of their journey, Lakulesh collapses.
The Aghori Shiva urges Lakulesh to drink some water, but Lakulesh refuses, saying that he cannot drink any other water before drinking the water of Kailash's Mansarovar.
Old Lakulesh is weak and thirsty, so Shiva fetches Mansarovar water for him, which makes Lakulesh wonder: no ordinary man could fetch water from Kailash's Mansarovar so quickly.
Lakulesh asks the Aghori to reveal his identity; Shiva smiles and disappears. Lakulesh recognizes the Aghori sage as Shiva.
Lakulesh sets off running towards Kailash and reaches it, where Nandi welcomes him.
It is the moment when one great devotee of Shiva meets another.
Finally, Lakulesh meets Shiva and begs forgiveness for not recognizing him.
Later, Lakulesh learns that Shiva has married Goddess Parvati and that his Lord has become a great family man. Lakulesh worships both Shiva and Parvati and takes their combined blessing.
It is a beautiful illustration of reverence and emotion that knows no boundaries or differences.
How, then, can we rank the devotion of different devotees?
In this post, we have shared the story of the greatest devotee of Shiva.
Agile vs Waterfall Model: Which One Are You Following?
The Waterfall Model methodology is also known as the Linear Sequential Life Cycle Model. This model follows a sequential order, so the project development team only moves to the next phase of development if the previous phase is completed successfully.
Some people state incorrectly that in the Waterfall model if an error is found in the later stages of a project, it will need to be scrapped and started again. Although this is not impossible, it would have to be a very significant issue to cause this level of impact. In reality, this will only occur if there is very poor control of a project. It is also not true that Waterfall projects never involve iterations or feedback — they certainly can but some project teams choose not to.
Advantages of the Waterfall model
It is one of the easiest models to manage. Because of its nature, each phase has specific deliverables and a review process.
It works well for smaller projects where the requirements are easily understood.
On well-managed projects, Waterfall may provide earlier confidence in what will finally be delivered.
Where there are many interfaces and dependencies outside of the basic product development, waterfall projects tend to have the tools to model and manage these.
Easily adaptable method for shifting teams.
This project management methodology is beneficial to organize and manage dependencies.
Disadvantages of the Waterfall model
If the requirements are not clear at the beginning, it is a less effective method.
Communication can be a far higher risk — especially when there is limited early review of outputs and deliverables or when one-way methods of communication are used to convey requirements.
The testing process starts only once development is over. Hence, there is a high chance of bugs being found late in development, where they are expensive to fix.
Waterfall methodology is a sequential design process in which software development is divided into distinct phases. It is a structured software development methodology, so it can be quite rigid: there is no scope for changing the requirements once project development starts. All the project development phases, such as designing, development and testing, are completed once in the Waterfall model.
Agile methodology is a practice that helps continuous evaluation of development and testing in the product development process. In this model, development and testing activities are concurrent, unlike the Waterfall model. This process allows more communication between customers, developers, managers, testers, and overall the whole team.
Advantages of the Agile model
Agile encourages or requires frequent communication between developers and those who will ultimately accept and use the deliverable. This should pay major dividends when effective. For example, feedback can be incorporated into future iterations as increments are delivered and reviewed by users or a Product Owner or both. False assumptions made by developers can be recognized very early, reducing their impact. Agile gives us continual opportunities to learn via this feedback.
When projects are genuinely new they usually require creativity. Requirements can then emerge as understanding matures and grows.
The process is based entirely on incremental progress. Therefore, the client and the team know exactly what is complete and what is not, which reduces risk in the development process.
Collaboration is usually much higher with Agile. Although not guaranteed, this can result in more successful development environments, in terms of product quality (i.e. fit for purpose).
Agile teams tend to be highly motivated and self-organized, so they are likely to deliver better results from development projects.
Disadvantages of the Agile model
Agile is very intensive for both developers and users. There are factors that may prevent this intensity, for example when developers work on multiple projects at one time.
In Agile there can be a great reluctance (by some) to adopt or accept deadlines. Projects don’t exist in isolation so when this happens it can be a real issue. Agile methods typically only address product development and large-scale projects can be made up of many other elements.
Agile itself is not a Product Management framework; it is a set of principles relating to product development, mainly producing software, and it is not a "methodology". We say "mainly software" as there are few types of projects outside of the software domain where the deliverable can truly be produced and accepted sequentially and incrementally or where requirements can evolve within the development phase. A phased delivery of any project does not mean it is Agile. It is simply a "phased delivery".
In a nutshell, comparing the pros and cons of both: Agile methodology follows an iterative development approach, so planning, development, prototyping and other software development phases may appear more than once. Waterfall methodology, by contrast, is a sequential design process and a structured software development methodology, so it can be quite rigid, as there is no scope for changing the requirements once project development starts.
The invention provides automatic liquid crystal module disassembling equipment, used for separating an LCM and a CTP that are bonded to each other in a liquid crystal module, and comprising a heating device and a disassembling device. The heating device heats the liquid crystal module so that the hot melt adhesive between the LCM and the CTP softens, which eases the subsequent disassembling process and improves disassembling efficiency. The disassembling device comprises a fixing mechanism for fixing the LCM and a disassembling mechanism for grabbing and stripping the CTP: the LCM is first fixed by the fixing mechanism, and the disassembling mechanism then drives the CTP to separate from the LCM. In use, the liquid crystal module is first heated by the heating device to soften the hot melt adhesive between the LCM and the CTP, improving disassembling efficiency. Because the disassembling mechanism drives the CTP as it is stripped from the LCM, the equipment improves the disassembling efficiency of the liquid crystal module and prevents damage to both the CTP and the LCM.
About the dog family (Canidae)
Familiar species such as the wolf, red fox and Arctic fox, as well as lesser-known species such as the raccoon dog and golden jackal, are animals living in Norway that belong to the dog family.
The dog family
Latin: Canidae.
Number of species living wild in Norway: 5.
Number of species living wild in the world: 37.
Canids
Canids, members of the dog family, are usually adaptable and effective hunters. The red fox is the most common canid in Norway today, although both Arctic foxes and wolves used to exist in much larger numbers than they do today. A fourth canid is the raccoon dog, which is blacklisted and unwanted in Norway. The fifth canid is the newly registered golden jackal.
Wolf
The wolf is the largest member of the dog family that you can encounter in the Norwegian wild. It is very rare to encounter a wolf in the wild, and you can find out more about what to do if you do meet one on the page about wolves.
Red fox
The red fox is the most numerous member of the dog family, and many of us have come across red foxes on forest walks. The red fox mainly eats small rodents, eggs and small birds.
Arctic fox
There are not many Arctic foxes left in Norway, but numerous measures have been put in place to try to save the remaining population.
Raccoon dog
Not much is heard about the raccoon dog, which is perhaps not surprising. There is some uncertainty about how many there are here in Norway, and the species is regarded as undesirable. Despite its name, it is neither a raccoon nor a dog, but it is a member of the dog family.
Golden jackal
The golden jackal is an animal that very few Norwegians know of, which is natural, since the first observation of the species was only officially confirmed as late as February 2021!
One of the first things most breeders learn about genetics is the difference between dominant and recessive alleles. Most people understand that dominant alleles are expressed whether there are one or two copies at a locus, whereas recessive alleles are only expressed if there are two copies of that allele. This clear difference between dominant and recessive expression is certainly true for some genes, but in reality the situation can be much more complex. Perhaps you have heard terms such as "incomplete dominant", "co-dominant", or "semi-dominant". Are these different types of alleles from the basic dominant and recessive? Or is there some other condition that makes these alleles function differently? There is a lot of confusion about this, and often people resort to one of the "incomplete", "co-", or "semi-" terms to describe anything that doesn't behave like a simple dominant or recessive. These terms refer to differences in phenotypes, but what's going on at the level of the gene?
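The distinction is easiest to see by explicitly mapping genotypes onto phenotypes in a simple cross. Below is a minimal Python sketch; the locus and coat colors are hypothetical, chosen only for illustration. Under complete dominance an Aa x Aa cross gives the familiar 3:1 phenotype ratio, while under incomplete dominance the heterozygote has its own intermediate phenotype, giving 1:2:1.

```python
from collections import Counter
from itertools import product

def cross(parent1, parent2):
    """All equally likely offspring genotypes from two parent genotypes."""
    return [''.join(sorted(a + b)) for a, b in product(parent1, parent2)]

# One hypothetical coat-color locus, interpreted two ways:
complete   = {'AA': 'black', 'Aa': 'black', 'aa': 'brown'}  # A fully dominant
incomplete = {'AA': 'black', 'Aa': 'grey',  'aa': 'brown'}  # heterozygote intermediate

offspring = cross('Aa', 'Aa')  # carrier x carrier
print(Counter(complete[g] for g in offspring))    # 3 black : 1 brown
print(Counter(incomplete[g] for g in offspring))  # 1 black : 2 grey : 1 brown
```

Note that the offspring genotypes are identical in both cases; only the genotype-to-phenotype mapping differs, which is exactly the point about these being descriptors of phenotype rather than different kinds of alleles.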
Here is a nice little video that explains the complexity of these differences in the expression of genes as phenotypes, from the "Useful Genetics" online course taught by Dr Rosemary Redfield (Univ. British Columbia). She gives a nice explanation of the terms, and explains why the descriptions of gene expression such as "semi" or incomplete" incorrectly imply that these reflect functional differences among genes, when in fact they are only descriptors of phenotype that result from the interactions and relationships among genes.
Spend 15 minutes with this video and clear the fog about dominant and recessive. Enjoy!
Useful Genetics: Dr Rosie Redfield, The University of British Columbia.
You can view the entire course "Useful Genetics" on the ICB course site.
Check out ICB's online courses and our Breeding for the Future Facebook group!
The fiction of "knowing your lines"
Today I found yet another study that identifies a recessive mutation that has popped up out of nowhere to ruin the lives of some Labrador Retriever puppies with congenital myasthenic syndrome. It occurred in a pair of littermates from parents with two recent common ancestors. The investigators also examined relatives of these dogs and found 16 of 58 carried the mutation - that's almost 30% - while 288 unrelated Labradors carried the normal gene.
The authors say that "Linebreeding in this Labrador Retriever family makes it likely that the sire and dam inherited the mutation from a common ancestor and that the affected puppies are homozygous for a chromosome segment transmitted IBD" (identical by descent)... "Linebreeding practices expedite the appearance of recessive diseases in purebred populations."
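Those carrier counts invite a quick back-of-the-envelope calculation (my own rough sketch, not a figure from the paper). If we naively treat the observed carrier fraction as coming from a randomly mating population (it did not, since these dogs were relatives, so the breed-wide frequency is surely lower), Hardy-Weinberg proportions let us back out an implied allele frequency and the expected fraction of affected puppies:

```python
import math

carriers, sampled = 16, 58        # relatives tested in the study
c = carriers / sampled            # observed carrier fraction, ~0.276

# Under Hardy-Weinberg, carrier frequency = 2q(1 - q); solve for the
# allele frequency q (taking the smaller root of the quadratic).
q = (1 - math.sqrt(1 - 2 * c)) / 2

print(f"implied allele frequency q:       {q:.3f}")      # ~0.165
print(f"affected pups, random mating:     {q**2:.3f}")   # ~0.027
print(f"affected pups, carrier x carrier: {0.25:.2f}")   # 1 in 4
```

Even a rough number like this makes the point: a carrier-by-carrier mating turns a few-percent background risk into a one-in-four risk per puppy.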
Let me make this crystal clear.
This particular family of dogs carries a mutation that causes a serious genetic disorder; in this case, the mechanism that allows nerves to talk to muscles is broken in dogs that get two copies of the defective gene. Dogs with only one copy apparently are fine. Two related dogs were bred, both, as it turns out, were carriers, and they produced puppies with problems. However well this breeder might "know their lines", there was no way to know that this gene was lurking in this line of dogs.
Coefficient of inbreeding is the probability of inheriting two copies of the same gene from an ancestor on both sides of a pedigree. The higher the COI, the greater the risk of having something like this happen. ALL dogs have mutations that the breeder has no way of knowing about. If this was a "responsible" breeder, they would have done the available DNA tests for the disorders known in Labradors. They could be certain of not producing puppies with any of those. But then they did a close breeding, and whoopsie, ran into this.
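The COI can be computed directly from a pedigree: it is simply Wright's kinship coefficient of the two parents, which has a compact recursive definition. Here is a minimal sketch; the pedigree, the animal names, and the omission of memoization are all simplifications for illustration.

```python
def kinship(i, j, ped, order):
    """Wright's kinship coefficient f(i, j).

    ped maps name -> (sire, dam); founders have (None, None).
    order maps name -> position, with parents listed before offspring.
    The COI of an animal is the kinship of its two parents.
    """
    if i is None or j is None:
        return 0.0
    if i == j:
        sire, dam = ped[i]
        return 0.5 * (1.0 + kinship(sire, dam, ped, order))
    if order[i] < order[j]:   # always recurse through the later-born animal
        i, j = j, i
    sire, dam = ped[i]
    return 0.5 * (kinship(sire, j, ped, order) + kinship(dam, j, ped, order))

# Hypothetical pedigree: Z is the product of a full-sibling mating.
ped = {
    'S': (None, None), 'D': (None, None),  # unrelated founders
    'X': ('S', 'D'),   'Y': ('S', 'D'),    # full siblings
    'Z': ('X', 'Y'),
}
order = {name: k for k, name in enumerate(ped)}
print(f"COI of Z: {kinship(*ped['Z'], ped, order):.3f}")  # 0.250
```

A COI of 0.25 means that at every locus, Z has a one-in-four chance of carrying two copies of the same ancestral allele, which is exactly the opening a hidden recessive needs.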
It's the mantra of the experienced breeder: "Know your lines." That is certainly good advice for the things that CAN be known, but there seems to be little appreciation for the fact that there are many things that you CANNOT know. The only way to manage the unknowns is by breeding in a way that manages the risk of finding out the hard way what those silent mutations are. This is what I argued in an earlier essay about why DNA testing is not going to solve the genetic problems in purebred dogs if breeders DNA test then inbreed. Just the other day I was reading a long discussion among breeders about inbreeding/linebreeding, and several breeders were swearing that they "knew their lines" and that's why they can linebreed without the genetic problems that other (less experienced?) breeders are complaining about.
I don't know if these people truly believe the fiction that there are no problems in their lines, or perhaps the more serious myth that they can inbreed yet avoid genetic problems because of their great skill as breeders (and "knowing" those lines). What will it take to convince breeders that it's only a matter of time - the landmines are out there, and one of these days what appears to be a clear path is going to reveal them. Guaranteed.
I'm sick of reading these papers about yet another "new" genetic disease in dogs caused by a recessive mutation that has become a problem because of inbreeding/linebreeding. I'm sick of reading posts from breeders who proclaim that skill, experience, and "knowing my lines" allows them to breed closely related dogs without consequence, when in fact they are intimately involved in a dangerous game of roulette in which, sooner or later, the loser will be a puppy and the family with a broken heart that owns it.
This is an exercise straight out of one of my courses, Basic Population Genetics for Dog Breeders, but it is so important for breeders to understand that I'm making it available here. Please take the time to work your way through it. There are some simple simulations you do with colored "alleles", then some computer simulations where you can do some experiments that will allow you to explore the factors that can affect your breed's gene pool in ways you wouldn't expect. These simple exercises will change the way you view the genetic stability of your breed - I promise. This might be a fun thing to do the next time you get together with a group of fellow breeders.
How big is your breed - not in total numbers of dogs, but in the size of the breeding population?
What fraction of the dogs in your breed are allowed to breed?
Does your breed have restrictions in the standard that remove dogs from the breeding population?
Do you have breeding restrictions on the puppies you place?
Is your breed relatively unpopular or dominated by just a few kennels?
How vulnerable is your breed to genetic drift?
You probably know that for every gene location - a locus - an animal has two alleles, one that came from the sire and one from the dam. Which one of the two alleles gets passed on to each offspring is random, so the pair of alleles that the offspring inherits for each gene is determined only by chance.
Take out a coin. Every coin has two sides - heads and tails - and for this reason when we flip a coin we are talking about binomial probability. If it's a fair coin, there is a 50:50 chance of getting heads every time you flip it. You might get 5 heads in a row, but nevertheless at the next toss the chance of getting heads is one out of two.
If you only flip it once and get heads, then the outcome of the trial is 100% heads. If you flip it 5 times and get 4 heads and 1 tails, then the outcome of the trial is 80% heads. If you flip it 100 times, or 1000 times, the probability of getting an extreme result goes down, and it should tend towards 50:50. This is the basis of the "bell curve" - most of the results will be close to 50:50 and deviations from this will become rarer as they become more extreme.
What does this have to do with dog breeding? Remember, which of the two possible alleles an offspring inherits from each parent is determined randomly - but when there are only a small number of "trials" (puppies in this case), the results can be extreme just by chance. This is relevant to dogs because the typical size of a litter is small. Because litters are statistically small samples, extreme results from binomial sampling can and do occur.
It's easy to demonstrate what we're talking about here. I'm sure you understand the example of the coin toss (or if you didn't, get out a coin and do some tossing). Let's do the same sort of thing, but now using beans to represent the alleles a dog could inherit at a particular locus.
This is where you get to play with the beans! Get out a small bowl and some cups. Start with two types (colors) of beans, and count out 50 of each into the bowl. Mix them all up with your hand. Also get a piece of paper and make three columns on it labeled LL, LD, and DD - you'll use these to tally your draws. Now we're going to simulate inheritance.
1) If you reach into the bowl and randomly (without peeking!) select one bean, what is the probability it will be a "light" bean?
2) If you put that bean back in the bowl and mix it around, what is the probability of selecting another light bean?
The probability for each independent draw is 50% (0.5). So, what is the probability of drawing two light beans in a row (LL)? It's the product of their independent probabilities - (0.5) x (0.5) = 0.25, or 25%.
If we know this for the light beans, it must be equally true of the dark beans (or whatever color you're using) (DD).
Now, what is the probability of choosing a light bean first and a dark bean second (LD)? It's still (0.5) x (0.5) = 0.25, or 25%.
And what about drawing the dark bean first and the light bean second (DL)? Again, it's (0.5) x (0.5) = 0.25, so the probability of getting DL is also 25%.
Go back to your piece of paper with the columns, and above LL write 25%, and above DD write 25%. What about the middle column? If we consider all the beans of a particular color to be identical, then LD is the same as DL. So the probability of getting two different colors is 25% + 25% = 50%, and you can write that above the middle column.
Okay, back to our beans. Mix them up, then reach into the bowl and pull out TWO beans at once, and put a tick mark in the appropriate column (if two light beans, count one for "LL"). Put those beans back (so there are always 100 beans in the bowl in a 50:50 ratio), draw another pair, and log the result. Do this a total 20 times. Now tally up the number of occurrences of each combination (e.g., 8 LL, 4 LD, 8 DD). Then divide each of these numbers by the total number of draws (20) to determine the frequency of each outcome (e.g., 8/20 = 0.4, or 40%). How close did this come to the statistically "expected" outcome?
Draw a line under those data and repeat this exercise 2 more times, calculating the fractional outcomes as before.
The bottom line here is that when you are working with a small sample (20), you are more likely to get frequencies that are different than expected. As the number of samples increases, the proportions should get closer and closer to the predictions.
Get out a cup, and put in it 100 beans in the proportions you got from your first trial above. (Just multiply the numbers in each column by 5.) If by some fluke you got the exact expected proportions (25% LL + 50% LD + 25% DD), pick the second trial (or third). We want to do a new simulation with a population where the frequencies of D and L are not equal. Do the same thing you did before -‐ record the results of 20 draws of a pair of beans, this time just once. Compute the proportions, then mix up another bowl of 100 beans in these new proportions. Do one more set of 20 draws and record your data.
Let's look at your data. You know you started with L and D beans in equal proportion at the very beginning. You then used the data from the first round of draws to create the next generation of our bean population, which by chance alone has a different proportion of L and D. And you repeated this again, creating another new generation that again probably had a different ratio of L:D. With every subsequent generation, the frequency of alleles in the population will vary, just by chance.
This change in the frequency of alleles in our population with each generation is called genetic drift. If you continued doing these trials, say 100 or 1000 of them, you would see that the effect of genetic drift on the genetics of a population can be profound. But instead of playing with beans for a few more hours, we can do the same kind of simulation very quickly using a computer program that will create a virtual population of alleles, then randomly select, replace, and select again, in the same way you just have, for as many generations as you want. Using this, we can do a bunch of experiments very quickly.
You can put the beans away for now, and go to the Red Lynx Population Genetics Simulator (http://scit.us/redlynx/).
To see how it works, run some simulations using the default settings - 2000 generations, population size of 800, and initial frequency of each of our alleles (A1 and A2) of 50%.
Each time you click on "Run Simulation", it will do the same thing you just did for beans - starting with a 50:50 mix of alleles, it will create 800 new individuals with alleles drawn at random. Then starting over again with alleles at their new frequencies, it will repeat again for 2000 generations. It will draw a line for each run showing how the frequency of the A1 allele changed over time. The total number of alleles in the population stays the same over time, so if A1 goes up, A2 must go down. If A1 goes all the way to 100%, that means A2 has - just by chance - been lost from the population. Likewise, if A1 goes to zero, then all of the alleles in the population are A2. Each time you click on run, it does another simulation of 2000 generations and plots a new line.
Okay, let's do some experiments. From the bean counting experiment we did above, we decided that if a population is very large, the proportions of alleles drawn randomly should be close to what is predicted. When the population is small, just by chance you can get a result that is extreme.
1) Run 10 simulations with the default settings (population size of 800), except change the number of generations to 200, which is more reasonable for purebred dogs. How many times was the A1 allele lost from the population (its frequency went to zero)? How many times did the A1 allele go to fixation (100% A1) - i.e., A2 was lost, and all individuals were therefore homozygous for A1?
2) Clear the graph, change the population size to 400, run 10 simulations, and note as above the number of times A1 was eliminated or became fixed in the population.
3) Do the same thing with population sizes of 100, 50, and 25. You should be getting the picture. What is the effect of population size on the genetic stability of our virtual population?
Now, let's simulate something more interesting. Let's pretend A1 is the gene for PRA or some other genetic disorder, and we'll make it rare in the population - say 10%. Change initial frequency to 10%, put the population size back to 800, and run 10 trials, followed by population sizes of 400, 100, 50, and 25, as before.
As before, you will notice that population size has a large influence on the stability of the allele in the population, with the results getting more and more unpredictable as the population gets smaller. In these simulations, you probably found that many times your rare PRA allele was completely eliminated from the population, but occasionally (and more frequently at small population sizes), the frequency of this allele increased, perhaps substantially.
What is the size of the reproductive population in your breed? Think about some genetic disease that occurs in your breed that is caused by an autosomal recessive allele - e.g., PRA or von Willebrand disease. This disease gene could start out being rare in your breed, but in just a few generations - by chance alone - it could be lost entirely, or it could become very common and even fixed in the breed. Of course, as this allele becomes more common, the frequency of affected animals will go up (because the number of homozygous offspring will increase), and suddenly a genetic disorder shows up in your breed. This isn't a spontaneous mutation - it is an allele that has been there all along, and just by chance has become more common by genetic drift.
The frequencies of all alleles can vary each generation because of genetic drift, not just disease alleles. Just by chance, dogs might get larger, or bolder, or a rare color could become more common, or they might become more sensitive to a particular disease or have more allergies. The point to remember is these changes are occurring because of changes in allele frequencies of the population.
You now might be worrying about your own breed, wondering how large your breeding population is, and what nasty gene might be lurking in your gene pool waiting for the chance - just by chance - to become a serious problem. This is definitely something breeders should be thinking about. In most breeds, only a small percentage of puppies born each year are bred, and those are not selected randomly from each litter. Under these conditions, as you have seen, some dramatic shifts in allele frequencies can be occurring by chance without breeders even being aware. Population size is far more influential on the genetic status of a breed than most breeders realize.
In the bottom graph, you now see each of the possible genotypes plotted for the A1 and A2 alleles. The lines for the homozygous combinations (A1A1 and A2A2) should be similar to the lines in the top graph, but the third line is for the heterozygous combination (A1A2). You can see that the heterozygous condition is lost from the population if one of the alleles goes extinct (of course). With the loss of one of the alleles, you’ve really lost 2 possible genotypes from the population, as well as the phenotypes those combinations produced. Everybody in the population is now fixed for the remaining allele in the homozygous state, and if this happens to be a gene that has a detrimental effect on the animal there’s nothing breeders can do about it. You can’t breed away from the problem because there is no alternative allele that you can select for. Obviously, this is bad.
You can play around with population size as you did in the previous exercise and see the effect it has on genotype, which is really what you are working with as a breeder because genotype determines phenotype.
Biologists use the word “fitness” to refer to the likelihood that an animal will pass on its genes to the next generation. Animals that die before they reproduce have a fitness of zero. Animals that have more offspring have a greater fitness than ones with fewer offspring. In our population simulation, we can observe what happens when a particular genotype has a negative effect on fitness.
Leaving the settings as they were when you started the first exercise, we will now assign a detrimental effect to the A2A2 homozygous condition. In the boxes at the top under “Fitness”, leave 1’s in the A1A1 and A1A2 boxes (1 is no detrimental effect, 0 is lethal), and change A2A2 to 0.9 – a reduction in fitness of 10%. Run the simulation with these settings.
You will see that there is now a solid black line in the upper graph. This is the “theoretical” curve, and you can compare it with the behavior of the A1 allele. You should see that a reduction in fitness of only 10% has a pretty significant effect on the frequency of that allele – it is eliminated from the population more quickly than before, and of course the heterozygous combination is eliminated as well. At the upper left, it will tell you the “mean generations to fixation” as a number. You can compare how this number changes with duplicate runs under the same conditions, and as you’ve seen before, the more trials you do the closer the average response will get to the theoretical one.
Now do some experiments by reducing the fitness of the homozygous A2A2 in 10% steps (e.g., 0.9, 0.8, 0.7, 0.6, etc.), running 5 trials of each and recording the number of generations to fixation. As you would expect, as the fitness penalty for the homozygous A2A2 increases, both the homozygous and heterozygous combinations are lost more quickly from the population. But it takes just a tiny penalty to have an effect. Inbreeding causes inbreeding depression, which is essentially a reduction in fitness. In most populations of animals, the effects of inbreeding depression begin to appear at inbreeding coefficients above 5%. What you have just seen is that even a very small reduction in fitness can profoundly affect the allele frequencies in a population; it just takes a bit longer on average than with more severe penalties.
A bottleneck is a drastic reduction in the size of the population. Most purebred dog breeds have a bottleneck somewhere in their past. Many breeds were drastically reduced during wars, distemper epidemics, or when they were no longer needed as a working breed. Other breeds were affected by artificial bottlenecks produced by the extreme dominance of a few very popular dogs in the breeding population, as happened in Standard Poodles when the Wycliffe kennel dominated the breed in the 1950’s. In some cases, a breed was reduced to only a few dogs – Norwegian Lundehunds are all descendants of 6 dogs that were all that remained of the breed after a series of unfortunate events – and 3 of these dogs were siblings and 5 shared a grandmother.
A drastic reduction in population size can substantially alter the allele frequencies in the subsequent population. We can simulate the effects of bottlenecks of various sizes.
Reset the fitness values to 1, put the population size at 500, and set the initial frequency of A1 back to 50%, then run a few simulations at these settings so you can get an idea of the pattern this produces.
Now click the “Bottleneck!” box. We will start our bottleneck at generation 50, end it at 55, and reduce the population during the bottleneck from 500 to 100. Do 5 runs and record which allele (A1 or A2) was more frequent at the end of the run and whether either allele was fixed or lost (you will see the changes in genotypes in the bottom graph).
Reduce the population bottleneck to 50 (10% of the original population size), and run again 5 times, recording which allele was more frequent at the end and whether either allele was fixed or lost. Reduce the bottleneck again to 10 and repeat, then to 5 and repeat.
Repeat these trials with the initial frequency of A1 at 0.8 (so A2 is 0.2) instead of 50%, reducing the size of the bottleneck in steps as before.
You will see that a bottleneck changes the genetic trajectory of the population, and the more severe the bottleneck, the less predictable are the consequences.
Think about this. Registration numbers for many breeds are declining. Popular sires are common. Breeders neuter most of their puppies or impose breeding restrictions to “protect their line.” All of these things have genetic consequences for the breed that will persist for dozens – or hundreds – of generations.
Just for fun, you can do some more runs while imposing a fitness penalty for homozygous A2A2 as you did above (or improvise – apply a fitness penalty to the heterozygote and see what happens).
The founder effect is really just a special case of a bottleneck. Some subset of a larger population is separated off to form a new population. (In a bottleneck, the rest of the original population usually died.) The number of founder dogs is the size of the bottleneck in this case. The fewer the number of animals used to start the new population, the less likely it is that the gene pool of the subset is the same as the original population. Many breeds were founded with just a few dogs, and many went on to suffer through a bottleneck or two as well. A breed starts out at founding with a population of animals that are declared to be “purebred” whatever-they-are. Generations later, the dogs might still look like the original founders because breeders are selecting for type, but the genes that aren’t under selection are subject to the effects of genetic drift and bottlenecks, with completely unpredictable results.
It doesn’t matter how many years of experience you’ve had as a breeder. It doesn’t matter how carefully you select the dogs in your breeding program and choose among the offspring to continue on with. Underneath the traits you can see, all sorts of things can be (and probably are) happening to allele frequencies that matter for other things, with effects that will last for generations. And for all of these effects we’ve looked at, the size of the population has a most profound effect.
Who's tending your genetic pantry?
Let's pretend you are one of a group of master chefs, and you each whip up your own special meals using ingredients from a shared pantry. In the pantry there is a "replicator" gadget, and each time someone uses an ingredient they use the replicator to replace it in the pantry. If you have a well-stocked pantry and everybody remembers to use the replicator when they use ingredients, things run very smoothly.
But if somebody uses up the last of the sugar making a huge cake and forgets to run the replicator, the next chef that comes to the pantry isn't going to be making desserts that use sugar. In fact, dessert is going to be an unhappy event for everybody. If people are occasionally forgetful - or worse, lazy - the lapses in replication will add up over time. And with fewer ingredients available to you there is less variety in the menu, or you have to scramble to substitute some less suitable ingredient. No doubt about it, your cooking suffers; you can't make what you really want if you don't have the best ingredients.
A shared pantry doesn't work unless everybody shares in the responsibility of making sure that it's well managed. A few careless chefs will make things more difficult for everybody.
Managing the genetics of a breed is like managing the pantry of a collective of chefs. Each breeder is working more or less independently, mixing ingredients to create the dog of their vision, and the quality of the gene pool - the genetic pantry - affects everyone in the breed. Loss of a single gene for nitrogen metabolism from the gene pool affected the entire Dalmatian breed. The ability to mix genes in new combinations to improve a breed depends on having some variety of genes to choose from. If there is no variation, there is nothing to select from. Having a very narrow gene pool in a breed is like facing an impoverished pantry; there's only so much you can do with salt, cranberries, and barbecue sauce, and trying to come up with something incredible for dinner night after night is going to be tough. Managing the pantry is key if all the chefs will have the best possible ingredients to use in creating their culinary vision. A well-managed gene pool benefits both the breeders and the breed.
What does population genetics have to do with breeding dogs? Population genetics is about management of the gene pool, protecting the assets in the breed's genetic pantry. Who is keeping an eye on the pantry of your breed?
Read more about why understanding population genetics is important for dog breeders here.
An interesting study was just published about the genetics of behavior in the Belgian Malinois (Cao et al 2014). This is a working breed used in some of the same service environments as the German Shepherd Dog (e.g., military, security, etc), so behavior is important to the breed's function. Malinois that perform well, with good drive and initiative for work, tend to exhibit a circling behavior when in confined spaces, which is a form of obsessive-compulsive behavior. Dogs that do not display the circling behavior, and those that have very high levels of circling behavior, don't perform as well.
It turns out that a gene (Cadherin 2, CDH2; or genes in the same genomic block) that has been linked to obsessive-compulsive behavior in both Dobermans and humans might also be involved in the manifestation of these degrees of working and circling behavior in Malinois, from non-existent to extreme. Maintaining the most useful, moderate behavior in the Belgian Malinois is an example of something called "balancing selection", in which the heterozygous condition (e.g., Aa) is advantageous over either homozygous condition (AA or aa). (This is also referred to as "overdominance".) This means that breeding two dogs that are great working dogs and heterozygous won't produce better dogs, because some of the offspring will lack the drive and initiative to be good working dogs (AA), while others will have a double dose of the mutated allele and be too high-strung to be useful. Because the best dogs are heterozygous, selection tends to maintain both alleles in the population, striking a balance between the advantageous (good worker) and disadvantageous (extreme circling) effects.
You might be familiar with other examples of overdominance in dogs. For example, in the Whippet, dogs with one copy of a mutated allele of the myostatin gene (which is involved in muscle function) are significantly faster than dogs with the normal gene, but dogs with two copies of the mutation are over-muscled (Mosher et al 2007). Once again, the heterozygous condition is superior to either of the homozygous options.
These are three examples where breeding "best-to-best" fails to produce "even better" because of a failure to understand the underlying genetics. In fact, it can result in removing a dog from the gene pool for a genetic issue (e.g., a Malinois with extreme circling), when in fact breeding that dog to the appropriate mate (e.g., a homozygous dog with low drive) would result in heterozygous offspring that could have the perfect blend of motivation and self-control. Likewise, in Rhodesian Ridgebacks the ridge is produced by a dominant allele, but homozygous ridged dogs are predisposed to dermoid sinus (Hillbertz et al 2007); using Ridgebacks without ridges will produce some offspring without ridges, but it also will not produce pups with dermoid sinus.
With so many breeds facing a growing list of genetic issues as a result of the continued loss of genetic diversity, it is especially imprudent to remove dogs from the gene pool that could be used to produce offspring with the desired genotype (that is, heterozygous for the gene of interest) without the collateral damage of pups with unacceptable phenotypes.
Hillbertz NHCS, M Isaksson, EK Karlsson, E Hellmen, et al 2007 Duplication of FGF3, FGF4, FGF19 and ORAOV1 causes hair ridge and predisposition to dermoid sinus in Ridgeback dogs. Nature Genetics 39(11): 1318-1320.
When Should You Spay or Neuter Your Puppy?
You probably heard about the recent study from a group at UC Davis vet school that found some detrimental effects of spaying and neutering in Golden Retrievers and Labradors, specifically in orthopedic disorders and cancer. This is very interesting in itself, but perhaps even more significant is that the effects on the two breeds were not the same. (I have summarized the results as graphs to make them easier to understand here.) In the US, where most pet dogs are spayed or neutered, the fact that there might be adverse health consequences is very troublesome.
Now the UC Davis group is hoping to leverage the huge database from their veterinary clinic to explore possible effects of spay/neuter in mixed breed dogs. And they're hoping to fund it with a crowd-sourced campaign through one of the largest platforms, IndieGogo.
Here's how it works: The research team has prepared a little proposal outlining what they plan to do, who is involved, and how much they need to fund it. Anybody can then contribute to the funding campaign, with various perks available for donations of certain sizes. You can follow the success of the campaign on the project website and encourage your friends to support it as well.
I think this is a great idea, and the amount of money they're trying to raise is modest ($9,000). AND, they are publishing in an open access journal, so the information will be freely available to anybody with an interest.
IndieGogo: When should you spay or neuter your puppy?
As the list of known genetic disorders in dogs continues to get longer, it's tempting to think that most of the nasty mutations lurking in the gene pool of a breed have already been found. The number of possible mutant genes must be finite, and surely we must be close to getting things under control. Or not?
If you haven't taken (and survived) a biochemistry course, you probably know very little about the inner workings of the cell and all of the various chemical processes that must be executed perfectly. If you ask how all this works, the short answer is that it's complicated. Really, really complicated.
OMIA indicates that PDP1 "controls the rate of tricarboxylic acid entry into the citric acid cycle". The citric acid cycle is the process in the mitochondria that produces the molecule of cellular energy called ATP. A dog without ATP goes nowhere.
Let's have a look at what PDP1 does. Below is a schematic of metabolic pathways in the cell that looks like a map of the New York City subway system; don't let it confuse you. The thing to notice is that it's complicated (!), but we're going to look at just a tiny piece.
In the highlighted area below, you can see the molecule pyruvate on the left, and its conversion into a molecule called acetyl coenzyme A (Acetyl CoA) by pyruvate dehydrogenase.
PDP1 (pyruvate dehydrogenase phosphatase 1) is what switches pyruvate dehydrogenase back on after the cell has inactivated it. So if there's a mutation in PDP1, pyruvate dehydrogenase won't work properly and pyruvate doesn't get converted into Acetyl-CoA. Then the question becomes: how important is Acetyl-CoA?
One thing that's clear is that Acetyl-CoA is a very busy molecule and is a substrate for many biochemical reactions that are essential to cell function. Shut down the enzyme that creates Acetyl-CoA, and you're in serious trouble, because the cells can't make enough ATP to support the energy cost of exercise. Just as the dog really gets going, the well runs dry of ATP and the dog collapses.
Remember from the information in OMIA above - this is a single base change in the PDP1 molecule, and the mutation is expressed as an autosomal recessive. So if one of the alleles of the gene is a functional copy, a dog is not affected; two copies though, and you have exercise-induced collapse. Of course, a mutation like this would be strongly selected against, because it produces profound dysfunction if homozygous, so we would not expect it to be common in a population of animals. But in purebred dogs, the breeding of relatives ups the odds of producing animals with a double dose of the mutated allele, and that's why we're seeing it in various breeds of purebred dogs.
You don't have to squint hard at this chart of metabolic pathways to be awed by the number and complexity of chemical steps that have to occur in just the right sequence and at just the right rate to produce the energy a dog needs not just for exercise but for life. Everywhere you see that blue enzyme marker is another protein that can be broken by just the tiniest little error when it is produced. And these are just the pathways for energy metabolism in a cell. There's an equally tangled chart on the Roche website for cellular and molecular processes.
When I took biochemistry, I could envision all of these processes in my head, following a molecule through steps that remove a water molecule, add a hydroxyl group, clip off a hydrogen, and spin off those ATPs that are the currency of life in our bodies. The complexity fascinated me, and it does even more so today as I learn about the many little glitches that can result in a dog with a disease. And looking at that chart, it's humbling to think that there must be many, many ways the whole process can be broken by just the right mutation in a critical step. I'm sure we've only just begun to scratch the surface.
Suicide Prevention Awareness for City businesses (Thursday 2 December, 9-11am)
Help us to prevent suicides in the City, by joining us for this short and interactive session.
This short, interactive and practical City-specific Suicide Prevention Awareness session is delivered by Samaritans volunteers, in collaboration with the City Corporation’s Public Health team and the City of London Police. It is aimed at anyone working in a City organisation – including those with responsibility for HR, facilities or security. It aims to provide the tools to start difficult conversations in a careful and sensitive way.
Find out more about the session on the Eventbrite page.
Attendees at previous sessions said the following about the training:
- “Excellent session and very informative”
- “This is a difficult subject that was communicated with compassion and sensitivity to provide a valuable learning experience”
- “Even though I have experience in this field, the training has increased and added to my understanding”
- “In the organisation I work in we already deliver ‘managing mental health’ workshops and this workshop was a lovely refresher and CPD”
- “It was great to learn other ways Samaritans have to communicate and how much is being done around the city in general”
- “Makes me see how important it is to step in if you see someone who looks like they’re suicidal”
- “Really useful session – covers some difficult issues really carefully but also succinctly”
Speakers:
Aug 13, 2012, 02:23 ET
ST. PETERSBURG, Fla., Aug. 13, 2012 /PRNewswire-USNewswire/ -- The Poynter Institute and craigconnects, the Web-based initiative created by Craig Newmark, have teamed up with the Paley Center for Media to host the Poynter Digital Journalism Ethics Symposium. To be held Tuesday, October 23, at the Paley Center in New York City, the daylong symposium will give a select group of authors the opportunity to test the ideas they are contributing to Poynter's new book on journalism ethics in the digital age.
(Logo: http://photos.prnewswire.com/prnh/20110323/MM70721LOGO)
"The Paley Center for Media has a record of hosting forward-thinking people in exciting discussions," said Poynter President Karen Dunlap. "It is the right place for Poynter to engage a conversation on the ethics of digital media."
Thus far, the authors include: Clay Shirky, writer, consultant and professor; Vivian Schiller, Chief Digital Officer, NBC News; Ann Friedman, writer and editor; Eric Deggans, Media Critic, Tampa Bay Times; Emily Bell, Director of the Tow Center for Digital Journalism, Columbia University; Tom Huang, Sunday and Enterprise Editor, The Dallas Morning News; Monica Guzman, Columnist, The Seattle Times; Steve Myers, formerly of Poynter and now Deputy Managing Editor at The (New Orleans) Lens; and danah boyd (preferred spelling), professor and Senior Researcher, Microsoft Research. The book will be co-edited by Kelly McBride, Poynter's Senior Ethics Faculty, and Tom Rosenstiel, founder and director of Pew Research Center's Project for Excellence in Journalism.
Based on the work of the contributors, McBride and Rosenstiel will propose a new set of guiding principles for journalists and other content creators concerned with democratic standards of truth in the digital age.
"This event is exciting, because right now there's not a lot of agreement about what the standards for truth and accuracy should be," said McBride. "I'm not delusional, but I think we can make great progress toward setting a baseline for ethical standards."
Craig Newmark, who contributed a $25,000 grant through his craigconnects initiative to host the symposium authors, said he is hoping the forum will deal with some of the tougher questions.
"I've been saying that the 'press is the immune system of democracy' for a couple of years now," Newmark said. "My interest is motivated by conversations with people in journalism; they'd like to restore trustworthy behavior to news media, not in just a few pockets of it."
"In these challenging times it's essential to bring the best minds in our community together to chart the way forward to promote a robust Fourth Estate," said J. Max Robins Vice President and Executive Director of Industry Programs of The Paley Center for Media. "This is core to the mission of The Paley Center and we are thrilled to be partnering with Poynter and craigconnects."
Because seating is limited, attendance at the symposium will be by invitation. The Paley Center will stream the events live and provide a video archive. For information about accessing the event online, please visit http://www.paleycenter.org/mc-dialogue-poynter. The live stream will also be available through Poynter's website, http://www.poynter.org.
About The Poynter Institute
Founded in 1975 in St. Petersburg, Fla., The Poynter Institute is one of the nation's top schools for professional journalists and news media leaders, as well as future journalists and journalism teachers. Poynter offers training throughout the year in the areas of online and multimedia, leadership and management, reporting, writing and editing, TV and radio, ethics and diversity, journalism education and visual journalism. Poynter's News University (www.newsu.org) offers journalism training to the public through more than 200 interactive modules and other forms of e-learning. It has more than 200,000 registered users in 225 countries. Poynter's website (www.poynter.org) is the dominant provider of journalism news, with a focus on business analysis and the opportunities and implications of technology.
About craigconnects
Launched in March 2011, craigconnects is Craig Newmark's personal, Web-based initiative to support philanthropy, public service, and organizations getting results in both areas. The initiative spotlights individuals, organizations and agencies working for veterans and military families, open government, public diplomacy, back-to-basics journalism, consumer protection, election protection and voter registration, and technology for the common good. craigconnects is a fiscally sponsored project of Community Initiatives.
About Media Council at the Paley Center for Media
The Media Council brings together top executives and leading thinkers in the global media industry to discuss critical issues that will define the media and its role in society for generations to come. The Media Council's global reach and extraordinary convening authority enable it to host a broad spectrum of gatherings—including Roundtable Breakfasts, Boardroom Luncheons, Dialogues, Panel Discussions, and other special events—at its distinctive locations in New York City and Los Angeles. Serving as both catalyst and independent forum, it facilitates discussions on pressing topics that span all areas of the industry, including communications, technology, creative content, and journalism. For information about becoming a member of the Media Council, please contact Stephanie Kousoulas at 212.621.6732 or [email protected].
CONTACT:
Sub-Saharan Africa’s growth is expected to increase to 2.9 per cent in 2020, the World Bank said on Tuesday, 14th January, in its Global Economic Prospects report for the year. The bank cited enhanced investor confidence in some large economies, easing energy bottlenecks and a rise in oil production that has helped oil exporters recover. Robust growth among the region’s agricultural commodity exporters was also credited with a substantial share of this improvement. Even so, the forecast is weaker than previously expected, reflecting slower demand from key trading partners, lower commodity prices and adverse domestic developments in several countries.
Growth is expected to pick up to 0.9 per cent in South Africa. Growth is likely to creep up to 2.1 per cent in Nigeria, while Angola is projected to accelerate to 1.5 per cent. Growth in Kenya is expected to reach 6 per cent. “Growth is expected to be steady at 6.4 per cent in the West African Regional Monetary Union,” the World Bank said.
Competition in the markets for goods in Sub-Saharan Africa has been low relative to the rest of the world. Country-level data show that, in the global distribution of competition metrics, more than 70 per cent of countries in the region fall below the median. Company markups, measured directly from firm data, corroborate the macro-level findings and indicate that markups in Sub-Saharan African countries are higher on average than in other emerging and developing economies, especially in the services sectors. A study of the price levels of globally comparable products and services shows that, at a similar level of development, prices in the region are relatively higher than in other regions, which can be attributed at least in part to low product market competition.

Empirical analysis shows that increased competition would help boost economic growth and welfare through increased productivity, greater export profitability and lower prices for consumers. These results are supported by firm-level evidence, which indicates that market structure influences the actions and output of businesses and ultimately shapes macroeconomic outcomes. In particular, a decrease in markups is substantially correlated with a rise in firm production, exports, productivity growth and the labour share of output. These effects are more pronounced in manufacturing than in services, and appear to be greater for domestic firms than for foreign firms, all of which highlights the need to increase competition in the region’s markets for goods.

Product-market reforms undertaken in several countries in the region in the late 1990s and early 2000s helped boost competition and delivered growth gains, but the reform momentum has stalled in recent years. Although the number of countries that have implemented competition laws has nearly tripled since 2000, on-the-ground progress remains minimal.
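The markup comparison described above can be sketched in a few lines. This is purely illustrative: all figures are hypothetical, not from the report, and a "markup" here is simply price over marginal cost:

```python
# Toy illustration: comparing average firm markups (price / marginal cost)
# across two groups of economies. All numbers are made up for the example.
firms = [
    {"region": "SSA",   "price": 10.0, "marginal_cost": 5.0},
    {"region": "SSA",   "price": 8.0,  "marginal_cost": 4.0},
    {"region": "Other", "price": 9.0,  "marginal_cost": 6.0},
    {"region": "Other", "price": 7.0,  "marginal_cost": 5.0},
]

def average_markup(rows, region):
    """Mean price-over-marginal-cost markup for firms in one region."""
    markups = [r["price"] / r["marginal_cost"] for r in rows if r["region"] == region]
    return sum(markups) / len(markups)

print(average_markup(firms, "SSA"))             # 2.0 (higher average markup)
print(round(average_markup(firms, "Other"), 2)) # 1.45
```

Aggregating firm-level markups like this, and then comparing the regional averages, is the shape of the macro-level comparison the passage refers to: a persistently higher average markup is one signal of weaker product market competition.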
Considering that several factors affect competition, a holistic approach that incorporates the following key elements is required to boost competition in the region:
• Consumer market reforms that eliminate systemic and regulatory barriers to the participation of the private sector in the markets for goods and services and enhance business capacity.
• An appropriate structure for competition policy including an acceptable competition law and an autonomous, adequately funded and staffed enforcement agency.
• Complementary policies on trade and foreign direct investment to enhance foreign competition and improve access to intermediate inputs.
• Carefully designed fiscal and procurement strategies that do not hinder competition by helping only a few players in the market.
Though these policies are relevant individually, they appear to reinforce one another. Trade and investment liberalisation, for example, help increase competition, but an effective structure for competition policy is essential to ensure that benefits from foreign competition are realised and that a few large companies do not control the markets through unfair trade practices. In the same vein, development policies aimed at advancing selective sectors deemed essential for boosting productivity and growth should not lead to a decline in competition and a rise in corporate market power, which could impose costs on the rest of the economy and offset the potential effects of the original policies. More broadly, countries need to maintain a stable and sound macroeconomic and financial environment in order to attract private investment and to ensure that competition-stimulating policies are implemented. In addition, the collaboration between national competition authorities needs to be improved in the current context of growing regional trade and convergence to counter any anti-competitive activities by large pan-regional firms, as reported by the International Monetary Fund.
457 So.2d 206 (1984)
STATE of Louisiana, Appellee,
v.
Robert Lee WOOD and Patricia Wood, Appellants.
No. 16610-KW.
Court of Appeal of Louisiana, Second Circuit.
September 26, 1984.
*207 John S. Stephens, Coushatta, for appellants.
William J. Guste, Jr., Atty. Gen., Baton Rouge, William R. Jones, Dist. Atty., Coushatta, for appellee.
Before HALL, FRED W. JONES, Jr. and SEXTON, JJ.
HALL, Judge.
After defendants' motion to suppress evidence seized at their home under an allegedly invalid search warrant was denied, the defendants were tried and convicted of possession of marijuana (LSA-R.S. 40:966). Defendants applied for a writ of review, urging as error the trial court's denial of the motion to suppress. Perceiving that the affidavit upon which the search warrant was based was insufficient to establish probable cause and that the exclusionary rule required suppression of the seized evidence, we granted the writ of review. Upon review, we find the search warrant to be invalid, but applying the "good faith" exception to the exclusionary rule recently established in United States v. Leon, ___ U.S. ___, 104 S.Ct. 3405, 82 L.Ed.2d 677 (1984), hold that the seized evidence is admissible and that the denial of the motion to suppress is correct.
Defendants urge that the search warrant was invalid because of deficiencies in the affidavit on which it was based, particularly in that the affidavit failed to set forth a time when two confidential informants allegedly saw marijuana and/or other contraband at the defendants' residence. It is urged that the exclusionary rule requires suppression of the evidence seized.
The state argues that the affidavit was sufficient, considering the totality of the circumstances. Alternatively, the state argues that even if the search warrant was invalid, the evidence seized is admissible under the Leon "good faith" exception to the exclusionary rule.
A deputy sheriff obtained a search warrant from a judge authorizing the search of defendants' trailer home for narcotics, marijuana, drug paraphernalia, and a chain saw. The deputy, with other officers, searched the home and found a small bag containing a small quantity of marijuana on top of a refrigerator. The other items listed in the warrant were not found on the premises.
*208 Validity of the Search Warrant
The affidavit states that the affiant has good reason to believe that located in a certain house trailer owned by defendant and a nearby tin barn are "various narcotics, marijuana and other drug paraphernalia, and also a 5200 Poulan chain saw with a bow bar." The affidavit further states that affiant believes said property to be so located because: "I received information from two (2) confidential informants that the above listed items have been seen at the above location. These two (2) informants have found to be very reliable in past investigations."
The essential facts for establishing probable cause to issue a search warrant must be contained in the affidavit. LSA-C.Cr.P. Art. 162; State v. Daniel, 373 So.2d 149 (La.1979); State v. Koncir, 367 So.2d 365 (La.1979); State v. Westfall, 446 So.2d 1292 (La.App.2d Cir.1984).
In Illinois v. Gates, 462 U.S. 213, 103 S.Ct. 2317, 76 L.Ed.2d 527 (1983), the United States Supreme Court enunciated a "totality of the circumstances" analysis in determining whether an affidavit in support of a search warrant based on hearsay established probable cause for the warrant to issue. The Louisiana Supreme Court has followed Illinois v. Gates in State v. Lingle, 436 So.2d 456 (La.1983) and State v. Brooks, 452 So.2d 149 (La.1984). See also State v. Westfall, supra.
Illinois v. Gates expressed the analysis as follows:
"* * * The task of the issuing magistrate is simply to make a practical, common-sense decision whether, given all the circumstances set forth in the affidavit before him, including the `veracity' and `basis of knowledge' of persons supplying hearsay information, there is a fair probability that contraband or evidence of a crime will be found in a particular place. And the duty of a reviewing court is simply to ensure that the magistrate had a `substantial basis for ... conclud[ing]' that probable cause existed."
In State v. Westfall, we held:
"Even under the `totality of the circumstances' standard `sufficient information must be presented to the magistrate to allow that official to determine probable cause, and his action cannot be a mere ratification of the bare conclusion of others.' Illinois v. Gates, supra....
"While we realize when an affidavit is based on hearsay that under Gates it may well not be necessary to set forth underlying circumstances and details sufficient to provide a substantial factual basis by which the magistrate might find both the informant and the information given by him reliable, it follows with reason that an affidavit based on hearsay must at least come from a credible source or the information from an unknown or untested source must be shown by sufficient facts and underlying circumstances to be probably reliable."
In the instant case, the affidavit is insufficient in several respects and does not afford a "substantial basis" or "sufficient information" to establish probable cause.
First, the affidavit does not state that the confidential informants themselves saw the listed items in the premises to be searched. It states only that the informants gave information that the items "were seen" without stating who saw them. The affidavit does not disclose personal knowledge on the part of the informants or the basis of their knowledge. See State v. Paciera, 290 So.2d 681 (La.1974).
Second, the affidavit does not state when the items were seen or when the information was given by the informants to the affiant. An affidavit which fails to make any reference to the time when the offense took place and which is phrased in the past tense does not provide a magistrate with sufficient facts to determine that probable cause to search exists at the time the warrant issues. State v. Loehr, 355 So.2d 925 (La.1978); State v. Thompson, 354 So.2d 513 (La.1978). Compare State v. Ogden, 391 So.2d 434 (La.1980).
*209 Third, the statement that the informants have been "found to be very reliable in past investigations" falls short of providing specific facts sufficient to determine the reliability of the informant. See State v. Koncir, supra.
Fourth, the affidavit provides no facts to connect the chain saw to any criminal activity whatsoever.
Considering the totality of the circumstances set forth in the affidavit, it afforded insufficient information for the issuing judge or magistrate to make a commonsense decision that there is a fair probability that the contraband or other evidence of a crime would be found at the described premises. The warrant should not have been issued and is invalid.
Applicability of the Exclusionary Rule
Prior to Leon, the judicially created exclusionary rule required that evidence seized under an invalid search warrant be excluded from admission into evidence in the trial of a criminal prosecution. In Leon, the United States Supreme Court established a "good faith" exception to the exclusionary rule. The exclusionary rule should not be applied so as to bar the use in the prosecution's case-in-chief of evidence obtained by officers acting in an objectively reasonable good-faith reliance on a search warrant issued by a detached and neutral magistrate but ultimately found to be invalid.
According to Leon, suppression of evidence obtained pursuant to a warrant should be ordered only on a case-by-case basis and only in those unusual cases in which exclusion will further the purposes of the exclusionary rule, that is, the deterrence of illegal police conduct.
The court in Leon made it clear, however, that exclusion is not "always inappropriate in cases where an officer has obtained a warrant and abided by its terms." Although "a warrant issued by a magistrate normally suffices to establish that a law enforcement officer has acted in good faith in conducting the search", nevertheless, "the officer's reliance on the magistrate's probable cause determination and on the technical sufficiency of the warrant he issues must be objectively reasonable... and it is clear that in some circumstances the officer will have no reasonable grounds for believing that the warrant has properly issued."
The court enumerated four exceptions to the "good faith" exception, or instances in which suppression "remains an appropriate remedy": (1) if the magistrate or judge in issuing a warrant was misled by information in an affidavit that the affiant knew was false or would have known was false except for this reckless disregard of the truth; (2) where the issuing magistrate wholly abandoned his detached and neutral judicial role; (3) where the warrant is "based on an affidavit so lacking in indicia of probable cause as to render official belief in its existence entirely unreasonable"; and (4) where the warrant is so facially deficient, i.e., fails to particularize the place to be searched or the things to be seized, that the executing officers cannot reasonably presume it to be valid.
Only the third exception noted above is arguably applicable to the instant case so as to require application of the exclusionary rule. Some light is shed on the meaning of this exception by other statements in the Leon opinion. The court noted that suppression is appropriate only if the officers "could not have harbored an objectively reasonable belief in the existence of probable cause." The objective standard of reasonableness adopted "requires officers to have a reasonable knowledge of what the law prohibits." It is necessary to consider the objective reasonableness of not only the officers who executed the warrant but also of the officers who originally obtained it or who provided information material to the probable cause determination. By two references to "bare bones" affidavits, the court makes it clear that reliance by officers on a warrant based on a "bare bones" affidavit furnished by the officers will not meet the objective standard of good faith or reasonableness.
*210 As to what constitutes a "bare bones" affidavit, in Illinois v. Gates, supra, the court used that term in describing the affidavits involved in Nathanson v. United States, 290 U.S. 41, 54 S.Ct. 11, 78 L.Ed. 159 (1933) and Aguilar v. Texas, 378 U.S. 108, 84 S.Ct. 1509, 12 L.Ed.2d 723 (1964), which amounted to bare and wholly conclusionary statements of the affiants. In Nathanson the statement of the affiant was that "he has cause to suspect and does believe that" liquor illegally brought into the county is located on certain premises. In Aguilar, the statement was that "affiants have received reliable information from a credible person and believe" that heroin is stored in a home.
The affidavit in the present case is almost in the "bare bones" category, but not quite. It could be construed as stating that the contraband was seen on the premises to be searched by the confidential informants. It contains some information about the reliability of the confidential informants, stating that they were found to be very reliable in past investigations. The affidavit does more than state a bare conclusion of the affiant. While not passing constitutional muster as establishing probable cause, the affidavit is not so lacking in indicia of probable cause as to render official belief in its existence entirely unreasonable, nor can it be said that an officer with a reasonable knowledge of what the law prohibits could not have harbored an objectively reasonable belief in the existence of probable cause.
Although the judge should not have issued the search warrant on the basis of the insufficient affidavit, the reliance of the law enforcement officer on the warrant issued by the judge was objectively reasonable and in good faith. Accordingly, under the Leon exception to the exclusionary rule, the evidence seized by the officer executing the warrant was admissible.
The Leon decision establishes an exception to the exclusionary rule judicially created as a deterrent to police violations of the Fourth Amendment to the United States Constitution. Art. 1, § 5 of the Louisiana Constitution is not a duplicate of the Fourth Amendment or merely coextensive with it, and in some respects may afford a higher standard of individual liberty than that afforded by the jurisprudence interpreting the federal constitution. State v. Hernandez, 410 So.2d 1381 (La.1982); State v. Abram, 353 So.2d 1019 (La.1978); State v. Hutchinson, 349 So.2d 1252 (La.1977); State v. Overton, 337 So.2d 1201 (La.1976). Decisions of the United States Supreme Court, although given careful consideration, do not necessarily control or dictate decisions by Louisiana courts construing the Louisiana Constitution, nor replace the exercise of independent judgment by the Louisiana courts so long as the state decisions do not infringe on federal constitutional rights.
Although the last sentence of Art. 1, § 5 of the 1974 Louisiana Constitution provides that "any person adversely affected by a search or seizure conducted in violation of this Section shall have standing to raise its illegality in the appropriate court", the Section does not mention the exclusionary rule and does not require its unlimited application as a remedy for violation of the rights established by that Section. The exclusionary rule was originally applied in Louisiana to state prosecutions because it was required by the decision of the United States Supreme Court in Mapp v. Ohio, 367 U.S. 643, 81 S.Ct. 1684, 6 L.Ed.2d 1081 (1961). See, for example, State v. Calascione, 243 La. 993, 149 So.2d 417 (1963); State v. Williams, 250 La. 64, 193 So.2d 787 (1967). There is no good reason why this state should not now apply the exception to or limitation of the exclusionary rule established by the United States Supreme Court in Leon. The decision is well grounded in law and supported by empirical facts and strong policy considerations. It advances the legitimate interests of the criminal justice system without sacrificing the individual rights guaranteed by the constitution. Exercising the independent judgment of this Louisiana court, we adopt the Leon exception to the exclusionary rule as applicable to cases arising *211 under Art. 1, § 5 of the Louisiana Constitution, for the sound reasons set forth in the majority opinion of the United States Supreme Court.
In the Leon case, the United States Supreme Court enunciated the new exception to the exclusionary rule and applied it in upholding the convictions of the defendants even though at the time of trial the exclusionary rule would have required suppression of the evidence needed to convict. The instant case is in the same posture, and we reach the same result.
Decree
Accordingly, the denial of the defendants' motion to suppress is correct. The defendants' convictions are affirmed.
Affirmed.
SEXTON, J., concurs without written reasons.
Is My Coffee Habit Hurting People or the Planet?
Sometimes at night, as I’m getting ready for bed, brushing my teeth and washing my face, I get this little stir of excitement in my stomach, because I know that I’ll fall asleep, and the first thing I’ll get to do when I wake up in the morning is have a big steaming cup of blessed coffee.
No, I’m not exaggerating. I love coffee, and the ritual of it is so embedded in my life that I can’t imagine doing without it. I come from a family of major coffee drinkers. Since I’ve been alive (and presumably long before that), my parents’ daily routine has been roughly thus: they wake up in the morning and begin their coffee intake, which is steady until about 5pm when they switch to wine. (As an aside, they are in their 60s and look freaking amazing, but I digress.)
However, all this inconvenient thinking about my impact on the planet (gah!) has unavoidably led me to consider coffee. And I must say, the research I’ve done doesn’t leave me feeling great about my love affair with the sweet dark nectar that is my daily cup of joe.
WHAT'S SO BAD ABOUT COFFEE PRODUCTION?
In broad terms, there are two issues with coffee production. First, there is the environmental element: the carbon footprint of coffee, which I addressed in this earlier post, and the impact of widespread coffee production and pesticide use.
Second, there is the human rights element of coffee. Unfortunately, coffee growers are typically paid only about 2% of the retail price of coffee, and in many parts of the world, child labor is common. On many plantations, workers who are indebted to the plantation owners are forced into labor in order to pay off their debts.
The dorms where workers sleep are often not fit for purpose, and families are crowded together in close quarters without mattresses or toilets. Because workers are paid by yield, when prices are low they must work longer hours to make their wage. For the same reason, they often press their children into service, which frequently prevents them from being able to go to school.
WHAT IF I BUY ORGANIC COFFEE?
That’s a great place to start. Coffee is one of the most chemical-laden crops out there, so buying organic is a good way to ensure that you’re not ingesting harmful pesticides, and that the people who grew the beans didn’t put themselves in harm’s way by spraying dangerous chemicals on the crops (which happens in a lot of places, and often the workers don’t have adequate training or protective clothing). There are still other factors to consider, though, such as whether your coffee was grown in shade or full sun, whether it was wet or dry processed, how the workers were treated and paid, and whether there was child labor involved.
THE PACKAGE SAYS FAIR TRADE. DOESN'T THAT MEAN IT WAS ETHICALLY GROWN?
Maybe. The Fair Trade Labeling Organization International (FLO) sets price standards for coffee growers, so that they’re not subject to the market fluctuations of coffee, meaning they’ll get paid a baseline price for their coffee no matter what. However, growers are required to pay for their Fair Trade certification to be able to get this price, which sometimes offsets the price benefits of being certified. For this reason, the very poorest growers are unable to benefit from Fair Trade certification. Moreover, there is little conclusive evidence that the standards set forth by the FLO have any positive impact on the laborers, or that they provide any improvement for the poorest growers in areas like health and education. Fair Trade is also not a guarantee that the coffee is organic.
WAIT, WHAT WAS THAT ABOUT SUN VS. SHADE? DOES THAT REALLY MATTER?
Actually, yes. Coffee naturally grows in the shade of other trees and is part of a very bio-diverse ecosystem. However, yields of shade-grown coffee are often low, so some farmers will plant their coffee in full sun in order to increase the yield. Unfortunately, this practice depletes soil, and shortens the overall lifespan of the coffee plant to about 15 years, which means the farmers will eventually have to move the crops to new soil. Sometimes they even raze forest land to make room for their full-sun coffee crops, which is detrimental to the environment and to the bio-diverse ecosystems I just mentioned. Shade grown coffee is more expensive, but it helps to protect these precious forests.
WELL, THIS SEEMS FREAKING HOPELESS. IS THERE ANY WAY TO KNOW MY COFFEE IS ETHICALLY GROWN?
Yes, there are a few coffee distributors who have wonderful transparency to their entire supply chain. However, one important way we can all help is to treat coffee more like a luxury than a commodity. The widespread consumption of coffee is what drives more coffee production, and in turn drives coffee prices down. Since farmers sometimes only see 2% of the retail price of coffee, the lower the market price, the less they get. This perpetuates the cycle of poverty and poor conditions for farmers.
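The squeeze described above is easy to put in numbers. A quick sketch (the retail prices and yields here are hypothetical; the roughly 2% grower share is from the text above):

```python
# Hypothetical figures; the ~2% grower share of retail price is from the text above.
GROWER_SHARE = 0.02

def grower_income(retail_price_per_kg, kilos_sold):
    """Estimated grower revenue at a given retail price, assuming a ~2% share."""
    return retail_price_per_kg * GROWER_SHARE * kilos_sold

# At $20/kg retail, a grower selling 1,000 kg sees roughly $400...
print(round(grower_income(20.0, 1000), 2))  # 400.0
# ...and a drop to $15/kg cuts that to about $300 for exactly the same harvest.
print(round(grower_income(15.0, 1000), 2))  # 300.0
```

Because the grower's cut is a fixed sliver of the retail price, any fall in the market price passes straight through to their income, which is why driving prices down with ever-cheaper mass consumption hits farmers hardest.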
If we buy coffee less often, and vote with our dollars for companies that are supporting their farmers with a living wage, safe environment, and beneficial social programs, we can help to ensure that they prosper. The best way to do this? Be curious. Ask questions. If you’re spending your money on a product, you have a right to know if people were exploited in its making. Foodispower.org gives a good list of ethical coffee sources. The three that I like the best are:
These three seem to have the most transparency about their growing and processing, how their farmers are paid, and how they are improving the lives of their laborers through social programs and benefits. You can also buy bulk bags from all three suppliers, though sadly, none of them appear to use compostable packaging. Sometimes we must choose between a little plastic and the knowledge that our food comes from ethical sources. Here again, the argument for drinking less coffee will also help us to use less plastic, especially if we buy in bulk!
Sources: | https://smallshop.co.uk/blog/2017/5/29/day-62-is-my-coffee-habit-hurting-people-or-the-planet |
What is Ontogeny?
Ontogeny describes how a behavior changes over the course of the animal’s lifetime. It often examines the roles of social and environmental conditions in the development of behavior, and how the underlying machinery of behavior—such as gene expression, neurocircuitry, hormonal pathways, and morphology—change in response to varying environmental conditions.
Web Building is Innate
The basic motor patterns involved in construction of the orb web are largely stereotypical within phylogenetic clades of orb weaving spider families, resulting in species-specific differences in web architecture (Blackledge et al., 2011). This suggests that there is a genetic basis for web-building behavior in these spiders, or rather that the behavior is innate. Spider hatchlings raised in the absence of adults were observed to initially lay silk in no clear structure, but after one month, the young spiders constructed webs showing features characteristic of the adult webs of their species (Witt et al., 1972), supporting the hypothesis that the basic web building behavior is internal to the spider and does not have to be learned.
Changes in the Orb Web as Spiders Age
As a spider develops from juvenile to adult, the web that it builds grows in overall size and the size of the mesh becomes wider with greater spacing among threads. While the size of the web is correlated to a spider's weight, the regularity in thread spacing is related to spider maturity. Because the weight of female spiders does not change after they reach maturity and produce eggs, the size of the web no longer needs to change to hold a greater mass. With further aging, the web texture becomes increasingly coarse with irregular spacing until the spider's final molt (Witt et al., 1972).
There is no evidence that silk production varies with spider age, but larger spiders have been found to include a greater volume of silk in their webs. The figure to the right shows (a) that the overall stopping potential of the web increased isometrically (proportionally) with spider size even though (b) the stickiness per capture area increased at a rate lower than is predicted by isometry (isometry is indicated by the dashed regression line). The ability of the web to stop and retain prey is improved by larger spiders packing more silk into their webs as well as tougher and more flexible silk threads. The results of this study suggest that by concentrating their silk resources, adult spiders are able to target larger and faster-flying prey (Sensenig et al., 2011). | https://www.reed.edu/biology/courses/BIO342/2011_syllabus/2011_websites/egts_site%20%202%20.3/ontogeny.html |
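The isometry comparison above can be expressed as a log-log regression: a trait scales isometrically when its fitted exponent matches the dimensional prediction, and negative allometry shows up as a smaller exponent. A minimal Python sketch with invented illustrative data (not the study's measurements):

```python
import numpy as np

def allometric_slope(size, trait):
    """Fit trait ~ size on log-log axes and return the scaling exponent.

    Under isometry the exponent equals the dimensionally predicted value
    (1.0 when both quantities are directly proportional)."""
    slope, _intercept = np.polyfit(np.log(size), np.log(trait), 1)
    return slope

# Invented illustrative data: a web property growing more slowly than
# spider size (negative allometry, exponent < 1), as reported for
# stickiness per capture area.
rng = np.random.default_rng(0)
size = np.linspace(1.0, 10.0, 30)                      # spider size (arbitrary units)
trait = size ** 0.7 * np.exp(rng.normal(0, 0.02, 30))  # noisy power law

slope = allometric_slope(size, trait)
print(f"fitted exponent: {slope:.2f}")  # below the isometric slope of 1
```

A dashed reference line with slope 1 on the same log-log axes, as in the figure described above, would make the departure from isometry visible at a glance.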
Sven Hulsbergen Henning, owner of HenningMade, uses styles from different eras in ways that are complementary to one another. Growing up surrounded by antiques and artwork from the 15th century up to the 1950s, he was intrigued by how the distinct aesthetic qualities of figurative and abstract styles influence heart and mind, and how they could fuse into contemporary design.
'The essence of beauty is when power and fragility come together in one piece'
Additive vs Behavioral Design
Additive Design
Behavioral/Parametric Design
In contemporary design techniques, rule-based computer programs are written to automatically generate an endless variety of shapes within variable parameters. Although this form of parametric design is considered new to the digital revolution, it isn't.
Medieval gothic architecture, for example, is based on exactly the same principle: the freedom of organization from a few relatively small elements within simple rule-based behavior. Behavioral design is very similar to natural growth.
This technique opens up an enormous richness of shapes that are entangled, or knotted together as opposed to the classic or additive design technique in which the elements of which an object is composed always stay recognizable as individual parts.
The medieval predecessor of parametric design inspired me to search for ways to make parametric designs without the need for programming; to keep the advantage of human rationale and intuition over cold computer calculations.
The patterns used are carefully constructed with a technique that still allows an almost endless variety of possibilities while every design decision remains a conscious choice. Although computers are used for both design and fabrication, it is not a computer program that generates the design.
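As a minimal illustration of what rule-based generation means (a generic L-system sketch, not HenningMade's actual technique), a single rewriting rule applied repeatedly already produces an endless family of branching patterns:

```python
def rewrite(axiom, rules, iterations):
    """Repeatedly apply symbol-rewriting rules (an L-system), a classic
    form of rule-based, parametric shape generation."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Hypothetical branching rule: two parameters -- the rule set and the
# iteration count -- already span an endless family of patterns.
rules = {"F": "F[+F]F[-F]F"}
print(rewrite("F", rules, 1))      # F[+F]F[-F]F
print(len(rewrite("F", rules, 2)))
```

Interpreting `F` as "draw forward", `+`/`-` as turns, and `[`/`]` as saving and restoring position yields the familiar plant-like gothic tracery; varying the rule varies the whole family of forms.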
The patterns are used in combination with classic additive design techniques. As with paintings, there is a rational, preconceived composition underlying the freedom of the intuitive patterns of the painter's hand. This constrained freedom is what places an object between 'fauve' nature and the logic of human reasoning.
Patterns
HenningMade explores the aesthetic power of patterns to bond objects with life and vice versa.
Patterns of growth and decay veil the surfaces of things with a complex mosaic of experience.
The mosaic shows the omnipresent forces behind entities transforming and shaping each other, hence connecting them.
It's a beautiful expression of unique character.
This beauty is not about representing a preconceived notion of reality.
It's a lived experience felt as we become aware of our place in space and time and find ourselves related to everything around us. | https://henningmade.nl/The%20Studio |
CORE FREEZING KINETICS OF 250 ML AND 500 ML BAGS OF FRESH FROZEN PLASMA.
The Froilabo CRP Blast Freezer is designed to rapidly freeze blood plasma in preparation for long-term storage in ULT freezers. To preserve the quality and function of fresh frozen plasma (FFP), the freezing process must be rapid and homogeneous. During a Site Acceptance Test (SAT), the Froilabo CRP Blast Freezer was installed on a production line which processed plasma product. In this test, the aim was to verify that the final temperature inside the plasma bags was -70°C.
Supported by:
TEST CONDITIONS
The test was performed in a temperature-controlled room at 20°C (+/- 2°C) with a humidity level of 60%. The blast freezer used was the CRP100 EGD model.
Measurement chain:
- Central units and calibrated probes
- 20 PT100 temperature probes
| Designation | Type | Manufacturer | Range | Type of Calibration |
| --- | --- | --- | --- | --- |
| Acquisition unit | 34970A | Hewlett Packard | – | Internal |
| Probes | Pt100 Ohms 4 Wires | Pyro-contrôle, Pyrocapt Atexis | –80°C to 215°C | CETIAT Acc. COFRAC |
EXPECTED RESULTS
Core freezing kinetics must be achieved at a temperature below -70°C in less than 2.5 hours for a fully loaded blast freezer (30 litres of initial product at ambient temperature).
| | Final temperature | Expected value (°C) |
| --- | --- | --- |
| Cart probes | Between -70°C and -80°C | -75.0 ± 5.0 |
| Bag probes | | -75.0 ± 5.0 |
For this test, 250 and 500 mL bags of saline solution were used.
TEST PROTOCOL
The test used the following protocol:
- Start the blast freezer.
- Select the preservation mode with a set point of -75°C (1 hour).
- Wait until the temperature inside the freezer has stabilized for 5 minutes.
- Open the door of the freezer.
- Connect the measurement probes to the control unit using the probe cables.
- Begin temperature acquisition at 1 measurement every 30 seconds.
- Insert the cart into the tank/chamber of the freezer.
- Close the door.
- Start the freezing cycle: target temperature -75°C, freezing cycle time 2 h 30.
LOADING MAP
Over-wrapping is removed before freezing. The total load of the unit is 30 L of 0.9% NaCl solution, mixed between 250 ml and 500 ml bags, with the probes placed in the 250 ml bags.
Figure 1: Front view of probe location
Figure 2: Top view of shelf 1
Figure 3: Top view of shelf 15
Figure 4: Top view of shelves 2 to 7 and 9 to 14
Figure 5: Top view of shelf 8
TEST RESULTS
Figure 6: Test result with final temperature of -70°C
As expected, in freezing mode, all probes reached the temperature of -70°C in around 2 hours. Thereafter, the freezer maintained the temperature of -70°C in preservation mode with great stability thanks to the air circulation inside the tank. After several hours in preservation mode, the freezer switches to a 12-minute defrost mode to prevent frost from forming. This mode is not harmful to the samples inside the tank.
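The acceptance criterion above (every probe at or below -70°C within 2.5 hours, sampled every 30 seconds) can be checked mechanically from a probe log. A sketch using a hypothetical linear cool-down trace, not the recorded SAT data:

```python
def time_to_threshold(readings, threshold=-70.0, interval_s=30):
    """Return seconds until a probe first reads at or below `threshold`,
    given readings taken every `interval_s` seconds (None if never)."""
    for i, temp in enumerate(readings):
        if temp <= threshold:
            return i * interval_s
    return None

# Hypothetical probe trace: linear cool-down from +20 °C at 0.8 °C/min,
# sampled every 30 s (so 0.4 °C per reading).
trace = [20 - 0.4 * i for i in range(300)]

t = time_to_threshold(trace)
print(t, "s ->", "PASS" if t is not None and t <= 2.5 * 3600 else "FAIL")
```

In a real SAT analysis this check would be run on each of the 20 PT100 channels, and the cycle passes only if every channel returns a time under the 2.5-hour limit.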
The customer has a requirement to freeze plasma bags in less than 2.5 hours. The CRP achieves this in around 2 hours, giving margin to the user. The Froilabo blast freezer is a reliable and easy-to-use method for freezing plasma bags. The frozen product can then be placed in long-term cold storage. Froilabo offers a complete range of ULT freezers and cryogenic storage. | https://labbulletin.com/articles/new-application-note-freezing-kinetics-plasma-bags |
MEDEAS partners from the University of Valladolid (Spain) in collaboration with Iñaki Arto, researcher at the Basque Centre for Climate Change, have just published a scientific article in Renewable and Sustainable Energy Reviews, entitled "Assessing vulnerabilities and limits in the transition to renewable energies: Land requirements under 100% solar energy scenarios".
While fossil fuels represent concentrated underground deposits of energy, renewable energy sources are spread and dispersed along the territory. Hence, the transition to renewable energies will intensify the global competition for land. In this analysis, we have estimated the land-use requirements to supply all currently consumed electricity and final energy with domestic solar energy for 40 countries (27 member states of the European Union (EU-27), and 13 non-EU countries: Australia, Brazil, Canada, China, India, Indonesia, Japan, South Korea, Mexico, Russia, Turkey, and the USA). We focus on solar since it has the highest power density and biophysical potential among renewables.
The results show that for many advanced capitalist economies the land requirements to cover their current electricity consumption would be substantial, the situation being especially challenging for those located in northern latitudes with high population densities and high electricity consumption per capita. Replication of the exercise to explore the land-use requirements associated with a transition to a 100% solar powered economy indicates this transition may be physically unfeasible for countries such as Japan and most of the EU-27 member states. Their vulnerability is aggravated when accounting for the electricity and final energy footprint, i.e., the net embodied energy in international trade. If current dynamics continue, emerging countries such as India might reach a similar situation in the future.
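The structure of such a land-requirement estimate is simple: average power demand divided by the delivered power density of solar on land, expressed as a fraction of national territory. The sketch below uses purely illustrative numbers, not the paper's data or its power-density assumptions:

```python
def land_share(annual_demand_twh, power_density_w_m2, land_area_km2):
    """Fraction of a country's land area needed to meet an annual
    electricity demand with solar, given a delivered power density
    (W per m^2 of occupied land, net of capacity factor and spacing)."""
    HOURS_PER_YEAR = 8760
    avg_power_w = annual_demand_twh * 1e12 / HOURS_PER_YEAR  # TWh/yr -> average W
    area_m2 = avg_power_w / power_density_w_m2
    return area_m2 / (land_area_km2 * 1e6)                   # km^2 -> m^2

# Purely illustrative numbers: 500 TWh/yr demand, 5 W/m^2 delivered
# solar density, 400,000 km^2 of national territory.
share = land_share(500, 5.0, 400_000)
print(f"land required: {share:.1%} of territory")
```

The shape of the formula explains the paper's finding: the share grows with per-capita consumption and population density and shrinks with latitude-dependent power density, which is why densely populated northern countries come out most constrained.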
Overall, our results indicate that the transition to renewable energies maintaining the current levels of energy consumption has the potential to create new vulnerabilities and/or reinforce existing ones in terms of energy, food security and biodiversity conservation.
The complete article can be found here and will be freely available until 21st June 2017, while an accepted author manuscript (AAM) will be permanently available here. | http://www.medeas.eu/news/scientific-publication-assessing-vulnerabilities-and-limits-transition-renewable-energies-land |
About the Faculty of Regional Sciences
The various fields engaged at the Faculty of Regional Sciences allow students to meet and interact with diverse communities of people and broaden their perspectives through firsthand experiences. Opportunities to learn both in and beyond the classroom as well as in society allow students to actively expand their potential. The regional subjects of study at the Faculty of Regional Sciences are not limited to a specific place, but rather include frequent exchanges with people in both local and international contexts.
A wide range of research topics and challenges are explored at the Faculty of Regional Sciences, which allow students to engage with regional community issues in order to broaden the scope of their interests while also honing their area of specialization. The faculty aims to foster students who can act as key players in regional societies. | http://www.rs.tottori-u.ac.jp/english/about-gakubu/index.html |
CROSS REFERENCE TO RELATED PATENT APPLICATIONS
BACKGROUND
SUMMARY OF THE INVENTION
DETAILED DISCLOSURE OF EMBODIMENTS
CONCLUSION
None.
Conventional folding of garments is oriented to display in retail sales. A substantial portion of retail sales clerk time is devoted to refolding clothes after inspection by a potential customer. Clothing folded for retail display can only be stored or transported in bags, boxes, or other containers or require pins and clips to retain the display configuration.
“1 Lay the t-shirt flat with the front of the t-shirt facing up. The side of the t-shirt should be facing you, meaning that the shirt's collar should be to your right and the bottom of the t-shirt should be to your left.
2 Put your left index finger in the middle of the t-shirt and “draw” a line toward you until you get about 1½ inches from the side of the shirt. Pinch both sides of the fabric at this point with your left hand.
3 Keeping the first pinch secure, use your right hand to draw another imaginary line from the first pinch up the t-shirt to the seam on the shoulder. Pinch the front and back fabric here with your right hand. Now both hands are pinching the t-shirt.
4 Bring your right hand down past your left hand to the bottom of the t-shirt directly under your left hand. Your arms will be crossed. (Again, imagine a line going from your left—pinched—hand to the bottom of the t-shirt.) Pinch this fabric with the right hand so that you are now pinching four layers of t-shirt with that hand: the front and back of the shoulder seam and the front and back of the bottom.
5 Make sure the t-shirt looks like a mess of fabric at this point. That means you're doing it the right way.
6 Holding firmly to all pinches, lift the t-shirt from the flat surface while uncrossing your arms.
7 Shake the t-shirt until it hangs from your hands smoothly. It should no longer look like a mess of fabric.
8 Lower the t-shirt back onto the flat surface sleeve-first so that the front of the sleeve that's hanging touches the surface, creating a horizontal fold symmetrical to the one created by your pinches.
9 Now, lay the t-shirt down towards you (with the front facing up), and you're done.”
Well-known retailers train their new employees in procedures, for example: How to Fold a T-shirt the Gap Way.
Other methods of folding/rolling shirts apply only to sleeveless or short-sleeve shirts. In some methods the shirts are folded with the collar face down and the short sleeves folded back underneath the bottom half of the shirt. Other methods of folding T-shirts involve crossing your arms to grasp both the hem of the shirt and the top of the T near the collar. None of these methods retain their shape unless left undisturbed. In some methods the shirts are folded in two parallel folds and then rolled from collar to near the hem. In some cases the hem is first folded into a cuff which is used to enclose the rolled shirt. This tubular result is more suitable for storage and packing but makes identification of which shirt is which more difficult, as all distinguishing characteristics are hidden by the hem. Furthermore, rolling sleeves does not result in a thin tubular object; it becomes more like a fabric dumbbell. Thus it can be appreciated that what is needed is a method to distinguish a folded long-sleeved garment from other folded shirts and enable the folded long-sleeve garment to be easily packed, transported, and stored without refolding after translation or disturbance.
The invention is a method of folding a garment which consists of a front and rear panel and two sleeves attached to the panels at a collar, and at an armpit. The garment is folded on the centerline of the panels along a bilateral axis of symmetry. The sleeves may be folded and aligned with the center line to protect fasteners or decorative attachments on the front or rear panel. A diagonal fold establishes a triangular core on which the remainder of the garment is wound. The collar and adjacent fabric is tucked into a pocket created adjacent to the armpits by the diagonal folding.
The method of the invention may be practiced on a flat surface such as a counter or a tabletop. The method of the invention may be practiced entirely in mid-air by raising or lowering hands or other maniples suitable for grasping. A garment is variously described as having a plurality of sleeves coupled to at least one panel. The method is most easily understood by referring to seams on the sleeves and the panels, but the practice of the invention does not depend on having actual seams once the principle of the method is understood. Within this application we use the invented term sleeveseam to mean a longitudinal locus of points between the hole for the arm and the hole for the hand. It evokes the inseam of pants, which is between the legs. The sleeveseam is meant to suggest the side of the sleeve closest to the torso, i.e., between the armpit and the palm of the hand at rest. Similarly we define the term sideseam to mean the longitudinal area on the garment on the torso or bodice or trunk between the armpit and the hip which is closest to the wearer's arms at rest. The sideseam is analogous to the term outseam on pants. Knit fabrics may not actually have a seam there, but our definition applies to the longitudinal area where those practiced in the wearing of conventional clothing would expect to find it.
For ease of understanding, one illustrative embodiment provides for sleeves which are constructed with a sleeveseam from the armpit to the hand hole or annulus at the end of the sleeve most distant from the collar hole or annulus. Likewise, an illustrative embodiment assumes that a sideseam couples a front panel to a back panel of the garment from an armpit to a hem or waist hole or waist annulus of the garment.
It can easily be understood that the principles of the invention apply equally to tubular sleeves which have no sleeveseam connecting an armpit to a hand annulus. One familiar with garments with sleeves would know where a conventional sleeveseam would be and can practice the invention without further explanation. Similarly, a knit garment can be constructed in one panel without one or more sideseams but upon laying such a garment flat on a surface such as a table, those familiar with garments would easily distinguish the front of a garment from the rear of a garment and would be able to practice the invention as if a sideseam were present at the juncture of the front of a garment and the rear of a garment.
By aligning each sleeve with an adjacent sideseam and with the centerline of the bilateral axis of symmetry, the first and second diagonal folds are performed from armpit to collar. After aligning the armpits together and the sideseams at the hem of the waist opening or annulus of the panels, the first orthogonal fold is performed. The fold along the centerline of the axis of bilateral symmetry is orthogonal to an imaginary line which could be drawn from armpit to armpit. Buttons, bows, snaps, zippers, and other fasteners typically coupled to the garment adjacent to the centerline would either be protected or exposed by the orthogonal fold. By wristpoint we mean either the end of the sleeve if the sleeve is not long enough to extend beyond the lower bound of the garment or the point on the sleeve which touches the lower bound of the garment for a sleeve which extends below the lower bound of the garment. It is the lowest point at which you can grasp both the sideseam and the sleeveseam without slack in one or the other.
Assigning the approximate distance measurement from armpit to armpit as 2 A, the approximate distance measurement of an armpit to the nearest point on the centerline is A. The next fold is positioned at a distance 2 A from the armpit and is orthogonal to the centerline. In embodiments this second orthogonal fold can be performed before or after the fold on the centerline. Note the first and second orthogonal folds are mutually orthogonal and may be performed in either order.
On the condition that the waist hem of the garment is substantially 2 A from the armpits, the second orthogonal fold may be skipped entirely. Its purpose is solely to square the garment for the following series of alternating diagonal and orthogonal folds. In an embodiment, the third diagonal fold brings a point along the centerline to a point on the seams. In one embodiment this point is distance A from the armpit. It forms a triangular core around which the remaining steps are performed. A series of alternating orthogonal and diagonal folds are performed, winding the garment around the triangular core. In another embodiment, when a hem is an odd multiple of the distance A from the armpit, the first diagonal fold will move the seam corner to the centerline. A next-to-last fold maintains an acute apex of the triangle at the armpit. The final fold consists of tucking the collar and adjacent fabric into a pocket formed where the seams would be in a garment with one or more seams.
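Under the stated assumptions (armpit-to-armpit width 2 A, winding around a triangular core of side A), the sequence of folds can be sketched as a schedule of fold types and positions. This is a hypothetical formalization; the patent describes the folds qualitatively rather than as exact coordinates:

```python
def fold_schedule(width_2a, length):
    """Fold types and positions (distance from the armpits) implied by
    the description above: square the garment at one width (2 A), form
    a triangular core of side A, then wind and finally tuck the collar.

    Hypothetical formalization -- the patent states the folds
    qualitatively, not as exact coordinates."""
    a = width_2a / 2
    folds = []
    if length > width_2a:                        # second orthogonal fold:
        folds.append(("orthogonal", width_2a))   # square hem and sleeves at 2 A
    folds.append(("diagonal", a))                # third diagonal fold: triangular core
    remaining = min(length, width_2a) - a
    kind = "orthogonal"
    while remaining > a / 2:                     # alternate winding folds
        folds.append((kind, a))
        kind = "diagonal" if kind == "orthogonal" else "orthogonal"
        remaining -= a / 2
    folds.append(("tuck", 0))                    # collar into the armpit pocket
    return folds

# Example: a shirt 50 cm across the armpits and 120 cm long.
print(fold_schedule(width_2a=50, length=120))
```

The schedule also reflects the observation in the effects paragraph below the claims: the triangle's dimensions depend only on the armpit-to-armpit width, so garments of different lengths still fold to similar-sized prisms.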
One beneficial effect of the invention is to distinguish folded garments with long sleeves from rolled t-shirts with short sleeves. Another beneficial effect is to protect fasteners or decorative attachments on the centerline from being damaged or snagging other garments in storage or transportation. Another beneficial effect is to enable identification and selection of the garment from other similarly folded garments by exposure of the breast logo or rear panel yoke embroidery on the external surface of the folded garment. Another beneficial effect is to allow the folded garment to be tossed, thrown, or held in one hand without resultant inadvertent unfolding. Another beneficial effect is to enable flexibility of garment storage, either vertically like books in a bookcase or horizontally like towels and bedding. Another beneficial effect is that while consumers may own garments of widely varying length, they are less likely to have garments with diverse armpit-to-armpit dimensions, so the folded garments are more likely to store in similar dimensioned triangles.
In an exemplary embodiment, this invention is a method for transforming a garment into a wrapped package comprising: placing a long sleeved garment on a flat surface with the collar opening toward you and the waist opening away from you and the sleeves spread to left and right; folding each sleeve diagonally from armpit to collar so that the edge of the sleeve is on top of the edge where the front panel is fastened to the back panel; noticing the width of the panels from armpit to armpit; grasping both sleeve and edge of lower panel and folding across the centerline to marry with the other sleeve and with the other edge of lower panel; folding the sleeve ends and the waist edge of the panel up at the point where the length to the armpit is approximately equal to the width of the panels from armpit to armpit; folding the corner at the centerline of the panels diagonally across to marry the two edges; folding the resulting triangular portion up so the lowest point is at the armpit; folding diagonally again from armpit to centerline so that you have a large triangle with a pocket toward the collar; and tucking the collar and adjacent material into the pocket formed between the edges of the front and rear panel.
Among various embodiments, this invention includes a method for transforming a garment having 4 annuluses, two sleeves, and a bilaterally symmetrical front panel coupled to bilaterally symmetrical rear panel into a substantially self-enclosed polyhedron that has two congruent parallel triangular bases, a triangular prism, for compact storage and transport, the method comprising: grasping and suspending said garment with a first maniple at a first armpit, wherein the armpit is a location on said garment where a sleeve is coupled to both the bilaterally symmetrical front and rear panels at a maximal distance from a collar annulus; grasping and elongating a first sleeve with a second maniple at a first wristpoint, wherein the wristpoint is a point on a sleeveseam; raising the first maniple and lowering the second maniple to further grasp a first sideseam at a point equidistant from the first armpit as the first wristpoint, wherein the sideseam is a locus of points at which the bilaterally symmetrical front and rear panels are coupled; raising the second maniple to be horizontally aligned with the first maniple which achieves a first diagonal fold between the first armpit and the collar annulus; grasping with the second maniple a second wristpoint on a sleeveseam of a second sleeve; grasping with the second maniple a second sideseam at a point equidistant from the second armpit as the second wristpoint, which achieves a second diagonal fold between the second armpit and the collar annulus; and grasping with the first maniple, the second armpit and horizontally elongating said garment between the two maniples which achieves a first orthogonal fold along an axis of bilateral symmetry on a centerline of both front and rear panels, wherein horizontally is substantially equidistant from a center of gravitational force.
Among various embodiments, this invention further includes among other steps: releasing the first and second armpits grasped by the first maniple; grasping with the first maniple both sideseams and sleeveseams at a point from the armpits equal to a width of the panels measured from the first armpit to the second armpit; releasing everything grasped by the second maniple; grasping with the second maniple the first orthogonal fold along an axis of bilateral symmetry on a centerline of both front and rear panels at a point which determines a second orthogonal fold which is substantially orthogonal to the first orthogonal fold; folding a waist of the garment and both wristends of the first and second sleeves toward the collar annulus; rotating the second maniple with respect to the first maniple; grasping in the second maniple the sideseams and sleeveseams at a point substantially halfway between the armpits and the first maniple, which achieves a third diagonal fold between the centerline and the sideseams; grasping in the first maniple the armpits, which achieves a third orthogonal fold which is substantially parallel to the second orthogonal fold; grasping in the second maniple along the axis of bilateral symmetry on a centerline of both front and rear panels, which achieves a fourth diagonal fold between the armpits and the centerline of both front and rear panels; releasing everything grasped in the first maniple; and tucking a portion of the front and rear panels immediately adjacent to the collar annulus into a triangular cavity formed among the seams by the third orthogonal fold and the fourth diagonal fold, whereby said garment is transformed into a triangular prism suitable for transportation in one hand and storage without additional pins, straps, bands, or enclosures to remain folded. It is understood that a maniple may be actuated by hydraulic, electrical, magnetic, or neuromuscular control signals and power.
In an embodiment the maniple is controlled by a processor configured by a non-transitory store in which is encoded computer executable instructions.
Among various embodiments, the folding of the sleeves and panels encloses fasteners on the front or rear panels within the prism to avoid snagging or abrasion of the fasteners, wherein fasteners comprises buttons, zippers, and ties.
Among various embodiments, the folding of the sleeves and panels exposes decorative appliques on the front or rear panels on the exterior of the prism to enable identification of the sponsorship, ownership, or affiliation of the garment.
Among various embodiments, the wristpoint on the sleeveseam is determined by the shortest length of material from the armpit among the following components: the sleeve, the front panel, and the rear panel, when any one of the three is substantially shorter than the other components.
Among various embodiments, the wristpoint on the sideseam is determined by the shortest length of material from the armpit among the following components: the sleeve, the front panel, and the rear panel, when any one of the three is substantially shorter than the other components.
Among various embodiments, this invention includes a method for transformation of a sleeved garment into a portable wedge comprising: tucking a collar of a garment and adjacent fabric area into a triangular prism of folded sleeves and at least one panel.
In various embodiments, a plurality of panels comprise a front panel coupled to a rear panel at sideseams between an armpit and a hip.
In various embodiments, a sleeve comprises a tube extending from an armpit to a hand annulus and wherein the garment comprises two sleeves coupled to the panels at the armpits.
In various embodiments, a tube is a flexible fabric fastened to itself by a sleeveseam along two non-contiguous edges of the fabric.
Among various embodiments, this invention further includes among other steps: forming a triangular prism of folded sleeves and at least one panel by: folding the at least one panel on a centerline across the axis of bilateral symmetry to enclose the sleeves within the left and right sides of the at least one panel, diagonally folding a corner of fabric on a centerline to overlay an edge of the at least one panel whereby a quadrilateral is formed with two acute angles and two obtuse angles.
In various embodiments, this invention further includes among other steps: folding a corner of fabric at an acute angle orthogonal to the centerline to form a quadrilateral of fabric with one obtuse angle and one acute angle; and folding another corner of fabric at a diagonal to the centerline to form a triangular prism of folded sleeves and at least one panel.
In various embodiments, the sleeves are enclosed within the left and right sides of the at least one panel by diagonally folding a sleeve to align with the edge of a panel attached at an armpit of the sleeve.
In various embodiments, folding the at least one panel on a centerline across the axis of bilateral symmetry encloses fasteners, decorative components, or fragile materials within the triangular prism to prevent loss or damage.
In various embodiments, folding the at least one panel on a centerline across the axis of bilateral symmetry exposes embroidery, inks, colors, names, numbers, badges, logos, words, and trademarks on the visible exterior of the triangular prism to enable identification, affiliation, or promotion.
FIGS. 1-8

FIG. 1

Referring now to FIG. 1, a top view is shown of a garment having two sleeves and at least one panel. In FIG. 1 the dashed lines illustrate the folds which enable the sleeveseams to lie on top of the sideseams to which they are connected at their respective armpits.
In FIGS. 2a and 2b the result of folding the arms is illustrated, but two possible next folds are illustrated by the dashed lines. In FIG. 2a a fold is shown by a dashed line on the centerline of the bilateral axis of symmetry. In FIG. 2b, the fold is orthogonal to the centerline and serves to square the hem and ends of the sleeve at a point approximately at a distance of 2 A from the armpits on the sideseam, where the distance measured from armpit to armpit is also 2 A.
Referring now to FIGS. 3a and 3b, there are the two possible results from the folds of FIGS. 2a and 2b respectively. The next proposed fold for FIG. 3a is to square the sleeves and hem of the garment. The next proposed fold for FIG. 3b is on the centerline of the bilateral axis of symmetry. The result of either fold is the same.
In FIG. 4 the result of either FIG. 3a or FIG. 3b is illustrated. The edge of the folded panel is approximately a distance of 2 A from the armpit. The next diagonal fold in this embodiment brings the corner on the centerline to the sideseam at a point approximately 1 A distance from the armpits.
In FIG. 5, there is an acute angle of the panel approximately 2 A from the armpits. The next fold, orthogonal to the centerline, will bring the acute angle to the armpit.
In FIG. 6 is illustrated the result of the orthogonal fold, which results in a figure with two substantially right angles, and by the dashed line is shown the final diagonal fold. Note that seams adjacent to the armpit will form two sides of a pocket upon completing the folding.
FIG. 7 shows a relatively thin portion of the garment adjacent to the collar and the thicker triangle of folded fabric with perfect corners. The final fold is illustrated with a dashed orthogonal line.
However, the final method step further includes tucking the collar and its adjacent fabric into the pocket formed in the thicker triangle between the armpit and the centerline by the series of alternating orthogonal and diagonal folding steps. FIG. 8 shows one triangular side of the wedge, which can be easily handled without falling apart.
The present invention provides improved storage, transportation, and display of sleeved garments as a soft wedge. A triangular prism is transformed from flat fabric pieces by flexible fasteners and a method of folding. Sleeves and other delicate decorations and components are protected from damage by folding into a self-enclosed polyhedron defined by two triangles and three trapezoid faces without pins, straps, ribbons, cans, cylinders, bags, boxes, or bands. The invention can be easily distinguished from placing clothes on a hanger or in a container. The invention can be easily distinguished from stuffing clothes into a bag sewn into the garment as a pocket. The invention can be easily distinguished from folding, rolling, or coiling a substantially flat, substantially rectangular piece of fabric. The invention can be easily distinguished from conventional folded garments by picking up the transformed object in one hand and putting it down. In contrast, conventionally folded objects can remain in their configuration if they are not disturbed, and once moved, must be refolded. Other methods of rolling simple shirts anonymize them into tubes with little or no distinguishing identification.
BRIEF DESCRIPTION OF DIAGRAMS
FIGS. 1-8 illustrate the unfolded and progressively folded garment.
Pope Francis has made the term his own in multiple addresses. Among these is his 2015 address in Bolivia where he expounded on its meaning in these words:
The world’s peoples want to be artisans of their own destiny. They want to advance peacefully towards justice. They do not want forms of tutelage or interference by which those with greater power subordinate those with less. They want their culture, their language, their social processes, and their religious traditions to be respected. No actual or established power has the right to deprive peoples of the full exercise of their sovereignty. Whenever they do so, we see the rise of new forms of colonialism which seriously prejudice the possibility of peace and justice.
This emphasis on empowerment is reflected in the development of comunidades eclesiales de base (base ecclesial communities) in Latin America and in the participation of many congregations in the United States in community organizing networks focused on social and economic change. Empowerment approaches can be found in neighborhood social assessments which focus, not just on “community needs,” but on “community assets.” Empowerment also is reflected in individual “client” assessments in social work and counseling which look to build upon personal strengths and capacities as well as to respond to challenges and weaknesses.
Two decades ago, a task force of Catholic Charities USA developed the following definition of empowerment: “Empowerment is a process of engagement that increases the ability of individuals, families, organizations, and communities to build mutually respectful relationships and bring about fundamental, positive change in the conditions affecting their daily lives.” The task force based this understanding on three principles: (1) People are the primary agents of change; (2) empowering changes happen through participative relationships; and (3) the human person is both social and spiritual; what affects one aspect of the person, affects the other.
This focus on empowerment has been part of Catholic social justice praxis over the last century or more in the emphasis on economic development, worker cooperatives, promotion of labor unions, worker ownership, and micro-enterprise loans used to develop the economic skills and assets of low-income individuals and families. As recently as May 2019, the Caritas Internationalis General Assembly reiterated this theme in these words:
We continue in our commitment of working together in partnership and fraternal cooperation, so that we can become agents of transformation, helping people to be artisans of their own destiny, defying the structures that make it difficult or impossible for people to prosper, and ensuring that our common home is sustained and respected for future generations.
This commitment continues to be critically important in so many Catholic ministries that emphasize both charitable outreach and structural transformation to promote social justice.
Pope Benedict XVI. (2009). Caritas in Veritate: Charity in Truth, 43.
Pope Francis. (2015, July 9). Address to the Second World Meeting of Popular Movements, Santa Cruz de la Sierra (Bolivia), Section 3.2., at http://w2.vatican.va/content/francesco/en/speeches/2015/july/documents/papa-francesco_20150709_bolivia-movimenti-popolari.html
Catholic Charities USA. (1998). A Catholic Charities Framework for Empowerment.
Ibid.
You may watch below the official trailer of Leo Da Vinci Mission Mona Lisa, the latest Italian adventure animated movie directed by Sergio Manfio based on a script he co-wrote with Anna Manfio, Francesco Manfio, and Davide Stefanato:
LEO DA VINCI MISSION MONA LISA
“Life flows peacefully in Vinci: Leonardo is struggling with his incredible invention, Lorenzo helps him, and Gioconda observes them mockingly. The arrival in town of a storyteller speaking of a hidden treasure puts Leonardo, together with his friends Lorenzo and Gioconda, on a venture in search of this lost treasure, and so a great adventure begins!”
An animated movie loosely based on the Italian Renaissance polymath Leonardo da Vinci? Well, I want to see it!
There’s also an Italian poster:
The film has yet to get an official US release date, but it was released in Italy on January 11, 2018.
Stay tuned for updates.
Achievement Exhibition of Art Kaleidoscope
Taipa Village Cultural Association collaborated with AAMA (ARK – ASSOCIATION OF MACAU ART) to present the “2018 Art Kaleidoscope Exhibition,” which showcases the paintings and crafts created by members of the public who participated in three different workshops. Organised by AAMA and co-organised by Art Stationer Education Center and Grace Art Design Studio, the three workshops covered painting, printmaking and English calligraphy under the theme of “Art Kaleidoscope.” The objective is to encourage the public to enter the world of painting and bring art into their lives. It also aims to promote the public's artistic and creative diversity by utilizing different materials and techniques to interpret their own unique kaleidoscope, representing a diverse range of scenes.
Introduction of 2018 Art Activities
Art Kaleidoscope, as the name suggests, is a campaign consisting of a series of art activities that showcases art pieces created by different means and in different media, with the goal of cultural exchange in paintings and crafts. It supports painting works created with innovative concepts while not giving up traditional drawing techniques. The campaign encourages the public to enter the world of painting and bring art into their lives.
Project Objectives and Expected Outcome
Art Kaleidoscope will use painting as its main axis, integrating it with different elements to come up with a series of innovative art activities. Through promoting new forms of art creation, the campaign aims to cultivate the public's interest in and appreciation of the arts. It aims to abolish the notion that art can only be appreciated from a distance, to bring art into life and the community, and to provide an art space that allows the public to approach the arts in a relaxed and carefree mood. The ultimate goal of the campaign is to integrate art into daily life and stimulate the artistic atmosphere in the community. The campaign, consisting of three workshops in different art forms, aims to foster an exchange of views on art and inspire attendees to create work in their own ideas and styles. Selected finished works will be displayed in a public exhibition. This conveys a clear message: artwork creation is not limited to artists. If we are willing to try, everyone can be an artist.
Corrosion of Metals:
The phenomenon of a surface of metal being attacked by air, water or any other substance around it is known as corrosion and the metal being attacked is said to corrode.
Examples of corrosion:
- When iron is exposed to moist air for a long time, its surface acquires a coating of a brown, flaky substance called rust, which is mainly hydrated iron oxide (Fe2O3.xH2O).
- The surface of copper acquires a green coating of basic copper carbonate in moist air.
- The surface of aluminium gets coated with a thin layer of aluminium oxide which prevents the metal underneath from further damage.
- Silver articles become black after some time when exposed to air, as silver reacts with sulphur present in the air to form a coating of silver sulphide.
- Noble metals such as gold and silver do not corrode readily.
- Both air and water are necessary for corrosion to take place.
Example 1.
You must have seen tarnished copper vessels being cleaned with lemon or tamarind juice. Explain why these sour substances are effective in cleaning the vessels.
Answer:
Copper vessels become tarnished when the copper metal gets corroded and forms a layer of basic copper carbonate by reacting with carbon dioxide present in air.
Sour substances such as lemon juice or tamarind juice are acidic in nature as they contain citric acid and tartaric acid respectively. Being acidic, they neutralize the basic copper carbonate and dissolve the layer of copper carbonate formed.
Prevention of Corrosion
Rusting of iron can be prevented by painting, oiling and greasing, galvanizing (by coating iron objects with zinc), chrome plating etc.
Galvanization:
Galvanisation is the method of protecting steel and iron from rusting by coating them with a thin layer of zinc. The galvanized article remains protected from corrosion even if the zinc layer is broken.
Important
If the zinc coating is broken, the galvanised object remains protected against rusting because zinc is more reactive than iron and hence can be easily oxidised. Thus, when the zinc layer breaks down, the zinc continues to react and gets oxidised in preference to the iron.
Alloying:
Alloying is a method for improving the properties of a metal thereby getting desired properties.
Example: Pure iron is very soft and stretches easily when hot. It is therefore never used in its pure state, but instead mixed with a small amount of carbon (0.05 %) due to which it becomes hard and strong.
Alloys: An alloy is a homogeneous mixture of two or more metals, or a metal and a non-metal. It is prepared by first melting the main metal and then dissolving the other elements in it in a definite proportion.
Amalgam:
An alloy whose one of the metals is mercury is known as an amalgam.
Properties of alloys:
- The electrical conductivity of an alloy is less than that of pure metals.
- Sometimes an alloy has lower melting point than any of its constituents.
Some common alloys: | https://ncertmcq.com/corrosion-of-metals-definitions-equations-and-examples/ |
The Murdock Trust is grateful to partner with hundreds of nonprofit organizations each year who are committed to working for the common good in the Pacific Northwest. We see each of these organizations as strategic partners in the communities and regions we serve. Like any community or team, we want to ensure we are working to serve and support them as best we can, and have several ways we gather their feedback and input.
Through nearly 47 years of service to the nonprofit sector, we’ve developed a robust playbook of effective practices that contribute positively to those we support. However, we recognize that growth and improvement must be ongoing. The needs of our communities evolve. Business tools and strategies change. And as such, we seek to regularly assess our own work and performance to ensure we are continuing to constructively and positively partner with our constituents.
One way we go about this is by utilizing a variety of channels to gather candid feedback from our applicants, grantees, and the broader nonprofit sector. We recognize that not every channel necessarily works for every individual so to increase accessibility we offer multiple ways for gathering this input. We also recognize that there is unfortunately an inherent power imbalance in the funder/grantee relationship, and one of the ways we seek to mitigate this imbalance is to allow our constituents to speak honestly and candidly as often as possible.
A few of these platforms for gathering input include:
- Every grant applicant is assigned a program director who walks with them through the application process, and with whom applicants are encouraged to share their thoughts on our process.
- A sampling of grantees each quarter receive a “Seven-Minute Survey” which gathers an ongoing pulse of anonymous feedback.
- Approximately every three years, we conduct a Grantee and Applicant Perception Report with the Center for Effective Philanthropy (see below).
- We periodically convene development consultants, contract grant writers, and leaders to test new online systems and features and provide feedback.
- We also pay attention to and participate in online conversations and websites where nonprofits can share their feedback, such as our statewide nonprofit associations.
Throughout 2021, the Trust completed our most recent Grantee and Applicant Perception Report administered by the Center for Effective Philanthropy (CEP). The report gathers anonymous hard data from grantees and applicants about our process, communication, accessibility, relational interactions, and more. In addition to providing benchmarks for the Trust to assess over a multi-year horizon, a particular strength of this survey is that it provides comparable data from other foundations that CEP has surveyed, giving us a glimpse into how we compare to other foundations our grantees and applicants might be working with. Today, we wanted to share some of the key takeaways from this report.
Key Findings: What’s Working Well
- The Murdock Trust approach serves grantees well. Consistent with results from our last two surveys, the Trust was rated in the top 10% amongst cohort foundations for impact on the grantee organization. Many grantees shared that Trust support went beyond just funding, and included non-monetary support, a more credible reputation thanks to the backing of a Trust grant, and increased chances for collaboration. For example, survey participants shared:
- “Many organizations just want to fund programs. Murdock Trust sees beyond the program to the needs of the organization as a whole.”
- “To get approved for a Murdock grant, I feel, gets the attention, and often support, of other donors. The support they give reaches well beyond the borders of just our organization but allows us to collaborate with others.”
- Grantees appreciate the Trust’s relational approach to grantmaking. Rather than feeling transactional, grantees shared that they felt supported and cared for during the application process. The Trust was rated within the top 6% amongst cohort foundations for transparency and communication of its goals and strategies. As one participant shared:
- “If a problem arises, they answer my questions in a very timely manner and are very approachable. They guided me along the way and I never felt pressured or that I was asking silly questions.”
- Grantees feel that Trust staff are committed to understanding and serving the unique needs of their communities. 90% of grantees indicated that Trust staff embodied a strong commitment to serving and understanding the people, culture, and communities their organizations served. We were encouraged to hear this because it reflects our deep commitment to understanding and responding to the diverse needs in our region.
Key Findings: Where We Can Improve
- We will continue to streamline our application process. Many grantees shared that while our application process does help them think strategically about their project and their work, the process was time-intensive, which presents challenges, especially for smaller organizations without dedicated grant staff. This was helpful feedback, as we are currently piloting a simplified means to capture financial data as well as updated materials for the application process, which will be informed by what we have learned. We are also exploring a new grant program for smaller organizations that would complement our Strategic Grants program.
- We will continue to refine our vetting process. While we frequently hear how much helpful information is on the website, we are in the process of updating application instructions and examples of the types of organizations we fund on our website. This, in addition to our Letter of Inquiry process, is part of our ongoing work to reduce the number of applicants who go through the whole process only to be declined because they do not fall within Trust funding priorities, guidelines, and practices.
- We could more clearly communicate our commitment to cultural competency. While grantees largely felt that Trust staff were committed to understanding the varied needs and cultures of their communities, we heard from a few respondents that they would appreciate hearing about the Trust’s journey to build its own cultural competency.
Every one of these responses will help us refine our processes and practices moving forward, and for that, we are deeply appreciative of all respondents. While we are grateful to learn ways we can better serve our constituents, we are equally heartened to hear the positive ways our work is serving our region. Thank you for engaging in these conversations with us and helping us all work together better as we seek the common good in the Pacific Northwest.
If you have feedback about our grants application and evaluation process you would like to share, please email [email protected].
Photo by LilArtsy on Unsplash.
To learn about our reporting requirements on the Accessibility for Ontarians with Disabilities Act, 2005 please click here.
The Township of North Dundas is committed to providing quality goods and services that are accessible to all persons that we serve. Documents are available in various accessible formats upon request.
Individuals are advised to contact the Township Office and the Township will work with the individuals to provide a format that meets their needs.
As part of its compliance, the Township has implemented Accessibility Customer Service Standard and Accessible Formats and Communications Support policies pertaining to the serving of individuals with disabilities.
Updates to the Township of North Dundas Accessibility Plan are also available for viewing during regular business hours at the municipal office.
The Township encourages feedback from individuals to help us improve our services. Feedback will be used to assist with revisions of policies and procedures to provide accessible customer service.
Click here to access the Accessible Customer Service Feedback Form.
To ensure the delivery of goods and services to individuals with disabilities in a timely manner, you are invited to provide your feedback in person, by telephone at 613-774-2105, by email through the Contact Us information, or by any other agreed-upon method.
NEWPORT, Ore. (KTVZ) – It’s widely understood that animals such as salmon, butterflies and birds have an innate magnetic sense, allowing them to use the Earth’s magnetic field for navigation to places such as feeding and breeding grounds.
But scientists have struggled to determine exactly how the underlying sensory mechanism for magnetic perception actually works.
In a paper published this week in the Proceedings of the National Academy of Sciences, an international team of researchers, including scientists from Oregon State University, outlines a new theory. Magnetite crystals that form inside specialized receptor cells of salmon and other animals may have roots in ancient genetic systems that were developed by bacteria and passed to animals long ago through evolutionary genetics.
The theory is based on new evidence from nanoscopic magnetic material found within cells in the noses of salmon. The paper’s lead author is Renee Bellinger, who began the research as a doctoral student at Oregon State, completing her Ph.D. in fisheries science in 2014.
“The cells that contain magnetic material are very scarce,” said Bellinger, who now works as a research geneticist at the U.S. Geological Survey and is affiliated with the University of Hawaii, Hilo. “We weren’t able to definitively prove magnetite as the underlying key to magnetic perception in animals, but our study revealed associated genes as an important tool to find new evidence of how potential magnetic sensors may function.”
“Finding magnetic receptors is like trying to find a needle in haystack. This work paves the way to make the ‘needle’ glow really bright so we can find and understand receptor cells more easily,” Bellinger said.
The findings have the potential for widespread application, from improving salmon management through better understanding of how they use the ocean to targeted medical treatments based on magnetism, said coauthor Michael Banks, a fisheries genomics, conservation and behavior professor at Oregon State.
“Salmon live a hard and fast life, going out to the ocean to specific areas to feed and then coming back to their original spawning grounds where they die. They don’t have the opportunity to teach their offspring where to go, yet the offspring still somehow know where to go,” Banks said. “If we can figure out the way animals such as salmon sense and orient, there’s a lot of potential applications for helping to preserve the species, but also for human applications such as medicine or other orientation technology.”
Bellinger’s work built on research from more than 20 years ago by Michael Walker of the University of Auckland in New Zealand, who initially traced magnetic sensing to tissue in the noses of trout.
“He narrowed it down to magnetite in the olfactory rosette,” Bellinger said. “We were expecting to see chains of crystals in the noses of salmon, similar to how magnetite-producing bacteria grow chains of crystals and use them as a compass needle. But it turns out the individual crystals are organized in compact clusters, like little eggs. The configuration was different than the original hypothesis.”
The form in which magnetite appears, as tiny crystals inside specialized receptor cells, represents biomineralization, or the process by which living organisms produce minerals. The similarity between magnetite crystals of bacteria and fish suggests that they share a common evolutionary genetic history, Bellinger said.
The mechanism for developing magnets was developed by bacteria more than two billion years ago and then passed on to animals. Today, these tools to perceive magnetism continue to be present across a broad array of animal species, said Banks, who is affiliated with OSU’s Department of Fisheries, Wildlife, and Conservation Sciences in OSU’s College of Agricultural Sciences and the Coastal Oregon Marine Experiment Station at OSU’s Hatfield Marine Science Center.
The process for sharing them across animal life may have been similar to the evolution of mitochondria, which control how animals release energy. Mitochondria originated in bacteria and were then transferred to other organisms, he said.
Understanding the evolutionary history of magnetite is a step toward further pinpointing the underlying process, the researchers said. Banks, Bellinger and colleagues would next like to test their new understanding and associated markers to further address the mystery of why and how some life forms have well-tuned tools for long and precise migratory strategies.
Co-authors of the paper are Jiandong Wei of Shanghai University in China; Uwe Hartmann of Saarland University in Germany; Herve Cadiou of the Institute of Cellular and Integrative Neuroscience in France; and Michael Winklhofer of the University of Oldenburg in Germany.
Bellinger’s work was supported in part by a Mamie Markham Research Award; several awards of up to $10,000 are available to support research by graduate students at Hatfield Marine Science Center each year. These funds allowed Bellinger to travel to France to conduct primary research for the project.
On July 3, 2015, Public Works and Government Services Canada (PWGSC) announced a new government-wide integrity regime for procurement and real property transactions. While the spirit of PWGSC’s “Ineligibility and Suspension Policy” is consistent with previous iterations of the federal government’s procurement practices, the contents of this policy are notably different.
Indeed, in an effort to ensure that the Government of Canada conducts business with ethical suppliers in Canada and beyond, the following key features have been introduced:
- 10-year ineligibility to contract with the Government of Canada for suppliers who have been convicted or absolutely/conditionally discharged of a listed offence in Canada or abroad;
- Up to 5-year reduction of ineligibility period if the supplier addresses the causes of conduct leading to ineligibility;
- No automatic penalty for a supplier based on the actions of an affiliate in which they had no involvement; and
- Up to 18-month ineligibility to contract with the Government of Canada for suppliers who have been charged with a listed offence or have admitted guilt.
Among other things, collusion, bid-rigging, and many other anti-competitive activities under the Competition Act (the “Act”) are considered listed offences.
What does this mean for suppliers?
In addition to the stiff penalties provided for bid-rigging, for example, under the Act, i.e. up to 14-year jail terms and unlimited fines (at the discretion of the Court), companies may also be barred from carrying on business with the federal government. Such action can have devastating, long-term implications for businesses.
What is more, the Ineligibility and Suspension Policy means that businesses must conduct themselves as scrupulously overseas as they do in Canada. A charge or conviction registered in another jurisdiction may have the same lasting consequences as a charge or conviction registered at home.
Finally, deviation from the Ineligibility and Suspension Policy places a company’s good will at risk. For example, PWGSC compiles and publishes a list of suppliers that have been debarred from contracting with the federal government. Of course, that is in addition to any negative publicity in the media and other similar sources.
Pathways wind through moss-covered rocky gullies, across wooden bridges and between gnarly tree roots here in Puzzlewood in Gloucestershire’s Forest of Dean. Here in this ancient woodland, oak, ash, beech, lime and yew trees live side-by-side, forming a heavy canopy overhead, creating an environment which has proved popular as a location for TV and film productions too, including Star Wars and Doctor Who.
Part of its charm can be found in its unusual, mossy rock formations, known as scowles, formed over millions of years as water eroded limestone here. It is no wonder that (unconfirmed) rumour has it that Lord of the Rings author JRR Tolkien drew inspiration for the forests of Middle Earth here. Would-be visitors will have to wait to discover it for themselves - Puzzlewood is currently closed due to the coronavirus pandemic.
Hanoi is based on the old puzzle Towers of Hanoi. Hanoi is my first attempt at writing a simple game to run on my Nokia 6210 mobile phone. It uses J2ME and should be able to run on any Java MIDP1.0 enabled phone. You may download the source or game to play for free, but I won't be able to offer any support for it. Hopefully I'll be able to start on some other games soon.
Screen Shots
Screenshots: (1) The game starts here. (2) The object is to get here. (3) If you get stuck, you can get hints.
How to play Hanoi
- There are three piles where you can place disks.
- The game starts with between 3 and 6 disks on the left-most pile.
- The object of the game is to move all of the disks from the left-most pile to the right-most pile.
- Each turn you may pick up a disk from the top of a pile, and put it on top of another pile.
- You may only place a smaller disk onto a larger disk.
- Each turn, press a keypad number (1, 2, or 3) corresponding to the pile that you would like to pick up from.
- Then press the number corresponding to the pile that you would like to put it down on.
- If you change your mind after picking up a disk, press the same number again to put it down.
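The optimal strategy follows the classic recursion: to move n disks, first park the top n−1 disks on the spare pile, move the largest disk directly, then bring the n−1 disks back on top, for a total of 2^n − 1 moves. Here is a minimal standalone sketch in plain Java (not MIDP code, and not taken from the game's actual source; all names are illustrative):

```java
// Recursive Towers of Hanoi solver: moves n disks from pile `from` to pile `to`,
// using pile `via` as scratch space.
public class Hanoi {
    static int moves = 0;

    static void solve(int n, int from, int to, int via) {
        if (n == 0) return;
        solve(n - 1, from, via, to);   // park the n-1 smaller disks on the spare pile
        moves++;                        // move the largest remaining disk directly
        System.out.println("Move disk " + n + ": pile " + from + " -> pile " + to);
        solve(n - 1, via, to, from);   // bring the smaller disks back on top
    }

    public static void main(String[] args) {
        solve(3, 1, 3, 2);             // the game's easiest level uses 3 disks
        System.out.println("Total moves: " + moves);  // 2^3 - 1 = 7
    }
}
```

A hint feature can reuse the same recursion: compute the full optimal sequence from the current position and suggest only its first move.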
Menu Options
- Suggest - If you get stuck, select suggest to get a hint about where to move.
- Reset - Start a new game. The disks, move counter and timer are reset.
- Mix - Starts a new game, but with the disks in random places. This can sometimes be more challenging than when the disks start in the correct place.
- Solve - The game will automatically make moves until the puzzle is solved.
- Level 3-6 - Starts a new game with this number of disks.
- About - Instructions on how to play.
Downloads
- Click here to download the game
- Or navigate to http://ylett.com/projects/hanoi/hanoi.jad with your phone.
This lesson can be used as a pre-lesson for the Area Models and Multiplying Fractions lesson plan.
Objectives
Academic
Students will be able to multiply fractions using an area model.
Language
Students will be able to review their thinking as they multiply fractions using color-coding and area models.
Introduction(5 minutes)
- Tell students you went to a party and you have ⅔ of your cake left. Ask them to draw a tape diagram on a piece of scrap paper that represents how much cake you have left over.
- Observe student work to gather information about their ability to represent fractions pictorially and whether they can solve the problem. Have students share their pictures with their partners.
- Tell them that you brought the remaining cake to the teacher's lounge and ½ of it fell down on your way there.
- Give them a minute to think about how to figure out how to find out the remaining amount of cake using their tape diagrams. Have them write notes on the Review Your Thinking! worksheet. (Note: they're not expected to know the answer yet, but this is a good way to gather background information on how familiar they are with fractions and representing them in pictorial models.)
- Explain that during the lesson they'll learn how to make fractional parts of fractions to figure out what is remaining using area models. In this case, they'll find out what fraction of your birthday cake is left over.
Explicit Instruction/Teacher modeling(6 minutes)
- Model how to solve the multiplication problem (i.e., ½ × ⅔) using an area model and note the similarities and differences between an area model and a tape diagram (e.g., tape diagrams show one fraction, while area models show two, etc.).
- Explain that the length of the rectangle should represent ⅔ of the cake left over and should have the rectangle split vertically into thirds, with two of the three pieces shaded in. Then the width should represent ½ of the cake that fell. The rectangle should be divided horizontally in half with one half shaded. The shaded pieces that overlap represent the numerator and the total pieces in the whole rectangle represent the denominator (i.e., 2/6, or ⅓ when simplified).
- Distribute and read the vocabulary card for area model and have students read the definition and copy your teacher markings from the board onto the back of their vocabulary card for the term "area model." Also, create a multiplication equation for the visual on the front of the area model vocabulary card (i.e., ½ × ⅓ = ⅙) and label the sides of the area model with the correct fraction.
- Explain that the visual of the area model represents the multiplication of fractions. The word problem asked students to determine a fractional piece of another piece. The keyword that showed multiplication in this case was "of." Tell them that the word "of" tells us we need to multiply because it is showing there is a fractional part of a different fractional part.
- Ask a student to paraphrase the process you used for solving the cake problem aloud to the class. Allow other students to add to the explanation as necessary.
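For teacher reference, the area-model counting can be checked with a short script. This is a hypothetical sketch (not part of the lesson materials): shade ⅔ of the columns and ½ of the rows, then divide the doubly shaded overlap by the total number of cells.

```python
from fractions import Fraction

def area_model(num1, den1, num2, den2):
    """Multiply num1/den1 by num2/den2 by counting cells in a den1-by-den2 grid.

    Columns shaded: num1 of den1. Rows shaded: num2 of den2.
    The doubly shaded overlap over the total cell count is the product.
    """
    overlap = sum(1 for col in range(den1) for row in range(den2)
                  if col < num1 and row < num2)
    total = den1 * den2
    return Fraction(overlap, total)  # Fraction simplifies automatically

# 2/3 of the cake left, 1/2 of that fell: 2 of 6 cells, i.e. 1/3 of a whole cake
print(area_model(2, 3, 1, 2))  # 1/3
```

The overlap count mirrors what students do by hand: the numerators multiply (shaded overlap) and the denominators multiply (total cells), which is why the denominator grows.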
Guided Practice(7 minutes)
- Display the A Fraction of a Wall worksheet and ask students to think about how they would solve problem #1. Then, ask them to think about what operation they will use and why.
- Have them write their thoughts on the first cell of the Review Your Thinking! worksheet and then turn and talk to their partners about their strategy. Before they separate into partners, model answers using the following sentence frames:
- "I hadn't considered that ____."
- "This makes me realize that ____."
- Have volunteers share their ideas on how to solve the problem. Allow other students to correct misconceptions. Make sure everyone has an idea on how to solve the problem before allowing them to break off into groups to solve it.
- Ask volunteers to share their area models.
Group work time(12 minutes)
- Tell students to recount how they solved the problem about the fallen cake and the A Fraction of a Wall problem. Now, have them talk in partners and reassess their understanding from the beginning of the lesson on the Review Your Thinking! worksheet. Allow them to discuss their new ideas about solving area models for fraction multiplication and write them down in the middle cell.
- Assign them Problem #2 from the A Fraction of a Wall worksheet and read the problem aloud. Then, give the students one minute to think about how they will solve the problem. Allow volunteers to share their ideas aloud and then have them solve the problem on their own.
- Choose volunteers to share their work with the class and solve the problem for the class. Correct any misconceptions and model using the sentence frames written on the board to model reviewing your thinking about area model multiplication with fractions and its applicability to real-world situations. For example, "I didn't realize that all rectangles in the real world could be area models too," or "I hadn't considered that I would have to solve a subtraction problem to find the wall space left to paint before I solved the multiplication problem."
Additional EL adaptations
Beginning
- Allow students to use their home language (L1) or their new language (L2) in all discussions.
- Encourage them to use the vocabulary cards and terms in their conversations and writing. Allow them to draw pictures to support their understanding of the terms.
- Provide reference materials in their L1 to assist in their vocabulary word acquisition.
- Allow them to color-code the area model and use folded paper to further emphasize the overlap with the fractional pieces. Cut out the pieces too if necessary.
Advanced
- Pair students with mixed ability groups so they can offer explanations and provide feedback to beginning ELs when appropriate.
- Ask them to consider what happens to the denominator as they multiply the fractions and have them make suggestions as to why the denominator gets bigger.
- Challenge them to create their own real-world problem involving multiplication that does not immediately suggest students should multiply the given fractions.
Assessment(5 minutes)
- Refer to the last cell of the Review Your Thinking! worksheet and ask students to write down one thing they learned from the lesson and something they still wonder about the topic. Provide the following sentence frames:
- "I learned ____."
- "Before I thought ____, now I think ____."
- "I wonder ____."
Review and closing(5 minutes)
- Allow one student to share their assessment information and allow the other students to answer the presenter's question.
- Ask students to consider whether area models can help them multiply fractions with larger denominators, for instance 14/27 × 4/19. Have them consider the challenges and benefits involved.
- Explain that area models are great when you have to multiply smaller fractional units, but they can get more difficult as the denominator gets larger because the number of pieces in the whole increases. Tell students that area models can help them visually understand the math behind multiplying fractions, but they'll soon learn a faster way to solve the problems.
It’s not easy being green. Between all the paper, food waste, and electrical demands, most workplaces aren’t exactly eco-friendly. So how can your organization encourage better employee habits and become more environmentally conscious?
4 Employee Benefits Trends for 2019
Open enrollment may have just ended, but it’s never too early to start thinking about emerging benefit trends. Benefits are an increasingly important factor in attracting and retaining top talent, and can play an important role in setting your company apart from competitors. Fifty-seven percent of employees consider benefits to be one of the top factors in accepting a new job.
How to Start an Office Meditation Program
Close your eyes. Breathe in, breathe out. Relax your muscles. In the darkness under your eyelids, you see something taking shape. It’s an article on meditation at work.
The 6 Principles of Mental Wellness at Work
It was time for a trip to go see my grandmother in her final stage of life.
How to Help Employees Cope with Depression at Work
Mental health has become an increasingly important piece of the ever-evolving benefits puzzle.
Beyond Perks: How to Uplevel Your Benefits Package
The field of human resources is changing. In our HR Redefined series, we give innovators a medium to share personal reflections, professional advice, and best practice guidance.
Stay Updated
Get the latest news from Namely about HR, payroll, and benefits.
9 Ways to Handle HR Burnout
Everyone experiences work burnout at different points throughout their careers. Often, the burden falls on HR to support employees through these tough periods. But what happens when HR is the one to experience burnout? Especially on smaller teams, it can feel like there’s no time to take a break from putting out fires. We’re all human, and HR is no exception. It’s important to acknowledge burnout, ask for help, and take the time you need to get back to your full speed.
3 Overlooked Workplace Safety Concerns
No one wants to put the health of their employees at risk—but far too often, safety falls on the back burner due to lack of time or resources. According to the Bureau of Labor Statistics, an average of 80,000 office and administrative workers suffer on-the-job injuries each year. Many of these are preventable by tackling commonly overlooked hazards, like eye strain and environmental toxins.
7 Programs to Inspire Employee Health
Last year, over 62,776,640 people searched “get healthy” on Google. Doing so is part of a familiar nationwide declaration that this will finally be the year to lose those five pounds, cook every night, cut out sugar, exercise four times a week, the list goes on. But more often than not, life gets in the way of the commitment it takes to execute on these goals. Let’s be honest—between work and family, it can be hard to find time to make healthier choices. But what if your workplace offered resources to make it easier? Fortunately, many HR teams are already on the case, introducing a variety of employee health and wellness initiatives and perks.
How to Build a Financial Wellness Program
Wellness has taken the benefits world by storm. Whether it be onsite health screenings, in-office yoga, cosponsored gym memberships, or sleep wellness workshops, wellness perks often tend to focus on the physical aspect of health. Though mental wellness has also shown an uptick in importance, other areas of wellness are also gaining steam. | https://blog.namely.com/topic/wellness |
To develop understanding of Maori world view and bicultural frameworks that inform Midwifery Practice in Aotearoa. Te Tiriti o Waitangi, cultural safety and Turanga Kaupapa guidelines are explored to support midwifery relationships with whanau and to ensure cultural integrity during pregnancy and childbirth is maintained.
Programmes
- HL0901
NZQA Level
Level 5
NZQA Credits
15
Delivery method
- Web-Enhanced
Learning hours
- Directed hours
- 75
- Self directed hours
- 75
- Total learning hours
- 150
Resources required
- Learning Outcomes
- 1. Demonstrate an understanding of Maaori world view and Tikanga Maaori within midwifery practice.
2. Examine Te Tiriti o Waitangi and its significance for Aotearoa.
3. Identify and describe the mechanisms of colonisation within an Aotearoa context.
4. Demonstrate an understanding of the principles of cultural competence and Turanga Kaupapa guidelines in relation to midwifery practice.
- Content
- • Te Ao Maaori – Maaori worldviews, values and belief systems
• Tikanga Maaori
• The powhiri process
• Whakawhanaungatanga – Processes for building relationships
• Pepeha and Maaori greetings
• Karakia and waiata
• Te reo Maaori pronunciation
• Te Tiriti o Waitangi
• Colonialism and imperialism
• Mechanisms of colonisation in Aotearoa
• Hauora and Maaori models of health
• Partnership model
• Turanga Kaupapa and Nga Maia o Aotearoa me Te Wai Pounamu
• Cultural responsiveness in midwifery practice
- Assessment Criteria
- *This assessment must be passed to pass the module.
The portfolio is comprised of multiple components.
Students need to provide evidence against all learning outcomes, gain an overall mark of 50%, and attend the Noho Marae to pass this module.
- Teaching and Learning Strategy
- May include: Project-based Learning; Flipped Classroom; Blended Learning; Work-integrated Learning; Inclusive Practices.
Methods may include workshops and practical classes, tutorials, case-based learning, inquiry-based learning, group activities & discussion, supported online learning, e-portfolio, practice simulation. | https://www.wintec.ac.nz/modules/HSMW523 |
The Grand Opera House is committed to engaging directly with the wider community organisations delivering performing arts activities, creating partnerships to deliver community based projects as well as Festivals and other events throughout the year. If your community group is interested in future partnerships with the Grand Opera House, please contact the Grand Opera House Creative Learning Manager on [email protected] or call 028 9024 0411.
Through this social history project, the Theatre worked in partnership with local community groups to create four new plays reflecting the history of areas across the city over the lifetime of the Grand Opera House, which will celebrate its 125th Anniversary in 2020.
These new plays premiered at Belfast Harbour on 11 April 2018 and then moved to the Baby Grand Studio for two public performances at the Grand Opera House. This project was delivered in partnership with c21 Theatre Company and local community groups – Donegall Community Forum Association, Belfast South Community Resource, Carrick Hill Community Centre and Lower Shankill Community Development Association.
Working in partnership with Dale Farm, a new piece of theatre was commissioned for children aged 10-11 years of age, highlighting the hazards around the farmyard and the importance of working safely. The play toured to 21 rural schools across County Antrim, and was accompanied by a workshop through which the children further explored the areas of danger, the risks involved, and how these can be avoided.
The project culminated with three performances at The Balmoral Show, Northern Ireland’s largest agriculture show, on Friday 18 May 2018.
This project was delivered by Pintsized Productions. | https://www.goh.co.uk/get-involved/community-engagement/ |
Our teaching philosophy at Generations Learning Center is to establish a safe, warm, nurturing environment that stimulates a child’s natural interests and helps children develop autonomy, self-esteem, self-discipline, self-regulation, communication skills, and a passion for learning that stretches into adulthood. At GLC, we incorporate the philosophies of Maria Montessori, Jean Piaget, and Reggio-Emilia. We believe that children are more likely to be engaged in environments that are set up in such a way that are powered by the children. This type of environment originates from the interests of the child and is framed by the teacher. We believe that children can reach their full developmental capacity by constructing their own knowledge through inquiry play in both large and small group settings as well as through one-on-one teacher directed activities. We also believe that children learn best through repetition, routine, and saturation. We strongly encourage children to communicate their individual interests to us and from there, we can set up an environment, task, project, or center that is specific to their interests. We want each child to be constantly involved in structured, hands-on, natural indoor and outdoor environments that are set up specifically for different social, emotional, physical, and academic skills. We believe children should be given choices each day to engage in both indoor and outdoor activities using different educational manipulatives that stimulate different senses and meet each child's learning needs.
All teachers are highly qualified and trained in the areas of education. Many of our teachers have a college degree and we have staff members on site that are CPR and First Aid certified. The love of children and learning are evident by the acute care each child receives. We choose faculty members who love the early childhood profession and have a strong, positive curiosity about life that sparks confidence and joy in those around them. Above all, employees at Generations Learning Center believe that Jesus is the Son of God and the Bible is God breathed.
Our Red Room is for children around the ages of 4-5 years. Our full Pre-K class, this room focuses on learning and prepping for your child's next educational steps! With our “Creative Curriculum”, our Red Room gets to approach learning from a joyful and fun viewpoint, all while focusing more on numbers, letters, reading and writing, days of the month, seasons, animals, and so much more!
Engaging in music and imagination is also a huge part of this age group's development.
Our Orange Room is for children around 2 years old. Kids in this room enjoy even more arts & crafts, music time and group activities!
We love this fun space and atmosphere for our 2 year olds!
Our Yellow Room is for children starting around 15 months of age. Kids in this room are busy! They are typically walking, done with bottles, and transitioned into one nap a day! Such a joyful, bright room for these little ones!
Our Blue Room is for children around 8 to 15 months of age.
Children in this room can sit up, start to engage more with other kiddos and are learning to crawl and then walk!
Our Teal Room is for children from 6 weeks old to about 8 months of age.
We love seeing the sweet babies grow and develop in this space. Such a blessing!
Assistants and “Floaters” are a huge blessing to all of our room teachers and kids! They help within different rooms throughout the day, ensuring we have lots of hands on deck to help create space for fun, peace and joy!
Our Admin team is awesome! They work hard to create an organized and efficient atmosphere as well as caring for your needs as parents and everything else that comes with taking care of all the sweet kiddos at GLC! | http://daycarefranklintn.com/about/classrooms-teachers/ |
- The project consists of a Conditional Use Permit to operate a Massage Establishment within a Planned Development (PD) Zoning District (PA 01-063). The proposed 1,049 square foot Massage Establishment will operate within an existing tenant space at the San Ramon Village Plaza Shopping Center. The Massage Establishment will consist of four massage rooms, two bathrooms, and one waiting room. No exterior modifications are proposed.
- Contact Information
-
Jeri Ram
City of Dublin
100 Civic Plaza
Dublin, CA 94568
Phone : (925) 833-6610
Location
- Cities
- Dublin
- Counties
- Alameda
Notice of Exemption
- Exempt Status
- Categorical Exemption
- Type, Section or Code
- Sect. 15301
- Reasons for Exemption
- The project is consistent with the General Plan Land Use Designation of Mixed Use and all applicable General Plan policies as well as with the PD (Planned Development) zoning designation and regulations. The proposed development occurs within city limits on a project site of no more than five acres surrounded by urban uses. The project site has no value as habitat for endangered, rare, or threatened species because the Project is located in an urbanized area. As conditioned, approval of the project would not result in any significant effects relating to traffic, noise, air quality, or water quality. The site can be adequately served by all required utilities and public services because the Project is proposed on an existing developed lot within an existing structure; the project site has access to public streets and is served by public utilities and services.
Disclaimer: The document was originally posted before CEQAnet had the capability to host attachments for the public. To obtain the original attachments for this document, please contact the lead agency at the contact information listed above. You may also contact the OPR via email at [email protected] or via phone at (916) 445-0613. | https://ceqanet.opr.ca.gov/2009128062 |
Virtual reality is no longer just an expensive curiosity, but a real option for many trainers and educators. Like many new media, it runs the risk of being used badly. Strategies and methods from earlier media are not necessarily suited to new media. If you apply classroom methods to VR, what will you miss? If you apply video strategies to VR, what will you get wrong?
In this session, you will learn initial ideas and frameworks for how to use virtual reality for learning. Specifically, you’ll review how virtual reality provides affordances for addressing cognitive load theory’s three types of load. You will review what multimedia learning theory says about using graphics and audio, and how these lessons can be applied to VR. You’ll also review situated learning and experiential learning. Throughout this session, key findings from the worldwide network of VR labs will be applied to inform discussion. You will develop an overall framework for looking at when, and how, virtual reality training works.
In this session, you will learn:
- About the Proteus effect
- About situated learning in VR
- About creating presence in VR
- Frameworks for thinking about learning in VR
- About differences between expert and novice learning in VR
- About limits to learning in VR
Audience:
Intermediate to advanced designers, project managers, managers, and directors.
Technology discussed in this session:
HTC Vive, Samsung Gear VR, gaming engines (including Unity), and WebVR.
Hugh Seaton
GM
Adept Reality
Hugh Seaton is GM of Adept Reality, a software company focused on using VR/AR in adult learning. Prior to Adept, Hugh founded AquinasVR, a VR/AR software company which he sold to the Glimpse Group, parent of Adept. Hugh’s focus, whether in immersive technologies, IoT or artificial intelligence, is on the intersection of learning science, creativity, and the cutting edge technologies that can bring learning to new levels of effectiveness. | https://www.learningguild.com/realities360/sessions/session-details.cfm?event=574&date=07%2F26%2F2017&time=14%3A30%3A00&fromselection=doc.5018&from=sessionslist&session=8525 |
If you are reading this post, it means that you have already heard about the critical vulnerabilities discovered in CPUs of different vendors including Intel, AMD and ARM. If you have been living under a rock, then go to Meltdown and Spectre
Smartphones, tablets, laptops, PCs, MACs, cloud devices, servers and 99% of CPUs released since 1995 are affected.
What is this about?
Meltdown and Spectre are two major vulnerabilities in modern processors, undermining layers of security features introduced by applications on your system, resulting in leakage of ANY data in clear form (encrypted/not encrypted).
In short, if you are using the most secure password manager with the latest encryption technology to secure your credentials and private information, you need to be worried.
More exploits are underway, but below you can see one demonstration of stealing passwords via the Meltdown vulnerability in real time:
Spectre vulnerability exploitation:
When a lower layer of your system (hardware) is vulnerable to such extent, the security of the upper layers (software) does not really matter.
How does this happen?
Normally CPUs isolate application memories from each other by marking kernel addresses as non-accessible. This design was introduced for security reasons: we do not want Application A to access sensitive data of Application B stored in memory, because that would allow a malicious actor to access all your secrets with a single application. Both Meltdown and Spectre exploit a flaw in this isolation, resulting in potential leakage of ANY data stored in memory.
Meltdown affects any operating system running on a vulnerable CPU and does not depend on vulnerabilities in software. It enables reading the memory of other processes and of virtual machines in the cloud at a speed of 503 KB/s.
Meltdown - allows applications to access arbitrary system memory. After installing a malicious software on your system, attacker can access sensitive data stored in the system memory.
An important issue with these vulnerabilities is that, when exploited, they do not leave any traces or logs, thus undermining accountability.
Am I affected?
Windows users can run the following PowerShell commands with elevated privileges to check whether they are affected. This will install Microsoft's SpeculationControl module, run it, and display the results.
- Run PowerShell as Administrator
- Run Install-Module SpeculationControl to install Microsoft's SpeculationControl module
- You will be asked to confirm if you want to download and install the module
- If you are receiving an error, then most probably you need to change the execution policy of PowerShell by running the following command: Set-ExecutionPolicy Bypass
- Run Get-SpeculationControlSettings - this will check your system and output the results
If you see text in red similar to screenshot below, it means that your system is affected:
For Linux users you can refer to a method of checking, published on GitHub:
Am I affected by Meltdown?! Meltdown (CVE-2017-5754) checker
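On recent Linux kernels (4.15 and later) the status is also exposed under /sys/devices/system/cpu/vulnerabilities/, and a small script can read and interpret those files. This is a sketch under the assumption that your kernel provides this sysfs interface; file names and status wording may vary by kernel version.

```python
import os

SYSFS_DIR = "/sys/devices/system/cpu/vulnerabilities"

def classify(status):
    """Interpret one sysfs status line, e.g. 'Vulnerable', 'Mitigation: PTI', 'Not affected'."""
    if status.startswith("Not affected"):
        return "safe"
    if status.startswith("Mitigation"):
        return "patched"
    return "VULNERABLE"

def check():
    if not os.path.isdir(SYSFS_DIR):
        print("sysfs vulnerability interface not available (kernel too old?)")
        return
    for name in sorted(os.listdir(SYSFS_DIR)):  # e.g. meltdown, spectre_v1, spectre_v2
        with open(os.path.join(SYSFS_DIR, name)) as f:
            status = f.read().strip()
        print(f"{name}: {status} -> {classify(status)}")

if __name__ == "__main__":
    check()
```

If the directory is missing, fall back to the GitHub checker linked above, which probes the CPU directly.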
Issues with patches and fixes
- Microsoft will release patches only for systems which have a specific key in the registry. The company instructed antivirus vendors that, in order to protect customer devices, they should insert a registry key confirming that the patches will not affect their software and/or users. So once you receive confirmation from your AV vendor that the patches will work as intended, if you don't want to wait for an update, make the following changes:
- Open Regedit
- Go to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\QualityCompat
- Add the following DWORD value: cadca5fe-87d3-4b96-b7fb-a231484277cc
- Patches that are underway might incur at least 30% performance degradation in all your existing implementations. Therefore you are strongly advised to speak with your cloud service providers ASAP about potential increased costs and available patches
What should I do?
- Install BIOS and Firmware updates from official manufacturers
- Install latest Operating System patches
- Note, certain Antivirus software might not be compatible with latest patches resulting in stop errors a.k.a "Blue Screen of Death". Before installing patches you are advised to check this with your provider.
- Keep your browsers and software up-to-date
- Amp your safe browsing game by installing following plugins:
- Or use browsers like Tor and Brave to limit scripts to some extent
- Take care of physical security of your devices - do not give access to smartphones, laptops and other devices to untrusted users
- Do not install suspicious or pirated software - pirated software is well known for exposing users to different types of malware
What should the business do?
- Warn your employees about the vulnerability and potential risks in layman's terms and ask them to be vigilant; refer technical audiences to the official research papers
- Test and install BIOS and Firmware updates from official manufacturers
- Test and install the latest Operating System patches, and check whether your Antivirus provider supports those patches
- Push updates to browsers
- Run installed software checks, make sure that any unauthorized software is removed
- Be wary of waves of phishing attacks and prepare a plan of countermeasures
- Contact third parties and vendors, make sure that they are aware of this issue.
- If you are using cloud services - contact providers and ask for timelines for fixing vulnerabilities and clarify additional processing costs. | https://cypherowl.com/meltdown-and-spectre-vulnerabilities-quick-overview/ |
Highlights and Achievements
HBP research has resulted in over 2200 journal publications to date, unique new research infrastructures, and high-level scientific events. Here we highlight some of them.
New method for measuring brain activity could help multiple sclerosis patients
Researchers of the Human Brain Project have developed a new methodology to calculate the delay of signal propagations in brains of patients suffering from multiple sclerosis, a chronic inflammatory disease that affects more than 2 million people worldwide.
Conscious perception of sound is carried by dedicated assemblies of neurons in the brain
A new study co-led by Human Brain Project researchers in France has revealed how consciously listening generates sound-specific assemblies of neurons in the brain.
Human Brain Project researchers identify new marker of ALS outcome
A study by Human Brain Project researchers identifies a new marker for predicting the clinical outcome of patients of Amyotrophic Lateral Sclerosis (ALS) through magnetoencephalography.
Researchers of the Human Brain Project identify seven new areas in the insular cortex
Researchers of the Human Brain Project have identified seven new areas of the human insular cortex, a region of the brain that is involved in a wide variety of functions, including self-awareness, cognition, motor control, sensory and emotional processing.
Human Brain Project researchers map four new brain areas involved in many cognitive processes
Researchers of the Human Brain Project (HBP) have mapped four new areas of the human anterior prefrontal cortex that plays a major role in cognitive functions.
Multiscale simulations unveil molecular mechanisms that shape brain plasticity
Scientists of the Human Brain Project have used simulation tools to uncover molecular mechanisms of a family of enzymes that is key to processes related to brain plasticity and learning.
HBP scientists have simulated how the Parkinson’s brain responds to deep stimulation at multiple scales
Researchers of the Human Brain Project have created the first multiscale model of how a Parkinson’s brain responds to deep brain stimulation.
Brain simulation augments machine-learning–based classification of dementia
Brain simulation methods can be used to improve the classification of dementia and could in the future constitute a new diagnostic tool to help direct patients towards the right treatment.
Energy Efficiency of Neuromorphic Hardware Practically Proven
Human Brain Project researchers collaborate with Intel to bring AI closer to the energy efficiency of the brain.
HBP scientists have developed personalised brain models to improve the treatment of depression
A novel, high-resolution, personalised model of Deep Brain Stimulation for patients suffering from depression has been developed by scientists of the Human Brain Project and colleagues.
HBP researchers reveal how the volumes of brain regions change in Parkinson’s disease
Researchers of the Human Brain Project found that in Parkinson’s disease the volumes of certain brain regions decrease over time in a specific pattern that is associated with clinical symptoms and largely coincides with the pattern described in Braak’s famous staging theory.
New implant offers promise for the paralyzed
A system developed by Grégoire Courtine and Jocelyne Bloch at HBP partners EPFL and CHUV now enables patients with a complete spinal cord injury to stand, walk and even perform recreational activities like swimming, cycling and canoeing.
Human Brain Project: Researchers design artificial cerebellum that can learn to control a robot’s movement
Researchers at Human Brain Project partner University of Granada in Spain have designed a new artificial neural network that mimics the structure of the cerebellum, one of the evolutionarily older parts of the brain, which plays an important role in motor coordination.
HBP scientists outline in Science how brain research makes new demands on supercomputing
In the latest issue of Science, Katrin Amunts and Thomas Lippert explain how advances in neuroscience demand high-performance computing technology and will ultimately need exascale computing power.
When algorithms get creative
Human Brain Project scientists present a new approach to revealing neuronal learning principles with algorithms that mimic biological evolution.
Human Brain Project researchers demonstrate highly efficient deep learning on a spiking neuromorphic chip
Scientists from Heidelberg and Bern have succeeded in training spiking neural networks to solve complex tasks with extreme energy efficiency.
EBRAINS shares access to improved laptop-to-supercomputer brain simulator
EBRAINS, the new digital research infrastructure set up by the EU-funded Human Brain Project, has made available an enhanced brain simulation software, NEST 3, with wide practical use in fields such as neuroscience and robotics.
A robot on EBRAINS has learned to combine vision and touch
On the new EBRAINS research infrastructure, scientists of the Human Brain Project have connected brain-inspired deep learning to biomimetic robots.
EBRAINS robot simulation one step closer to in-hand object manipulation
A team of scientists in the Human Brain Project is using the EBRAINS research infrastructure to learn more about how the brain coordinates complex hand movements.
EBRAINS powers brain simulations to give insight into consciousness and its disorders
The European research infrastructure EBRAINS powers a new approach to understand the brain mechanisms underlying consciousness.
New EBRAINS-enabled tool to help guide surgery in drug-resistant epilepsy patients
Ultra-high definition predictive brain tool seeks to give surgeons a sharp eye to spot epilepsy in a patient’s brain.
HBP research contributes to new treatment for spinal cord injury
A team of scientists has developed a treatment that allows patients to regain control of their blood pressure, using targeted electrical spinal-cord stimulation.
HBP-supported innovation: A brain prosthesis for the blind
Human Brain Project research has helped lay the foundation for a brain implant that could one day give blind people their sight back.
A centerpiece of EBRAINS’ human brain atlas is presented in “Science”
"Julich-Brain" is the name of the first 3D-atlas of the human brain that reflects the variability of the brain’s structure with microscopic resolution.
Epilepsy: International researchers propose better seizure classification
An epilepsy model developed by the Human Brain Project provides the basis for the novel framework, which could also push forward basic understanding of the disease.
New in EBRAINS: A map of 25,000 synapses in the hippocampus
A team of HBP scientists in Madrid have published detailed 3D-maps of around 25,000 synapses in the Human Hippocampus.
Optimizing neural networks on a brain-inspired computer
New research now shows how so-called “critical states” can be used to optimize artificial neural networks running on brain-inspired neuromorphic hardware.
New learning algorithm should significantly expand the possible applications of AI
The e-prop learning method forms the basis for drastically more energy-efficient hardware implementations of Artificial Intelligence.
New approach for a biological programming language
Researchers have succeeded in mathematically modelling the emergence and interaction between so-called "assemblies".
EBRAINS now a recommended data sharing service for Nature Scientific Data
EBRAINS services join the list of recommended repositories for neuroscience data.
LOCALIZE-MI: an open source dataset of simultaneous intracerebral stimulation and HD-EEG in humans
The LOCALIZE-MI dataset constitutes the first open dataset that comprises EEG recorded electrical activity originating from precisely known locations inside the brain of living humans.
New neuroscience method disentangles crossing fibers in brain tissue
Scientists at Forschungszentrum Jülich have now found that scattered light can be used to resolve the brain’s substructure like the crossing angles of the nerve fibers with micrometer resolution.
“Noisy” Chips: Insights from Brain Research Offer Benefits for Neuromorphic Hardware
Neuromorphic chips modelled on the human brain have enormous potential, offering a promising and efficient alternative for artificial intelligence (AI) tasks in particular.
Real-time cortical simulation on neuromorphic hardware
Researchers at the University of Manchester have conducted a real-time simulation of a large-scale biologically representative spiking neural network on SpiNNaker neuromorphic hardware.
Simulation-based method to target Epilepsy goes into clinical trial
In what represents a major milestone on the path to clinical application, a novel method to improve outcomes of Epilepsy surgery has now received approval for clinical testing in 13 French hospitals.
A Coming Generation Of Robots Will Have More Human Hands
The Shadow Robot Company designs and develops highly dexterous robotic hands that are as close as possible to human hands and makes them available to researchers within the framework of the Human Brain Project.
Second Generation SpiNNaker Neuromorphic Supercomputer to be Built at TU Dresden
Saxon Science Ministry delivers 8 Mio Euro to TU Dresden for second generation SpiNNaker machine, to be called “SpiNNcloud".
New model predicts how targeted stimulation can make the brain change from one state to another
Using a computational model of the brain, an international group of scientists led by HBP researcher Gustavo Deco of the Pompeu Fabra University in Barcelona and by Morten L. Kringelbach (Aarhus and Oxford universities) has developed an innovative method to improve the precision of brain stimulation.
ICEI Resources Used in the First Detailed 3D Hippocampus Model
ICEI resources are being used in a large-scale project that aims to develop the first detailed and realistic 3D model of an area of the hippocampus.
Probabilistic cytoarchitectonic maps for 32 new human brain areas released
The JuBrain cytoarchitectonic atlas consists of probabilistic maps of cortical areas and subcortical nuclei, defined by histological analysis of ten human post-mortem brains for each structure. This cytoarchitectonic atlas is a key element of the HBP Human Brain Atlas, representing the most detailed micro-structural parcellation of the cortex currently available. The existing atlas of 74 individual maps has now received an update, and is extended by 32 new cytoarchitectonic maps.
The scientific case for brain simulations
In a new perspective article scientists from the HBP argue why such simulations are indispensable for bridging the scales between the neuron and system levels in the brain. The article has been published as a featured perspective in the leading journal Neuron.
New brain atlas of transgenic mouse disease models shared via the Human Brain Project infrastructure
In a recent Scientific Data paper, researchers from the University of Oslo and The Baylor College of Medicine in Houston present a novel online brain atlas of transgenic mouse lines that are used to generate mouse models of brain diseases, in which gene activity (so called “expression”) can be regulated by administration of the compound tetracycline.
MRI and AI to develop a brain virtual biopsy tool
A novel tool has been developed to meet the needs of both fundamental research on the human brain, in particular to decode the cyto- and fiber architectures of the cerebral cortex in vivo, and the clinical research to provide clinicians with a virtual biopsy tool which could eventually replace invasive surgical biopsies.
New neuronal model with potential to tackle spinocerebellar ataxias disease
Within the partnering environment of the Human Brain Project, a Future and Emerging Technologies Flagship, researchers of the Politechnic University of Milan have developed a simplified neuronal model with potential to tackle the spinocerebellar ataxias disease.
Brains of smarter people have bigger and faster neurons
Scientists working within the Human Brain Project have for the first time uncovered a direct relation between brain cell size and IQ level.
HBP research contributes to breakthrough neurotechnology for treating paralysis
Three patients with chronic paraplegia were able to walk again thanks to precise electrical stimulation of their spinal cords via a wireless implant.
New insights into autism through the HBP's human brain atlas
Data obtained within HBP’s brain atlas work has contributed to the discovery of a "short distance" brain connectivity deficit that is associated with a lack of social interaction and empathy.
Improving epilepsy care: HBP researchers involved in major clinical trial
In a world’s first, personalized brain modelling is providing the basis for a large-scale clinical trial in epilepsy.
Neuroimaging and the future of brain cartography
HBP scientists Simon Eickhoff and Sarah Genon this month explained in the journal Nature Reviews Neuroscience how rapid advances in Magnetic Resonance Imaging may help revolutionize our understanding of the brain’s organization and its relation to human behavior.
Reading and writing the mind with brain implants
Scientists are continuously improving methods to read out brain activity and adjust or control it with brain stimulation techniques.
How brain cells work together for spatial memory and imagery
Researchers Neil Burgess and Andrej Bicanski from University College London (UCL) have developed a computational model showing how the mental images we have drawn from our memories can be explained by the firing of individual brain cells.
Progress in brain simulation on neuromorphic computers
HBP researchers in Manchester and Jülich have compared the accuracy, speed and energy efficiency of the neuromorphic system SpiNNaker with that of the supercomputing software NEST during a large-scale brain simulation.
The hippocampus as never seen before
Researchers from HBP's research area Human Brain Organization have imaged a post-mortem human hippocampus with an 11.7T preclinical MRI machine at unequalled resolution.
Measures of consciousness in unresponsive patients
Scientists in HBP's research area Systems and Cognitive Neuroscience (SP3) are working on more accurate and clinically useful ways of measuring whether a patient is conscious, in order to help doctors to make decisions on treatment and care.
Linking gene expression to brain microstructure
A team of researchers from the European Human Brain Project (HBP) has developed the JuBrain Gene Expression tool (JuGEx) that combines the benefits of a genetic and an anatomical atlas.
How to simulate the structural plasticity of the brain at unprecedented scale
Understanding the dynamics of the connectome will allow insights into how learning, memory, and healing after lesions such as stroke work. This is the aim of the Model of Structural Plasticity (MSP) by Butz and van Ooyen, which describes under which conditions neurons connect to each other.
Virtually studying rehabilitation-induced cortical remapping after stroke
On the Neurorobotics Platform a virtual rodent can be accessed for experiments, consisting of a musculoskeletal and spinal cord model and a data-driven whole brain model.
Epilepsy: Building Personalised Models of the Brain
Human Brain Project scientist Viktor Jirsa is the head of a team creating personalised brain models for patients with intractable epilepsy.
The brain is still ‘connected’ during non-REM sleep
New research shows the brain remains interconnected during non-REM sleep, which was thought not to happen.
Progress in building Europe’s new platform for understanding the brain
Over 500 scientists and engineers from 19 countries met in Glasgow at the 5th Summit of the Human Brain Project.
Zülch Prize for research into consciousness
Steven Laureys received the Klaus Joachim Zülch 2017 prize from the Max Planck Society. He was honored for his “fundamental discoveries in the neurology of consciousness and coma.”
SpiNNaker-1 – A Spiking Neural Network Model of the Lateral Geniculate Nucleus
A model with a synaptic layout which is consistent with biology has been used to simulate biologically plausible dynamics of the Lateral Geniculate Nucleus (LGN) on the neuromorphic computing system SpiNNaker.
Understanding vision with the help of neurorobotics
A modular, flexible visual system that any neuroscientist or roboticist could use for their own purposes is in development on the Neurorobotics Platform.
New web-based interactive 3D viewer for Terabyte-sized brain templates
A web-based 3D viewer for exploring Terabyte-sized brain templates together with their surfaces and parcellations has been developed by the team of Timo Dickscheid at Forschungszentrum Jülich, in collaboration with the group of Jan Bjaalie at University of Oslo.
BrainScaleS-1 – Training a deep spiking network on the wafer system
Emulating spiking neural networks on analog neuromorphic hardware offers several advantages over simulating them on conventional computers, particularly in terms of speed and energy consumption. However, this usually comes at the cost of reduced control over the dynamics of the emulated networks.
Release of the first batch of more than 500 MRI images on NeuroVault
The release consists of high-resolution MRI images which represent activations of 12 subjects of the Individual Brain Charting (IBC) cohort for the first set of functional contrasts. This is the first step towards normative datasets that will be the basis of a brain atlas based on cognitive representations.
The main aim of the socio-political group, which embodies the Rabubiy’ah Order, is to provide the individual with full scope for self-development. Its basic principles are that the individual is the focus of value and that the group exists to enable the individual to develop and express himself to the full extent of his capacity. It lays primary stress on personal worth. A society based on these principles will be composed of free individuals, each enriching his life by working for the enrichment of all life, and each moving onwards by helping others to do the same. This society should be judged by the solutions it offers for the social, political and economic problems that confront all human groups. We will first consider the economic system it advocates.
II. Capitalism and the Rabubiy’ah Order
Capitalism is the oldest of economic systems. In course of time it was invested with an air of sanctity. People believed that it was the only system which was suited to "human nature". They could not imagine that society could prosper and flourish under any other type of economic organisation. The industrial and commercial revolutions gave it a powerful impetus and it reached its peak in the nineteenth century. When Capitalism was carried to the extreme, its defects became obvious and could no longer be ignored. No doubt, Capitalism has certain merits and, in the earlier stages of social evolution, it helped man to create civilisation and achieve a higher standard of life. It calls forth some of the best qualities in man, such as initiative, ingenuity, imagination and a capacity for hard work. But its weakness, which washes away all its good points, is that it overemphasises one factor of production, namely capital (indeed, it gives all credit to it), and fails to do justice to the other equally, if not more, important factor, namely labour. The result is that the bulk of the wealth produced goes to the man who contributes capital and the labourer has to be content with a mere pittance. Capital tends to accumulate in the hands of the few while poverty is the lot of the labourers who constitute the bulk of the population. This unequal distribution of the national wealth, a necessary consequence of Capitalism, is tolerated for a time, but, sooner or later, it generates class struggle and paves the way to the dissolution of society. Capitalism is based on two assumptions. The first assumption is that man has an inviolable right to the property that he has acquired. The second is that society can prosper only when it does not interfere with the economic activity of the individual. The Capitalist pins his faith on the doctrine of laissez-faire and holds private property to be sacred.
He argues that what he has earned through his own ability, skill and effort must be exclusively his own. Nobody can claim a share in it. He may, if he likes, give a part or the whole of it to another, but no one can force him to do so. He will be doing no wrong if he keeps it to himself. This attitude is exemplified in Korah, whose story is narrated in the Quran. When he was asked to give a part of his immense wealth to the needy and the poor, he replied exactly like the Capitalist of today: "Why should I? This is the result of my own capability" (28: 78). The Quran tells us that man commits a grave mistake if he believes that he owes his wealth exclusively to his own ability and effort:
Now, when harm falls on man, he cries to Us, and afterwards, when We have granted him a boon from Us, he says: "Only by means of my own ability I obtained it." Nay, it is a mischief (to think so) but most of them know not (39: 49).
The main fallacy inherent in the Capitalists' argument is made evident when we look at the conditions on which the production of wealth depends. Four factors, stated below, contribute to the production of wealth:
1. Man's physical and mental capacities.
2. The education and training he has received.
3. The opportunities available to him.
4. His industry.
It is obvious that man can take credit for only the fourth factor, i.e., the work he puts in. His natural endowments are a gift of God. He did not acquire them through his own efforts. He is indebted to his community for the education and training he has received. Society too provides him with opportunities for producing wealth. It follows that man can justly claim only that portion of the wealth he has produced which is the outcome of the labour he had put in. The work he has performed entitles him to a share in the wealth produced and not to the whole of it. The Quran puts it clearly:
Man shall have only that for which he strives (53: 39).
If this principle is accepted and acted upon in good faith, the conflict between workers and employers will disappear and a serious menace to internal peace will be removed. The Capitalist will willingly spend the major portion of his profits for the welfare of the community and the workers will be able to live in comfort and security. This principle is challenged on the ground that there are innate differences among men and it is unfair to treat them as equal in respect of ability. Those who possess greater ability can justly claim a greater share in the national wealth. The Quranic view is that the personal worth of man does not depend on his talent to do a thing but on what he actually does. All men are equal in the sight of God, whatever may be the differences among them. Moreover, the argument of the Capitalist had weight so long as it was believed that intellectual work was more valuable than manual work. We now believe in the spectrum of values. Any type of work is as valuable as any other, provided man puts his heart into it. Manual work can have as much value as intellectual work. Besides this, the differences among men bestow on each his unique individuality. However different men may be in respect of intelligence, they can be equal in respect of personal worth, if each works conscientiously to the limit of his capacity. So it is in the interest of society that some men should possess more ability in a particular sphere than others. According to the Quran, the difference in ability amongst various individuals is for the purpose of division of labour (43: 32), and should not constitute a ground for creating inequality in society and meting out different treatment to different sets of men. The knowledge that men are unequal should not be allowed to induce us to relax our efforts to raise the general standard of living in the society. The Rabubiy’ah Order is committed to provide the means for the development of each and every individual.
It treats as sacred the right of every man to have full scope for his development.
Division of labour is meant to ensure maximum production of wealth. It does not imply that the man who does manual work is inferior to the man who organizes the industry. No doubt, the work of one person may be worth more than that of another. The Quran takes the position that a person who earns more should not keep it all to himself, but should give the surplus to those who, through lack of ability or opportunity, cannot earn enough to satisfy their needs. In the ideal society, emphasis would be on mutual help and not on individualism. The following verse puts it clearly:
And Allah has blessed some of you above others in respect of capacity to earn livelihood, yet those who are blessed (with abundance) restore not their provision to those subordinate to them so that they may share equally with them. It is then the blessing of Allah which they deny? (16: 71).
"The blessing of Allah" comprises those advantages that the individual enjoys which have not been gained through his own effort, namely his innate capacities, education and other opportunities. In gratitude for these gifts, he should use his wealth to help those who are less fortunate than himself. He should regard his wealth as the gift of God, and his gratitude to God should be expressed in acts of beneficence. We should all live as members of a single family, and we are really that, being, so to say, "God’s children." The father does not discriminate between his children. He loves them all alike. God, as the Quran says, is Rabbul-alamin (1:1). He takes care of every living being in the world. This idea, developed in human thought only during the last decade, was foreshadowed by the Quran a long time ago.
A necessary consequence of this view is that the means of production should not be owned by any one person or group but should be held in common by all. The Quran throws valuable light on this point as will be shown in the next section.
III. Means of Production
Land is the most important of the means of production. The desire to possess it has proved to be a fertile source of strife between individuals as well as between states. Most of the wars have been waged for the acquisition of land. Endless litigation has been the result of disputes regarding the ownership of land. The Quran categorically states that the earth belongs to God and serves the purpose of providing subsistence to all living creatures. Private ownership of land is thus ruled out:
And the earth (land) He has created for the benefit of all living beings (55: 10).
It is the source of livelihood for men as well as other creatures:
And We have provided therein (in the land) sustenance for you, and for those whom you do not provide (15: 20).
The point is stressed in another verse.
And after that He spread the earth and brought forth from its water and its pasture. And mountains He firmly set. (All this He did) as a provision for you and your cattle (79: 30-33).
It is thus clear that land, like water and air, heat and light, is God's gift to all men. For any man to claim proprietary right to them is, therefore, tantamount to claiming equality with God. The Quran declares in no uncertain terms:
Say thou: Do ye indeed believe not in Him Who created the earth in two long ages and ascribe ye unto Him rivals? He (and none else) is the Nourisher of the universe. And He placed therein stable mountains above it and blessed it, and measured therein its foods in four periods (seasons of the year) alike for those who stand in need of it (41: 9-10).
Just as the amount of work put in by man determines his rightful share in the wealth produced, so his share in the produce of the land shall be proportionate to his labour on it. If it had not been for diverse favourable factors, his labour would have been in vain. The Quran points out this in the following verses:
And have you seen that which you cultivate? Do you make the seed to grow or do We make it to grow? If We willed We could surely make it dry, then you cease not to exclaim: Lo! We are laden with debt, nay but we are deprived of harvest. And have you observed that water which you drink? Is it you who shed it from the rain-cloud, or are We the shedder? If We willed We could make it bitter. Why then are you not grateful? And have you observed the fire which you strike out? Was it you who made the tree thereof to grow, or were We the grower? We (have mentioned all this just to) remind you (of the real facts). Remember! We have made all this a means of provision for the hungry (56: 63-73).
We are, therefore, driven to the conclusion that in participating in the Divine programme of the Rabubiy’ah Order, we are participating in a joint business venture in which the capital investment is made by God and we contribute only labour. We can claim only that part of the land's produce which we have earned through our labour and must hand over the rest to God, that is, devote it to the benefit of society. The poet Iqbal has expressed this idea in lines of exquisite beauty, translated below:
Who nourishes the seed in the soil which no ray of light penetrates?
Who raises clouds from the waves of the ocean?
Who drove hither the favourable wind from the West?
Whose is the soil, whose the light of the Sun?
Who has filled the ear of corn with pearly grain?
Who has taught the seasons to change with regularity?
Landowner! The land is neither thine nor mine
Thy forefathers did not own it, nor dost thou nor I.
(Bal-e-Jibril, p. 161).
The Quran declares that the produce of the earth is the "means of sustenance for mankind" (50: 11). The slightest change in the natural order could deprive man of the means of sustenance:
Who is he that will provide for you if He should withhold His provision? (67: 21).
The same idea is elaborated in the following verses:
Let man consider his food.
How We pour water in showers
Then split the earth in clefts
And cause the grain to grow therein
And grapes and green fodder
And olive-trees and palm-trees
And garden-closes of thick foliage
And fruit and grasses.
Provision for you and your cattle (80: 24-32).
Ownership of land is not sanctioned by the Quran, nor is that of any other means of production. The animals eat as much as they need and leave the remainder for others. Man alone is plagued with the desire to hoard and takes pride in his store, thus keeping for himself what he does not really need:
And how many a living creature that does not carry its sustenance (29: 60).
The desire to hoard starts the process which culminates in the Capitalistic system. Capitalism, by enabling the rich to exploit the poor, has filled the world with misery, hatred and mutual suspicions. It has turned the world into a veritable hell. The Quran has denounced Capitalists as the enemies of mankind:
They who hoard up gold and silver and spend it not for the cause set forth by Allah, unto them give tidings (O’ Muhammad!) of a painful doom, on the day when it will all be heated in the fire of Jahannam, and their foreheads and their flanks and their backs will be branded therewith (and it will be said unto them): Here is that which you hoarded for yourselves. Now taste of what you used to hoard (9:34-35).
Capitalism appeals to the self-seeking motives of man and tempts those who have amassed wealth to give free rein to their anti-social tendencies. Let them not forget the doom which, in the words of the Quran, is sure to overtake those who profit by a system so detrimental to the real interests of mankind:
And let not those who hoard up that which Allah has bestowed upon them of His bounty, think that it is better for them. Nay, it is worse for them. That which they hoard will be their collar on the occasion of the manifestation of the results of their deeds; and Allah's is the heritage of the heavens and the earth, and He is well aware of what you do (3: 179).
Capitalism is a fertile source of misery for mankind and is thus an inhuman system. It will certainly be abandoned when men become more enlightened and have a clearer perception of their real interests:
Lo! ye are those who are called to spend for the cause set forth by Allah. And as for him who hoardeth and thus depriveth others of the provision for life, he really depriveth his own self thereof. And Allah is the rich and ye are the poor. And if ye turn away, He will bring in your stead a people other than you; and they shall not be like you (47: 38).
This is the verdict of history too. The Quran exhorts us to pay attention to the fate of nations which devoted themselves to amassing wealth and turned their back on high ideals. They were supplanted by other nations:
And how many a people that dealt unjustly, have We shattered; and raised after them another folk (21: 11).
Man is under an obligation to work to his utmost to earn his livelihood, then to keep for himself what he needs and hand over the remainder to his society. The Quran is explicit on this point:
And they will ask thee: "What is it they should give away?" Say thou: "The surplus" (2: 219).
IV. Period of Transition
However, the Capitalist system cannot be abolished by the stroke of a pen. It is firmly established and appears to be essential to modern society. It will be some time before it is uprooted and replaced by the Order of Rabubiy’ah. We must face this fact without giving way to despair. We should bear in mind that man can progress only slowly and gradually. So long as he is moving steadily in the right direction, he need not get impatient. It is not easy to attain a high objective. He should work hard and wait patiently but confidently for ultimate success. The Quran advises us to proceed cautiously in this matter and not to be hasty and rash. It has proposed diverse measures to guard against the accumulation of wealth in the hands of the few. Usury, i.e., money earned by capital, is declared to be unlawful. The law of inheritance is designed to ensure the equitable distribution of a deceased person's wealth among all his relatives. Man is enjoined to help his parents, relatives and all others in need generously, and to make all possible concessions to those who owe him money. By prohibiting hoarding, it ensures that money is kept in circulation. In short, the Quran has recommended the steps by which ultimately the Rabubiy’ah Order might be inaugurated. All these measures, however, are valid only during the period of transition. Under the Rabubiy’ah Order, every man will willingly make over to his society whatever he does not need for satisfying his basic wants. The Rasool, being the head of this Order, was the first to show by practical example how this higher goal should be achieved. He never hoarded a single penny throughout his life, nor owned any property. By following his example we can hope to make progress towards the goal of perfection. What is needed is the realisation that the Rabubiy’ah Order alone can bring peace, prosperity and happiness to mankind, and can open the way to the progress and development of man.
When this realisation has dawned, it will not be a difficult task to transform modern society into the Rabubiy’ah Order. Already there are signs that the process has started:
Verily, the promised revolution is sure to come; there is no doubt about it; yet most of mankind believe not (40: 59).
The Divine creative activity which makes for progress, is certainly at work in the world of man as it is in nature:
And He it is Whose Laws operate in the heavens (outer universe) and in the earth (human society) and he is the wise and the knowing (43: 84).
To sum up, the Rabubiy’ah Order ascribes supreme value to the human self and aims at creating conditions in which the self can freely develop and gradually attain perfection. This distinguishes the Order from other systems and ideologies. We should not allow ourselves to be misled by superficial resemblance between the Communist state and the Quranic society. The Communist state is no doubt free from the vices of Capitalism, but it functions in the interest of the group or rather the party and is not interested in the individual man. The masses are mere raw material which the party leadership can mould as it likes. The Quran, on the other hand, seeks to protect, preserve and enhance man's self. This intense preoccupation with personal worth distinguishes Islam from Communism and Totalitarianism.
Note: As already stated in the Introduction, the economic system of Islam has been touched upon only casually in the present work. It has been discussed in detail in another book which is likely to come out before long.
Made in France.
Product Details
- Each piece is handcrafted by artisans, so no two pieces are exactly alike
- Olive wood is prized for its intense graining and variations in characteristics bringing elegance to any kitchen
- Berard utilizes sustainable practices in the harvesting and production of their olive wood
- Satin finish composed of mineral oil and beeswax
- Hand wash only
- To help maintain the olive wood's natural luster, regularly wipe with a light coat of food-safe mineral oil/beeswax
Ever-evolving technological innovation creates both opportunities and challenges for educators aiming to achieve meaningful and effective learning in the classroom and to equip students with a well-honed set of technology skills as they enter the professional world. The Handbook of Teaching with Technology in Management, Leadership, and Business is written by experienced instructors using technology in novel and impactful ways in their undergraduate and graduate courses, as well as researchers reporting and reflecting on studies and literature that can guide them on the how and why of teaching with technology.
Edited by Stuart Allen, Kim Gower and Danielle K. Allen
Edited by Colin Jones
How to Become an Entrepreneurship Educator is the first book to tackle the pressing issue of where to find the educators to meet the global demand for entrepreneurship education. Chapters unite the developmental trajectories of 20 eminent contemporary experts at different levels of enterprise education, to share the collective lessons learned. This book is an invaluable guide to educators from numerous backgrounds looking to reflect on their own practice and to contemplate new strategies for teaching enterprise and entrepreneurship.
Debby R. Thomas, Stacie F. Chappell and David S. Bright
Classroom as Organization (CAO) is a powerful teaching methodology, particularly well-suited for teaching business topics, that can enliven students’ learning experience while giving them the opportunity to practice and develop workplace-related skills. This book provides a comprehensive background to the CAO teaching methodology, including its origins, evolution, and various applications. From this basis, the considerations of how to teach and design a CAO are explored. If you are familiar with CAO, but have been afraid to try it, this book provides the support to take the next step in your practice of experiential teaching and learning.
Teaching Strategic Management
A Hands-on Guide to Teaching Success
Sabine Baumann
Teaching Strategic Management: A Hands-on Guide to Teaching Success provides a wide scope of knowledge and teaching resources on methods and practices for teaching strategic management theories and concepts for a multitude of settings (classroom, online and hybrid), course levels (bachelors, masters, MBA, executive) and student groups.
Edited by Helen Walkington, Jennifer Hill and Sarah Dyer
This exemplary Handbook provides readers with a novel synthesis of international research, evidence-based practice and personal reflections to offer an overview of the current state of knowledge in the field of teaching geography in higher education. Chapters cover the three key transitions – into, through, and out of higher education – to present a thorough analysis of the topic.
The Art of Mooting
Theories, Principles and Practice
Mark Thomas and Lucy Cradduck
This book examines the theories relevant to the development of skills necessary for effective participation in competition moots. By consideration of underlying theories the authors develop unique models of the skills of the cognitive, psychomotor and affective domains and effective team dynamics; and emphasise the importance of written submissions. The authors use this analysis to develop a unique integrated model that informs the process of coaching moot teams according to reliable principles.
Learning and Teaching in Higher Education
Perspectives from a Business School
Edited by Kathy Daniels, Caroline Elliott, Simon Finley and Colin Chapman
There is often little guidance available on how to teach in universities, despite there being increasing pressure to raise teaching standards, as well as no official requirement for academics to have any specific teaching qualification in many countries. This invaluable book comprehensively addresses this issue, providing an overview of teaching in a business school that covers all stages of student learning.
Edited by Charles H. Matthews and Eric W. Liguori
The third volume of the Annals of Entrepreneurship Education and Pedagogy critically examines past practices, current thinking, and future insights into the ever-expanding world of Entrepreneurship education. Prepared under the auspices of the United States Association for Small Business and Entrepreneurship (USASBE), this compendium covers a broad range of scholarly, practical, and thoughtful perspectives on a compelling range of entrepreneurship education issues.
Teaching Human Resource Management
An Experiential Approach
Edited by Suzanne C. de Janasz and Joanna Crossman
Filled with over 65 valuable case studies, role plays, video-based discussions, simulations, reflective exercises and other experiential activities, Teaching Human Resource Management enables HR professors, practitioners and students at all levels, to engage and enhance knowledge and skills on a wide range of HR concepts. This book breathes life into the teaching of Human Resource Management and readers will be able to better relate theoretical concepts to workplace decisions and dilemmas.
Teaching Leadership
Bridging Theory and Practice
Gama Perruci and Sadhana W. Hall
We can teach leadership. The authors share their personal experiences of how they have bridged theory and practice in curricular and co-curricular settings to set the pace and tone for leadership development and life-long learning. Starting from theories of leadership, they share how it can be taught with rigor, intentionality, structure, and organization. Assessment is key from conception to implementation. Scholars, educators, and practitioners from different fields and professions are invited to adjust, adopt, and adapt concepts, ideas, methods and processes discussed in this book to their own institutional contexts and reality.
Are you looking for an exceptionally unique property? Once upon a time this building housed the criminals and characters of interest of Bedford County, as well as local travelers, having originally been built in 1895 as the Bedford County Jail. The property offers a great location and endless possibilities for someone with a little imagination: a themed restaurant, boutique, cafe, bed and breakfast, pub, and so much more. It is situated in historic Bedford County, centrally located between major cities and minutes from the PA Turnpike and I-99. If you look closely at the pictures you will see a fun after-hours event that took place at the jail.
- Amenities: Other/See Remarks
- Bathrooms: 1.1
- Bedrooms: 5
- Lot Size: 0.66 Acres
- MLS ID: 53554
- Price: $369,900
- Property Tax: $2,655
- Year: 1895

Listing Provided by
- Juniata Realty
- 8146522234
Cute Pumpkin (free pattern)
Hello all! It’s November, so I figured I’d post a late Halloween pattern I wrote up. I would’ve done this earlier but honestly the Seasonal Affective Disorder got the better of me. Anyways, I’ll be showing you how to make this adorable little pumpkin, and how you can alter this pattern to make it any size you want it to be!
This will mostly be an instruction and not a pattern per se. There’ll be LOADS of pictures!
You’ll need:
– Yarn & Colors Epic in orange (about 60yds) and green (just 2ish yds)
– End of black yarn for mouth
– 3mm crochet hook
– Darning needle
– 7mm safety eyes
You can use any yarn+size hook you’d like, as long as you make sure that no stuffing comes out or shows with the finished product.
To start:
Chain 21. Leave a long yarn tail at the start of your chain, you’ll be using this for assembly later.
Turn, sc in 2nd chain from hook, crochet 1sc in each chain (20). Chain 1, turn.
For the next 30 rows:
In BACK LOOP ONLY, sc1 in every st, chain 1, turn.
When you’re done, you’ll see that you have a ribbed piece of crochet, with the yarn tail on the same side at each end (see picture 2, upper corners). This is important.
Next, you’ll fold your piece in half lengthwise and slip stitch the 2 sides together, creating a tube. Cut your yarn and leave a long tail for sewing. Turn the tube inside-out so the slip stitches are on the inside.
Assembly:
It doesn’t look like much yet, but now you’ll be turning the nice tube into a nicer pumpkin. Thread your darning needle with one of the yarn ends. Using the picture below as a guide, sew through every rib on the inside of the pumpkin.
When you have sewn through every inside rib, pull your thread tight and watch how the hole closes up! For extra security or to shape it differently, you can add some extra stitches on the inside of the pumpkin.
Now you’ll add the face to the pumpkin. I put mine about halfway up the pumpkin, but feel free to experiment.
Now, you’ll sew the top of the pumpkin the same way as you did the bottom! Stuff it as you go along, not tightly but about 3/4th of maximum capacity. Otherwise you won’t be able to get it into shape.
Don’t finish off. Instead, sew the thread from the top to the bottom of the pumpkin and pull. Now it’ll start looking more like a pumpkin! Weave the thread from top to bottom and back a few times, pulling it tight every time. Finish off with a good double knot.
Stem
Grab your green yarn. Chain 6, start in 2nd chain from hook, sc back to the end (5sc). Finish off.
Pull the 2 yarn ends from the stem through the pumpkin’s body to the bottom and knot them together.
Congrats, you’re finished!
Customisation
You can make this pumpkin in any size you please. The ratio width to length is 2:3. In the pattern, the width is 20 stitches, the length is 30 stitches (not counting the first row, but that’s OK). This means that for every 2 stitches you add in width, you’ll have to add 3 in length.
Example:
20:30 (this one)
22:33
30:45
40:60
60:90
200:300 (if you make this one, send me a picture please.)
Adjust the length of the stem accordingly. | https://gnarwhalcrochet.com/cute-pumpkin-free-pattern/ |
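If you’d rather not do the ratio math by hand, here’s a tiny script that works out the chain and row counts for any width, keeping the pattern’s 2:3 width-to-length ratio (a hypothetical helper of my own, not part of the original pattern):

```python
def pumpkin_dimensions(width_stitches):
    """Return (starting chain, width in sts, rows) for a pumpkin
    of the given width, using the pattern's 2:3 width:length ratio."""
    if width_stitches % 2 != 0:
        raise ValueError("width must be an even number of stitches")
    rows = width_stitches * 3 // 2  # 3 rows of length per 2 sts of width
    chain = width_stitches + 1      # starting chain = width + 1 turning chain
    return chain, width_stitches, rows

# The pattern's own size: 20 sts wide -> chain 21, work 30 rows
print(pumpkin_dimensions(20))  # (21, 20, 30)
```

So for the 40-stitch version in the table above, you would chain 41 and work 60 ribbed rows.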
As Alabama’s legislative session gets closer, the momentum for prison reform is at an all-time high. The current prison overcrowding crisis provides a great opportunity for the Legislature to address a problem that has increased in urgency for years.
It is the goal of conservative leaders in the Legislature to see our prison problems addressed through principle-based solutions, not irresponsible quick fixes. Specifically, reform proposals designed to alleviate our prison crisis should achieve three objectives: improve public safety, institute measures for accountability and cost-efficiency, and foster a heightened level of personal responsibility from any individual who passes through the system.
Improve Public Safety: The central role of government is to protect the lives, liberty and property of its citizens; thus the effectiveness of a correctional system should first be measured by the safety of the public. Following an analysis of Alabama’s prison data, one of the key findings was that our parole system lacks the resources and tools to best determine when individuals should be kept behind bars and when they should be released.
Further, the level of supervision that individuals receive upon release from prison is not always tailored to their risk of reoffending. This keeps recidivism rates higher than they would otherwise be and results in an elevated threat to the public.
For low-risk offenders, spending extensive time behind bars at a high cost to taxpayers can increase their likelihood of reoffending and actually decrease public safety. When limited corrections resources are mostly eaten up by the high cost of incarceration, we end up releasing individuals with addictions or those lacking the basic skills required to succeed in society, and so jeopardizing public safety.
Cost-efficiency and Accountability: Eliminating waste and controlling government spending is the overarching goal of every conservative policymaker. Periodically, every agency, office, or program receiving public funds should be reviewed for effectiveness and efficiency. In a way, the demand to reduce our prison population requires that we apply these same principles to the oft-overlooked corrections system.
Thanks to the efforts of the Prison Reform Task Force and the Council of State Governments Justice Center, specific programs and procedures within the system have been scrutinized to determine whether the stated purposes of reduced recidivism and improved public safety are being served. There is a clear need to make additional investments into our system, but this must be coupled with diverting resources away from programs or processes that aren’t accomplishing these objectives. The goal of these reforms should not necessarily be to spend less, but to spend more wisely.
Foster Personal Responsibility: Because true freedom requires personal responsibility, government should aim to promote this, rather than suppress it. Our system unintentionally inhibits personal responsibility by churning offenders in and out of the prison system, only to eventually send them out to become prisoners of government dependence. Individuals who leave prison with addictions, no education, or no basic life skills are amongst those who most frequently reoffend. Once they are released, they lack the foundation to succeed and often wind up impoverished. This results in their reliance, or their families’ reliance, on government assistance programs at further cost to taxpayers.
For those prisoners who will eventually be released from prison, it is worth the state’s investment to provide opportunities to re-enter society under strict supervision, to seek treatment for addiction or mental health issues, or to develop an employable skill—after serving an appropriate sentence behind bars.
A stronger system of supervision also instills personal responsibility by setting clear expectations and then ensuring that consequences are uniformly applied and immediately experienced. There are a number of community corrections and re-entry programs in Alabama that employ similar practices with great success. These strategies should be duplicated in other parts of the state.
If careful consideration of each reform proposal is given in light of these guiding principles, we have every reason to expect a final product that Alabamians can be proud of. While no single piece of legislation can solve the challenges brought about over several decades, we are committed to substantive reforms that will represent a leap, not just a step, forward.
With that step, we can attain a prison system that will both preserve our state sovereignty and our state budget and will better protect the people of Alabama.
Cam Ward, R-Alabaster, represents Senate District 14 and chairs the Legislature’s Prison Reform Task Force and the Senate Judiciary Committee. Mike Jones, R-Andalusia, represents House District 92 and chairs the House Judiciary Committee. Katherine Robertson is vice president of the Alabama Policy Institute (API) and a member of the Prison Reform Task Force. | https://alabamapolicy.org/2015/02/23/prison-reform-through-conservative-lens/ |
Daniel Dennett, Author of ‘Intuition Pumps and Other Tools for Thinking’ - krg
https://www.nytimes.com/2013/04/30/books/daniel-dennett-author-of-intuition-pumps-and-other-tools-for-thinking.html?smid=tw-share&_r=0
======
fluidcruft
It's interesting because I'm from a neuroscience training and have
traditionally and reflexively and argumentatively sided with the idea that
things like consciousness and free will are emergent illusions.
However, I've recently had a crisis of confidence when for some reason I tried
to think about the fact that my experience of consciousness and color and etc
does actually exist somehow inside the universe and that others seem to
experience it similarly. Of course there's a similarity of physical structure
that's common to brains that have those experiences. But more from the point
of view of if you stumble upon some complex object for example, you could
wonder whether it is experiencing the illusion of free will. If that's all
purely emergent from known physical laws, we should be able to determine
whether a particular object possesses/produces the illusion, right? So, that
got me wondering what is the minimal hardware/spatial configuration that could
produce this sort of illusion and are illusions actually "something"? It seems
like it's presupposing that certain arrangements of matter generate
"illusions" and yet we seem to want to insist that at some level these
"illusions" don't really exist. How far down do illusions go? Some people are
born blind or deaf/have strokes, so the illusion of color vision or sound can
be isolated and removed from the illusion of consciousness. Do "and gates"
experience some sort of atomic illusion of "and"? Can the presence of illusion
be tested? Anyway, not sure I'm making a cogent statement, I'm still not
entirely clear on what's bothering me about this, but something "feels"
intuitively wrong/missing to me about the purely physical emergent
neuroscience approach now.
~~~
manicbovine
Have you read any of David Chalmers' work? He attempts to provide a framework
for thinking analytically about qualia in "The Conscious Mind: In Search of a
Fundamental Theory." [1]
Nearly all of his papers are accessible via his website. [2]
[1] <http://consc.net/books/tcm/>
[2] <http://consc.net/papers.html>
~~~
fluidcruft
Thank you for the book recommendation. I have not read David Chalmers' work,
but I have seen his name mentioned often. I've been trying to figure out where
to get started with learning about the current understanding of the mind-body
problem and keep getting recommendations to start back with the Greeks and
work forward (which seems both quite daunting but also baroque given lack of
influence from modern neuroscience). I think I'll start reading there. Thanks
again!
~~~
momerath
I would also recommend The Ego Tunnel by Thomas Metzinger.
------
Symbol
Makes me wish I had taken a class with him while at Tufts. It's a hard truth
that you can usually plot a good course through your university experience
AFTER you are finished.
~~~
whtrbt
I got to see him speak in Melbourne last year, he has a very enjoyable
speaking manner. If I recall correctly, it covered looking at things from a
teleological perspective (as a tool for thinking, not as proposed fact) and
then free will and morality in relation to that. I'm not certain, but I think
there was the suggestion that believing we have free will is useful because it
makes us act as though we have free will!
------
vinceguidry
> The elusive subjective conscious experience — the redness of red, the
> painfulness of pain — that philosophers call qualia? Sheer illusion.
I wonder if the qualia debate isn't just an elaborate exercise in missing the
point. Experience doesn't work the same way as matter, you can't subdivide it
until you get the "experiential atom". Divide an experience into parts, what
you get is separate experiences, each unique in their own right.
What if, instead of trying to define the "fundamental experience", we instead
acknowledged all experiences as unique, subjective to context and underlying
biology, and moved from there? Someone's experience of red will depend on his
rod/cone balance and his neuro-chemistry. He can think about a "abstract,
ultimate red" but that will be an experience of thinking about red, not an
actual ultimate red.
~~~
mjgoins
That's pretty much Dennett's position.
~~~
manicbovine
No, it's not. There is nothing to vary by Dennett's account. Our two "red
qualia" are exactly the same: non-existent.
Dennett argues (actually, he just asserts it over and over), that there is no
such thing as subjective experience, so to speak. He argues that, whether or
not my version of red is different than yours, we're sharing a common delusion
-- the delusion of subjective, conscious experience.
(Of course, we're aware of an experience, but more along the lines of a webcam
program that, upon sensing the color red, flips some boolean is_aware
variable.)
I'd add that -- ok, qualia have many variations. From where do we deduce that
it's a grand illusion? Matter has many different variations.
~~~
vinceguidry
> He argues that, whether or not my version of red is different than yours,
> we're sharing a common delusion -- the delusion of subjective, conscious
> experience.
I don't think Dennett is arguing that we're all zimboes.
<http://en.wikipedia.org/wiki/Philosophical_zombie>
~~~
manicbovine
Correct, he argues that the formulation is incoherent. That is, certain parts
of our physical makeup give rise to experience, and that we cannot entertain a
coherent world containing minds that lacks subjective, conscious experience
while retaining the other functional correlates of consciousness.
This is exactly what I meant by 'delusion'... that, according to Dennett,
there are no "qualia", nothing above and beyond the ordinary
physicalist/reductionist (and scientific) account of experience.
It's surprising from a personal view because experience seems so essential and
real.
------
raldi
Could the title be edited so that it doesn't sound like an obituary?
For centuries, the Japanese government promoted ethnic homogeneity. The Tokugawa Shogunate prohibited foreigners from entering the country for some 200 years, and barriers to immigration remained high even after the isolationist policy ended in 1854. In recent decades, a lagging economy and aging population have compelled Japan’s leaders to reconsider traditional attitudes toward foreign residents. Still, experts have suggested that a long road remains before the country achieves harmonious heterogeneity.
In light of this situation, the Japanese American National Library (JANL), in conjunction with the University of Shizuoka, sponsored a panel called “The New Ethnic Identity for Sustainable Citizenship in Japan: Searching for the Meaning of ‘Belonging.’” Held on March 6 in San Francisco’s Japantown, the event assembled experts on three of Japan’s minorities to discuss varying perspectives on the issue of ethnic integration.
“I think the story of ethnic minorities in Japan is quite complex because each one has a different relationship to Japan, its history and the people,” Ben Kobashigawa, president of the Library, said in an e-mail. While Kobashigawa acknowledged that the panel “could only scratch the surface of the problems of groups ranging from Ainu to Burakumin and Okinawans to Koreans, Chinese, and most recently Vietnamese refugees in Japan,” he said it “made evident how inadequate the current Japanese policy of multiculturalism is.”
Takahito Sawada, a researcher at the University of Shizuoka, discussed one of the country’s most high-profile communities: Latin American immigrants. Although numbering less than other foreign groups, such as Chinese or Koreans, Latin Americans in Japan have been the subject of much media attention.
Japan encouraged Brazilians, Peruvians, and others from neighboring nations in the 1990s to immigrate to the country in response to labor shortages. But recent economic conditions compelled a change of policy. In 2009, the government initiated a controversial program that paid Latin American workers to return to their countries of origin. While not obligated to accept the offer, many immigrants have found themselves short of other options, due to layoffs and other economic woes.
Sawada addressed this dilemma in his presentation, “Economic Participation and Transforming Identity of Japanese Latino Immigrants after the Late-2000s Recession.” Most poignantly, he showed a video of a teenage girl whose parents were considering moving back to Brazil after having lived in Japan as guest workers. “My friends are my life,” she said, crying at the prospect of losing them.
Sawada heard similar accounts as a teacher and researcher in the city of Hamamatsu (in Shizuoka Prefecture), home to 20,000 Japanese Brazilians. Even workers who aren’t immediately facing leaving their homes in Japan reported difficulties, such as language barriers. “It’s hard to get alphabetism,” Sawada explained. And class adjustments weigh heavily. “Many have high education in Brazil,” he said, “but in Japan they live on a small scale.”
For these and other reasons, Japanese Brazilians have not coalesced. “They cannot build a community,” Sawada said.
On the other hand, UC Berkeley lecturer Yuko Okubo discussed an immigrant group that has very much come together: the Vietnamese living in Osaka. Her presentation, titled “Vietnamese Community in Osaka: From Human Rights to Cultural Heritage” traced this group’s evolution through an annual Têt festival commemorating the Lunar New Year.
The first such event occurred in 1998, after a tragedy galvanized the community. In December of the previous year, a 2-year-old Vietnamese girl died during a stay in the hospital. Linguistic impediments prevented her family from understanding the staff’s explanations, highlighting the need for a community organization.
“The parents and their friends thought they should be aware of their own situation,” said Okubo, “understand how the system works in Japan, and also have some legal assistance.”
To that end, these individuals founded the Betonamujinkai, or Vietnamese Association. Shortly thereafter, the group held its first event: the Têt festival.
Strong political overtones defined the inaugural celebration. “The purpose was to get the community together so they could organize,” explained Okubo. At its end, Association members read an original Declaration of Human Rights for Vietnamese Residents in Japan. “It was very political that they used the words ‘human rights,’” she added.
But within two years, the festival changed direction. Pressure from other ethnic groups to make joint demands on the city prompted the Vietnamese community to step away from politics. “The Vietnamese did not want to be a part of anything political,” said Okubo. “So the original agenda, protecting human rights, disappeared from the third festival on.”
When she visited in 2009, Okubo found that the festival’s focus had indeed evolved. “The festival has become more cultural and less political,” she explained.
Far from disbanding, however, the Vietnamese Association in Osaka has adapted to the community’s changing needs. One of the group’s leaders told Okubo that its present aim is to “teach the value of Vietnamese culture to the next generation.”
That mission has become increasingly pertinent, as the children of Vietnamese immigrants grow more removed from their parents’ culture and language. “Children born in Japan cannot communicate in the Vietnamese language,” said Okubo. “They like to use Japanese names; they’re culturally Japanese.” These trends have created concern for the future identity of the Vietnamese community among some of its older members.
The Vietnamese are not the only ethnic group in Japan to experience a changing identity. The Ainu, an indigenous people who live in Hokkaido, have straddled an ambiguous intersection between Japanese and non-Japanese classification since the end of the 19th century.
“Are Ainu Japanese?” asked presenter Mitsuhiro Fujimaki, a professor at the University of Shizuoka. “Maybe yes, and maybe no.”
The Japanese government answered this question affirmatively in 1899, when it passed the “Hokkaido Former Aborigines Protection Act.” The legislation conferred citizenship on the Ainu, but implicitly demanded their assimilation. Then in 2008, the government changed course, and recognized the Ainu as an indigenous people with a separate language and culture.
But that admission has meant little practically. “It was really cosmetic,” said Fujimaki. “Ainu are still kept off their land.”
A more fruitful battle for cultural preservation took place in Asahikawa, Hokkaido’s second-largest city. In 2008, the Asahikawa City Museum inaugurated an exhibit including items made by Ainu — a departure from the institution’s earlier policies.
The museum opened in 1993 with a commemorative exhibit about the city’s founding. “It was a typical story of colonial settlement,” explained Fujimaki, via e-mail.
But a new curator of Ainu ancestry advocated a more inclusive approach, made opportune by a 2007 remodel.
The result had positive effects on the city’s Ainu community. “It’s a place where they can access their culture,” said Fujimaki.
Unintended consequences of municipal water conservation
Indoor residential water conservation can have unintended consequences in places where wastewater reuse has been implemented, diminishing both the quantity and quality of influent available for treatment, according to researchers from the University of California, Riverside (UCR).
The researchers outlined their findings in a recent paper, which appears online in the journal Water Research, published by the International Water Association.
“Drought, and the conservation strategies that are often enacted in response to it, both likely limit the role reuse may play in improving local water supply reliability,” wrote Quynh K. Tran, a UCR Ph.D. student in chemical and environmental engineering; David Jassby, associate professor of chemical and environmental engineering; and Kurt Schwabe, professor of environmental economics and policy.
In the past, recycled water was only applied to areas such as low-value crops and median strips, Schwabe said. Recently, however, it has been considered safe to drink provided it either undergoes multiple rounds of treatment to remove concentrations of salts, nutrients, and other contaminants, or is injected into the ground and pumped back out later.
As domestic water use has fallen, rates of wastewater reuse have risen; the United States now reuses between 10% and 15% of its wastewater. Photo credit: Tran, Jassby, and Schwabe.
“You often hear it never stops raining at a wastewater treatment plant, meaning the influent from households will continue to flow regardless of whether we’re in a drought or not,” Schwabe said. “It may be true that it will continue to ‘rain,’ but the quantity of flow can be severely impacted by drought and indoor conservation efforts, which has implications for the reliability of the system, especially when it comes to downstream or end users of the treated wastewater.”
Exacerbating the problem, as wastewater flows decrease, their levels of salinity and other pollutants increase. Higher levels of pollutants present significant challenges for treatment facilities that are not typically designed to handle “elevated concentrations of total dissolved solids, nitrogen species, and carbon,” according to Tran, Jassby, and Schwabe.
Schwabe said while this research indicates indoor water conservation may affect the reliability and quality of water reuse during drought, the researchers are not suggesting people engage in less frequent conservation.
“These results highlight a central tenet of economics: that there’s a cost with every action we take,” he said. “Our results are intended to illustrate how different drought mitigation actions are related so agencies can plan, communicate, and coordinate in the most informed and cost-effective manner possible.”
The researchers said solutions to higher levels of pollutants are available.
“Cost-effective blending strategies can be implemented to mitigate the water quality effects, increasing the value of the remaining effluent for reuse, whether it be for surface water augmentation; groundwater replenishment; or irrigation of crops, golf courses, or landscapes,” they wrote.
To develop an economic model by which wastewater can be treated in a more cost-effective way, thereby increasing its value, the researchers identified feasible wastewater treatment technologies and wastewater treatment trains either in use or available for potential use. A treatment train is a sequence of treatments aimed at meeting a specific standard.
“Our solution is based on a system of blending water,” Schwabe said. “Traditionally, wastewater facilities have operated by the principle that all the influent is treated to the fullest extent possible. But depending on the sort of demand and regulations a treatment plant confronts for its effluent, managers may have the opportunity to be creative and achieve a much less costly outcome by treating only a portion of the influent with the most advanced technology and blending this with the remaining influent that has been treated but with a less advanced and thus lower-cost process.”
ES&E Magazine provides vital information for professionals who are engaged in the design, construction and operation of municipal water and wastewater treatment systems, stormwater management, industrial/hazardous waste management and air pollution.
I don’t know about you, but I’m a big personal-space person. When I’m with someone socially, I like to maintain a safe distance between myself and them (crowded clubs aren’t my favorite places), but I also need space in any place I’m living that’s just for me, that’s my domain, where I can go and be alone and recharge without anyone intervening. (Introverts, unite!) It doesn’t have to be a room with a door, but it needs to be an area that I feel everyone knows is “mine.” For me, that space allows me to distance myself from others and take a break from being confronted or entangled with someone else’s energy. But I’m also someone who likes to feel connected to those around me. So how do I honor my need for personal space while maintaining intimate relationships with the people in my life? Luckily, feng shui can help me navigate this.
Distance is Crucial in Feng Shui
In feng shui, we consider distance to be a major element of any space. Specifically, we look at how far apart people are from other people, how far apart people are from objects, and how far apart objects are from other objects. Distances are such an important part of feng shui because people are always moving around in their spaces, and we consider the act of putting people in different places to be a critical feng shui tool. We also believe that the space between objects is just as important as the objects themselves as they provide us with choices and alternatives.
How Distance and Connectedness Interplay
The distance between people, places, and objects conveys social cues, a sense of intimacy, and levels of connectedness. For instance, a diameter of approximately 1,500 feet is the distance a typical person in our culture will walk to a park, a bus stop, or a store. This circle becomes the person’s “home territory.” If something is outside of this “home territory,” people will no longer walk there but will instead start driving (and we all know how disconnected we can feel when isolated in our individual cars). We also know that when a building exceeds five stories, or approximately 50 feet in height, those living on the higher floors do not view the people on the street as being of their intimate concern. As the social theorist Jane Jacobs explained, without these “eyes on the street,” neighborhoods cease to be as safe or feel as connected, and the quality of life deteriorates (and we saw this play out in New York City). This theory further applies to large homes that are spread out laterally more than 40 feet in any direction or in which the master bedroom is on one side of the house and the children’s bedrooms are on the other. In these situations, people feel less connected and often struggle to communicate with each other, even though they’re living in the same house. There’s too much physical distance between them to maintain a sense of intimacy.
But the opposite is also true: when the distance between people or between people and objects is so small that they're bumping into each other or other things, people can feel confined, angry, or frustrated, prompting them to lash out at those around them or run away. They are also more likely to view these spaces as smaller and more claustrophobic. This can be the case even though the space may not actually be small - it just feels smaller because there is a greater restriction of free movement, and individuals don’t have enough personal space to feel comfortable and relax.
The Meanings Behind Different Interpersonal Distances
So what’s the ideal distance between people to maintain personal boundaries but still feel a sense of intimacy and connectedness? Well, that depends on who you are and the type of interaction you're engaging in. The chart below, based on Edward Hall’s model, outlines the meanings behind various interpersonal distances depending on the type of interaction. This also helps feng shui professionals understand how to best position people, rooms, and buildings based on what the client needs.
So the next time you're with someone, take note of the distance that you're standing away from them and consider what this tells you about how you're feeling in that moment: do you need personal space or are you craving a more intimate, close bond? I know I probably feel most comfortable in the "Social Rules" section of the chart, but when I'm with close friends or family, I'm ok to sit or stand closer. Having an awareness of the distance you're putting between you and another person will not only give you more insight into how you feel about certain people and in various situations but also your own personal space needs and preferences. | https://www.yourdigitalessence.com/post/lets-talk-about-personal-space |
# Interior design psychology
Interior design psychology is a field within environmental psychology concerned with the environmental conditions of the interior. It is a direct study of the relationship between an environment and how that environment affects the behavior of its inhabitants, with the aim of maximizing the positive effects of this relationship. Through interior design psychology, the performance and efficiency of the space and the well-being of the individual are improved. Figures like Walter Benjamin, Sigmund Freud, John B. Calhoun and Jean Baudrillard have shown that by incorporating this psychology into design, one can control an environment and, to an extent, the relationships and behavior of its inhabitants. An example is seen in the rat experiments conducted by Calhoun, in which he noted aggression, killing and changed sexual tendencies among the rats. The experiment suggested a stark behavioral analogy between the rats' behavior and inhabitation of high-rise housing projects in the US after WWII, such as the Pruitt-Igoe development in St. Louis, demolished in 1972 only 21 years after being erected.
## Proxemics
Proxemics is the study of the amount of space people feel it necessary to keep between themselves and others.
### Crowding and personal space
In this field of study, territoriality is demonstrated continuously through unwritten cues and behaviors that communicate conscious or subconscious notions of personal space. This phenomenon is seen, for example, in the use of public seating and the empty seats on a crowded bus or train. "Crowding occurs when the regulation of social interaction is unsuccessful and our desires for social interaction are exceeded by the actual amount of social interaction experienced." Studies of social behavior, such as those of commuters, indicate that people seek to maximize personal space whether standing or sitting.
In a study conducted by Gary W. Evans and Richard E. Wener (researchers in environmental design and human development) of 139 adult commuters (54% male) traveling between New Jersey and Manhattan, saliva samples were taken to measure cortisol levels, a hormonal marker of stress. Their analysis accounts statistically for other possible stressors such as income and general life stress. "We find that a more proximal index of density is correlated with multiple indices of stress wherein a more distal index of density is not." The results suggest that even small increases in seat density, controlling for income stress, elevate log cortisol (i.e., stress levels) and diminish task performance and mood.
### Smooth and Striated space
According to Learning Spaces: Creating Opportunities for Knowledge Creation in Academic Life by Savin-Baden, it explored the concept of space in the physical sense when describing smooth and striated cultural spaces. Smooth spaces are described as "nomadic"; that is, in a constant state of movement. For example, the lobby of a hotel, an activity room where the seating directions are towards each other instead of focusing in one line, which provides a sense of relaxation and informality. These spaces are open, flexible, and owned by their inhabitants. Smooth spaces are where knowledge is contested and learning is co-created. They are messy and undisciplined, which often creates tension between stakeholders and users. Striated spaces, on the other hand, are described as bounded spaces, which refers to a certain orientation that focuses primarily in one direction, reflecting the organizational and pedagogical structure of the space. Classrooms and lecture halls are examples of striated spaces.
### Relationships between people
Closely related to the proxemics of space, in the area of privacy. In "Perspectives on Privacy" P. Brierley Newell from the department of psychology at the University of Warwick, Coventry defines privacy as "a voluntary and temporary condition of separation from the public domain." The desire for privacy is often identified as a link between stress and distress. The ability to obtain privacy within an environment allows the individual to separate themselves physically and mentally from others and relax. This notion is of key importance in determining the behavior and well-being of the individual. As above in the scenario of crowding and density on public transport, privacy dictates the perception of comfort, in relation to crowding and personal space. Dissatisfaction with one's environment can be related to close proximity with others, leading to stress and as a result, diminish mood and performance behaviors.
### Defensible space
This theory began development in 1962, when John B. Calhoun conducted a series of experiments on rats to study population density and social pathology. In these experiments, a breeding utopia was established for the rats, in which they lacked only space. "Unwanted social contact occurred with increasing frequency, leading to increased stress and aggression. Following the work of the physiologist, Hans Selye, it seemed that the adrenal system offered the standard binary solution: fight or flight. But in the sealed enclosure, the flight was impossible. Violence quickly spiraled out of control. Cannibalism and infanticide followed. Males became hypersexual, pansexual and, an increasing proportion, homosexual. Calhoun called this vortex 'a behavioral sink'. Their numbers fell into terminal decline and the population tailed off to extinction."
This study linked population growth, environmental degradation and urban violence. Similar behavioral tendencies became apparent within the poor housing conditions at the Pruitt-Igoe development in St. Louis. This development is now used as a key study of inhabitation by architects and urban planners. Oscar Newman, one of the main developers of this field, references observations of inhabitation at this development in his book Creating Defensible Space. He notes the stark difference between private space, which is clearly defined as personal territory, and the public space in this development. Public spaces shared by relatively few families were much more hygienic and well looked after, whereas those shared by larger numbers were often vandalized and unhygienic. He comments that the anonymity created by these widely shared public corridors and spaces "evoked no feelings of identity or control." This indicates that our relationship with space affects our behavior and use of space. In this example, the lack of a feeling of ownership of the space led to negative behavior within it and created a feedback loop with negative effects on the well-being of the inhabitants.
## The perception of space
This perception can otherwise be termed the awareness of our own bodies and of the other bodies and organisms around us. Perceived beauty and personal involvement within an environment are key factors determining our perception of space. As defined in The Measurement of Meaning by Osgood, Suci and Tannenbaum, three factors influence the perception of space:

1. Evaluation: the aesthetic, affective and symbolic meaning of the space
2. Power: the energy required to adapt to the space
3. Activity: the noise within the space and the worker's relationship to, and satisfaction with, the job and task

In "Effects of the self-schema on perception of space at work" by Gustave Nicolas Fischer, Cyril Tarquinio and Jacqueline C. Vischer, the authors conducted a study linking design and psychology in the workplace. They proposed a theoretical model linking environmental perception, work satisfaction and sense of self in a feedback loop, shown below in Fig. 1, illustrating their findings on the direct relationship the environment has with the inhabitant and how, through psychology, this affects behavior.
(Fig. 1 not reproduced; awaiting copyright approval.)
There is also something to be said about the way increasingly popular open office designs may contribute to lower productivity and higher distraction compared with traditional cubicle-style workspaces. According to an article from Fortune, "Evidence is mixed on whether open plans actually foster collaboration, and studies have shown that open office plans decrease productivity and employee well-being while increasing the number of sick days workers take. A study by the architecture and design firm Gensler found that workers in 2013 spent 54 percent of their time on work requiring individual focus, up from 48 percent in 2008." To combat this, future offices will include sound-proof private rooms that allow workers to work solo without distraction, cubicle banks and private offices, while continuing to sustain the open floor plan.
## The System of Objects
The system of objects was developed by Jean Baudrillard as part of his sociology doctoral thesis, Le Système des objets (The System of Objects). In it he proposed four object-valuing criteria:
- Function – a pen is used to write
- Exchange or economic value – a piano being worth three chairs
- Symbolic – an amethyst symbolizing a birth in February
- Sign – the branding or prestige of an object, with no added function, being valued over another; it may be used to suggest social values such as class
In this way, objects and human relationships with objects in the interior environment have significant psychological meaning and impact. In "Social Attributions Based on Domestic Interiors" by M.A. Wilson and N.E. Mackenzie, it is proposed that "people's interactions with the environment are determined by the meanings they attribute to it, and both stress the impact of expectations on behavior within a particular environment." The study they discuss further developed the theme that objects, and how we classify them, in turn allow us to classify the social attributes of their owner in relation to age and social class, according to the object-valuing system. This system suggests that our relationship with objects affects both our behavior, as we use objects according to their function, and how we are perceived in the eyes of others. This makes our relationship with objects and space pivotal to our psychology.
## Space-time relationships
Charles Rice references the thinking of Walter Benjamin, in The Emergence of the Interior, on the study of interiorization and experience. He proposes that in our faster-paced modern society experiences are instantaneous, and that through this we are missing long experiences such as a connection with tradition and the accumulation of wisdom over time. To reforge a sense of this relationship and address the current lack, he suggests that we might materially create such a relationship through inanimate objects in our environment, giving the example "that the hearth and the mantelpiece might materially encode the mythical fireside and the situation it provided for the telling of stories." In this way, one's relationship with objects can embody a sense of experience and fulfill the desire for a connection with tradition.
## Space and user experience
In the article "Storied Spaces: Cultural Accounts of Mobility, Technology, and Environmental Knowing," Johanna Brewer and Paul Dourish identify three themes directly related to user experience in campus planning: legibility, literacy and legitimacy. Legibility refers to "our understanding of how the place and/or space provide information for us, both socially and culturally." Spatial literacy refers to "how we interpret the information provided by the environment around us, the activities we engage in, and the relevance of those activities." Legitimacy refers to "how we seek information and find relevance within the environment around us." In campus design, legibility covers campus maps, signposts and lecture-room numbers within buildings; literacy covers students' feelings and behaviors within a given environment and what an interior encourages them to do or not do; and legitimacy covers how students engage with an environment, and why they enter and leave it.
## Space and human behavioural cognition
The interaction between humans and spaces tends to reach a certain balance. When individuals are in an interior environment, not only is their physical behavior expressed there, but their emotions, thoughts and willingness are affected by the interior as well. According to Ye Wenben in his article "Interior Design Psychology," the ultimate goal of interior design is to lead human behavioral cognition in a positive direction and to reach a relatively harmonious dynamic balance through the interior's impact on users' experience and mental state.
### Security
Ye notes that within a given space, broader is not necessarily better for the users. Over-broad space tends to cause a sense of loss and insecurity. People's need for safety and protection makes them willing to find objects to rely on. For example, in train and subway stations, people do not gather in the most open areas; instead, they form multiple groups spread around the waiting space, seats and pillars, maintaining a certain distance from other individuals. This concept of "security" has also prompted designers to use interspersed space in order to provide a more stable and secure mentality within a space.
#### Self-congestion
The journal article "Does Space Matter? Assessing the Undergraduate 'Lived Experience' to Enhance Learning," drawing on time-lapse cameras and three years of observing and measuring the interactions and activities of people within public spaces, summarized the notion of "self-congestion": people tend to attract other people in public spaces even though they say they prefer to get away from crowds. Applied to interior design, this means we must also consider gathered spaces instead of evenly distributing tables and chairs.
### Privacy and interpersonal distance
Privacy is people's basic need for the space, ensuring self-integrity, expressing one's perspective towards life, is the fundamental proven of freedom and respect towards an individual. Private space is the independent interior space that is restricted by the external materials and stabilized by one's mental awareness. It involves the relative requirements of visions and sounds within the space. Due to the different social scenario and interaction needs, the application for privacy and personal distances also have a clear discipline.
## Brief background
A greater awareness of this field has emerged since the 20th century, when the function and performance of the interior became of chief importance in designing habitations, marking the start of user-centered design (for example, La Maison de Verre). This modern idea of designing the interior for the user, from the inside to the outside, has coincided with psychological analysis of the effects of inhabitation.
In The Emergence of the Interior, Charles Rice rationalized the implications of the interior:

- under the context of modernity
- the status of experience
- the presence of history, and
- knowledge about subjectivity
The importance of the development of this field is evident through the above areas of study.
Understanding and implementation of interior design psychology can impact and improve the performance, efficiency and well-being of the individual inhabitant. As illustrated through the above categories this is an important and relevant developing field within design and planning. | https://en.wikipedia.org/wiki/Interior_design_psychology |
Pauline Lipman’s teaching, research, and activism grow out of her commitment to social justice, focusing on race and class inequality in education, globalization and the political economy of urban education. She is particularly concerned with the relationship of education policy, urban restructuring and the politics of race.
A member of the coordinating committee of Teachers for Social Justice-Chicago, Lipman is active in coalitions of teachers and community organizations. She is a co-director of the Data and Democracy Project and has led collaborations with organizations to produce policy reports on educational injustices.
In Chicago and nationally, she contributes frequently to forums of parents and teachers, as well as to media coverage of school issues.
Her most recent book, “The New Political Economy of Urban Education: Neoliberalism, Race, and the Right to the City,” argues that education is integral to neoliberal economic and urban restructuring, with attendant class and race inequalities, as well as to the potential for a new democratic social order. | https://today.uic.edu/experts/pauline-lipman |
To identify priority information needs for sea-level rise planning, we conducted workshops in Florida, North Carolina, and Massachusetts in the summer of 2012. Attendees represented professionals from five stakeholder groups: federal and state governments, local governments, universities, businesses, and nongovernmental organizations. Over 100 people attended and 96 participated in breakout groups. Text analysis was used to organize and extract most frequently occurring content from 16 total breakout groups. The most frequent key words/phrases were identified among priority topics within five themes: analytic tools, communications, land use, ecosystem management, and economics. Diverse technical and communication tools were identified to help effectively plan for change. In many communities, planning has not formally begun. Attendees sought advanced prediction tools yet simple messaging for decision-makers facing politically challenging planning questions. High frequency key words/phrases involved fine spatial scales and temporal scales of less than 50 years. Many needs involved communications and the phrase “simple messaging” appeared with the highest frequency. There was some evidence of geographic variation among regions. North Carolina breakout groups had a higher frequency of key words/phrases involving land use. The results reflect challenges and tractable opportunities for planning beyond current, geophysically brief, time scales (e.g., election cycles and mortgage periods). | https://earthobservatory.sg/resources/publications/science-needs-sea-level-adaptation-planning-comparisons-among-three-us |
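The key-phrase frequency step described in the abstract can be sketched in a few lines of Python. This is a minimal illustration only, not the authors' actual method; the sample notes and the choice of word bigrams are assumptions made for demonstration.

```python
from collections import Counter
import re

# Hypothetical breakout-group notes; the study's real transcripts are not available here.
notes = [
    "We need simple messaging for decision-makers and better prediction tools.",
    "Simple messaging matters more than advanced models for local land use planning.",
    "Land use planning needs fine spatial scales and simple messaging.",
]

def phrase_counts(texts, n=2):
    """Count word n-grams (bigrams by default) across all texts."""
    counts = Counter()
    for text in texts:
        words = re.findall(r"[a-z]+", text.lower())
        # Slide an n-word window over the token list.
        counts.update(zip(*(words[i:] for i in range(n))))
    return counts

top = phrase_counts(notes).most_common(3)
print(top)
```

Here the bigram ("simple", "messaging") surfaces as the most frequent key phrase, mirroring how the workshop analysis ranked recurring terms across breakout groups.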
World Shakuhachi Discussion
On another forum that I visit, there was a meditation challenge to motivate folks to practice sitting, walking, or other meditations regularly. It seemed to work very well, so I thought I would start a similar thing here.
Let's keep in mind the discussion that developed in another thread about the difference between practicing and playing. This is NOT a challenge to play regularly, but a challenge to practice regularly. Here is what I am going to try for the next month or so.
1. Practice breathy techniques in long tones in order to develop a less pinched embouchure (others may need to tighten theirs). Also, spend some time with my 2.4 in order to further loosen my face. I am going to try to do this for at least ten minutes at the beginning of each practice AND playing session.
2. I also want to spend at least 30 minutes each day practicing runs, combinations, and techniques that I have trouble with by going through the scores I am studying and isolating these. The trick here, for me, is to resist the urge to just start playing the entire score.
3. Also, I want to spend about 10 minutes playing games on the flute, like runs up the registers with various patterns. I am surprised at how challenging this is for me.
4. Finally, the other thing that I want to do is keep the "break time" during these practice sessions to a minimum. That is easier said than done as these kinds of things can get rather monotonous and boring, not to mention frustrating. By doing this, I hope to increase my attention span and concentrative endurance in order to better prepare me to play longer scores.
Others may have different areas that they would like to focus on, and I invite everyone to develop their own methods. However, let's try to remember that the idea is to practice, not play. I am going to try to have a practice session in the morning before work and then a playing session after work in the afternoon and evening.
This may be kind of a foreign concept to some, but it reminds me of my days as an athlete. We often did drills that did not resemble the sport we played or the event we undertook; however, the drills were carefully designed to hone particular skills and strengthen important muscles and greatly improved our abilities in the end. It might be a good idea to ask your teacher, if you have one, if s/he has any drills they think would benefit you in your playing.
Like AA, this thread is just meant to be a way to hold myself accountable for the course that I would like to pursue. I invite others to hold themselves accountable as well ... or not. It is up to you.
Can't WAIT til you do your 5th Step
Can't WAIT til you do your 5th Step
Yes, I dread the fifth step, finding all the flutes I have played poorly in the past and apologizing to them.
Fake it til you make it! | http://shakuhachiforum.com/viewtopic.php?id=3816 |
ST. JOSEPH, Mich.--- The St. Joseph Public Schools Board clarified its new Strategic Plan after a Facebook post went viral claiming the district would be teaching Critical Race Theory. Critical Race Theory, or CRT, is a graduate-level course usually taught only in law schools. "We felt that being transparent...
Related
FRANKLIN COUNTY, Va. (WDBJ) - The Franklin County School Board meeting continued into the early Tuesday morning hours. Nearly 100 people spoke about the state’s new policies on transgender students and how history should be taught in the school system. Parents, students and officials have concerns about how to be...
WASHINGTON D.C. – The United States Department of Education reversed plans to require the teaching of Critical Race Theory after receiving some pushback, including from Utah Senator Mike Lee (R), according to a press release from Lee. The release states that previous plans by the department would have required that...
ELKTON — On Wednesday, July 14, the Cecil County Public Schools board meeting exploded as protestors against the teaching of Critical Race Theory and allowing transgender people to use the facilities that aligned with their identity voiced their opinions to the board. CCPS Superintendent Jeffrey Lawson said Critical Race Theory...
On Monday morning and into the afternoon, Oklahoma’s State Board of Education weighed a number of big topics in one of its last meetings before the start of the school year in August. The board approved new rules around House Bill 1775, which dictates that students should not be made...
ROCHESTER, Minn. — Tensions ran high Tuesday night, July 13, as a boisterous crowd attended the Rochester School Board meeting to raise concerns about "government speech," critical race theory, and the mandatory use of masks throughout the district. None of those topics were on the agenda, except for a resolution...
A resolution calling for the Montoursville Area School District to oppose “public school and publicly-funded charter schools’ curriculum, instruction or material promoting Critical Race Theory or advocating similar divisive concepts relating to sex, race, ethnicity, color or national origin” was requested by school board member Ronald Snell during the final minutes of a recent meeting. Snell’s request was met with applause from some members of the community in attendance.
Many concerned parents showed up to speak out against Critical Race Theory at the Rochester school board meeting. Attendees were seen holding signs saying things like “Critical Race Theory = Race Pimps Monetizing Hate.”. The school board meeting had standing room only, after they added more chairs. The estimated number...
Last month, South Kitsap School Board member John Berg introduced a resolution to prohibit classroom teaching of theories that promote racial hatred. For that he was called a nut. District officials said they won’t be teaching critical race theory in Kitsap schools. The teachers’ union said the poor fellow must...
MONTGOMERY, Ala (WIAT) — The controversial issue of critical race theory dominated Tuesday’s meeting of the Alabama State School Board as members debated its true definition and considered drafting a resolution that would ban teaching the theory in state classrooms. “The board wants to deal with this issue head-on,” said...
Six of the seven Hood River County School Board members met at Parkdale Elementary July 14 for its regularly scheduled board meeting. The board unanimously elected Chrissy Reitz as new chair and Julia Garcia-Ramirez to continue as vice chair for the coming school year. The meeting was a first for...
(The Center Square) – Republicans in the GOP-controlled Missouri General Assembly Monday staged a three-hour hearing before a joint committee designed to prompt Gov. Mike Parson into issuing an executive order banning critical race theory from state K-12 curriculum. While saying he's opposed to teaching CRT in schools, Parson quickly...
MONTGOMERY, Ala. (AP) — Alabama might ban so-called “critical race theory” from being taught in public K-12 schools even though education officials said no schools are actually doing so. Members of the Alabama Board of Education on Tuesday discussed the wording of a possible resolution that could be voted on...
Members of the board of the Albuquerque Public Schools say the controversial critical race theory is not being taught in local classrooms. Critical race theory has become a source of controversy at schools nationwide. Critics argue it’s meant to teach children how to hate while supporters contend it’s designed to do just the opposite. Board members last night reviewed a statement from the public school system that said students are encouraged to be critical thinkers. Four parents spoke out against the theory, but Superintendent Scott Elder reiterated that it’s not being taught in Albuquerque public schools.
Senate candidate Katie Boyd Britt said Thursday that she agrees with Gov. Kay Ivey and that Alabama’s public schools “should be focused on improving our dismal national math and reading scores.” At Tuesday’s meeting of the State Board of Education, Ivey spoke in support of a controversial resolution that would ban the teaching of Critical Race Theory in Alabama’s public schools.
ALBUQUERQUE, N.M. (KRQE) – Will critical race theory be taught in the largest school district in the state? The Albuquerque Public Schools Board of Education discussed the controversial topic at their board meeting Wednesday night. APS said there are no changes expected to the curriculum in regard to critical race...
SPRINGBORO, Ohio — Critical Race Theory will not be taught in Springboro Schools. In a Facebook post Monday, officials said the district "has not and will not support or implement CRT" into its curriculum at any grade level. District officials say they follow the Ohio Department of Education's learning standards,... | |
Neuroimaging technologies aim at delineating the highly complex structural and functional organization of the human brain. In recent years, several unimodal as well as multimodal analyses of structural MRI (sMRI) and functional MRI (fMRI) neuroimaging modalities, leveraging advanced signal processing and machine learning based feature extraction algorithms, have opened new avenues in the diagnosis of complex brain syndromes and neurocognitive disorders. Regarding these neuroimaging modalities as filtered, complementary views of the brain’s anatomical and functional organization, multimodal data fusion efforts could enable a more comprehensive mapping of brain structure and function.
Large scale functional organization of the brain is often studied by viewing the brain as a complex, integrative network composed of spatially distributed, but functionally interacting, sub-networks that continually share and process information. Such whole-brain functional interactions, also referred to as patterns of functional connectivity (FC), are typically examined as levels of synchronous co-activation in the different functional networks of the brain. More recently, there has been a major paradigm shift from measuring the whole-brain FC in an oversimplified, time-averaged manner to additional exploration of time-varying mechanisms to identify the recurring, transient brain configurations or brain states, referred to as time-varying FC state profiles in this dissertation. Notably, prior studies based on time-varying FC approaches have made use of these relatively lower dimensional fMRI features to characterize pathophysiology and have also been reported to relate to demographic characterization, consciousness levels and cognition.
In this dissertation, we corroborate the efficacy of time-varying FC state profiles of the human brain at rest by implementing statistical frameworks to evaluate their robustness and statistical significance through an in-depth, novel evaluation on multiple, independent partitions of a very large rest-fMRI dataset, as well as extensive validation testing on surrogate rest-fMRI datasets. In the following, we present a novel data-driven, blind source separation based multimodal (sMRI-fMRI) data fusion framework that uses the time-varying FC state profiles as features from the fMRI modality to characterize diseased brain conditions and substantiate brain structure-function relationships. Finally, we present a novel data-driven, deep learning based multimodal (sMRI-fMRI) data fusion framework that examines the degree of diagnostic and prognostic performance improvement based on time-varying FC state profiles as features from the fMRI modality. The approaches developed and tested in this dissertation evince high levels of robustness and highlight the utility of time-varying FC state profiles as potential biomarkers to characterize, diagnose and predict diseased brain conditions. As such, the findings in this work argue in favor of the view of FC investigations of the brain that are centered on time-varying FC approaches, and also highlight the benefits of combining multiple neuroimaging data modalities via data fusion. | https://digitalrepository.unm.edu/ece_etds/439/ |
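Time-varying FC analyses of the kind described above are commonly built on windowed correlation of network time courses. The following is a minimal, generic sketch, not the dissertation's actual pipeline; the window length, step size, and use of Pearson correlation are illustrative assumptions:

```python
import numpy as np

def sliding_window_fc(timeseries, window=30, step=5):
    """Compute time-varying functional connectivity as a stack of
    windowed correlation matrices.

    timeseries : (T, N) array of T time points for N brain networks.
    Returns an array of shape (num_windows, N, N).
    """
    T, N = timeseries.shape
    mats = []
    for start in range(0, T - window + 1, step):
        seg = timeseries[start:start + window]
        # Correlate the N network time courses within this window.
        mats.append(np.corrcoef(seg, rowvar=False))
    return np.stack(mats)

# Toy example: 200 time points for 5 networks of random signal.
rng = np.random.default_rng(0)
ts = rng.standard_normal((200, 5))
fc = sliding_window_fc(ts)
print(fc.shape)  # one (5, 5) correlation matrix per window
```

Clustering the resulting stack of correlation matrices (e.g., with k-means) is one common way to recover the recurring, transient "brain states" that the abstract refers to as time-varying FC state profiles.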
The Health Insurance Portability and Accountability Act, frequently referred to as HIPAA, has been on the books since 1996 to establish standards and guidelines for maintaining the confidentiality and security of patient health information. Healthcare providers and their office personnel that violate the rules of HIPAA can find themselves facing serious consequences.
Even though HIPAA compliance is nothing new for the healthcare industry, violations are still quite common. This stems from a variety of circumstances, including staff that is either undertrained or simply unaware of the potential consequences of a HIPAA violation. To help you and your office staff better understand the law, here is what can happen when a violation has been committed.
What Happens When You Break a HIPAA Rule?
The consequences of a HIPAA violation depend significantly on the nature and severity of the offense. In other words, a practice that has received a relatively minor consequence for a HIPAA violation in the past can’t automatically assume that a violation isn’t a significant issue going forward.
Deciding how a HIPAA violation will be handled can involve multiple players, including employers, professional boards and in some cases, even the Department of Justice. The elements of the violation that authorities will look at include the overall nature of the violation, along with details like whether or not there was knowledge that a HIPAA rule was being violated, if there was malicious intent, and if the violation involved the criminal provision of the act.
Depending on the nature of the violation, and the position of the person or operation that committed the offense, possible consequences could include termination, removal from professional boards, fines and criminal charges.
Civil penalties for a HIPAA violation can be issued by the Department of Health and Human Services Office for Civil Rights. When a HIPAA violation has been committed, the Office will consider the circumstances of the offense and issue a penalty based on a four-tiered system.
Tier 1: A fine ranging from $100 to $25,000 for violations where it was determined that the individual was unaware of a HIPAA law being violated or where exercising a reasonable level of due diligence could have prevented the violation.
Tier 2: A fine ranging from $1,000 to $100,000 for violations where reasonable cause was established.
Tier 3: A fine ranging from $10,000 to $250,000 where willful neglect of HIPAA rules is evident, but the violation has been corrected within a specified amount of time.
Tier 4: A fine ranging from $50,000 to $1.5 million where willful neglect of HIPAA rules is evident and no attempt to correct the violation has been made.
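The four tiers above can be captured in a small lookup table. This sketch uses the fine ranges quoted in this article (published HHS figures are periodically adjusted for inflation), and the function name is our own:

```python
# Civil penalty tiers as described above (per-violation fine ranges in USD).
HIPAA_CIVIL_TIERS = {
    1: {"min": 100,    "max": 25_000,
        "criteria": "unaware of the violation / lack of due diligence"},
    2: {"min": 1_000,  "max": 100_000,
        "criteria": "reasonable cause established"},
    3: {"min": 10_000, "max": 250_000,
        "criteria": "willful neglect, corrected within the specified time"},
    4: {"min": 50_000, "max": 1_500_000,
        "criteria": "willful neglect, no attempt to correct"},
}

def fine_range(tier):
    """Return the (min, max) fine range for a given penalty tier."""
    t = HIPAA_CIVIL_TIERS[tier]
    return t["min"], t["max"]

print(fine_range(4))  # (50000, 1500000)
```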
In rarer cases, a HIPAA violation can fall into the hands of the Department of Justice if rules have been criminally violated. Criminal penalties for a HIPAA violation come with a fine and potentially a prison sentence of up to 10 years. Criminal offenses can include violations committed under false pretenses or made for personal gain or with malicious intent.
One of the most effective ways to protect your practice from an accidental HIPAA violation is to invest in medical practice management software that has been developed with patient security and privacy in mind. Contact Raintree Systems today to learn more about our software solutions for your medical practice. | https://www.raintreeinc.com/understanding-the-potential-consequences-of-a-hipaa-violation/ |
IMPORTANT: Read the application instructions keenly, Never pay for a job interview or application.
Senior Manager Systems Infrastructure
The Position:
The Technology Infrastructure department oversees the planning, deployment and operation of state-of-the-art infrastructure services, including server, storage, network, database, and cloud services, that support mission-critical systems for the Bank.
The role holder will lead a team of engineers responsible for the planning, implementation, and operations of KCB's systems infrastructure. The environment is composed of diverse, mission-critical systems running on virtualized and non-virtualized infrastructure, SAN storage, and backup & recovery infrastructure.
Key Responsibilities:
- Lead a team of Systems Engineers maintaining KCB systems infrastructure that includes Storage, Storage Area Network, Servers, Virtualization and Backup.
- Provide leadership in designing, building, and scaling production services and servers across multiple data centers for complex and data-intensive services.
- Responsible for ensuring dimensioning of systems is done periodically based on demand and projected growth.
- Responsible for proactive infrastructure provision to meet required business needs without impact on time to market.
- Implement strategic server and storage solutions that meet the Bank’s business needs both today and in the future.
- Responsible for systems infrastructure capacity management, working with all relevant stakeholders to ensure the capacity management processes are implemented and adhered to at all times.
- Conduct contingency planning for potential IT hardware unavailability to ensure business continuity.
- Develop and maintain systems infrastructure documentation and action plans including policies and procedures, disaster recovery plans, user guides and best practices that relate to systems platforms in the bank.
- Implement best practice security measures to ensure the integrity and continuity of systems and continuously monitors security compliance.
- Establish and manage SLAs with all stakeholders to ensure high availability of mission-critical servers, supporting resilience and continuity planning.
- Establish effective relationship with suppliers and partners to influence their plans and maximize value delivery from the relationship for all systems infrastructure services.
The Person:
For the above position, the successful applicant should have the following:
- 10 years’ progressive experience in Information Technology, with at least 5 years’ experience in infrastructure planning and/or support in an environment with extensive Windows and Linux infrastructure deployments.
- Certification in either VMware (VCP), RedHat (RHCSA/RHCE) or Storage.
- Experience with Automation and Configuration Management with preference for Ansible.
- Cloud certifications with Azure (Administrator/ Solutions Architect) and AWS (SysOps/Solution Architect) will be an added advantage.
- Experience with Container/PaaS orchestration/management platforms such as Kubernetes, OpenShift will be an added advantage.
The above position is demanding, and the Bank will provide a competitive remuneration package to the successful candidate. If you believe you can clearly demonstrate your abilities to meet the criteria given above, please log in to our Recruitment portal and submit your application with a detailed CV.
CLICK HERE TO APPLY
To be considered your application must be received by Friday 12th August 2022.
Qualified candidates with disability are encouraged to apply.
Only short-listed candidates will be contacted.
NB: In the event that you are invited to interview for any positions, we will require that you provide us with the following documents:
- National I.D.
- KRA Pin Card.
- Birth Certificate of self.
- Passport Photo (White Background).
- NSSF Card.
- NHIF Card.
| https://careerassociated.com/2022/08/01/kcb-group-limited-senior-manager-systems-infrastructure/ |
Regardless of our life circumstances, we all want to ensure good health and well-being for ourselves and our family. This seems like a basic goal, but if you don’t have access to quality health services, this goal can become insurmountable. Whether you are in need of vaccines, prenatal care, emergency surgery or rehabilitation after an accident, health care is fundamental to quality of life and essential for survival for all people, particularly those who live in the world’s poorest communities and countries.
That is why your support of HVO is so crucial. The world faces an acute shortage of health care providers – estimated to reach 18 million by 2030 – resulting in millions of people without access to care. Your donation to HVO will lead to more trained health workers providing quality care to their communities, and, ultimately, more individuals will receive the care they need because of your gift.
By supporting HVO, you become part of a community dedicated to empowering health workers with knowledge, skills and professional development opportunities. HVO fosters partnerships among institutions, local providers, volunteers and sponsor organizations. We connect people with one another and with the resources necessary to bring education and training to health workers in low- and middle-income countries, ensuring they can provide the best care possible in their own communities:
Each of these stories was made possible by the connections formed between HVO volunteers and their colleagues overseas – and by donors like you. You provide the resources to recruit and prepare volunteers; you support essential evaluation tools and activities that ensure our projects meet the needs of our overseas partners; and you make possible innovations that address the evolving education and mentorship needs of health workers at our project sites.
The global shortage of health workers is a significant challenge, but bringing skilled, safe and compassionate care to people around the world is possible if we draw on our greatest strength: the power of people working together. Please, join us. | https://hvousa.org/ways-to-give/why-give/ |
For most of us, paying for electricity means paying for what we use. Municipalities do that, too, but they also have to pay something called a capacity charge for their large accounts. This is a charge assessed by ISO-New England, the grid operator, to ensure account holders pay in proportion to their usage and that there is enough electric “capacity” available on the grid. The charge is calculated for each account on the basis of its demand during the grid’s annual peak hour that year: the one hour of that year during which grid-wide demand was at its highest. It is like children being put on Santa’s nice list according to their behavior not over the whole year, but over a single hour.
If a large account can shut down large energy consuming systems during the peak hour, it can reduce its demand and significantly decrease its capacity charge for the whole year. The question is, which hour is it?
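The incentive can be sketched with a little arithmetic. In this hypothetical example, the capacity rate, billing period, and kW figures are illustrative assumptions, not actual ISO-New England tariff values:

```python
def capacity_charge(peak_hour_kw, rate_per_kw_month=10.0, months=12):
    # Annual capacity cost for one account, assuming a hypothetical rate
    # billed per kW of demand measured at the grid's annual peak hour.
    return peak_hour_kw * rate_per_kw_month * months

baseline = capacity_charge(500)          # account drawing 500 kW at the peak hour
curtailed = capacity_charge(500 * 0.49)  # after shedding 51% of its load
print(round(baseline - curtailed))       # annual savings from one hour of action
```

Note that the one-hour load shed changes the charge for the entire year, which is why the dollar figures reported later in this article are so large relative to the energy actually saved.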
This is where MAPC’s Peak Demand Notification program can help cities and towns. Our program, now in its second year, helps municipalities manage capacity costs for their biggest energy users (often the High School, Middle School, City Hall and Library).
We know that every year, the peak demand hour occurs on one summer weekday afternoon – but narrowing down the forecast to the particular afternoon and hour takes additional work.
Using historical data on ISO-New England’s website, we defined three levels of risk that the peak demand hour would occur on any particular day: UNLIKELY, POSSIBLE, or LIKELY. We apply these levels of risk to ISO New England’s daily forecast of what the peak demand will be and when it will occur, and send our risk warnings to subscribers.
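A minimal version of that risk rating might look like the following; the MW thresholds here are placeholders for illustration, not the actual cutoffs MAPC derived from ISO-New England's historical data:

```python
def classify_peak_risk(forecast_peak_mw, possible_mw=24_000, likely_mw=25_500):
    """Rate the risk that today contains the annual peak hour, based on
    the grid operator's forecast of the day's peak demand (in MW)."""
    if forecast_peak_mw >= likely_mw:
        return "LIKELY"
    if forecast_peak_mw >= possible_mw:
        return "POSSIBLE"
    return "UNLIKELY"

for mw in (22_000, 24_500, 26_000):
    print(mw, classify_peak_risk(mw))
```

Subscribers receiving a LIKELY warning would then shut down large energy-consuming systems during the forecast peak hour.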
In our first year (summer of 2015), ten cities and towns participated. They achieved an average electricity reduction of 51% during the peak hour, and we conservatively estimate that they reduced a total of 15,245 kW across five load sheds. This translates to 6.7 metric tons of avoided CO2 emissions (about the equivalent of one car driving on the road for a year). Additionally, the actions they took garnered an estimated total savings of $261,905 (an average of $29,100 per community across nine communities). All the communities deserve to be congratulated, but the biggest load-shedder was the Acton-Boxborough School District, which reduced its demand by 80% and saved approximately $75,000.
This past summer, the program grew to 45 participating municipalities. We also had our first commercial business and a university participate in the notification program. While the data from this summer is still being collected, we can conservatively estimate that if at least ten additional communities took action to load-shed during the peak hour, we will have doubled our impact from the first year of the program.
Aside from some significant financial savings, and moderate energy and emissions reductions, the actions communities are taking are creating efficiencies in other important ways. At times of peak demand, grid operators like ISO New England are required to call upon some of the dirtiest power sources available to meet our electricity needs, making the emissions and pollution implications even greater. As more communities and businesses reduce demand during this time, the need for such sources can potentially be eliminated. As cities and towns start to manage their energy use more aggressively throughout the year, not just during the peak hour during the summer, we will hopefully begin to see a reduction of energy use grid-wide and deeper savings for individual communities.
If you’re interested in learning more about MAPC’s Peak Demand program please visit MAPC’s website. If you have any additional questions or want to sign up for the program next year, email [email protected]. | https://www.mapc.org/planning101/how-municipalities-are-saving-money-and-energy-through-mapcs-peak-demand-notification-program/ |
A system and method can alleviate congestion in a middleware machine environment with a plurality of switches in a fat-tree topology. The middleware machine environment can support a plurality of...
2. US20140328172 - CONGESTION CONTROL AND QOS IN NOC BY REGULATING THE INJECTION TRAFFIC: Systems and methods described herein are directed to solutions for NoC interconnects that provide congestion avoidance and end-to-end uniform and weighted-fair allocation of resource bandwidths...
3. US20130077487 - Controlling Registration Floods In VoIP Networks Via DNS: A mechanism controls global synchronization, or registration floods, that may result when a large number of endpoints in a Voice over Internet Protocol (VoIP) network such as an Internet Protocol...
4. US20140219088 - QUALITY OF EXPERIENCE ENHANCEMENTS OVER WIRELESS NETWORKS: Systems and methods for providing content-aware adaptation of multimedia communications in wireless networks to ensure Quality of Experience (QoE) of the content transmitted by the multimedia...
5. US20080273460 - METHODS AND SYSTEMS FOR COMMUNICATION BETWEEN NETWORK ELEMENTS: A system for synchronizing a first network device and a second network device. The first network device comprises an interface configured to release over a communication link a first signal...
6. US20150180780 - CONGESTION ABATEMENT IN A NETWORK INTERCONNECT: A method and system for detecting congestion in a network of nodes, abating the network congestion, and identifying the cause of the network congestion is provided. A congestion detection system...
7. US20130215739 - ROUTING METHOD FOR A WIRELESS MULTI-HOP NETWORK: Disclosed is a routing method named lock routing for a wireless multi-hop network, which can be based on next-hop routing and source routing and named next-hop lock routing and source lock routing...
8. US20130064086 - Device and a System For IP Traffic Offloading: A Mobile Control Entity (125, 225) for a system (100, 200), arranged to communicate with a first Mobile Gateway (110, 210) and a Radio Network Control Entity (130, 230), to receive an instruction...
9. US20130308452 - METHOD AND APPARATUS FOR MITIGATING AN OVERLOAD IN A NETWORK: A method and apparatus for managing an overload condition in a network are disclosed. For example, the method monitors the network for a traffic overload condition, and determines whether a more...
10. US20120176897 - Multipoint Delivery Entity and Method: A multipoint delivery entity for receiving multipoint transmissions from an upstream entity and delivering said multipoint transmissions downstream to multipoint receivers, said entity...
11. US20120014249 - System and Method for Routing Internet Traffic Over Internet Links: An apparatus and method for routing IP traffic in real time from at least one network user to a plurality of Internet links. Embodiments include assigning different ranks to different Internet...
12. US20140204744 - MECHANISMS TO IMPROVE THE TRANSMISSION CONTROL PROTOCOL PERFORMANCE IN WIRELESS NETWORKS: A system located on either side of a wireless network for reducing the amount of collisions in the wireless network comprises a TCP server in communication with a TCP client using TCP protocols...
13. US20120201136 - MECHANISMS TO IMPROVE THE TRANSMISSION CONTROL PROTOCOL PERFORMANCE IN WIRELESS NETWORKS: A system located on either side of a wireless network for reducing the amount of collisions in the wireless network comprises a TCP server in communication with a TCP client using TCP protocols...
14. US20110103223 - System and Method to Determine Resource Status of End-to-End Path: Determining availability of an end-to-end physical path associated with reserved resources of a tunnel may include determining, for one or more nodes, a resource status for one or more resources...
15. US20120044807 - METHOD AND SYSTEM FOR ENFORCING TRAFFIC POLICIES AT A POLICY ENFORCEMENT POINT IN A WIRELESS COMMUNICATIONS NETWORK: Embodiments of a method and system for enforcing a traffic policy at a Policy Enforcement Point (PEP) that controls the flow of traffic in a wireless communications network are described. In one...
16. US20110205896 - Method and Device for Monitoring Service Quality in a Network: A method of monitoring the end-to-end quality of a service in a telecommunications network includes: a step (F10) of negotiating with the provider of the service a contract including at least one...
17. US20130003546 - System and Method for Achieving Lossless Packet Delivery in Packet Rate Oversubscribed Systems: A system and method for achieving lossless packet delivery in packet rate oversubscribed systems. Link-level packet rate control can be effected through the transmission of packet rate control...
18. US20120213070 - DYNAMIC SETTING OF OPTIMAL BUFFER SIZES IN IP NETWORKS: A communications system provides a dynamic setting of optimal buffer sizes in IP networks. A method for dynamically adjusting buffer capacities of a router may include steps of monitoring a number...
19. US20140204743 - NETWORK LATENCY OPTIMIZATION: A network may provide latency optimization by configuring respective latency values of one or more network components. A latency manager may receive a request indicative of a maximum latency value...
20. US20110222403 - METHOD FOR REPORTING QOS CONTROL-RELATED INFORMATION IN NETWORK AND NETWORK ENTITY THEREFOR: A method for reporting Quality of Service (QoS) control-related information in a network is provided, in which an intermediate network entity located within an end-to-end path generates the QoS...
21. US20130301410 - Multiple Logical Channels for Use in Network Devices: A method for establishing a virtual channel between network devices is disclosed. In the case of a local network device establishing a virtual channel with a remote network device, a virtual...
22. US20110205895 - METHOD OF ESTIMATING CONGESTION: Provided is a method of controlling traffic transmitted over a network path from a transmitter to a receiver via a router, the traffic comprising a plurality of packets, and the method comprising:...
23. US20090190472 - End-to-end network management with tie-down data integration: A method for managing tie-down information in a network management system for a telecommunications network including: providing tie-down information to the network management system associated...
24. US20140204742 - POSITIVE FEEDBACK ETHERNET LINK FLOW CONTROL FOR PROMOTING LOSSLESS ETHERNET: An Ethernet node includes a receiver and transmitter for coupling to an Ethernet link for transceiving Ethernet frames with a remote Ethernet node at a remote end of the Ethernet link. The...
25. US20050052998 - Management of peer-to-peer networks using reputation data: A method of operating a computer entity in a peer-to-peer network is provided in which the computer entity carries out a reputation management process in which it collects reputation data items...
26. US20110242977 - RADIO COMMUNICATION SYSTEM, RADIO NETWORK CONTROLLER, RADIO BASE STATION AND RADIO COMMUNICATION METHOD: A radio network controller activates a signal transmission suppression timer to suppress transmission of a congestion signal, when a new radio base station is added as a radio base station with...
27. US20090086631 - Voice Over Internet Protocol Marker Insertion: A watermark is inserted or overwritten into a packetized voice stream in a VoIP environment to characterize the voice data stream for various functions, such as providing certain in-band audible...
28. US20150215814 - Device-Assisted Services for Protecting Network Capacity: Device Assisted Services (DAS) for protecting network capacity is provided. In some embodiments, DAS for protecting network capacity includes monitoring a network service usage activity of the...
29. US20150103656 - End-to-End Credit Recovery: Packets or data units and their related credit returns each include an assigned phase value. When a credit test is desired, the phase value of outgoing data units is changed, for example from 0 to...
30. US20140269289 - METHOD AND APPARATUS FOR IMPROVING COMMUNICATION PERFORMANCE THROUGH NETWORK CODING: A method, apparatus and computer program product providing improved communication performance through network coding is presented. Coded packets are formed at a source node, the coded packets...
31. US20140003236 - System and Method for Dynamic Rate Adaptation Based on Real-Time Call Quality Metrics: According to one embodiment, a network device is configured to measure a quality of service associated with a real-time streaming packet received at a first transmission rate. The network device...
32. US20060067221 - UMTS call handling methods and apparatus: Methods and apparatus for transparently switching a local instance of a UMTS protocol-based call from a first card or module of a media gateway to a second card or module of the media gateway,...
33. US20120307634 - METHOD FOR HANDLING LOCAL LINK CONGESTION AND APPARATUS: The present invention relates to the field of congestion control, and discloses a method for handling local link congestion and an apparatus. The method includes: negotiating, by a local link...
34. US20130301411 - THROUGHPUT ESTIMATION DEVICE: A throughput estimation device 500 includes a wireless link quality information acquisition portion 501 acquiring wireless link quality information denoting a quality of a wireless link...
35. US20050286421 - Location determination for mobile devices for location-based services: Information regarding a site-specific m-commerce transaction is used to determine the general location of a mobile device with which the m-commerce transaction is conducted. All that is required...
36. US20090129268 - RANDOM REUSE BASED CONTROL CHANNELS: Systems and methodologies are described that facilitate wireless network transmitters blanking or reducing power on portions of bandwidth reserved for control information transmission. This...
37. US20130170349 - GENERATING, AT LEAST IN PART, AND/OR RECEIVING, AT LEAST IN PART, AT LEAST ONE REQUEST: In an embodiment, an apparatus is provided that may include circuitry to generate, at least in part, and/or receive, at least in part, at least one request that at least one network node generate,...
38. US20120140625 - BANDWIDTH INFORMATION NOTIFICATION METHOD, SERVICE PROCESSING METHOD, NETWORK NODE AND COMMUNICATION SYSTEM: A bandwidth information notification method includes: obtaining current bandwidth information of a link; and sending the bandwidth information to an endpoint of a service connection through the...
39. US20130182568 - COMMUNICATION METHOD OF CONTENT ROUTER TO CONTROL TRAFFIC TRANSMISSION RATE IN CONTENT-CENTRIC NETWORK (CCN), AND CONTENT ROUTER: A communication apparatus and method of a content router control a traffic transmission rate in a content-centric network (CCN), and the content router. In the communication method and the content...
40. US20100008224 - Method for Optimizing the Transfer of Information in a Telecommunication Network: The invention relates to a method for optimising the transfer of information in a telecommunications network comprising a plurality of producers of information Pi, a plurality of consumers of...
41. US20050259580 - Fixed network utility data collection system and method: A fixed network utility data collection system includes a plurality of endpoints arranged in a tiered and spoke-like configuration relative to a central data collecting device. An RF transmission...
42. US20130010596 - METHODS, SYSTEMS, AND COMPUTER PROGRAM PRODUCTS FOR PROVIDING TRAFFIC CONTROL SERVICES: Traffic control services include detecting an occurrence of an activity via a computer processor. The activity subject to the detecting includes a presence of a device. The services also include...
43. US20140133302 - TUNING ROUTING METRICS TO REDUCE MAXIMUM LINK UTILIZATION AND END-TO-END DELAY VIOLATIONS: A metric tuning technique optimizes the link utilization of a set of links in a network and end-to-end delay or latency constraints. In the embodiments, a delay constraint between node pairs in...
44. US20060253622 - Method and device for congestion notification in packet networks indicating several different congestion causes: A device for routing data units in a network, and a method of controlling a device for routing data units in a network, where the device 1 is capable of identifying one or more causes of...
45. US20150124604 - Systems and Methods for Proactive Congestion Detection in Radio Access Networks: System and method embodiments are provided for proactive congestion detection in radio access networks. In an embodiment, a method in a network component for inhibiting the occurrence of...
46. US20130265871 - SYSTEM AND METHOD FOR IMPLEMENTING LABEL SWITCH ROUTER (LSR) OVERLOAD PROTECTION: A method and apparatus for implementing Label Information Base (LIB) overload protection for a respective Forwarding Equivalency Class (FEC) type associated with a Label Switched Path (LSP).
47. US20110019547 - METHOD AND APPARATUS TO CONTROL APPLICATION MESSAGES BETWEEN CLIENT AND A SERVER HAVING A PRIVATE NETWORK ADDRESS: A method to control communication traffic in a communication network. The traffic includes application-level messages between a client and a server having a private network address. The method...
48. US20120155263 - METHOD FOR NOTIFYING BANDWIDTH INFORMATION, METHOD FOR PROCESSING SERVICE, NETWORK NODE, AND COMMUNICATION SYSTEM: The present invention relates to the field of communication technologies, and discloses a method for notifying bandwidth information, a method for processing a service, a network node, and a...
49. US20140321274 - MANAGING BANDWIDTH ALLOCATION AMONG FLOWS THROUGH ASSIGNMENT OF DROP PRIORITY: A method is provided in one example embodiment and includes generating a transmission control protocol (TCP) flow; marking a plurality of packets of the TCP flow with one of two differentiated...
50
US20080198745
Dynamic multi-hop negotiations
Negotiation of RSVP reservations prior to the setup of a call, rather than negotiating reservation parameters during the call. RSVP reservation parameters are negotiated prior to ringing a device,...
Matches 1 - 50 out of 252
| https://www.freepatentsonline.com/ACC-370-231.html
Spaceflight compromises immune system of astronauts
May 15th, 2010 - 2:01 pm ICT by ANI
Washington, May 15 (ANI): Spaceflight takes a toll on the genes controlling immune and stress response, which apparently leads to more sickness among astronauts, according to a new study.
Astronauts are known to have a higher risk of getting sick compared to their Earth-bound peers.
The stresses that go with weightlessness, confined crew quarters, being away from family and friends and a busy work schedule - all the while not getting enough sleep - are known to wreak havoc on the immune system.
Between spaceflight affecting a crew’s susceptibility to infections and previous observations of sickness-causing microbes thriving in a near-zero gravity environment, long journeys to far-away destinations such as Mars pose a big challenge to manned space missions.
“Taken together, our results hint at the possibility that an astronaut’s immune system might be compromised in space,” said immunobiologist Ty Lebsack at the University of Arizona, the lead author of the study.
The researchers focused their study on the thymus gland, the organ that serves as a “factory” and “training academy” for T-cells that are key players of the immune system.
They compared gene-expression patterns in thymuses from four healthy mice that had spent 13 days aboard NASA’s STS-118 Endeavour Space Shuttle to those from an equal number of control mice on the ground.
They found that 970 individual genes in the thymus of space-flown mice were up or down-regulated by a 1.5 fold change or greater.
When these changes were averaged, 12 genes in the thymus tissue of all four space-flown mice were significantly up or down-regulated.
“The altered genes we observed were found to primarily affect signaling molecules that play roles in programmed cell death and regulate how the body responds to stress,” said Lebsack.
Programmed cell death plays an important role in a functioning body, for example in the disposal of cells that are no longer needed or damaged beyond repair.
However, cell death must be tightly regulated in the immune system to ensure the process does not get out of hand.
“Many of the genes whose activity was down-regulated in the space-flown mice play important roles in maintaining that balance. Potentially, you could get more cell death aboard a spacecraft because of these differences,” said Lebsack.
The results are consistent with experiments carried out on the ground to study how microgravity affects immune cells.
In these experiments, scientists mimicked weightlessness using clinostats - apparatuses that slowly rotate the study object so the Earth’s gravitational pull is never perceived as coming from one consistent direction.
“Previous studies with cell cultures in clinostats showed increased cell death in T-cells when you take away the gravity stimulus, so it was a logical step to test whether we find the same effects in animals exposed to an actual lack of gravity,” said Lebsack.
“We observed an overall pattern about the genes whose expression was changed by space flight: All of them are involved, in one way or another, in the development, control and programmed cell death of immune cells,” he added.
This study represents the first use of microarray technology to investigate gene expression in thymus tissue of space-flown mice, said the authors. | http://www.thaindian.com/newsportal/health/spaceflight-compromises-immune-system-of-astronauts_100364155.html |
Dr. Krupesh Thacker — Kyan Chhe Kano?
Kyan Chhe Kano? is a Spiritual Journey album: a journey each of us makes that starts with the Search for God and ends with the realization of the Ultimate Truth. The title track is a musical conversation between a daughter and a father.

The Bhagwad Gita is the inspiration for the album's songs. All the songs are interwoven by the story of this Spiritual Journey. By the time we realize the concept of a “Universal God”, the next generation is ready to ask the same question about the “Existence of God”; thus the circle of the Spiritual Journey continues. In the title song, the daughter asks about the existence of God and the father explains the concept of the “Universal God” in a simple and musical way. The album's 7 songs span 4 languages: Gujarati, Hindi, Sanskrit and English.
The songs reflect the folk music of Gujarat with a blend of Lavani, Goan and Western music. | https://www.krupmusic.com/release/kyanchhekano/
Reading nonwords aloud: evidence for dynamic control in skilled readers.
In two experiments, we examined whether the dynamics of the reading system are adjusted on a trial-by-trial basis, despite the use of stimuli that all require the same spelling-sound translation routine. Subjects read aloud easy and more difficult nonwords in a predictable alternating sequence (AABB). Dynamic control was inferred via the observation of switch costs in response times and/or accuracy (A to B and B to A) for both types of items. Consistent with online control, switch costs were observed for both kinds of items. Various ways in which the reading system could adjust in response to such stimuli are considered.
Yard & Garden: How to keep tomatoes from splitting on the vine
Even when there is adequate irrigation, drying winds and intense sunlight in New Mexico can result in water deficits within the plant while there may be adequate moisture in the soil.
Q. I grew tomatoes last year in my home garden in Lea County (in southeastern New Mexico). The plants grew fine, but the fruits cracked. How do I keep them from splitting on the vine? They were terrific eating; I just cut around the splits.
Dennis H., via NMSU Extension
A. Tomato fruits often exhibit cracking when their skins lose elasticity and the fruit then receive water and enlarge. The loss of elasticity of the fruit skins is most common as the fruit near maturity. This problem occurs in other states as well as in New Mexico and is often attributed to failure to irrigate properly to maintain consistent moisture in the soil. During dry periods the growth rate of tomato fruit slows and the skins become less elastic. When the plants are irrigated after they have dried, even only a little bit, the fruits absorb water and begin to enlarge, and in some varieties the skins are unable to stretch enough to accommodate the enlargement. The environment in New Mexico can make this situation more complicated. Even when there is adequate irrigation, drying winds and intense sunlight in New Mexico can result in water deficits within the plant while there is adequate moisture in the soil.
Some varieties are more resistant to fruit cracking than other varieties. Larger fruited varieties may be the most susceptible and smaller fruited varieties may be more resistant. This is a generalization and may not be correct for all varieties. Gardeners should grow several varieties when possible, each year noting which varieties perform best and which have problems. Look for varieties that are reported to resist fruit cracking.
New Mexico gardeners should use techniques that gardeners all over the country use to maintain adequate soil moisture. We should irrigate often enough to prevent soil drying. We should use mulch to conserve soil moisture and maintain uniform soil temperatures. Both plastic mulch and organic mulches will benefit tomato plants and reduce the incidence of fruit cracking. We can do little to control moisture from precipitation, but we can irrigate to prevent drying of the soil. We should also irrigate in a manner that avoids creating waterlogged soil.
In New Mexico, our intense sunlight and drying wind based problems may be managed to some degree by growing plants where they receive some shade during the hottest part of the day and in locations where the plants are protected from winds. This may be managed by carefully choosing tomato growing locations in relation to structures, or by employing other garden plants to provide shade and wind protection. Tomato plants planted among corn plants will receive some shading and wind protection from the corn plants. Rows of climbing plants or self-supporting tall plants may be planted upwind of the tomato plants.
Finally, careful application of nutrients can help tomato plants reduce the incidence of fruit splitting. Nitrogen application will stimulate vegetative growth that competes with the developing fruit for water. Phosphorus applications will stimulate flowering and fruiting. Potassium is important for water management and efficient plant use of water. The proper balance of nutrients, especially the “macro nutrients” listed above, and many other nutrients, results in better plant health and fruit production. Soil samples sent to professional soil testing laboratories can help determine the status of soil nutrients in your garden and determine the need for nutrient application. Your local NMSU Cooperative Extension Service can advise you regarding soil testing and interpreting the results of soil tests.
Send your gardening questions to Yard and Garden, Attn: Dr. Curtis Smith at [email protected] or leave a message at https://www.facebook.com/NMSUExtExpStnPubs. Curtis W. Smith, Ph.D., is an Extension Horticulture Specialist, retired from New Mexico State University’s Cooperative Extension Service. NMSU and the U.S. Department of Agriculture cooperating. | https://www.lcsun-news.com/story/life/sunlife/2016/03/13/yard-garden-how-keep-tomatoes-splitting-vine/81282440/ |
Day I: Visit and volunteer at Tunahaki Centre for Children Development, an orphanage caring for orphans and vulnerable street children and providing them with basic essential development services. There are more options to volunteer depending on your interests, profession and available time.
Day II: After breakfast, with a lunch pack, drive to the Lake Manyara National Park offices for registration formalities; thereafter have a half-day game drive in the park. This park is famous for its Tree Climbing Lions and a good number of primates, especially Baboons, Vervet and Sykes' (Blue) Monkeys. There is also a great chance to visit Maji-Moto (Hot Spring Water) and to observe a variety of water birds along the shallow shores of Lake Manyara.
In the late afternoon visit Mto wa Mbu Maasai town. On the surrounding plains you'll see many Maasai families in their traditional bomas, and the tall, elegant, proud Maasai herdsmen draped in red cloths and hung with strings of beads and intricate earrings. If you're lucky you might see one of the Maasai cattle markets that occur occasionally on the plains outside of town. This is a rare spectacle, when thousands of colorful Maasai with their skinny cows gather to trade. You might also have time to explore the Maasai's other social, economic and cultural activities, including the warriors' (young men's) night fire dancing. Overnight at Lake Manyara.
Day III: Drive, with game viewing en route, to Serengeti National Park via the Ngorongoro Conservation Area. Today you will have a "sunset" game drive in central Serengeti National Park. Overnight at Central Serengeti.
Day IV: Have a full day game drive in Serengeti, known worldwide as the "endless plains" and announced by an American geographic society as one of the new Seven Wonders of the World. On that day you will have the chance to see the migration: hundreds of thousands of Wildebeest, Zebras, Gazelles, Topi and Hartebeest wandering the Serengeti short-grass plains. Overnight in Serengeti.
Day V: Early in the morning, around 6.00 am, have a coffee and leave the hotel for a "sunrise" game drive to tick off those animals not seen previously. Then come back for brunch and pack your stuff before starting a game drive en route, with a stopover at Olduvai Gorge, a historical site where remains of some of the earliest human beings were discovered, and continue to Ngorongoro Conservation Area for dinner and overnight.
Day VI: Descend into the Crater for a full day game drive. Ngorongoro Crater is a living paradise where wildlife, human beings (the Maasai) and domestic animals (livestock) all live together. This bowl-like intact caldera is where Black Rhinos have made their home. Ngorongoro Crater is also home to Black Maned Lions, Cheetahs, Leopards, Foxes, Jackals, Hyenas, and a good number of herbivores like Buffaloes, Wildebeest, Zebras, Gazelles, Eland and Elephant Bulls. Ascend and drive to Tarangire for dinner and overnight. | http://www.greatafricasafaris.com/tanzania-safaris/7-days-adventure-and-community-tour/
A template is a pattern or gauge used in constructing or measuring something. A cognitive template or core belief is like a code we mentally design to make our environment comprehensible. These fundamental concepts reside beneath conscious awareness, but they help shape a person’s perception of reality and determine his life choices, influencing and reinforcing the interpretation of experience. The power of cognitive templates is evidenced by the fact that rather than changing them to more accurately reflect the way things now are, most people tend to continue misperceiving present events so they will fit with past core beliefs.
Negative templates develop when we are young. A child innately seeks to understand the world around him, but when he is exposed to confusing or traumatic experiences and given either no explanation or inaccurate information, he formulates his own faulty worldview based on distortions and untruths (1 Corinthians 13: 11 BIBLE). For example, in the case of their parents’ divorce children always blame themselves.
By approaching life according to these defective patterns the youth has the impression he understands clearly, when in reality he is ignoring or misinterpreting events in the here and now. In depth counseling we help the recipient strip away unhealthy destructive or defensive templates to see truth more accurately and more completely.
Lies also can form core beliefs when others repetitively mislead us, whether deliberately or through ignorance. Abusers often intentionally deceive their victims to protect themselves from being exposed. They say, for instance, “If you try to tell your mother, she will never believe you”. In addition, untruths can shape life themes through demonic influence. Satan is the “father of lies” (John 8: 44 BIBLE).
Core beliefs or cognitive templates tend to fall into certain clusters, which can be loosely categorized for purpose of identification. Each collection is strongly influenced both by inborn temperament traits and early experience. Specific defensive behaviors are typical of each cluster.
In the first group of false core beliefs are convictions held by persons who believe they are no good alone, that life is not worth living unless someone loves me. Persistently feeling inadequate and guilty, those who are dominated by this set of templates blame themselves unnecessarily for situations beyond their control. As a result, such individuals become people pleasers, conforming to others’ wishes and avoiding assertive action in order to keep from rejection. They find themselves caught in the dependency/co-dependency cycle, to the detriment of their true identities. God tends to be seen as distant, inaccessible, and disapproving.
Second in classification are core beliefs that one is accepted only on the basis of performance; this results in compulsivity. These individuals govern their behavior according to what they believe they should do in order to be loved. They may say to themselves, “Perfection is the only acceptable standard for me”, or “I should be able to satisfy all my own needs”, or “I should be competent at all times.”
Such people can give to others, but have a hard time receiving because they feel they don’t “deserve” love. This makes it hard to accept God’s love as well, and often they feel isolated and alone. Compulsivity templates produce anxiety reactions, eating disorders, and other addictions. Being driven by a performance-oriented way of thinking prevents intimacy with a gracious God, who is seen as perpetually disappointed, always expecting more.
Confronting compulsivity core beliefs and their attendant lies may be particularly difficult in our society, where self-esteem based on earned approval is highly valued. In Matthew 20: 1-16 BIBLE, Jesus tells a parable emphasizing the inadequacy of this type of mindset. In the story, a landowner sends workers to his vineyard at 6 a.m., 9 a.m., and 5 p.m. At the end of the day all the workers are called in and paid the same, whether they worked one hour or twelve. When those who worked twelve hours grumble, the owner responds, “I have the right to do what I want with my money. I can be generous with whomever I desire.” Jesus’ point is clear: Our earning mentality does not help us understand a gracious God.
The third cluster of core beliefs relates to control of self or others. Individuals governing their lives by these lies are disciplined and hard working, yet constantly fearful of losing their grip. When the control cork pops off, they become unpredictable and impulsive.Whether intent on keeping their own feelings down or holding others in line, people driven by control needs can be difficult to work or live with. They may seek counseling for employment problems. Their spouses may want marriage intervention or demand they go through anger management. These people tend to see God as angry and judgmental.
Cognitive templates are influenced by a person’s inborn temperament tendencies in yet another way. The demonic often seems to hone in on innate imperfections by using difficult events in childhood, then taking a trait and distorting it into a defensive posture. For example, an individual with intrinsic high needs in the control area may be subject to violent authoritarian abuse from a father who never reinforces his developing competence. As a result, the young person’s natural tendencies to want control are perverted and turned inward. He may believe, “I am worthless unless I can control everything and everyone around me”.
Healing Faulty Core Beliefs
Faulty core beliefs are usually addressed after the content of root memories has been processed. Untruths are exposed at this time by questioning, “What are the lies you believed?” To precisely pin down foundational untruths, we often formulate an identity statement based on what has been revealed, offering it to the individual for him to rate on a scale of one to ten.
As a result of what the person has already said, we suggest a possible cognitive template. For instance, we might offer something like this: “On a scale of one to ten, one being totally false, and ten being totally true, how would you rate this statement: I am bad because I am stupid and incompetent, and because of that, nothing I ever do will make me good enough for people to love me”?
By this time the counselee is already acquainted with his hidden emotions and has at least begun to release them. Therefore, when a possible identity statement is presented, the person almost always unambiguously answers with a specific number.If the sentence rates less than a ten, we keep “tweaking” it with the individual until it feels very accurate to him.
Intellectually oriented people may need some time to consider the full impact of the core belief they have confessed. Sometimes we write verbatim the statement rated ten by the affected person, then give it to him so he can deliberately consider how it may be false. Next time we meet, before beginning the session I may invite the injured individual to share insights he has gained after having reflected on the accuracy and truth of his cognitive template.
Now we can explore in particular the reasons a core belief is counterfeit. When the individual is ready to remove the untruth from her heart, we ask that she close her eyes and allow a moment to feel the full effect of its impact on her life. Then we request her to stop and look within her being for a truth to replace the previously identified “10” statement.
If the sufferer is able to report a new idea that she just “heard” or perceived, and which her heart accepts as authentic, we encourage her to reject and renounce the old lie verbally, then confess the new positive cognitive template for her life.
Sometimes people resist the truth, clinging hard to false identities upheld by well-formed hidden cognitive templates. If that is so, we challenge the validity of core beliefs by utilizing certain methods.
One technique is to talk them through a self-judging statement and rate it on a continuum. For example, if a perfectionist states he is worthless because of being unable to fix his car or maintain a spotless house, we ask him to rate his behavior on a scale from one to ten, in comparison with other possible violations such as child molestation or ax murdering. This helps him confront the truth that though he is not perfect, nor even as good as he would like, he is still not “worthless” as his belief insists.
Another tool for evaluating a core lie is to use logical analysis in breaking a person out of circular thinking. An illustration of such self-defeating false reasoning is the statement “I’m only loved when I have sex with men, so I always have to seek sex with men to prove I am lovable.”
Still another procedure is to reverse the negative cognitive template. If a person says, “I am worthless unless I can please everyone”, we may ask, “What if you have worth and you can’t please everyone? How will your life be different if your worth does not come from pleasing everyone, and what would be the basis for your worth then?”
When negative cognitive templates are exposed, demons that can be compared to rats nesting on garbage may be activated. Often unclean spirits leave spontaneously because their environment, figuratively located in dark wounded areas of the heart, has been brought into the light and cleansed by truth.
Deep healing comes to the wounded individual when the lie previously held in darkness is exposed and is changed to a life-affirming truth, in historical context. After negative cognitive templates are revealed and transformed, individuals have reported peace and control in areas of previous obsession and torment. | http://www.smokyraincounseling.com/articles/core-beliefs-and-cognitive-templates/ |
As you get older, your digestive system may not work as effectively or quickly as it used to. With age comes an increased risk for various digestive issues like heartburn, IBS, ulcers, and dysphagia.
Fortunately, as is the case with many health concerns, making modifications to your diet as you age can help support a healthy digestive system.
Digestive Health Tips
1. Stay at a Healthy Weight
Carrying excess weight increases your risk for heartburn and indigestion, so staying at a healthy weight for your body type is crucial. The 3 main factors that affect your weight are diet, exercise, and genetics.
2. Look Over Medications
Some medicines can cause adverse side effects related to digestion, including gas, bloating, constipation, heart burn, and alterations in the healthy bacteria in the gut. Talk with your doctor to see if any of the medications you’re taking could be negatively impacting your digestive health.
3. Recognize Trigger Foods
Keep a journal of what you eat so you can identify specific foods that cause digestive issues. Common irritants include spicy foods, dairy in excess amounts, legumes, and alcohol.
4. Chew Thoroughly
Chewing properly and thoroughly helps break down foods into more easily digestible pieces. Chew slowly and steadily, and don’t take another bite of food until you’ve swallowed. If eating slower is difficult, set a timer and devote at least 20 minutes to each meal.
5. Eat Smaller, More Frequent Meals
Eating 3 large meals a day can slow down your metabolism and the digestive system. Instead, aim for 4-5 small meals spaced throughout the day.
6. Maintain Good Oral Hygiene
Maintaining good oral hygiene is important to preserve the ability to chew foods thoroughly. Floss and brush regularly, along with visiting the dentist on an annual basis or as recommended.
7. Stay Hydrated
Older adults have a greater risk for dehydration due to side effects from various medications, decreased activity levels, and occasionally memory issues. This presents a challenge when it comes to digestion because fluids are what allow the body to properly process fiber. Everyone’s necessary fluid intake is different, but make it a point to always have water with you that you can sip throughout the day. It’s also important to incorporate hydrating foods into your diet like watermelon and other fruits.
8. Increase Fiber Intake
Unfortunately, most people in the US aren’t eating enough fiber, which can lead to a myriad of digestive issues, as fiber helps our bodies process food through our system efficiently.
Fiber can be found in plants-based foods, including whole grains, legumes, fruits, vegetables, nuts and seeds. Men 50 years and older are recommended to consume 30 grams of fiber daily, while women are encouraged to consume 21 grams.
9. Supplement Gut Bacteria
A healthy digestive system needs healthy gut bacteria. Probiotics are live bacteria cultures that support gut health by controlling the growth of harmful bacteria. Common probiotics to look for on ingredient labels include Lactobacillus, Bifidobacterium, and Saccharomyces; they can be sourced from yogurt, sauerkraut, sourdough bread, soft cheeses, and fermented foods generally. Probiotics also come in supplemental form.
Probiotics should be complemented with prebiotics, which are essentially the energy source for probiotics. Prebiotics are carbohydrates that cannot be digested; they are supplied by asparagus, bananas, oatmeal, legumes, and other fiber sources.
10. Get Active
Being active decreases the risk of constipation, particularly as aerobic exercise promotes the contraction of intestinal muscles and facilitates bowel movements.
Aim for at least 30 minutes of activity most days of the week. | https://chefsforseniors.com/blog/10-healthy-digestion-tips-for-seniors |
Drive-by download means two things, each concerning the unintended download of computer software from the Internet:
1. Downloads which a person authorized but without understanding the consequences (e.g. downloads which install an unknown or counterfeit executable program, ActiveX component, or Java applet).
2. Any download that happens without a person’s knowledge, often a computer virus, spyware, malware, or crimeware.
Drive-by downloads may happen when visiting a website, viewing an e-mail message or clicking on a deceptive pop-up window: the user clicks on the window in the mistaken belief that, for instance, an error report from the computer’s operating system itself is being acknowledged, or that an innocuous advertisement pop-up is being dismissed. In such cases, the “supplier” may claim that the user “consented” to the download, although actually the user was unaware of having started an unwanted or malicious software download. Websites that exploit the Windows Metafile vulnerability (eliminated by a Windows update of 5 January 2006) may provide examples of drive-by downloads of this sort.
Hackers use different techniques to obfuscate the malicious code, so that antivirus software is unable to recognize it. The code is executed in hidden iframes, and can go undetected.
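The hidden-iframe technique can be made concrete from a defender's perspective. The sketch below is a minimal heuristic scanner in Python (standard library only); the attribute checks and the example markup are illustrative assumptions, not a complete or reliable detector, since real drive-by pages obfuscate their markup:

```python
from html.parser import HTMLParser

class HiddenIframeScanner(HTMLParser):
    """Collects src attributes of <iframe> tags that look deliberately hidden."""

    def __init__(self):
        super().__init__()
        self.suspicious = []

    def handle_starttag(self, tag, attrs):
        if tag != "iframe":
            return
        attrs = dict(attrs)
        style = (attrs.get("style") or "").replace(" ", "").lower()
        # Zero-sized or CSS-hidden frames are classic drive-by carriers.
        hidden = (
            attrs.get("width") == "0"
            or attrs.get("height") == "0"
            or "display:none" in style
            or "visibility:hidden" in style
        )
        if hidden:
            self.suspicious.append(attrs.get("src") or "(no src)")

def find_hidden_iframes(html: str) -> list:
    scanner = HiddenIframeScanner()
    scanner.feed(html)
    return scanner.suspicious

# Hypothetical page: one visible paragraph, one zero-sized iframe.
page = '<p>hello</p><iframe src="http://evil.example/x" width="0" height="0"></iframe>'
print(find_hidden_iframes(page))  # ['http://evil.example/x']
```

Because the malicious code is usually injected by obfuscated JavaScript at render time, static scanning like this catches only the simplest cases; production tools render the page in a sandbox first and inspect the resulting DOM.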
A drive-by install (or installation) is a similar event. It refers to installation rather than download (though sometimes the two terms are used interchangeably). | https://newcyber.net/silverlight-drive-download-dbd/?v=4a5e17551e76 |
The Winden triplets used matching shovels to help their mom clear the driveway in Prairie Village.

The massive winter front that parked over much of the central plains Tuesday ground business as usual to a halt, and it is likely to continue hindering it Wednesday and beyond. Reports suggest parts of northeast Johnson County received nearly a foot of snow, with accumulation piling up over nearly a full day of snowfall, hampering street clearing efforts.

Prior to the start of the storm, Prairie Village public works director Keith Bredehoeft said the city's eight plows were planning to start work at 4 a.m. Tuesday morning, but warned that the size of the storm would pose challenges to his crews.

"Public works departments in this part of the country aren't really equipped to deal with a storm like this," he said. "We do ask a little patience."

The street clearing issues were compounded early Tuesday in Fairway, where a broken water main on Shawnee Mission Parkway just west of Mission Road blocked part of an eastbound lane. WaterOne crews were able to clear the scene by the early afternoon.

WaterOne crews fixed a broken main along Shawnee Mission Parkway in Fairway.

Challenging as the day may have been, the school closures left some time for fun, particularly before the accumulation made the roads increasingly difficult to manage. At SM East, the Riscovallez family relaxed in their car with hot chocolate and homemade pumpkin donuts (which they were kind enough to share; it was amazing) between stints on the sledding hill.

Here's what sledders Michael and Stella had to say about their day on the slopette.

Today, though, attention turns to cleanup. Forecasts don't call for a temperature above freezing until Saturday, with especially cold air predicted to fall on the area today and Thursday.
Striker Falcao says Chelsea move ‘does not seem possible’
Radamel Falcao has told Sky Italia he doubts he will end up playing under Chelsea boss Jose Mourinho.

The Colombian striker was heavily linked with the Blues prior to his move to Monaco and Mourinho's return to Stamford Bridge last summer. Mourinho is reportedly keen on Falcao, and there have since been reports that Falcao is unhappy in Monte Carlo and could yet move to Chelsea.

"There has been a lot of speculation about European clubs interested in me. I don't think about that," he said.

"Would I like to play with Mourinho? I'd never considered it. As of today it does not seem possible, as we are on very different paths and I have a five-year contract with Monaco.

"But Mourinho is a coach with a fantastic career. Nobody can deny his capabilities."
The Legend of Holi – Story of Prahlad and Holika
Holika Dahan, or the burning of Holika, will take place across Guyana on the night before the Phagwah celebrations. The festival of Holi gets its name from this very ritual of Holika Dahan. The phrase Holika Dahan literally means "burning Holika." The day when Holika is burnt is also known as Choti Holi, or small Holi. The people create a bonfire on this day and celebrate the triumph of good over evil.

It is stated in the Vedas that one day before the Holikotsav a sacred fire was burnt, and specific mantras to ward off evil were recited during the burning ritual. The fire was burnt with the objective of destroying the demonic forces.

Some texts also mention a different version of Holika Dahan. According to these texts, parched cereals and grains are collectively called 'Holka' in Sanskrit, and it is believed that the word Holi or Holika is derived from Holka, or grains. These parched grains were used to perform 'hawana,' or sacred fire. The sacred ash obtained from this fire ritual was known as the Bhumi Hari. This ash was smeared on the forehead of the person participating in the ritual to keep away evil spirits. To this day, the ritual of offering food grains to the bonfire is followed.

The preparations for Holika Dahan start 40 days before the festival. People gather twigs, logs, dry leaves and so on. Then, on the night of Phalgun Purnima, the bonfire is set alight amidst chanting of the Rakshogana mantras, which help to ward off evil spirits. The next morning, the ashes of the fire are collected and smeared on the body before taking a bath.

The burning of Holika signifies the triumph of good over evil and that the bad forces around you can never win if you have a strong will. Holika represents the negative forces, which get burnt against the strong will of humans, denoted by Prahlad. One of the foundational principles of Hinduism is that good only begets good, and it is instilled in the minds of all Hindus from a young age. They are taught to do good deeds as a way of life.

They are taught that one's soul only attains Moksha (liberation in this life) through good karma (deeds). Around this time, we are quite often reminded about the consequences of good karma with the celebration of Holi, the festival of colours.

Holi, or Phagwah as we know it, is the spring festival celebrated on the last full moon day of the lunar month Phalguna (Phalguna Purnima), which usually falls in the latter part of February or March. There are several legends, depending on the area you live in. However, the most common legend is the story of young Prahlad and his evil aunt Holika, signifying the triumph of good over evil.

The Legend

There was once a demon king by the name of Hiranyakashyapu who won over the Kingdom of Earth through years of prayers. He was the king of the demonic Asuras, and according to the Bhagavata Purana he was granted a boon (wish) that he could be killed by neither a human being nor an animal, neither indoors nor outdoors, neither at day nor at night, neither by astra (projectile weapons) nor by any shastra (handheld weapons), and neither on land nor in water or air.

After he received his boon, Hiranyakashyapu became so egoistic that he commanded everybody in his kingdom to worship only him. But to his great disappointment, his son Prahlad became an ardent devotee of Lord Vishnu, the sustainer of the universe, and refused to worship his father.

Hiranyakashyapu tried several ways to kill his son Prahlad, but Lord Vishnu saved him every time. Finally, he asked his sister, Holika, to enter a blazing fire with Prahlad in her lap, for Hiranyakashyapu knew that Holika had a boon whereby she could enter the fire unscathed.

Treacherously, Holika coaxed young Prahlad to sit in her lap, and she herself took her seat in a blazing fire. Legend has it that Holika had to pay the price of her sinister desire with her life.

Holika was not aware that the boon worked only when she entered the fire alone. Prahlad, who kept chanting the name of Lord Vishnu while in the fire with his aunt, came out unharmed, as the lord blessed him for his extreme devotion. Thus, Holi derives its name from Holika, and it is celebrated as a festival of the victory of good over evil.

Holi is also celebrated as the triumph of a devotee. As the legend depicts, nobody, however strong, can harm a true devotee, and those who dare torture a true devotee of god shall be reduced to ashes.

After the fire destroyed the evil Holika, King Hiranyakashyapu became enraged and attempted to kill Prahlad himself. However, Vishnu, the god who appears as an avatar (form) to restore Dharma in Hindu beliefs, took the form of Narasimha, half human and half lion, at dusk (when it was neither day nor night), took Hiranyakashyapu to a doorstep (which was neither indoors nor outdoors), placed him on his lap (which was neither land, water nor air), and then eviscerated and killed the king with his lion claws (which were neither a handheld weapon nor a launched weapon).
Efficiently communicating with customers and partners has never been more important. Rachel Perone, a multimedia designer at Magentrix, offers these tips for how to effectively and efficiently communicate digitally with partners and customers, especially during times of uncertainty.
- Provide updates via a partner or customer portal. No one can predict the future, but showing that your organization is prepared to address any scenario that comes their way will bring confidence to your partners and customers.
- Listen to feedback and respond to concerns. Be prepared to communicate the latest updates with your partners and customers via the portal. Gather feedback in a centralized system to ensure the team is able to stay on top of their concerns.
- Host events online and keep everyone informed as plans change. Customer and partner portals – where organizers can quickly make changes and notify guests with a click of a button – are an easy and effective way to ensure everyone is up-to-date with the latest changes.
Get our newsletter and digital magazine
Stay current on learning and development trends, best practices, research, new products and technologies, case studies and much more.
As a parent, you’ve probably heard and seen it all when it comes to your child. They come to you when they’re hungry, happy or needing a change of clothes. When bad dreams bother them, when they simply want to play or need a glass of water – you as the parent are usually their go-to. You know what to do to help your child. But what about when they’re anxious?
When a child feels anxious, it may look very different to the adults to whom they turn for help. Anxiety in children may look like unexplained tummy aches, bouts of tears, muscle tension, headaches or a feeling of dread welling up inside. One of the most challenging aspects of childhood anxiety is how it may present as a constellation of complaints, ranging from a fear of separation to difficulty falling asleep, or even being easily agitated and snapping at others.
When parenting an anxious child, the goal is not necessarily to eliminate the anxiety but rather to help the child manage the anxiety in a healthy and adaptive manner. Nobody wants to see their child anxious or unhappy, but as a parent you are not always in the position to be able to remove the triggers of your child’s anxiety. Rather, when helping them to tolerate anxiety, you empower your child to better manage their emotional experiences.
How you respond to your child’s anxiety is vitally important. As a rule of thumb, remember: Empathy first and empathy always. When we respond with empathy, the anxious child will generally feel validated. Acknowledge what your child is feeling, rather than moving into action to solve it or push their emotional state aside. By telling a child that there’s nothing to feel anxious about, we invalidate their very real emotional experience. Ask the child questions about what they're going through, thereby showing them that you want to better understand them. Let them know that you see they are in distress, and you hear what they are saying. You could say, “I can see that you’re scared. I’ve been scared before too, and I know what that feels like.” Engaging the child in an open and non-judgmental manner, where they relay their story or emotional experience, helps them take charge of their emotional experience, open communication, while also improving on their emotional vocabulary.
Anxiety is often such a captivating emotional state, with many “what if’s” running through the mind. “What if they don’t like me?”, “What if mommy gets sick?”, “What if this feeling lasts forever?” are just some examples of anxiety-provoking thoughts that a child might ruminate on. By practicing grounding exercises, the child is taught to be mindful of the present moment which can assist them in regulating their emotions. Ask your child to tune into their senses as a way to ground them in the present, by using the 5,4,3,2,1 technique. Ask them to name 5 things they can see, 4 things they can hear, 3 things they can touch and so forth. By engaging the senses, the child is connected to the here-and-now present moment.
Let your child know that you appreciate their willingness to engage with their emotions and to tolerate their anxiety. By modelling an openness to talk about emotions and developing healthy responses to anxiety-provoking situations, you also teach your child that anxiety is something that can be managed. If we want to prepare our children for facing challenging circumstances and uncomfortable emotional states, we can begin with preparing them to talk candidly about worrisome topics. | https://www.biancalimasmit.com/post/coping-during-covid-19-parenting-an-anxious-child |
The utility model discloses an internal fixation device for the vertebral pedicle of the spinal column, and aims to provide an internal pedicle fixation device that requires simple surgery and easily mastered installation, whose screw neck and screw/rod locking have sufficient strength, whose vertebral screws combine powerfully with the vertebra, and which satisfies the requirement for the lower notch. The device comprises a plurality of vertebral screws to be screwed into the vertebra, connection rods, connection clamping blocks and locking screw plugs, wherein the vertebral screws connected with the connection clamping blocks are universal screws or single-axis screws. The hole inside the connection clamping blocks through which the universal screw passes forms a spherical surface that matches the spherical head on the tail of the universal screw. The utility model can be used to treat thoracolumbar fractures, spinal tumors and spinal deformities, as well as thoracolumbar diseases that require surgical therapy and posterior spinal internal fixation.
Temporality in computational linguistics and natural language processing can be considered from two aspects. One concerns the use of linguistic and philosophical theories of temporality in computational applications. The other concerns the use of computational theory in its own right to define new kinds of theories of dynamical systems including natural language and its temporal semantics. As in the case of nominal expressions in natural language, we should be careful to distinguish temporal semantics, or the question of what kinds of objects and relations temporal categories denote, from the question of temporal reference to particular times or events that the discourse context affords. It is useful to draw a further distinction within the semantics between temporal ontology, or the types of temporal entity that the theory entertains, such as instants, intervals, events, states, or whatever, temporal quantification over such entities, and the temporal relations over them which it countenances, such as priority or posteriority, causal dependence, and the like. This article examines computational linguistics, focusing on temporal semantics, and also considers ontologies, quantifiers, relations, and temporal reference.
Keywords: temporality, computational linguistics, natural language processing, computational theory, temporal semantics, temporal ontology, temporal quantification, temporal relations, temporal reference
Mark Steedman is Professor of Cognitive Science in the School of Informatics of University of Edinburgh and adjunct professor in Computer and Information Science at the University of Pennsylvania. He works in computational linguistics, artificial intelligence, and cognitive science on the generation of meaningful intonation for speech by artificial agents, animated conversation, semantics of tense and aspect, wide-coverage parsing, and combinatory categorial grammar. His books include Surface Structure and Interpretation (1996) and The Syntactic Process (2000).
The Insurance Analytics team is responsible for providing a variety of analytics across the Insurance & Working Capital Solutions Group (IWCS). They are responsible for supporting the performance and integrity of all programs within IWCS. Their responsibilities include the development and maintenance of credit and pricing models, business analytics and forecasting. These activities are undertaken with a view to providing market-driven insurance solutions to meet the needs of the Canadian trade community.
Discipline Summary
• Provide an actuarial service covering all aspects of business and financial control
• Provides underwriting statistics and recommendations on the adequacy of premium rates and rating methods
• Reviews premium rates/rating methods and provides input to the business plan of the actuarial function
• Involved in general strategic business planning, recommendations and advice on adequacy of reinsurance levels and cover
• Depending on the team's needs, provide (non-actuarial) modeling support.
Key Responsibilities Quantitative Analyst:
• Responsible for the actuarial analysis and pricing of products across the guarantee and insurance product groups
• Develops, enhances and maintains portfolio management tools (e.g. pricing and scoring models) and related applications for the quantification and management of financial risks
• Responsible for the measurement of the financial performance of pricing and automation strategies
• Responsible for independently gathering, examining and analyzing data, writing technical specifications for data manipulation methodologies, producing clear and concise reports, graphs, summaries and recommendations
• Communicates quantitative findings and complex subject matter, both verbally and in writing, in a fashion that is concise and tailored to both technical and non-technical audiences
• Develops and documents standards and processes relevant to the development, testing, validation, calibration and usage of the models for all IWCS product groups
• Provides training and support to all direct and indirect users of the models as necessary
• Proactively identifies opportunities to make improvements to models, policies and processes that are consistent with team and corporate goals
Senior Quantitative Analyst: In addition to the above responsibilities:
• Take a lead role in the responsibilities listed above and perform them at an advanced/expert level
• Provide oversight and coaching to more junior employees
• Be expected to independently manage the non-standard requests that come to the team
• Completed an Undergraduate Degree in Actuarial Science, Mathematics, Finance, Statistics or in a related field
• For Quantitative Analyst role:
o Completed 3-6 exams of either the Casualty Actuarial Society (CAS) or the Society of Actuaries (SOA)
o Minimum 5 years’ experience in actuarial analysis and risk management
• For Senior Quantitative Analyst role:
o Completed 6-9 exams of either the Casualty Actuarial Society (CAS) or the Society of Actuaries (SOA)
o Minimum 7 years’ experience in actuarial analysis and risk management
• Advanced MS-Excel capabilities
• Advanced knowledge of accounting, finance and actuarial principles and practices
• Advanced programming knowledge and experience in the management and manipulation of large and complex data files (R, VBA, SAS/SQL, etc.)
• Team player, with strong interpersonal, problem solving and communication skills
• Fluent in English (written and oral)
Assets
• Bilingual in both official languages (French and English)
• Analytics related skills (data sourcing, manipulation, modeling and interpretation)
• Financial or accounting designation
• Pursuit of actuarial designation under way (FCIA, FCAS or FSA with a focus on General Insurance or Enterprise Risk Management)
• Experience communicating with and influencing decisions at a senior management level
Salary Range Quantitative Analyst:
• $86,000 - $128,000 plus performance based incentive
Senior Quantitative Analyst:
• $102,000 - $154,000 plus performance based incentive
How to apply
Only candidates selected for an interview will be contacted.
Application deadline: November 17, 2017, 11:59 p.m. EST on www.edc.ca/careers
EDC is committed to employment equity and actively encourages applications from women, Aboriginal people, persons with disabilities and visible minorities. If selected for an interview, please advise us if you require special accommodation. Candidates must meet the requisite government security screening requirements.
The IRS will significantly increase the dollar thresholds when liens are generally filed. The new dollar amount is in keeping with inflationary changes since the number was last revised. Currently, liens are automatically filed at certain dollar levels for people with past-due balances.
The IRS plans to review the results and impact of the lien threshold change in about a year.
A federal tax lien gives the IRS a legal claim to a taxpayer’s property for the amount of an unpaid tax debt. Filing a Notice of Federal Tax Lien is necessary to establish priority rights against certain other creditors. Usually the government is not the only creditor to whom the taxpayer owes money.
A lien informs the public that the U.S. government has a claim against all property, and any rights to property, of the taxpayer. This includes property owned at the time the notice of lien is filed and any acquired thereafter. A lien can affect a taxpayer’s credit rating, so it is critical to arrange the payment of taxes as quickly as possible. | http://www.blog.unclejoetax.com/tax-lien-thresholds/ |
The IT security situation in Germany is tense. In 2019 alone, the Federal Criminal Police Office recorded over 100,000 reported cyberattacks in Germany, and the number of unreported cases is probably much higher. The main targets of hackers are primarily companies and public institutions, but private individuals are also increasingly being targeted. To understand how companies and private individuals can protect themselves, you first need to understand what the different types of hacker attacks are. You’ll learn about that and much more in this post.
Hacker attack, cyber threat, data theft: it won’t hit me, you may think. To illustrate the scale and threat of cyberattacks, here are a few facts about the cybersecurity situation in Germany in 2020:
Compared to 2019, the German Federal Office for Information Security (BSI) reported:
- 117.4 million new malware variants
- 35,000 intercepted mails with malware from German government networks
- 52,000 blocked websites with contained malware programs
- 7 million malware reports sent to German network operators
- One in four citizens has already been a victim of cybercrime, 25% of them even in the last 12 months.
At this point, at the latest, it becomes clear that cybercrime really does concern everyone. That’s why we’ve summarized five different types of hacker attacks and given you tips on how you can protect yourself.
1. Denial-of-service (DoS) and distributed denial-of-service (DDoS) attacks
In DoS attacks, systems are flooded with requests by hackers to such an extent that the IT infrastructure can no longer handle them. When an attack is carried out in parallel by several computers on which malware is installed, it is known as a DDoS attack. The more computers that work together as a botnet, the more powerful the attack. If a server without DDoS protection is attacked, it is overloaded by the enormous number of requests, so that websites are either no longer accessible or load in slow motion. Attackers use this type of cybercrime to extort ransoms from organizations or to carry out other criminal acts.

In-house servers are classic targets of DoS attacks. Routers that are properly set up and secured with strong passwords provide some protection, but most organizations rely on larger firewalls. If a successful attack has already occurred, additional resources should be provisioned by hosting providers so that the website can still be accessed. An elastic infrastructure can dynamically expand and reduce resources as needed, automatically maximizing the use of the resource.
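To illustrate the rate-limiting idea behind many flood mitigations, here is a minimal token-bucket sketch in Python. The rate and capacity values are hypothetical, and real deployments enforce such limits at the firewall or load balancer rather than in application code.

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: requests consume tokens,
    tokens refill at a fixed rate, and requests beyond the budget are dropped."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate            # tokens added per second
        self.capacity = capacity    # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to the elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(rate=5, capacity=10)   # 5 requests/s, bursts of 10
allowed = sum(bucket.allow() for _ in range(100))
print(allowed)  # only roughly the burst capacity survives a tight loop
```

A flood of 100 back-to-back requests thus drains the bucket almost immediately, and the remainder are rejected until tokens refill.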
2. Man-in-the-middle attacks
A MitM attack occurs when a hacker inserts himself into the communication between a client and a server. This can happen in the course of session hijacking, where the hacker takes over the connection between a trusted client and a network by replacing the trusted client's IP address with his own. In this way, criminals can not only read the data traffic but also manipulate it.

An effective protective measure against this attack is strong end-to-end encryption, which encrypts the transmitted data in such a way that it never exists in unencrypted form on any intermediate leg of the route. Alternative protection mechanisms are intrusion detection systems, which monitor the activity of systems and secured networks and report suspicious behaviour.
3. Phishing and spear phishing attacks
In a phishing attack, emails are sent from a seemingly trustworthy source with the aim of harvesting personal data. A link in the email redirects people to a website where they are asked to reveal personal information. Spear phishing goes one step further: victims are addressed personally, and the fake emails contain information that is relevant to the targeted individuals.

The following figures illustrate how many phishing emails are sent every day: with 4 billion email users worldwide, 3.4 billion phishing emails land in inboxes every day.

To protect oneself against such social engineering measures, it is advisable to handle personal information in social networks responsibly. The less information hackers can gather about users, the harder it is to deceive them. In addition, account information, access data and passwords should never be shared by phone or email; reputable companies and banks do not ask their customers to reveal this information to anyone. It is also advisable to exercise caution with emails from unknown senders. If there is even the slightest suspicion of an attempted attack, it is better not to react. If it is a false alarm, the employer or bank will get in touch through another channel, such as a letter.
4. Drive-by download
Drive-by downloads are another popular hacker method for spreading malware. For this, hackers search for an unsecured website and inject a malicious script into the HTTP or PHP code of the page. This injected script can then install malware on a site visitor's computer without the user noticing. To protect yourself, it is important to keep your browser and plug-ins up to date; most browsers can be set to check for updates automatically. Antivirus software installed on the computer provides additional help by detecting and fending off malware.
5. Password attacks
Another popular method is the password attack, in which hackers grab users' passwords. Hackers find them in password Excel lists on the desktop or on slips of paper on the screen or under the keyboard, or they use social engineering methods to guess passwords or coax them out of the user — in the worst case, the password for a password database that protects other accesses.

One subtype of password attack is the brute-force attack, in which all possible password strings are tried one after the other. You can protect yourself against this type of attack with a higher password change frequency and longer passwords, because longer passwords mean the attack statistically takes more time. The longer a hacker needs for a successful attack, and the more likely a password change is in the meantime, the less attractive the attack becomes. The chances of success with brute force are nevertheless 100% in principle, since every password can be guessed at some point.

Another form of password attack is the dictionary method. Here, hackers use a dictionary of common passwords such as "password123" to gain access to a victim's network and server. Hackers can also proceed logically and try passwords related to the victim's name, pet or hobbies, for example. To protect against dictionary attacks, equally complex and unique passwords should be used for different accounts. Additionally, account lockout policies can be implemented to ensure that an account is locked after a certain number of invalid password entries.
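To see why password length and character variety matter against brute force, consider this back-of-the-envelope Python sketch. The guess rate of 10^10 per second is an assumed figure for a fast offline cracking rig, not a measured value.

```python
def brute_force_seconds(alphabet_size: int, length: int,
                        guesses_per_second: float = 1e10) -> float:
    """Worst-case time to exhaust every password of `length` characters
    drawn from an alphabet of `alphabet_size` symbols."""
    return alphabet_size ** length / guesses_per_second

# 8 lowercase letters vs. 12 characters from a 94-symbol keyboard alphabet
short = brute_force_seconds(26, 8)   # 26**8 ≈ 2.1e11 combinations
long_ = brute_force_seconds(94, 12)  # 94**12 ≈ 4.8e23 combinations
print(f"8 lowercase letters: ~{short:.0f} seconds")
print(f"12 mixed characters: ~{long_ / 3.15e7:.1e} years")
```

Under this assumed guess rate, the 8-character lowercase password falls in seconds, while the 12-character mixed password would take on the order of a million years to exhaust, which is why length beats complexity tricks alone.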
If you want to learn even more about social engineering attacks, read our human security gap whitepaper for free here.
Damage caused by cyber attacks
Cyberattacks harm organizations regardless of the method chosen. The affected organizations often suffer from the consequences years later. Three million euros – that’s how expensive the average financial damage caused by a cyber attack on a company is estimated to be. The consequences include:
1. Economic damage
Even a short time offline quickly costs several thousand euros. Wasted marketing budget and lost profits are only part of the financial damage.
2. Image damage
A successful cyber attack often results in major reputational damage. Rebuilding a positive image can take years and consume many resources.
3. Data theft
Users’ personal data is also frequently stolen in the course of a cyberattack. This data can then be sold by hackers on the darknet. Stolen identities and digital shopping sprees at the expense of the hacked user are just some of the far-reaching consequences.
Protection against cyber attacks
Strong passwords are far less vulnerable to many of the cyberattacks mentioned above. Therefore, strong, unique and complex passwords should be generated for each application. A good mix of upper- and lowercase letters, numbers and special characters is advisable. Common words and names should be avoided, as these are also known to hackers.

In a company, a password manager is also a good idea; this is confirmed by the German Federal Office for Information Security. Enterprise password managers such as Password Safe manage usernames, accounts and passwords. With end-to-end encryption and a complex master password, the password manager keeps passwords safe. With one click, employees can create complex passwords that no one knows, manage them centrally and securely, and log in automatically.

Password Safe also offers password sharing. According to their defined role, each employee has individual rights to use and edit passwords and share them with colleagues in the team. This can improve company-wide collaboration, and passwords remain secure despite employee changes. For additional security of sensitive accesses, a second factor can be added both to the target application and to the password safe for login. In that case, a confirmation code can be sent to another device, such as a smartphone, to uniquely authenticate the login.
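How such a generator might look in code: this Python sketch uses the standard-library `secrets` module, which draws from a cryptographically secure random source, to produce passwords with the recommended character mix. The default length of 16 is an illustrative choice, not a mandated policy.

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password containing upper- and lowercase letters,
    digits and special characters, using a cryptographically secure RNG."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        pwd = ''.join(secrets.choice(alphabet) for _ in range(length))
        # Re-draw until every character class is represented.
        if (any(c.islower() for c in pwd)
                and any(c.isupper() for c in pwd)
                and any(c.isdigit() for c in pwd)
                and any(c in string.punctuation for c in pwd)):
            return pwd

print(generate_password())  # a fresh 16-character password, different every call
```

Using `secrets` rather than the `random` module matters here: `random` is seeded predictably and is not suitable for security-sensitive values.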
For more information about Password Safe click here. | https://blog.passwordsafe.de/en/2021/06/07/the-5-most-common-cyber-threats-and-how-to-protect-yourself/ |
Introduction {#Sec1}
============
Nuclear magnetic resonance (NMR) spectroscopy is a powerful and versatile method for the analysis of metabolites in biological fluids, tissue extracts and whole tissues. Applications include the analysis of metabolic differences as a function of disease, gender, age, nutrition, genetic background, and the targeted analysis of biochemical pathways (Klein et al. [@CR25]). Further, metabolomic data derived from individuals with known outcome are used to train computer algorithms for the prognosis and diagnosis of new patients (Gronwald et al. [@CR18]). There are many good reviews available on these topics (Lindon et al. [@CR29]; Dieterle et al. [@CR11]; Clarke and Haselden [@CR5]).
Due to the chemical complexity of biological specimens such as human urine and serum, which contain hundreds to thousands of different endogenous metabolites and xenobiotics (Holmes et al. [@CR19]), NMR spectra contain a correspondingly large number of spectral features. Spectral data are typically analyzed using multivariate data analysis techniques (Wishart [@CR37]), which all exploit the joint distribution of the metabolomic data including the variance of individual metabolite concentrations and their joint covariance structure. Some sources of variation are the target of analysis such as differences in response to treatment or metabolite concentrations between diseased individuals and controls. Other sources of variation are not wanted and complicate the analysis. These include measurement noise and bias as well as natural, non-induced biological variability and confounders such as nutrition and medication. An additional complication arises from the typically large dynamic spectrum of metabolite concentrations. As described by van den Berg et al. (van den Berg et al. [@CR34]), one can expect order-of-magnitude differences between components of metabolite fingerprints of biological specimens, where the highly abundant metabolites are not necessarily more biologically important. Data normalization needs to ensure that a measured concentration or a fold change in concentration observed for a metabolite at the lower end of the dynamic range is as reliable as it is for a metabolite at the upper end. Also variances of individual metabolite concentrations can differ greatly. This can have a biological reason as some metabolites show large concentration changes without phenotypic effects, while others are tightly regulated. Moreover, one observes that the variance of non-induced biological variation often correlates with the corresponding mean abundance of metabolites leading to considerable heteroscedasticity in the data. 
However, differences in metabolite variance can also have technical reasons, because relative measurements of low abundance metabolites are generally less precise than those of high abundance metabolites. The goal of data preprocessing is to reduce unwanted biases such that the targeted biological signals are depicted clearly.
In accordance with the layout suggested by Zhang et al. ([@CR39]), methods applicable to NMR spectra may be grouped into (i) methods that remove unwanted sample-to-sample variation, and (ii) methods that adjust the variance of the different metabolites to reduce, for example, heteroscedasticity. The latter include variable scaling and variance stabilization approaches. There are also methods that attempt both tasks simultaneously.
The first group includes approaches such as Probabilistic Quotient Normalization (Dieterle et al. [@CR12]), Cyclic Loess Normalization (Cleveland and Devlin [@CR6]; Dudoit et al. [@CR13]), Contrast Normalization (Astrand [@CR3]), Quantile Normalization (Bolstad et al. [@CR4]), Linear Baseline Normalization (Bolstad et al. [@CR4]), Li-Wong Normalization (Li and Wong [@CR28]), and Cubic-Spline Normalization (Workman et al. [@CR38]). The second group comprises, among others, Auto Scaling (Jackson [@CR22]) and Pareto Scaling (Eriksson et al. [@CR15]). These are so-called variable scaling methods that divide each variable by a scaling factor determined individually for each variable. The second group further comprises Variance Stabilization Normalization (Huber et al. [@CR20]; Parsons et al. [@CR31]; Durbin et al. [@CR14]; Anderle et al. [@CR2]), a non-linear transformation aimed at the reduction of heteroscedasticity. Several of the aforementioned methods, including Variance Stabilization Normalization, were developed originally for the analysis of DNA microarray data. Since factors complicating the analysis of DNA microarray data also affect the analysis of metabolomics data, it appeared promising to conduct a comprehensive evaluation of these methods for their application to NMR-based metabolite fingerprinting. A similar evaluation, albeit limited to six linear scaling and two heteroscedasticity-reducing methods, had already been performed for mass spectrometry based metabolomic data (van den Berg et al. [@CR34]).
For the evaluation of the performance of the different data normalization methods in the identification of differentially produced metabolites and the estimation of fold changes in metabolite abundance, we spiked eight endogenous metabolites at eight different concentration levels into a matrix of pooled human urine following a Latin-square design (Laywine and Mullen [@CR26]) that keeps the total spike-in amount constant while varying the molar amounts of the individually added metabolites. To investigate the effect of the different normalization methods on sample classification by a support vector machine (SVM) with nested cross-validation, a previously published dataset comprising NMR urinary fingerprints from 54 autosomal dominant polycystic kidney disease (ADPKD) patients and 46 apparently healthy volunteers was employed (Gronwald et al. [@CR18]).
Materials and methods {#Sec2}
=====================
Urinary specimens {#Sec3}
-----------------
As a background for the spike-in data human spot-urine specimens were collected from volunteers at the University of Regensburg. Samples were pooled and immediately frozen at −80°C until preparation for NMR analysis. The classification data had been generated previously employing urine specimens collected at the Klinikum Nürnberg and the University Hospital Erlangen from 54 ADPKD patients and 46 apparently healthy volunteers, respectively (Gronwald et al. [@CR18]).
Latin-square spike-in design {#Sec4}
----------------------------
For the generation of the Latin-square spike-in data, eight endogenous metabolites, namely 3-aminoisobutyrate, alanine, choline, citrate, creatinine, ornithine, valine, and taurine, were added in varied concentrations to eight aliquots of pooled human urine, keeping the total concentration of added metabolites constant at 12.45 mmol/l per aliquot of urine. The highest concentration level of an individual metabolite was 6.25 mmol/l; this level was halved seven times down to a minimum concentration of 0.0488 mmol/l, i.e. in each of the eight aliquots of urine each metabolite was present at a different concentration. In contrast to a dilution series, the overall concentration of the contents remains the same, thus eliminating the impact of differing total concentrations on normalization. The spike-in samples were prepared once.
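As a plausibility check, the halving series and the constant per-aliquot total of this design can be reproduced with a few lines of Python (the paper's analyses were performed in R; this is only an illustration of the design arithmetic):

```python
# Eight spike-in levels: 6.25 mmol/l halved seven times.
levels = [6.25 / 2**k for k in range(8)]

# In a Latin square, every aliquot receives each level exactly once
# (one level per metabolite), so the total spiked amount is identical
# for all eight aliquots.
print(round(min(levels), 4))  # → 0.0488 (lowest level, mmol/l)
print(round(sum(levels), 2))  # → 12.45 (total per aliquot, mmol/l)
```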
NMR spectroscopy {#Sec5}
----------------
To each 400-μl specimen of human urine 200 μl of 0.1 mol/l phosphate buffer, pH 7.4, and 50 μl of deuterium oxide containing 0.75% (w/v) trimethylsilyl-2,2,3,3-tetradeuteropropionic acid (TSP) as a reference \[Sigma-Aldrich, Steinheim, Germany\] were added. 1D ^1^H spectra were measured as described previously (Gronwald et al. [@CR17]) on a 600 MHz Bruker Avance III spectrometer \[Bruker BioSpin GmbH, Rheinstetten, Germany\], which was equipped with a cryogenic probe with *z*-gradients and a cooled automatic sample changer. A 1D nuclear Overhauser enhancement spectroscopy (NOESY) pulse sequence was used in all cases and solvent signal suppression was achieved by presaturation during relaxation and mixing time. All spectra were measured once. Spectra were Fourier transformed and phase corrected by automated routines. A flat baseline was obtained employing the baseopt option of TopSpin2.1 \[Bruker BioSpin\] that corrects the first points of the observed signal, i.e. of the free induction decay (FID). All spectra were chemical shift referenced relative to the TSP signal. For subsequent statistical data analysis, bin (feature) tables were generated from the 1D ^1^H NMR spectra using AMIX 3.9 (Bruker BioSpin).
Signal positions between samples may be subject to shifts due to slight changes in pH, salt concentration, and/or temperature. In addition, the TSP signal used for spectral referencing may also show pH-dependent shifts. Here we chose to use equidistant binning to compensate for these effects, which is still the most widely used method. In order to keep a clear focus on data normalization, other parameters of metabolomic data evaluation such as the initial data processing including spectral binning were kept constant. Competing methods include peak alignment (Forshed et al. [@CR16]; Stoyanova et al. [@CR33]), approaches working at full resolution using statistical total correlation spectroscopy (Cloarec et al. [@CR7]), and orthogonal projection to latent structures (Cloarec et al. [@CR8]). In another approach termed targeted profiling, a pre-selected set of metabolites is quantified from 1D spectra and these values are used for subsequent data analysis (Weljie et al. [@CR36]). Quantitative values may also be obtained from 2D spectra (Lewis et al. [@CR27]; Gronwald et al. [@CR17]). For the data presented here an optimized bin size of 0.01 ppm was applied and bins were generated in the regions from 9.5 to 6.5 ppm and from 4.5 to 0.5 ppm, respectively, to exclude the water artifact and the broad urea signal, leaving 701 bins for further analysis. To correct for variations in urine concentration, all data in the classification dataset were linearly scaled to the signal of the CH~2~ group of creatinine at 4.06 ppm. This can be considered a normalization in itself. Each dataset was arranged in a data matrix *X* = (*x*~*ij*~) with *i* = 1...*I* and *I* = 701 representing the feature or bin number, and *j* = 1...*J* with *J* = 8 and *J* = 100 for the spike-in and classification datasets, respectively, representing the number of specimens. For further analysis, tables were imported into the statistical analysis software *R* version 2.9.1 (R Development Core Team [@CR10]).
Basic characteristics of the normalization algorithms employed {#Sec6}
--------------------------------------------------------------
For all normalization methods discussed it is assumed that NMR signal intensities scale linearly with metabolite concentration and are mostly independent of the chemical properties of the investigated molecules. The equations describing the different normalization approaches are listed in Supplemental Table S1. The first group of methods evaluated aims to reduce between-sample variations. If not stated otherwise, it is assumed in the following that only a relatively small proportion of the metabolites is regulated in approximately equal shares up and down. The first group includes the following approaches:
*Probabilistic Quotient Normalization* (Dieterle et al. [@CR12]) assumes that biologically interesting concentration changes influence only parts of the NMR spectrum, while dilution effects affect all metabolite signals. In the case of urine spectra, dilution effects are caused, for example, by variations in fluid intake. Probabilistic Quotient Normalization (PQN) starts with an integral normalization of each spectrum, followed by the calculation of a reference spectrum such as a median spectrum. Next, for each variable of interest the quotient of a given test spectrum and the reference spectrum is calculated and the median of all quotients is determined. Finally, all variables of the test spectrum are divided by this median quotient.
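These four steps translate directly into code. The following is a minimal NumPy sketch (an illustrative re-implementation, not the authors' R code from the supplement; the function name `pqn` is ours):

```python
import numpy as np

def pqn(X):
    """Probabilistic Quotient Normalization sketch.
    X: features x samples matrix of (positive) spectral intensities."""
    # Step 1: integral (total-area) normalization of each spectrum
    X = X / X.sum(axis=0, keepdims=True)
    # Step 2: reference spectrum, here the feature-wise median spectrum
    ref = np.median(X, axis=1)
    # Step 3: feature-wise quotients to the reference, then their
    # per-sample median as the estimated dilution factor
    quotients = X / ref[:, None]
    dilution = np.median(quotients, axis=0)
    # Step 4: divide each spectrum by its median quotient
    return X / dilution[None, :]
```

A twofold-diluted copy of a spectrum is mapped back onto the original by this procedure, which is exactly the dilution effect PQN is designed to remove.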
*Cyclic Locally Weighted Regression (Cyclic Loess)* is based on MA-plots, which constitute logged Bland-Altman plots (Altman and Bland [@CR1]). The presence of non-linear such as intensity-depended biases is assumed. Briefly, the logged intensity ratio *M* of spectra *j*~1~ and *j*~2~ is compared to their average *A* feature by feature (Dudoit et al. [@CR13]). Then, a normalization curve is fitted using non-linear local regression (loess) (Cleveland and Devlin [@CR6]). This normalization curve is subtracted from the original values. If more than two spectra need to be normalized, the method is iterated in pairs for all possible combinations. Typically, almost complete convergence is reached after two cycles. If only a relatively small proportion of the metabolites are regulated, all data points can be taken into account. Otherwise, rank-invariant metabolites can be selected for the computation of the loess lines. Here, all data points were used.
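A single pairwise step can be sketched as follows. To stay self-contained, the loess fit is replaced here by a crude running median of *M* along *A*; that substitution is an assumption of this sketch, not part of the published method:

```python
import numpy as np

def ma_normalize_pair(x1, x2, window=51):
    """One pairwise step of Cyclic-Loess-style normalization (sketch).
    The loess fit of M over A is approximated by a running median."""
    logx1, logx2 = np.log2(x1), np.log2(x2)
    M = logx1 - logx2              # log ratio per feature
    A = 0.5 * (logx1 + logx2)      # mean log intensity per feature
    order = np.argsort(A)
    Msorted = M[order]
    # crude smooth of M along A: sliding-window median
    fit = np.empty_like(M)
    half = window // 2
    for i in range(len(M)):
        lo, hi = max(0, i - half), min(len(M), i + half + 1)
        fit[order[i]] = np.median(Msorted[lo:hi])
    # subtract half of the fitted bias curve from each spectrum
    return 2 ** (logx1 - fit / 2), 2 ** (logx2 + fit / 2)
```

For a pair of spectra that differ only by a constant multiplicative bias, the fitted curve equals that constant everywhere, and the two normalized spectra coincide.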
*Contrast Normalization* also uses MA-plots (Astrand [@CR3]) and makes the same assumptions as Cyclic Loess. The data matrix of the input space is logged and transformed by means of an orthonormal transformation matrix *T* = (*t*~*ij*~) into a contrast space. This expands the idea of MA-plots to several dimensions and converts the data into a set of rows representing orthonormal contrasts. A set of normalizing curves is then fitted similarly to those in Cyclic Loess Normalization, using a robust distance measure ε based on the Euclidean norm that renders the normalization procedure independent of the particular choice of *T*. The contrasts are then evened out by a smooth transformation, ensuring that features with equal values prior to normalization retain identical values. Subsequently, data are mapped back to the original input space. The use of a log function impedes the handling of negative values and zeros. Therefore, all non-positive values were set beforehand to a residual value (10^−11^) three orders of magnitude smaller than the smallest value in the original data. The bias introduced thereby was minimized by subtracting the 10%-quantile from each spectrum.
The goal of *Quantile Normalization* is to achieve the same distribution of feature intensities across all spectra. Similarity of distributions can be visualized in a quantile--quantile plot (Bolstad et al. [@CR4]). If two spectra share the same distribution, all quantiles will be identical and, hence, align along the diagonal. The idea is simply to bring all spectra to an identical distribution of intensities across features (bins). This is achieved by sorting the vector of feature intensities in ascending order separately for each spectrum. In the sorted vector each entry corresponds to a quantile of the distribution. Next, the mean of identical quantiles across spectra is calculated, i.e. the mean of the highest abundances, the mean of the second highest abundances, and so on. This mean is assigned to all features that realize the corresponding quantile. For example, the feature with the highest intensity in a spectrum is assigned the average of the highest intensities across spectra, irrespective of their spectral positions. Since different features may display the highest intensity in different samples, this constant average value may be assigned to different features across samples. After Quantile Normalization the vectors of feature intensities consist of the same set of values; however, these values are distributed differently among features.
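The sort-average-reassign procedure is compact enough to state directly (illustrative NumPy sketch; ties in rank are broken arbitrarily here):

```python
import numpy as np

def quantile_normalize(X):
    """Quantile Normalization sketch: give every spectrum (column of X)
    the same intensity distribution, namely the mean of the sorted
    columns; each feature keeps its within-sample rank."""
    # rank of each feature within its sample (double argsort trick)
    ranks = np.argsort(np.argsort(X, axis=0), axis=0)
    # average distribution: mean over samples of the sorted intensities
    mean_quantiles = np.sort(X, axis=0).mean(axis=1)
    # reassign: every feature receives the mean value of its quantile
    return mean_quantiles[ranks]
```

After the transformation, sorting any column yields the same vector of values, which is precisely the defining property of the method.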
A completely different normalization approach used in DNA microarray analysis is *Baseline Scaling*. In contrast to normalizing the data to a measure of the full dataset, here the data is normalized only to a subset of it, the so-called baseline. This can be conducted both linearly and non-linearly. Typically, the spectrum with the median of the median intensities is chosen as baseline, but other choices are possible, too. Alternatively, an artificial baseline can be constructed.
*Linear Baseline Scaling* uses a scaling factor to map linearly from each spectrum to the baseline (Bolstad et al. [@CR4]). Therefore, one assumes a constant linear relationship between each feature of a given spectrum and the baseline. In the version implemented in this paper, the baseline is constructed by calculating the median of each feature over all spectra. The scaling factor β is computed for each spectrum as the ratio of the mean intensity of the baseline to the mean intensity of the spectrum. Then, the intensities of all spectra are multiplied by their particular scaling factors. However, the assumption of a linear correlation between spectra may constitute an oversimplification.
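A sketch of this scaling, with the feature-wise median spectrum as baseline and the scaling factor β computed as described above (illustrative NumPy code, not the original implementation):

```python
import numpy as np

def linear_baseline(X):
    """Linear Baseline Scaling sketch.
    Baseline: feature-wise median over all spectra (columns of X).
    Each spectrum is multiplied by beta = mean(baseline) / mean(spectrum)."""
    baseline = np.median(X, axis=1)
    beta = baseline.mean() / X.mean(axis=0)   # one factor per spectrum
    return X * beta[None, :]
```

By construction, every scaled spectrum then has the same mean intensity as the baseline.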
A more complex approach, *Non-Linear Baseline Normalization* as implemented by Li and Wong ([@CR28]), fits a non-linear relationship between the spectra to be normalized and the baseline. It is assumed that features corresponding to unregulated metabolites have similar intensity ranks in two spectra, allowing a reliable determination of a normalization curve, and that the relationship between the baseline and the individual spectra may be non-linear. The normalization process is based on scatter plots with the baseline spectrum (having the median overall intensity) on the *x*-axis and the spectrum to be normalized on the *y*-axis. Ideally, the data should align along the diagonal *y* = *x*. As the non-normalized data generally deviate from that, a normalization curve is fitted to map the data to the diagonal. To make sure that the normalization curve is fitted only on non-differentially expressed features, a set of almost rank-invariant features (invariant set) is calculated and used for finding the normalizing piecewise linear running median line.
Another non-linear baseline method makes use of *Cubic Splines* (Workman et al. [@CR38]). As in Quantile Normalization, the aim is to obtain a similar distribution of feature intensities across spectra. In this method as well, the existence of non-linear relationships between the baseline and individual spectra is assumed. A baseline, called target array in the original publication and corresponding to a target spectrum here, is built by computing the geometric mean of the intensities of each feature over all spectra. In this paper, the geometric mean was substituted by the arithmetic mean for reasons of robustness to negative values. For normalization, cubic splines are fitted between each spectrum and the baseline. To that end, a set of evenly distributed quantiles is taken from both the target spectrum and the sample spectrum and used to fit a smooth cubic spline. This process is iterated several times, shifting the set of quantiles by a small offset each time. Next, a spline function generator uses the generated set of interpolated splines to fit the parameters of a natural cubic spline (B-spline). Here, for each spectrum five iterations comprising 14 quantiles each were calculated and interpolated to normalize the data.
The second group of methods is aimed at adjusting the variance of different metabolites. These include variable scaling and variance stabilization approaches. The simplest of these approaches uses the standard deviation of the data as a scaling factor. This method is called *Auto Scaling* or unit variance (uv) scaling (Jackson [@CR22]). It results in every feature displaying a standard deviation of one, i.e. the data is transformed to standard units. Briefly, one first centers the data by subtracting from each feature its mean feature intensity across spectra. This will result in a fluctuation of the data around zero, thereby adjusting for offsets between high and low intensity features. From the centered data the standard deviation of each feature is obtained and the data is divided by this scaling factor. Auto Scaling renders all features equally important. However, measurement errors will also be inflated, and between-sample variation due to dilution effects, which in the case of urine spectra are caused, for example, by variations in fluid intake, will not be corrected.
Using the square root of the standard deviation as the scaling factor is the alternative taken by *Pareto Scaling* (Eriksson et al. [@CR15]). It is similar to Auto Scaling, but its normalizing effect is less intense, so the normalized data stay closer to their original values. It is less likely to inflate background noise, and it reduces the importance of large fold changes relative to small ones. However, very large fold changes may still show a dominating effect.
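The two variable scaling methods differ only in the exponent of the scaling factor, as the following sketch shows (illustrative NumPy code; the function names are ours):

```python
import numpy as np

def auto_scale(X):
    """Auto (unit-variance) scaling sketch: center each feature
    (row of X) and divide by its standard deviation."""
    centered = X - X.mean(axis=1, keepdims=True)
    return centered / centered.std(axis=1, keepdims=True)

def pareto_scale(X):
    """Pareto scaling sketch: as Auto Scaling, but divide by the
    square root of the standard deviation, so the data stay closer
    to their original scale."""
    centered = X - X.mean(axis=1, keepdims=True)
    return centered / np.sqrt(centered.std(axis=1, keepdims=True))
```

After Auto Scaling every feature has mean zero and standard deviation one, which is what renders all features, including pure noise, equally important.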
*Variance Stabilization Normalization* (*VSN*) transformations are a set of non-linear methods that aim to keep the variance constant over the entire data range (Huber et al. [@CR20]; Parsons et al. [@CR31]; Durbin et al. [@CR14]; Anderle et al. [@CR2]). In the VSN R-package used here (Huber et al. [@CR20]), a combination of methods is applied that corrects for between-sample variations by linearly mapping all spectra to the first spectrum, followed by adjustment of the variance of the data. Looking at the non-normalized data, the coefficient of variation, i.e. the standard deviation divided by the corresponding mean, does not vary much for the strong and medium signals, implying that the standard deviation is proportional to the mean; therefore, in VSN it is assumed that the variance of a feature depends on the mean of that feature via a quadratic function. But as values approach the lower limit of detection, the variance does not decrease any further and instead stays constant; thus, the coefficient of variation increases. VSN addresses exactly this problem by using the inverse hyperbolic sine. This transformation approaches the logarithm for large values, thereby removing heteroscedasticity. For small intensities, though, it approaches linear transformation behavior, leaving the variance unchanged. Since the VSN output is already on a logarithm-like scale, the VSN-normalized data were not logged again for comparisons based on logarithmic intensities.
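The variance-stabilizing core of the transformation can be illustrated as follows; the parameters `a` and `b` stand in for the per-sample affine parameters that the VSN package estimates by maximum likelihood and are not fitted in this sketch:

```python
import numpy as np

def vsn_transform(x, a=0.0, b=1.0):
    """Generalized-log transform at the heart of VSN: arcsinh(a + b*x).
    a, b are placeholders for the per-sample affine parameters that the
    VSN R package estimates; they are not fitted here."""
    return np.arcsinh(a + b * x)

# arcsinh behaves like log(2x) for large x (removing heteroscedasticity)
# and like the identity near zero (leaving small-intensity variance alone):
print(vsn_transform(1000.0) - np.log(2 * 1000.0))  # ~0
print(vsn_transform(0.001) - 0.001)                # ~0
```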
The *R*-code for performing the different normalization techniques is given in the supplemental material.
Classification of samples using a support vector machine {#Sec7}
--------------------------------------------------------
Classification of samples was performed using the support vector machine (SVM) provided in the *R*-library e1071 (<http://cran.r-project.org/web/packages/e1071>). Results were validated by a nested cross-validation approach that consists of an inner loop for model fitting and parameter optimization and an outer loop for assessment of classification performance. From the analyzed dataset of 100 urine specimens two samples were selected arbitrarily and excluded to serve as test data of the outer cross-validation (leave-two-out cross-validation). Then, two of the remaining samples were chosen randomly and put aside to serve as test data of the inner cross-validation. In the inner loop, the SVM was trained on the remaining *n* − 4 samples in order to find the optimal number of features. For this, the feature number *k* was increased stepwise within the range *k* = 10...60. The top *k* features with the highest *t*-values were selected and a SVM classifier was trained and applied to the left-out samples of the inner loop.
For each feature number, the SVM was trained (*n* − 2)/2 times, such that every sample except for the outer test samples was used once as inner test sample. The accuracy on the inner test samples was assessed and the optimal feature number was used to train classifiers in the outer loop. In the outer cross-validation, the SVM was trained on all samples except the outer test samples, using the optimal number of features from the inner loop and the outer test samples were predicted. This was repeated *n*/2 times, so that all samples were chosen once as outer test data. In all cases a linear kernel was used. In all steps feature selection was treated as part of the SVM training and was redone excluding left out cases for every iteration of the cross validations.
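The *t*-value-based selection of the top *k* features, which is redone inside every iteration of both loops, can be sketched as follows (NumPy illustration using a pooled-variance two-sample *t* statistic; the SVM training itself, done with the R e1071 package in the paper, is not reproduced, and the function name is ours):

```python
import numpy as np

def top_k_features(X, y, k):
    """Return indices of the k features with the largest absolute
    two-sample t-values. X: samples x features, y: 0/1 class labels."""
    a, b = X[y == 0], X[y == 1]
    na, nb = len(a), len(b)
    # pooled-variance two-sample t statistic, computed per feature
    sp2 = ((na - 1) * a.var(axis=0, ddof=1) +
           (nb - 1) * b.var(axis=0, ddof=1)) / (na + nb - 2)
    t = (a.mean(axis=0) - b.mean(axis=0)) / np.sqrt(sp2 * (1/na + 1/nb))
    return np.argsort(-np.abs(t))[:k]
```

In the nested scheme described above, this selection is applied to the training samples of the current fold only, never to the held-out test samples.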
Classification performance was analyzed by evaluating receiver operating characteristic (ROC) plots that had been obtained by using the *R*-package ROCR (Sing et al. [@CR32]).
Results and discussion {#Sec8}
======================
A first overview of the data (*Data Overview*) was obtained by comparing the densities of the metabolite concentration distributions for each of the 100 urine specimens of the classification dataset. Supplemental Fig. S1 shows the creatinine-adjusted intensity distributions. For comparison, the distribution of the Quantile normalized data, which represents an average of the intensity distributions, is indicated in red. Roughly similar distributions were obtained for all specimens.
Next we investigated for each normalization method whether comparable profiles were obtained for all samples of the classification dataset (*Overall between Sample Normalization Performance*). To that end, all preprocessing methods were included, although the variable scaling and variance stabilization methods are not specifically designed to reduce between-sample variation. For all features we calculated the pair-wise differences in intensity between spectra. We argue that if these differences do not scatter around zero, this is evidence that for one out of a pair of spectra the concentrations are estimated systematically either too high or too low. To assess the performance of methods we calculated for each pair-wise comparison the ratio of the median of differences to the inter-quartile range (IQR) of differences and averaged the absolute values of these ratios across all pairs of samples (average median/IQR ratios). Dividing by the IQR ensures that the differences are assessed on comparable scales. The smaller the average median/IQR ratios are, the better is the global between-sample normalization performance of a method. The results for the classification dataset are shown in the first row of Table [1](#Tab1){ref-type="table"}.

**Table 1** Analysis of average inter- and intra-sample differences by means of interquartile ranges

| | Crea-normalized/non-normalized | PQN | Cyclic Loess | Contrast | Quantile | Linear Baseline | Li-Wong | Cubic Spline | Auto/Pareto Scaling | VSN |
|---|---|---|---|---|---|---|---|---|---|---|
| Average median/IQR ratios | 0.46 | 0.04 | 0.06 | 0.55 | 0.06 | 0.15 | 0.82 | 0.07 | 0.28 | 0.07 |
| Average IQR ratios | 5.12 | 5.12 | 4.13 | 4.69 | 4.31 | 5.12 | 3.37 | 4.46 | 0.82 | 4.95 |

*First row*: average ratios of the median to the IQR of the classification data; lower values are favorable. *Second row*: average ratios of the IQR of the spiked features to the IQR of the non-spiked features; here, higher values are favorable. The two variable scaling methods performed equally and are therefore summarized in a single column.
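The average median/IQR ratio of the first row can be sketched as follows (illustrative NumPy code following the definitions above; the function name is ours):

```python
import numpy as np

def avg_median_iqr_ratio(X):
    """Average |median / IQR| of the feature-wise differences over all
    pairs of spectra (columns of X). Lower is better: differences
    between well-normalized spectra should scatter around zero."""
    J = X.shape[1]
    ratios = []
    for j1 in range(J):
        for j2 in range(j1 + 1, J):
            d = X[:, j1] - X[:, j2]
            q1, q3 = np.percentile(d, [25, 75])
            ratios.append(abs(np.median(d) / (q3 - q1)))
    return float(np.mean(ratios))
```

Spectra that differ only by symmetric noise yield a ratio near zero, while a systematic offset between spectra drives the ratio up.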
Comparing the average median/IQR ratios, PQN (0.04), Quantile (0.06), Cyclic Loess (0.06), VSN (0.07), and Cubic Spline Normalization (0.07) reduced overall differences between samples best, compared to 0.46 for the creatinine-normalized data alone. The other methods, except for Contrast and Li-Wong Normalization, all improved the comparability between samples, but did not perform as well as the methods mentioned above. Note that the two variable scaling methods performed similarly and, therefore, were summarized as one entry in Table [1](#Tab1){ref-type="table"}. The good performance of the VSN method can be explained by the fact that VSN combines variance stabilization with between-sample normalization. In comparison to the creatinine-normalized data, Auto and Pareto Scaling also showed some improvement.
While good between-sample normalization is desirable, it should not be achieved at the cost of reducing the genuine biological signal in the data. We tested for this in the Latin-square data. By experimental design, all intensity fluctuations except for those of the spiked-in metabolites, which should stand out in each pair-wise comparison of spectra, are caused by measurement imprecision. That is, spike-in features must be variable, while all other features should be constant. We assessed this quantitatively by calculating the IQR of the spike-in feature intensities and dividing it by the IQR of the non-spike-in feature intensities (average IQR ratios). These ratios are given in the second row of Table [1](#Tab1){ref-type="table"}. High values indicate a good separation between spiked and non-spiked data points and, therefore, are favorable.
For the non-normalized data a ratio of 5.12 was obtained, i.e. the spike-in signal stood out clearly. The same ratio was obtained for the PQN and the Linear Baseline methods. For the Cyclic Loess, Quantile, Cubic Spline, Contrast, VSN, and Li-Wong approaches, the ratio was reduced, demonstrating that normalization may affect the true signals to some extent. Nevertheless, the ratios for these methods remained above 3 and the signals kept standing out.
Importantly, Auto and Pareto Scaling compromised the signal-to-noise ratio severely. As for the classification data, the two variable scaling methods performed comparably and were summarized as one entry.
This prompted us to investigate systematically technical biases in this data (*Analysis of intensity*-*dependent bias*). As illustrated in Fig. [1](#Fig1){ref-type="fig"}a and b, *M* versus rank(*A*)-plots (*M*-*rank*(*A*)-*plots*) allow the identification of intensity-dependent shifts between pairs of feature vectors. Data in *M*-*rank*(*A*)-*plots* are log base 2 transformed so that a fold change of two corresponds to a difference of one. For each feature, its difference in a pair of samples (*y*-axis) is plotted against the rank of its mean value (*x*-axis). Hence, the *x*-axis corresponds to the dynamic range of feature intensities, while the *y*-axis displays the corresponding variability of the intensities.

Fig. 1 **a** M-rank(A)-plots comparing the same randomly selected pair of specimens from the classification dataset after creatinine normalization alone (*top left*) and after additional Cyclic Loess (*top right*), Quantile (*lower left*), and Cubic Spline Normalization (*lower right*). The *straight line* indicates *M* = 0, the *curved line* represents a loess fit of the data points. Deviations of the loess line from *M* = 0 correlate with bias between samples. The data is log base 2 transformed so that a fold change of two corresponds to a difference of one. **b** The same methods as above were applied to a pair of samples from the Latin-square spike-in dataset. Note that for this dataset no prior creatinine normalization was performed and, therefore, the top left part of the figure displays results obtained from non-normalized data. The *black dots* represent background features that should not vary, while the *differently marked dots*, mostly found on the right-hand side, represent features for which spike-in differences are expected. Therefore, they preferably stand out from the non-spike-in background
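The coordinates of such a plot can be computed directly from a pair of spectra (illustrative NumPy sketch; the function name is ours):

```python
import numpy as np

def m_rank_a(x1, x2):
    """Coordinates of an M-rank(A) plot for two spectra (positive values).
    M: log2 intensity ratio per feature.
    rank(A): rank of the mean log2 intensity per feature."""
    M = np.log2(x1) - np.log2(x2)
    A = 0.5 * (np.log2(x1) + np.log2(x2))
    rank_A = np.argsort(np.argsort(A))  # 0 = lowest mean intensity
    return M, rank_A
```

For example, a feature that is twice as intense in the first spectrum has *M* = 1, and a fourfold difference in mean intensity moves it up the rank(*A*) axis.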
For all possible pair-wise comparisons of spectra and all investigated normalization methods, M-rank(A)-plots were produced from the classification data as well as from the Latin-square data. Representative sets of plots for a randomly selected pair of spectra selected from each of the two datasets are displayed in Fig. [1](#Fig1){ref-type="fig"}a and b. Shown are plots for creatinine-normalized classification data, respectively non-normalized Latin-square data and for data after Cyclic Loess, Quantile and Cubic Spline Normalization. In the absence of bias, the points should align evenly around the straight line at *M* = 0. The additionally computed loess line (curved line) represents a fit of the data and helps to determine how closely the data approaches *M* = 0.
In the M-rank(A) plots of creatinine-normalized (Fig. [1](#Fig1){ref-type="fig"}a) and non-normalized data (Fig. [1](#Fig1){ref-type="fig"}b), the curved loess line clearly does not coincide with the straight line at *M* = 0. The plot of the creatinine-normalized classification data in Fig. [1](#Fig1){ref-type="fig"}a suggests that intensities in sample 2 of the pair are systematically overestimated at both ends of the dynamic range but not in the middle. One might want to attribute this observation to a technical bias in the measurements. While we cannot prove directly that the observation indeed originates from a technical bias rather than biological variation, we will show later that correction for the effect improves the estimation of fold changes, the detection of differentially produced metabolites, and the classification of samples.
Here, we first evaluated the normalization methods with respect to their performance in reducing such an effect. Looking at Cyclic Loess normalized data in Fig. [1](#Fig1){ref-type="fig"}a, the bias is gone for the mid and high intensities; however, in the low-intensity region additional bias is introduced, affecting up to 20% of the data points. With Quantile and Cubic Spline Normalization nearly no deviation from *M* = 0 can be recognized in Fig. [1](#Fig1){ref-type="fig"}a; they seem to remove any bias almost perfectly. Similar trends were also observed for the other pair-wise comparisons within the classification data (plots not shown). Application of the other normalization methods to the classification data showed that PQN and VSN evened out most bias well, although they sometimes left the loess line s-shaped. The linear baseline method performed similarly, in that it only partially reduced bias. Contrast, Li-Wong and the two variable scaling methods hardly reduced bias at all.
The M-rank(A) plots of the Latin-square data, of which four examples are shown in Fig. [1](#Fig1){ref-type="fig"}b, generally resemble those obtained for the classification data, except for one major difference: here, we have a large number of differential spike-in features representing a range of 2- to 128-fold changes. The spike-in differences should not be lost to normalization. Therefore, for better visualization, all empirical data points of the spiked-in metabolites were marked differently, while the non-differential data points were marked in black (Fig. [1](#Fig1){ref-type="fig"}b). Ideally, the data points corresponding to the spiked-in metabolites should all be found in the high- and mid-intensity range (A). Moreover, differences (M) should increase with increasing spike-in concentrations, resulting in a triangle-like shaped distribution of the data points corresponding to the spiked-in metabolites and the curved loess line staying close to *M* = 0. As expected, the spike-ins stood out clearly in the non-normalized data. This was also the case for the PQN, Cyclic Loess, Contrast, Quantile, Linear Baseline, Li-Wong, Cubic Spline and VSN normalized data, but not for the variable scaling normalized data.
The performance of all methods with respect to correcting dynamic range related bias can be compared in Loess-Line Plots (Bolstad et al. [@CR4]). In these plots we drew rank(A) (*x*-axis) against the differences of the average loess line to the baseline at *M* = 0 (*y*-axis). The average loess line was computed for each normalization method by a loess fit of the absolute loess lines of the Ranked MA-plots for all pairs of NMR spectra. Our plots are a variation of those used by Bolstad et al*.* ([@CR4]) and (Keeping and Collins [@CR24]) in that we use rank(A) instead of A on the *x*-axis. Any local offset from zero indicates that the normalization method does not work properly in the corresponding part of the dynamic range.
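The quantities underlying these plots are straightforward to compute for a pair of log2-transformed spectra: *M* is the difference, *A* the average, and ranking *A* spreads the points evenly along the abscissa regardless of the intensity distribution. A minimal sketch (the function name is hypothetical; ties in *A* are assumed absent):

```python
import numpy as np

def ranked_ma(x, y):
    """M and rank(A) for a pair of log2-transformed spectra.

    M = x - y is the log2 fold change, A = (x + y) / 2 the average
    log2 intensity; the loess line of M versus rank(A) should stay
    near zero for well-normalized data.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    m = x - y
    a = 0.5 * (x + y)
    rank_a = a.argsort().argsort() + 1  # 1-based ranks of A
    return m, rank_a
```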
We calculated these plots for both the classification data (Fig. [2](#Fig2){ref-type="fig"}a) and the spike-in data (Fig. [2](#Fig2){ref-type="fig"}b). Since in most cases similar trends were obtained for both datasets, the best performing methods will be discussed together if not stated otherwise. In the absence of normalization, an increasing offset with decreasing intensities is observed for the lower ranks of both datasets. Cyclic Loess Normalization reduced the distance for the mid intensities well, but it increased the offset for low intensities. Contrast, Quantile and VSN Normalization all removed the intensity-dependency of the offset well. Regarding the overall distance, Quantile Normalization reduced it best, followed by VSN. Contrast Normalization left the distance at a rather large value. Taken together, this analysis shows that intensity-dependent measurement bias can only be corrected by a few normalization approaches. Not surprisingly, these are methods that model the dynamic range of intensities explicitly.

Fig. 2 Ranked plot of the averaged loess line versus intensity *A* of the classification (**a**) and the spike-in (**b**) datasets for all normalization approaches. The lines were computed for each normalization method by a loess fit of the absolute loess lines of the M-rank(A)-plots for all sample pairs. Smaller and intensity-independent distances are preferable. The data is log base 2 transformed. For the methods involving centering not all features are well defined after logarithmic transformation, leading to shorter average loess lines. *Solid lines* depict methods that are aimed at reducing sample-to-sample variations, while variable scaling and variance stabilization approaches are marked by *dashed lines*
M-rank(A)-plots can also detect unwanted heteroscedasticity, which may compromise the comparability of intensity changes across features. Spreading of the point cloud at one end of the dynamic range, as exemplified by the solely creatinine-normalized and non-normalized data, respectively, in Fig. [1](#Fig1){ref-type="fig"}a and b, indicates a decrease in the reliability of measurements. In the absence of evidence that these effects reflect true biology or are due to spike-ins (data points corresponding to spiked-in metabolites in Fig. [1](#Fig1){ref-type="fig"}b), one should aim at correcting this bias. Otherwise, feature lists ranked by fold changes might be dominated by strong random fluctuations at the ends of the dynamic spectrum. Between-feature comparability will only be achieved if the standard deviation of feature intensities is kept low over the entire dynamic spectrum.
The influence of the different normalization techniques on standard deviation relative to the dynamic spectrum was investigated using *plots of the standard deviation* for both the classification (Fig. [3](#Fig3){ref-type="fig"}a) and the Latin-square dataset (Fig. [3](#Fig3){ref-type="fig"}b). For this, the standard deviation of the logged data in a window of features with similar average intensities was plotted versus the rank of the averaged feature intensity, similarly to Irizarry et al. (Irizarry et al. [@CR21]). The plots show for both the creatinine-normalized (Fig. [3](#Fig3){ref-type="fig"}a) and the non-normalized data (Fig. [3](#Fig3){ref-type="fig"}b), respectively, that standard deviation decreases with increasing feature intensity. The same is true for the PQN normalized data. Further, VSN keeps the standard deviation fairly constant over the whole intensity regime. In contrast, Li-Wong increases the standard deviation compared to the non-normalized data. The two variable scaling approaches increase standard deviation substantially.

Fig. 3 Plot of the logged standard deviation within the features versus the rank of the averaged feature intensity of the classification (**a**) and spike-in (**b**) datasets. To make the fit less sensitive to outliers, lines were computed using a running median estimator. The data is log base 2 transformed. *Solid lines* depict methods that are aimed at reducing sample-to-sample variations, while variable scaling and variance stabilization approaches are marked by *dashed lines*
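The curves behind such plots can be sketched as follows: features are ordered by average intensity, the across-sample standard deviation is computed, and a running median smooths the result. The function name and window parameter below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sd_vs_rank(X, window=3):
    """Running-median-smoothed SD of log intensities versus the rank
    of the average feature intensity (features x samples matrix)."""
    X = np.asarray(X, float)
    order = X.mean(axis=1).argsort()
    sd = X[order].std(axis=1, ddof=1)   # SD across samples, low to high A
    half = window // 2
    smoothed = np.array([
        np.median(sd[max(0, i - half):i + half + 1])
        for i in range(sd.size)
    ])
    ranks = np.arange(1, X.shape[0] + 1)
    return ranks, smoothed
```

A flat curve, as produced by VSN, indicates homoscedastic data; a curve that rises toward low ranks indicates noisy low-intensity features.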
Next, we investigated the influence of preprocessing on the detection of differentially produced metabolites, the estimation of fold changes from feature intensities, and the classification of samples based on urinary NMR fingerprints.
In the Latin-square data, we know by experimental design which features have different intensities and which do not. The goal of the following analysis is to detect the spike-in related differences and to separate them from random fluctuations among the non-spiked metabolites (*Detection of Fold Changes*). To that end, features with expected spike-in signals were identified and separated from background features. Excluded were features that were affected by the tail of spike-in signals, and regions in which several spike-in signals overlapped. As the background signal in the bins containing spike-in signals was, in general, not negligible, it was subtracted to avoid disturbances in the fold change measures.
Then, all feature intensities in all pairs of samples were compared and fold changes were estimated. Fold changes that resulted from a spike-in were flagged. Next, the entire list of fold changes was sorted. Ideally, all flagged fold changes should rank higher than those resulting from random fluctuations. In reality, however, flagged and non-flagged fold changes mix to some degree. Obviously, by design, smaller spike-in fold changes tend to be surpassed by random fluctuations. The flagging was performed with three different foci: first flagging all spike-in features, then only low spike-in fold changes (up to threefold), and finally only high fold changes (above tenfold).
Receiver operating characteristic (ROC) curves with corresponding area under the curve (AUC) values were calculated for each normalization method and are given in Supplemental Table S2. Looking at the AUC values, only four methods yielded consistently better classification results than those obtained with the non-normalized data: Contrast, Quantile, Linear Baseline, and Cubic Spline Normalization. Quantile Normalization reached the highest AUC values in all runs, Cubic Spline and the Linear Baseline method showed comparable results, and Contrast Normalization performed slightly better than the non-normalized data.
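Because a ROC curve over a ranked fold-change list depends only on the ordering, its AUC can be computed directly as a normalized Mann–Whitney statistic: the fraction of (spike-in, background) pairs in which the spike-in fold change outranks the background one. A sketch with hypothetical names:

```python
import numpy as np

def auc_from_scores(scores, flagged):
    """ROC AUC for ranking flagged (true spike-in) fold changes above
    background fold changes. Equivalent to the Mann-Whitney U statistic
    divided by n_pos * n_neg; depends only on the ordering of scores."""
    scores = np.asarray(scores, float)
    flagged = np.asarray(flagged, bool)
    pos, neg = scores[flagged], scores[~flagged]
    # count pairs in which a flagged score outranks a background score
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))
```

An AUC of 1.0 means every spike-in fold change ranks above every random fluctuation; 0.5 corresponds to a random ordering.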
Differentially produced metabolites may be detected correctly even if the actual fold changes of concentrations are systematically estimated incorrectly. The ROC curves depend only on the order of fold changes but not on the actual values. This can be sufficient in hypothesis generating research but might be problematic in more complex fields such as metabolic network modeling. Therefore, we evaluated the impact of the preprocessing techniques on the accurate determination of fold changes. Based on published reference spectra, for each metabolite a set of features corresponding to the spike-in signals was determined, features with overlapping spike-in signals were removed and the background signal was subtracted. Within this set of features, the feature with the highest measured fold change among all pairs of samples with the highest expected fold change was chosen for evaluating the accuracy of determining the actual fold change for the respective metabolite. Note that the spike-in metabolite creatinine was excluded because of the absence of any non-overlapping spike-in bins. Then, plots of the spike-in versus the measured fold changes between all pairs of samples were computed for each metabolite and each normalization method. For taurine, Fig. [4](#Fig4){ref-type="fig"} shows exemplary results obtained from non-normalized data and from data after Cyclic Loess, Quantile, and Li-Wong Normalization, respectively.

Fig. 4 Plot of the reproducibility of determining spike-in fold changes for taurine from the Latin-square spike-in dataset without normalization (*upper left*), after Cyclic Loess (*upper right*), Quantile (*lower left*) and Li-Wong Normalization (*lower right*). Features at the border of the spike-in signal frequencies are represented by *grey dots*, whereas features from the inner range of the signals are plotted *black*. As detailed in the text, for each metabolite one feature was automatically selected.
These features (marked differently) were used for fitting a linear model, which is given in the upper left corner of each plot. The *solid lines* represent the actual models, while the *dashed lines* represent ideal models with a slope of 1 and an intercept of 0. The data is log base 2 transformed
In analogy to Bolstad et al*.* ([@CR4]) the following linear model was used to describe the observed signal *x* of a bin *i* and a sample *j*:

$$\log x_{ij} = \gamma \log c_{0} + \gamma \log c_{\text{spike-in}} + \varepsilon_{ij}$$

Here, *c*~0~ denotes the signal present without spike-in, *c*~spike-in~ the spike-in concentration of the respective metabolite, γ the proportionality between signal intensity and spike-in concentration, which is assumed to be concentration independent within the linear dynamic range of the NMR spectrometer, and ε~*ij*~ the residual error.
Comparing two samples *j*~1~ and *j*~2~ leads to the following linear equation, for which we estimate the intercept *a* and the regression slope *b*:

$$\log \frac{x_{ij_{1}}}{x_{ij_{2}}} = a + b \log \frac{c_{\text{spike-in}_{1}}}{c_{\text{spike-in}_{2}}}$$
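In practice, the intercept *a* and slope *b* can be estimated by ordinary least squares on the paired log fold changes; *b* = 1 and *a* = 0 correspond to the ideal model (the dashed line in Fig. 4), while *b* > 1 indicates systematic overestimation. A sketch with a hypothetical function name:

```python
import numpy as np

def fold_change_fit(expected_log_fc, measured_log_fc):
    """OLS fit of measured = a + b * expected on log fold changes.

    Returns the intercept a, slope b, and coefficient of
    determination R^2 of the linear model.
    """
    x = np.asarray(expected_log_fc, float)
    y = np.asarray(measured_log_fc, float)
    b, a = np.polyfit(x, y, 1)               # slope, then intercept
    r2 = np.corrcoef(x, y)[0, 1] ** 2
    return a, b, r2
```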
In Supplemental Table S3, slope estimates *b* for the different metabolites and normalizations are given. Again the variable scaling methods were summarized in a single entry. It is obvious that nearly all values exceed one, meaning that the fold changes are overestimated. This can be explained by the choice of features: As one metabolite generally contributes to several features and the feature with the highest fold change between the pair of samples with the highest spike-in difference is selected for each metabolite, features overestimating the fold change are preferred over features underestimating or correctly estimating the fold change. However, we still favored this automated selection algorithm over manually searching for the "nicest looking" bin, to minimize effects of human interference.
Apart from that, it can be seen from an analysis of the slope estimates *b* that normalization performs quite differently for different metabolites. The methods that showed the most uniform results for all metabolites investigated are Quantile, Contrast, Linear Baseline, and Cubic Spline Normalization.
In Supplemental Table S4, values for the intercept *a*, the slope *b*, and the coefficient of determination *R*^2^ are given, averaged over all metabolites. The data shows that the methods that performed best in accurately estimating fold changes are Quantile and Cubic Spline Normalization.
Another common application of metabolomics is the *classification of samples*. To investigate the degree to which the different normalization methods exerted an effect on this task, the dataset consisting of the ADPKD patient group and the control group was used. Classifications were carried out using a support vector machine (SVM) with a nested cross-validation consisting of an inner loop for parameter optimization and an outer loop for assessing classification performance (Gronwald et al. [@CR18]). The nested cross-validation approach yields an almost unbiased estimate of the true classification error (Varma and Simon [@CR35]). For the nested cross validation, a set of *n* samples was selected randomly from the dataset. This new dataset was then normalized and classifications were performed as detailed above. Classification performance was assessed by the inspection of the corresponding *ROC curves* (Supplemental Fig. S2). The classification was conducted five times for every normalization method and classification dataset size *n*.
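The nested cross-validation protocol can be outlined as follows. For brevity we substitute a trivial nearest-centroid classifier for the authors' SVM and indicate the inner parameter-tuning loop only as a placeholder, so this is a structural sketch under stated assumptions, not the study's implementation:

```python
import numpy as np

def nearest_centroid_error(train_X, train_y, test_X, test_y):
    """Error rate of a minimal nearest-centroid stand-in classifier."""
    c0 = train_X[train_y == 0].mean(axis=0)
    c1 = train_X[train_y == 1].mean(axis=0)
    d0 = np.linalg.norm(test_X - c0, axis=1)
    d1 = np.linalg.norm(test_X - c1, axis=1)
    pred = (d1 < d0).astype(int)
    return float(np.mean(pred != test_y))

def nested_cv_error(X, y, outer=5, rng=None):
    """Outer loop of a nested CV scheme; the inner loop (which would
    tune hyperparameters on each outer training fold only) is shown
    as a placeholder because the centroid classifier has none."""
    rng = np.random.default_rng(rng)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, outer)
    errors = []
    for k in range(outer):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(outer) if j != k])
        # inner loop (placeholder): optimize parameters using only
        # the samples in `train`, never those in `test`
        errors.append(nearest_centroid_error(X[train], y[train],
                                             X[test], y[test]))
    return float(np.mean(errors))
```

Keeping the test fold out of both feature selection and parameter tuning is what makes the outer-loop error estimate almost unbiased.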
In Table [2](#Tab2){ref-type="table"}, the AUC values and standard deviations of the ROC curves are given for all normalization methods and classification dataset sizes of *n* = 20, *n* = 40, *n* = 60, *n* = 80, and *n* = 100, respectively. As expected, the classification performance of most normalization methods depended strongly on the size of the training set used for classification. The method with the highest overall AUC value was Quantile Normalization: With 0.903 for *n* = 100, 0.854 for *n* = 80, and 0.812 for *n* = 60, it performed the best among the normalization methods tested, albeit for larger dataset sizes only. For dataset sizes *n* ≤ 40, its performance was about average. Cubic-Spline Normalization performed nearly as well as Quantile Normalization: It yielded the second highest AUC values for the larger training set sizes of *n* = 100 (0.892) and *n* = 80 (0.841). In contrast to Quantile Normalization, it also performed well for smaller dataset sizes: For *n* = 20 (0.740), it was the best performing method. VSN also showed good classification results over the whole dataset size range, its AUC values were barely inferior to those of the Cubic-Spline Normalization. Cyclic Loess performed almost as well as Quantile Normalization. For small dataset sizes, its classification results were only slightly better than average, but for the larger dataset sizes it was among the best-performing methods. Over the whole dataset size range, the classification results of PQN, Contrast and the Linear Baseline Normalizations and those of the variable scaling methods were similar to results obtained with creatinine-normalized data. Supplemental Table S5 gives the median (first column) and the mean number (second column) of features used for classification with respect to the applied normalization method. As can be seen, the number of selected features strongly depended on the normalization method used. 
The best performing Quantile Normalization led to a median number of 21 features, while the application of Cubic Spline Normalization and VSN resulted in the selection of 27 and 34 features, respectively. Employment of the PQN approach and the variable scaling methods resulted for the most part in a greater number of selected features without improving classification performance. The third column of Supplemental Table S5 gives the percentage of selected features that are identical to those selected by SVM following Quantile Normalization. As can be seen, PQN yielded about 95% of identical features, followed by Li-Wong and the Linear Baseline method with approx. 90% identical features. This data shows that the ranking of features based on *t*-values, which was the basis for our feature selection, is only moderately influenced by normalization. The smallest percentage (52.4%) of identical features was observed for Contrast Normalization, which also performed the poorest overall (Table [2](#Tab2){ref-type="table"}).

Table 2 Classification performance measured on the classification dataset

| Normalization | *n* = 20 | *n* = 40 | *n* = 60 | *n* = 80 | *n* = 100 |
|---|---|---|---|---|---|
| Crea-normalized | 0.628 ± 0.074 | 0.722 ± 0.037 | 0.776 ± 0.032 | 0.783 ± 0.019 | 0.787 ± 0.003 |
| PQN | 0.710 ± 0.029 | 0.749 ± 0.034 | 0.781 ± 0.018 | 0.802 ± 0.016 | 0.796 ± 0.002 |
| Cyclic loess | 0.683 ± 0.029 | 0.728 ± 0.030 | 0.784 ± 0.027 | 0.797 ± 0.012 | 0.859 ± 0.005 |
| Contrast | 0.611 ± 0.072 | 0.693 ± 0.046 | 0.718 ± 0.036 | 0.764 ± 0.018 | 0.757 ± 0.004 |
| Quantile | 0.688 ± 0.023 | 0.731 ± 0.043 | 0.812 ± 0.033 | 0.854 ± 0.009 | 0.903 ± 0.003 |
| Linear baseline | 0.661 ± 0.034 | 0.728 ± 0.020 | 0.777 ± 0.027 | 0.790 ± 0.019 | 0.756 ± 0.005 |
| Li-Wong | 0.607 ± 0.036 | 0.659 ± 0.024 | 0.723 ± 0.043 | 0.771 ± 0.029 | 0.804 ± 0.005 |
| Cubic spline | 0.740 ± 0.066 | 0.749 ± 0.040 | 0.793 ± 0.018 | 0.841 ± 0.010 | 0.892 ± 0.003 |
| Auto | 0.705 ± 0.032 | 0.703 ± 0.020 | 0.764 ± 0.020 | 0.772 ± 0.011 | 0.789 ± 0.006 |
| Pareto | 0.652 ± 0.037 | 0.717 ± 0.038 | 0.757 ± 0.032 | 0.796 ± 0.010 | 0.785 ± 0.008 |
| VSN | 0.721 ± 0.022 | 0.772 ± 0.013 | 0.790 ± 0.015 | 0.838 ± 0.009 | 0.887 ± 0.003 |

Averaged AUC values and their standard deviation for the classification performance obtained for different sizes of the classification dataset and following different normalization methods. In all cases a SVM with nested cross-validation was employed.
We also investigated the impact of the use of creatinine as a scale basis for renal excretion by subjecting the classification data directly to Quantile and Cubic-Spline Normalization without prior creatinine normalization. For *n* = 100, AUC values of 0.902 and 0.886 were obtained for Quantile and Cubic-Spline Normalization, respectively. These values are very similar to those obtained for creatinine-normalized data, which had been 0.903 and 0.892 for Quantile and Cubic-Spline Normalization, respectively. However, without prior creatinine normalization an increase in the average number of selected features was noticed, namely from 21 to 31 and from 27 to 36 features, respectively, for Quantile and Cubic-Spline Normalization. In summary, Quantile and Cubic-Spline Normalization are the two best performing methods with respect to sample classification, irrespective of whether prior creatinine normalization has been performed.
Different preprocessing techniques have also been evaluated with respect to the NMR analysis of metabolites in blood serum (de Meyer et al. [@CR9]). Especially, Integral Normalization, where the total sum of the intensities of each spectrum is kept constant, and PQN were tested in combination with different binning approaches. PQN fared the best, but it was noted that none of the methods tested yielded optimal results, calling for improvements in both spectral data acquisition and preprocessing. The PQN technique was also applied to the investigation of NMR spectra obtained from cerebrospinal fluid (Maher et al. [@CR30]).
Several of the preprocessing techniques compared here have also been applied to mass spectrometry-derived metabolomic data and proteomics measurements. Van den Berg et al. ([@CR34]) applied 8 different preprocessing methods to GC-MS data. These included Centering, Auto Scaling, Range Scaling, Pareto Scaling, Vast Scaling, Level Scaling, Log Transformation and Power Transformation. They found, as expected, that the selection of the proper data pre-treatment method depended on the biological question, the general properties of the dataset and the subsequent statistical data analysis method. Within these constraints, Auto Scaling and Range Scaling showed the overall best performance. For the NMR metabolomic data presented here, the latter two methods were clearly outperformed by Quantile, Cubic Spline and VSN Normalization, none of which were included in the analysis of the GC-MS data. In the proteomics field, Quantile and VSN normalization are commonly employed (Jung [@CR23]).
Concluding remarks {#Sec9}
==================
In this study, normalization methods, different in aim, complexity and origin, were compared and evaluated using two distinct datasets focusing on different scientific challenges in NMR-based metabolomics research. Our main goal was to give researchers recommendations for improved data preprocessing.
A first finding is that improper normalization methods can significantly impair the data. The widely used variable scaling methods were outperformed by Quantile Normalization, which was the only method to perform consistently well in all tests conducted. It removed bias between samples, and accurately reproduced fold changes. Its only flaw was its mediocre classification result for small training sets. Therefore, we recommend it for dataset sizes of *n* ≥ 50 samples.
For smaller datasets, Cubic Spline Normalization represents an appropriate alternative. We showed that its bias removal and fold change reproduction properties were nearly equal to Quantile Normalization. Moreover, it classified well irrespectively of the dataset size.
VSN also represents a reasonable choice. Concerning the ADPKD data, it showed good results for both classification and bias removal. Concerning the spike-in data, it performed less convincingly; however, the spike-in design strongly affects the normalization procedure by inducing additional variance. In conclusion, we found that preprocessing methods originally developed for DNA microarray analysis performed best overall.
Electronic supplementary material {#AppESM1}
=================================
{#SecESM1}
Supplementary material 1 (DOC 808 kb)
Supplementary material 2 (TXT 4 kb)
The authors thank Drs. Raoul Zeltner, Bernd-Detlef Schulze and Kai-Uwe Eckardt for providing the urine specimens used for the analysis of classification results. In addition, the authors are grateful to Ms. Nadine Nürnberger and Ms. Caridad Louis for assistance in sample preparation. This study was funded in part by BayGene and the intramural ReForM program of the Medical Faculty at the University of Regensburg. The authors who have taken part in this study declare that they do not have anything to disclose regarding funding from industry or conflict of interest with respect to this manuscript.
Open Access {#d29e1567}
===========
This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.
The accurate estimation of standing plant biomass is essential for understanding and predicting the effects of forest ecosystem processes (e.g. energy, nutrient, water, and carbon fluxes) on regional and global carbon cycles[@b1][@b2][@b3][@b4]. A convenient and widely used method for biomass estimation is provided by equations that interrelate plant biomass (*M*) and stem/trunk diameter (*D*) that take the form *M* = β*D*^α^, where β is a normalization constant and α is the scaling exponent[@b5][@b6][@b7][@b8]. Since the numerical values of β and α can differ among species, stand age, site characteristics, climate, and stand density[@b5][@b6][@b7][@b9][@b10], they are typically estimated via regression of log-transformed data for *D* and *M* data obtained from destructive sampling methods. This approach is time consuming and expensive, and thus generally restricts data collections to small areas, plant sizes, and sample numbers. More efficient and economical methods for estimating allometric parameters would help considerably.
In addition to empirical model fitting approaches, theoretical models have also been used to predict and estimate allometric scaling exponents. For example, metabolic scaling theory (MST)[@b11][@b12][@b13][@b14][@b15] hypothesizes that evolutionary optimization of vascular transport hydraulics has resulted in plant metabolic rates that scale as the 3/4 power of *M* and that *M* scales as the 8/3 power of *D*. Several authors have used these and other MST predictions to develop allometric models for interrelating *M* and *D*[@b4][@b6][@b9][@b16][@b17] and to evaluate the generality of predicted or estimated allometric parameters, although other authors have concluded that the power-law scaling exponents vary considerably within and across taxa[@b6]. Additionally, several studies have argued that MST as originally formulated[@b11][@b12] cannot explicitly account for the range and origin of variation of plant metabolic scaling exponents[@b18][@b19][@b20][@b21]. For example, mass-scaling exponents of metabolic rates are close to unity for saplings (isometry) and decrease as trees grow in size[@b22][@b23][@b24], implying that metabolic scaling relationships vary through ontogeny. To address these concerns, Niklas and Enquist[@b24] and Enquist *et al*.[@b25] used biomechanical and space-filling arguments to suggest how metabolic scaling of seedlings and saplings should deviate from the original MST predictions[@b11]. Further, Niklas[@b26] demonstrated that the scaling of height with respect to diameter decreases from nearly isometric (for small and juvenile trees) to a 2/3 power for older, more mature trees. Niklas and Spatz[@b22] also developed a hydraulic model predicting a log--log nonlinear *H* (and *M*) vs. *D* relationship, implying a shift from an isometric to an allometric scaling exponent across species and habitats.
To address these issues and to expand their theoretical underpinnings, Price *et al*.[@b27] extended MST to show that the biomass scaling exponents relating *M*, *H*, and *D* covary, a feature that had already been demonstrated by Niklas and Spatz[@b22] and Sileshi[@b28]. Consequently, the scaling exponents of *M* vs. *D* can be estimated from those of *H* vs. *D*, which provides an attractive method for estimating tree biomass scaling relationships because data for *D* and *H* can be collected non-destructively. Indeed, if tree trunks can be modeled as simple cylindrical or truncated conical geometries, tree biomass can be related to diameter and height as *M* ∝ *D*^2^*H*. Furthermore, if the scaling exponent relating height and stem diameter is denoted as α~1~ (i.e., *H* ∝ *D*^α1^), we see that *M* ∝ *D*^2+α1^, which is a more general expression of the *M*, *D*, and *H* scaling interrelationship.
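The covariation of exponents is easy to verify numerically: under a cylindrical or conical trunk model (volume proportional to *D*^2^*H*) with *H* ∝ *D*^α1^, the fitted log--log slope of *M* vs. *D* recovers 2 + α~1~. The normalization constants below are arbitrary illustration values, not fitted parameters:

```python
import numpy as np

alpha1 = 2.0 / 3.0                 # assumed H vs. D scaling exponent
D = np.linspace(5.0, 50.0, 50)     # stem diameters (arbitrary units)
H = 1.3 * D ** alpha1              # H = beta1 * D**alpha1 (beta1 arbitrary)
M = 0.05 * D ** 2 * H              # geometric model: M proportional to D^2 * H
slope, intercept = np.polyfit(np.log10(D), np.log10(M), 1)
# the fitted log-log slope equals the combined exponent 2 + alpha1
```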
However, although biomass-scaling relationships clearly involve covariation among the scaling exponents for the relationships among *M*, *D*, and *H* (see details in Materials and Methods), the extent to which these relationships mediate the numerical values of normalization constants (i.e., β-values) remains unclear. Prior studies have shown that scaling exponents are inversely related with their corresponding scaling constants in *M* vs. *D* relationships[@b6][@b10][@b28]. For example, using a collection of 223 allometric equations relating biomass to diameter, Zianis and Mencuccini[@b6] have shown that normalization constants are negatively correlated with the scaling exponents governing *M* vs. *D* scaling relationships. If this relationship holds true generally, a "prediction model" for estimating tree biomass can be established by recasting biomass-scaling relationships in terms of an inverse relationship between the numerical values of scaling exponents and normalization constants. Specifically, the scaling of *M* with respect to *D* can be estimated from the scaling of *H* with *D*, and the normalization constants can be estimated using the specific function between scaling exponents and constants as suggested by Zianis and Mencuccini[@b6].
Nevertheless, empirical testing of the biomass scaling relationships proposed by Price *et al*.[@b27] has been based primarily on datasets collected from different species and biomes. More experimental work is needed to gain insight into the mechanisms of covariation among scaling exponents and normalization constants at the level of individual species. Since plant functional traits can influence metabolic scaling relationships[@b29][@b30][@b31], predictions for the covariation of scaling relationships among *D*, *H* and *V* must be tested at the intra-specific level because of species-specific differences in traits such as wood density. In any case, it is necessary to verify whether the covariation between the numerical values of scaling exponents and normalization constants holds true at the intra-specific level.
In light of the theoretical and practical importance of understanding the mathematical and biological relationships among scaling exponents and their corresponding normalization constants, we used the stem volume data of *Cunninghamia lanceolata* (Lamb.) Hook. (Chinese fir) in Jiangxi province, China, to (1) examine the variations of the scaling relationships between stem volume, diameter, and height, (2) test whether an inverse relationship between scaling exponents and related constants holds true, (3) verify whether the covariation of scaling exponents in these stem relationships support the mathematical functions proposed by Price *et al*.[@b27], and (4) test whether the prediction model emerging from our approach successfully estimates stem volume.
Materials and Methods
=====================
Species and Study Area Selection
--------------------------------
*Cunninghamia lanceolata* (Lamb.) Hook. (Chinese fir) is an evergreen conifer in the *Taxodiaceae* (Redwood) family. This species was selected because it is one of the most important commercial trees in China[@b32] and because it is grown in a variety of sites. In the first half of 1988 and 1999, twenty-four sites in the Jiangxi Province were selected to investigate tree stem (trunk) growth ([Supplementary Fig. S1](#S1){ref-type="supplementary-material"}). The original planting density was 3300 trees·ha^−1^. For the first three years, the forest was tended twice every year, after which it was left undisturbed for the duration of the experiment. Thinning operations were conducted using chain saws and heavy equipment 7--10 years after the initial planting. About 30% of the trees in the plantation were felled. When necessary, a second thinning operation was conducted 12--15 years after the initial planting to maintain an appropriate space between neighboring trees; the resulting average residual density was about 1800 trees·ha^−1^. All of the sites were located in subtropical monsoon climatic regions. Mean annual temperature ranged from 16.5 °C to 19.5 °C and mean annual precipitation from 1421 mm to 1962 mm ([Supplementary Table S1](#S1){ref-type="supplementary-material"}).
Field Measurements
------------------
Because *C. lanceolata* is the main forestation species in Jiangxi province, all trees were collected from plantation sites. Circular forest research plots were established with areas of 600 m^2^. Because prior work had shown that the architecture of the forest canopy is an important determinant of the scaling relationship between tree height and diameter[@b33], efforts were made to eliminate differences among canopy densities by drawing data only from sampling plots where the vertical projection of forest crowns was over 60%. This protocol identified 24 sites that could be sampled.
At each site, individuals spanning a wide range of sizes were selected in order to properly characterize the size distribution of the local stand ([Supplementary Table S1](#S1){ref-type="supplementary-material"}). Because the number of plots varied across sites, the number of sampled individuals ranged from 6 to 185 ([Supplementary Table S2](#S1){ref-type="supplementary-material"}).
Data were obtained by first measuring trunk diameter at breast height (DBH; 1.3 m from ground-level). Trees were then felled using a chain saw, and total height was measured using a steel tape. Stem discs were then taken at 1.3 m above the base and every 1 m for *H* \< 10 m or 2 m for *H* ≥ 10 m thereafter. An additional disc was taken at 0.5 m above the base for trees \<10 m. Finally, the stem volume of each trunk section was calculated based on the geometric shape of the segments. For example, the stem volume for the top section (above the last sample disc) was calculated using the formula for a truncated cone. Total trunk volume was calculated subsequently as the sum of all trunk sectional volumes.
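The sectional computation amounts to summing frustum (truncated cone) volumes between successive discs, using V = πh(d~1~^2^ + d~1~d~2~ + d~2~^2^)/12 for end diameters d~1~, d~2~ and section length h. A sketch with hypothetical function names:

```python
import math

def frustum_volume(d1, d2, h):
    """Volume of a truncated cone from its end diameters d1, d2 and
    section length h: V = pi * h * (d1^2 + d1*d2 + d2^2) / 12."""
    return math.pi * h * (d1 ** 2 + d1 * d2 + d2 ** 2) / 12.0

def stem_volume(diameters, section_lengths):
    """Total trunk volume as the sum of frustum sections, given the
    disc diameters along the stem and the lengths between them."""
    return sum(frustum_volume(d1, d2, h)
               for (d1, d2), h in zip(zip(diameters, diameters[1:]),
                                      section_lengths))
```

For equal end diameters the formula reduces to the cylinder volume, which provides a quick sanity check.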
Statistical protocols
---------------------
Data for *D, H* and *V* from each of the 24 sites were log~10~-transformed. Because functional rather than predictive relationships were sought, reduced major axis (RMA) regression was used to determine the scaling exponents (α) and normalization constants (log β) for log--log linear regression curves (see [Supplementary Table S2](#S1){ref-type="supplementary-material"}). The parameter φ (see [Supplementary Eqs (6--8)](#S1){ref-type="supplementary-material"}) was calculated using nonlinear regression analyses in SPSS Statistics 17.
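Reduced major axis slopes can be computed directly from the standard deviations of the log-transformed variables, which is how the exponents (α) and constants (log β) arise from the log~10~-transformed *D*, *H* and *V*. The following is a minimal standard-library sketch of the estimator, not the SMATR or SPSS implementation used by the authors; variable names are illustrative.

```python
import math
from statistics import mean, stdev

def rma_fit(x, y):
    """Reduced (standardized) major axis fit:
    slope = sign(cov(x, y)) * sd(y) / sd(x); intercept from the means."""
    mx, my = mean(x), mean(y)
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = math.copysign(stdev(y) / stdev(x), cov)
    return slope, my - slope * mx

# Usage on allometric data: regress log10(V) on log10(D) to obtain the
# scaling exponent (slope) and normalization constant (intercept), e.g.
# alpha, log_beta = rma_fit([math.log10(d) for d in D],
#                           [math.log10(v) for v in V])
```

Unlike OLS, the RMA slope does not treat *x* as error-free, which is why it suits functional (rather than predictive) relationships.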
Because trees were required both to determine the numerical values needed to develop a prediction model and to test that model, 12 of the 24 sites (from Anfu to Ruichang, listed in [Supplementary Table S1](#S1){ref-type="supplementary-material"}) were used to establish the prediction model and the remaining 12 sites (from Ruijin to Yongxin; see [Supplementary Table S1](#S1){ref-type="supplementary-material"}) were used to test it. Specifically, the numerical values of the scaling exponents and constants of *V* vs. *D* from the first 12 sites were used to estimate the parameters *c* and *d* in [Supplementary Eq. (10)](#S1){ref-type="supplementary-material"}. Then, using the estimated parameter φ, a site-specific stem volume prediction model was developed and used to predict stem volume from measurements of *D* and the associated scaling exponents of *H* vs. *D* for the second set of 12 sites. It must be noted that ordinary least squares (OLS) regression analyses were used to establish the prediction model ([Supplementary Eq. (11)](#S1){ref-type="supplementary-material"}) because the objective was to predict standing biomass by means of predicting stem volume.
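For the prediction step, OLS in log~10~ space gives a fitted exponent and constant, from which stem volume is recovered by back-transformation. The sketch below illustrates that workflow only; the actual fitted coefficients of Supplementary Eq. (11) are not reproduced here, and `alpha` / `log_beta` in the usage comment are placeholders.

```python
import math
from statistics import mean

def ols_fit(x, y):
    """Ordinary least squares slope and intercept."""
    mx, my = mean(x), mean(y)
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    return slope, my - slope * mx

def predict_volume(d, alpha, log_beta):
    """Back-transform a log-log power law: V = beta * D**alpha."""
    return 10.0 ** (log_beta + alpha * math.log10(d))

# alpha, log_beta = ols_fit(log10_D, log10_V)   # calibration sites
# v_hat = predict_volume(d_new, alpha, log_beta)  # validation sites
```

OLS is the right choice here precisely because the goal is prediction of *y* from *x*, whereas RMA was used earlier for the functional scaling relationships.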
RMA and OLS regression analyses were performed using the Standardized Major Axis Tests and Routines (SMATR) software package[@b34][@b35]. SMATR was also used to determine whether the numerical values of the scaling exponents differed among the 24 sites; it provides the Model Type II equivalent of standard OLS analyses of covariance (ANCOVA). The significance level for testing scaling exponent heterogeneity was *P* \< 0.05 (i.e. the hypothesis of common slopes was rejected if *P* \< 0.05).
The reliability of using this approach to predict stem volume was assessed numerically by calculating the mean absolute percentage error (MAPE) as suggested by Sileshi[@b27] using the formula MAPE = (100/*n*)∑(\|*V*~*O*~ − *V*~*P*~\|/*V*~*O*~), where *V*~*O*~ and *V*~*P*~ denote the observed and predicted stem volume, respectively.
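The MAPE statistic named in the text can be computed as follows. This sketch assumes the standard definition of MAPE (mean of absolute relative errors, expressed in percent), consistent with how the quantity is described here.

```python
def mape(observed, predicted):
    """Mean absolute percentage error (%):
    MAPE = (100 / n) * sum(|V_obs - V_pred| / V_obs)."""
    if len(observed) != len(predicted) or not observed:
        raise ValueError("need equal-length, non-empty sequences")
    return 100.0 / len(observed) * sum(
        abs(o - p) / o for o, p in zip(observed, predicted))
```

Note that MAPE is undefined for zero observed values and is asymmetric (over- and under-predictions of equal absolute size yield different percentages), which is worth keeping in mind when comparing models.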
Results
=======
Volume-scaling relationships
----------------------------
The numerical values of the scaling exponents for volumetric scaling were significantly heterogeneous across sites (*P* \< 0.005) ([Supplementary Table S2](#S1){ref-type="supplementary-material"}). The mean scaling exponent of all sites was 0.380, with the smallest scaling exponent at Guixi and the largest at Anfu (i.e. α = 0.323 and 0.427, respectively). For all sites, *D* scaled as the 0.386 power of *V* (95% CIs = 0.383--0.390, *n* = 1273, *r*^2^ = 0.969). Likewise, the scaling exponents relating height and volume varied significantly among sites (*P* \< 0.005; [Supplementary Table S2](#S1){ref-type="supplementary-material"}), and ranged from 0.323 to 0.427, with a mean of 0.341 ([Fig. 1](#f1){ref-type="fig"}). Pooling all of the data gave α = 0.331 and log β = 1.389 (*n* = 1273, *r*^2^ = 0.870). The scaling exponents for the *H* vs*. D* relationship differed significantly among sites (*P* \< 0.005), and ranged from 0.603 to 1.589 ([Supplementary Table S1](#S1){ref-type="supplementary-material"}).
The allometric covariation of volume-scaling relationships
----------------------------------------------------------
### The covariation of scaling exponents
The empirical data agreed well with the predicted covariations of stem volume scaling relationships ([Fig. 2](#f2){ref-type="fig"}). Specifically, the observed relationship between the scaling exponents for *D* vs. *V*^(*y*′)^ and *H* vs. *D*^(*x*)^ closely followed the function predicted by [Supplementary Eq. (8)](#S1){ref-type="supplementary-material"}, with *φ* = 1.10 (95% CIs = 1.09--1.11, *r*^2^ = 0.867) ([Fig. 2a](#f2){ref-type="fig"}). The relationship between the scaling exponents for *H* vs. *V*^(*z*′)^ and *H* vs. *D*^(*x*)^ was likewise governed by the predicted function, with φ = 1.10 (95% CI = 1.09--1.12, *r*^2^ = 0.975) ([Fig. 2b](#f2){ref-type="fig"}). The observed relationship between the scaling exponents for *D* vs. *V* and *H* vs. *V* also complied with the predicted function, with *φ* = 1.10 (95% CIs = 1.09--1.11, *r*^2^ = 0.703) ([Fig. 2c](#f2){ref-type="fig"}). Importantly, the *φ*-value calculated for the three covariation curves was identical (1.10) ([Fig. 2](#f2){ref-type="fig"}).
### The covariation of scaling exponents and constants
The scaling exponents and normalization constants were statistically significantly correlated with one another ([Fig. 3](#f3){ref-type="fig"}). Specifically, normalization constants positively correlated with the scaling exponents for *D* vs. *V* (i.e. *y* = 0.74*x* + 1.26, *n* = 24, *r*^2^ = 0.741, *P* \< 0.01) ([Fig. 3a](#f3){ref-type="fig"}) and *H* vs. *V* (i.e. *y* = 0.59*x* + 1.19, *n* = 24, *r*^2^ = 0.778, *P* \< 0.01) ([Fig. 3b](#f3){ref-type="fig"}). In contrast, the relationship between the constants and the exponents for *H* vs. *D* was significantly negative (i.e. *y* = −1.31*x* + 1.18, *n* = 24, *r*^2^ = 0.992, *P* \< 0.01) ([Fig. 3c](#f3){ref-type="fig"}).
Furthermore, as predicted by [Supplementary Eq. (9)](#S1){ref-type="supplementary-material"}, the *H*-*D* scaling exponents were significantly correlated with the covariation of normalization constants in the stem volume scaling relationship (i.e. *y* = 0.995*x* + 0.0079, *n* = 24, *r*^2^ = 0.999) ([Fig. 4](#f4){ref-type="fig"}).
### Predictions and their percent prediction errors
A significant negative relationship was observed between empirically determined normalization constants (i.e. log β) and the *V* vs. *D* scaling exponents across the 12 sites (for details, see Material and methods and [Supplementary Table S1](#S1){ref-type="supplementary-material"}) used to develop the prediction model, i.e.
when *V* is expressed in m^3^ and *D* in cm ([Fig. 5](#f5){ref-type="fig"}).
The empirically determined scaling exponents of *H* vs. *D* were applied to [Supplementary Eq. (8)](#S1){ref-type="supplementary-material"} to estimate . These values were then applied to [Supplementary Eq. (12)](#S1){ref-type="supplementary-material"} to calculate the corresponding normalization constants for each model calibrated site ([Supplementary Table S1](#S1){ref-type="supplementary-material"}). Lastly, a non-destructive model for estimating the stem volume of *C. lanceolata* was obtained, i.e.,
where *x* is the empirically determined site-specific scaling exponent for *H* vs. *D*.
Across all of the model-calibrated sites, the mean absolute percentage error (MAPE) was 10.50% ± 0.32 (SE), and 57% of all trees had MAPE values less than 10% ([Fig. 6](#f6){ref-type="fig"}).
Discussion
==========
The variation in volume-scaling relationships
---------------------------------------------
A number of previous studies have predicted heterogeneity in the numerical values of the scaling exponents of stem volume scaling relationships. For example, three biomechanical models have been proposed to explain the scaling of *H* with respect to *D*. These are the geometric similarity model, which assumes height will scale isometrically with respect to diameter (*H* ∝ *D*^1/1^), the elastic similarity model, which assumes height will scale as the 2/3 power of diameter (i.e. *H* ∝ *D*^2/3^), and the constant stress similarity model, which assumes height will scale as the 1/2 power of diameter (i.e. *H* ∝ *D*^1/2^)[@b36][@b37]. Our data demonstrate that none of these models can be applied to our experimental system because significant variation in the numerical values of the scaling exponents of the stem volume scaling relationships exist for *C. lanceolata* ([Supplementary Table S2](#S1){ref-type="supplementary-material"}; [Fig. S2](#S1){ref-type="supplementary-material"}). Specifically, for the scaling relationship of *H* vs. *D*, five sites had scaling exponents with 95% CIs that included 2/3, twelve sites included 1.0, one site included both 2/3 and 1.0, and five sites included neither 2/3 nor 1.0. Further, across all of the sites, height scaled as the 0.86 power of diameter (95% CI = 0.84--0.88) ([Supplementary Table S2](#S1){ref-type="supplementary-material"}). These results diverge significantly from all of the aforementioned models, indicating that no single optimal scaling exponent exists for Chinese fir. Indeed, many studies have demonstrated that tree scaling relationships for height, diameter and biomass are variable rather than constant[@b27][@b38][@b39][@b40][@b41][@b42][@b43].
In addition to empirical studies, using a growth-hydraulic model, Niklas and Spatz[@b21] predicted a nonlinear (convex) relationship between height and diameter through tree ontogeny, which agrees with empirical observations[@b26] and implies that scaling relations will vary among trees of different sizes[@b16]. Similarly, Enquist *et al*.[@b25] have shown a curvilinear relationship between tree height and diameter and a similar scaling transition for metabolism and biomass. Given that our results indicate that biomass scales nearly as the 1.10 power of stem volume ([Fig. 2](#f2){ref-type="fig"}), the observed variation in scaling relationships of Chinese fir might, at least in part, reflect differences in scaling relationships between trees of different ontogenetic stages driven ultimately by anatomical or ecophysiological responses to site quality and/or management practices.
The covariation of volume scaling relationships
-----------------------------------------------
It has long been recognized that size-dependent variation in *H* vs. *D* scaling relationships can be important in shaping other allometric relationships[@b44]. For instance, Dai *et al*.[@b45] demonstrated that the scaling exponent of height with respect to diameter decreases with increasing drought stress such that, for a given diameter, drought stressed trees are proportionately shorter, leading to a systematic change in the plant density--mass relationship. Our data demonstrate that the scaling exponents of the stem volume scaling relationships covary and that the changes agree with the equations derived from the biomass scaling relationships suggested by Price *et al*.[@b27] (see [Supplementary Eq. (8)](#S1){ref-type="supplementary-material"}, [Fig. 2](#f2){ref-type="fig"}), which indicates that the covariation in scaling exponents documented across species also holds intraspecifically for Chinese fir plantations across different locations. Thus, our results indirectly support the hypothesis that plant growth can adjust network geometry and hydraulic function in order to cope with variation in the abiotic and biotic environment, at least in the case of Chinese fir.
Furthermore, we find that the scaling exponents are all correlated with their associated normalization constants across all of the stem volume scaling relationships. Consistent with the findings of Zianis and Mencuccini[@b6], Djomo *et al*.[@b10], and Sileshi[@b28], our analyses show a positive correlation between the numerical values of the normalization constants and scaling exponents of *D* vs. *V* ([Fig. 3a](#f3){ref-type="fig"}). Likewise, noting that *H* = β~3~*D*^*b*/*a*^ (see [Supplementary Eq. (1)](#S1){ref-type="supplementary-material"}), it follows that β~3~ = *H*/*D*^*b/a*^. Given that the scaling of *H* vs. *D* should shift from 1.0 to 2/3 as trees grow in size[@b15][@b21][@b25], we must expect a negative relationship between β~3~ and *b*/*a*. Indeed, our data support the expectation that normalization constants significantly correlate with scaling exponents for *H* vs. *D* ([Fig. 3c](#f3){ref-type="fig"}). Importantly, beyond the variations of scaling exponents suggested by Price *et al*.[@b27], our mathematical derivation (see Eq. (9)) and empirical data illustrate that the plant fractal traits *a* and *b* influence not only scaling exponents but also normalization constants interrelating tree height, diameter, and stem volume ([Fig. 4](#f4){ref-type="fig"}). It is reasonable therefore to argue that other key plant functional traits underlie these relationships. For example, across woody plants, wood density is a crucial variable in carbon estimation and correlates with numerous morphological, mechanical, physiological, and ecological properties of trees[@b46][@b47][@b48]. For example, wood density is related to the normalization constants for *M* vs. *D*[@b9][@b25][@b49], but is negatively correlated to tree growth rates[@b47][@b50][@b51], i.e. 
species with denser wood tend to have slower growth rates than species with less dense wood because dense wood may have a lower conduit fraction that reduces the rate of transpiration and photosynthesis, and thus growth in biomass. In addition, denser wood requires more mass per volume such that for the same growth in mass, a denser wood species will grow less in volume[@b52]. Given that the growth rate is directly proportional to plant metabolic rate[@b14][@b53], wood density, constrained by plant growth, might therefore affect the scaling exponents of plant metabolism. Indeed, King *et al*.[@b54] report that the scaling exponent of tree stem growth rate vs. light-interception is negatively correlated with wood density. Furthermore, wood density varies by more than an order of magnitude across species[@b47][@b48][@b55], which can weaken correlations among the scaling exponents of biomass scaling relationships across species, because there is less variation in ecophysiological traits within species than across species. Indeed, our data reveal a stronger allometric covariation among height, diameter and stem volume relationships (i.e. *r*^2^ \> 0.70) than that reported by Price *et al*.[@b27] (i.e. *r*^2^ ≥ 0.27) ([Fig. 2](#f2){ref-type="fig"}). This feature may reflect a more consistent sampling methodology for a single species with more uniform anatomical, morphological, and biomechanical properties, which would reduce the residual variation in contrast to the many differences among the many species represented in a global data set[@b27].
The prediction model
--------------------
In terms of its practical application, our prediction model (Eq. (13)) was developed based on the theoretical framework of the covariation of scaling exponents and the correlation between scaling exponents and constants in stem volume scaling relationships ([Supplementary Eqs (8--11)](#S1){ref-type="supplementary-material"}). A considerable number of studies have attempted to develop a general predictive model for biomass estimation. For example, using the small tree sampling scheme (SSS), Zianis and Mencuccini[@b7] reported that the mean absolute percentage error (MAPE) of aboveground biomass estimation in 10 different studies ranged from 7.43% to 31.59%, with a mean value of 14.83%. Furthermore, using biomass-diameter-height regression models, the MAPE values of biomass estimation in tropical forests reported by Chave *et al*.[@b47] ranged from 9.4% to 12.2%, with a mean value of 10.7% (recalculated by Ref. [@b27]). Likewise, using the site-specific *H* vs. *D* scaling relationship in our field sites, MAPE across the 12 model calibrated sites listed in [Supplementary Table S1](#S1){ref-type="supplementary-material"} was 10.5%, and more than 57% of all MAPE values were less than 10% ([Fig. 6](#f6){ref-type="fig"}). Consequently, our data show that the implementation of the prediction model developed here can result in very accurate predictions of stem volume. Furthermore, the data required for determining a site-specific *H* vs. *D* scaling relationship are easy to collect, requiring only height and diameter measurements. Therefore, the prediction model reported here provides a useful non-destructive tool for predicting stem volume (and biomass) based on the site-specific height vs. diameter relationship. However, it must be noted that our prediction model was developed using data drawn from monospecific plantations with little variation in density and age.
Another concern is the statistical limitation of fitting log-log linear power functions to *H* vs. *D* relationships, upon which the prediction model is contingent. This problem is likely exacerbated when strict linearity does not hold (i.e., under less uniform conditions and a wider range of tree ages, height-diameter relationships become more complex).
Conclusions
===========
Our results reveal important departures from the general scaling relationships predicted for allometrically ideal plants, and show that volume scaling relationships vary significantly even for a single species. Nevertheless, a modified stem volume scaling model governed by the covariation of the numerical values of scaling exponents and normalization constants for whole-plant morphology and stem volume is shown to have remarkable predictive properties. Furthermore, the theory and empirical data presented in the current study support the view that allometric normalization constants are influenced by plant fractal traits, and are thus directly related to scaling exponents ([Fig. 3](#f3){ref-type="fig"}). Lastly, the covariation of scaling relationships provides an accurate non-destructive method for predicting Chinese fir stem volume relationships. Collectively, our data and our theory show that the changes in the numerical values of scaling exponents and the corresponding normalization constants attending growth reflect anatomical and ecophysiological responses to ontogenetic changes in size, and, importantly, differences in site quality and/or management practices. Our results provide strong circumstantial support for the hypothesis that plant growth adjusts hydraulic geometry and function in order to cope with ontogenetic changes in plant size and variation in abiotic environmental factors. Nevertheless, progress toward understanding the mechanisms that govern the scaling of plant form and function requires additional theoretical insights regarding how and why scaling exponents and normalization constants covary within species. It also requires additional data in order to assess theoretical predictions.
Additional Information
======================
**How to cite this article**: Zhang, Z. *et al*. A predictive nondestructive model for the covariation of tree height, diameter, and stem volume scaling relationships. *Sci. Rep.* **6**, 31008; doi: 10.1038/srep31008 (2016).
Supplementary Material {#S1}
======================
###### Supplementary Information
The authors thank Dr. Sean Michaletz (Department of Ecology and Evolutionary Biology, University of Arizona) and Dr. Tao Li (Department of Environmental Science, University of Eastern Finland) for helpful and constructive comments. This study was supported by grants from the National Natural Science Foundation of China (31170374, 31370589 and 31170596), the Program for New Century Excellent Talents in Fujian Province University (JA12055), Fujian Natural Science Funds for Distinguished Young Scholar (2013J06009), Key Project of Science and Technology of Fujian (2004N5008), and the Seed Industry Innovation Project of Fujian Province (2014S1477-4).
**Author Contributions** Z.Z., Q.Z., Y.Y. and D.C. conceived, designed the study. Z.Z., Q.Z., L.C., Y.Y. and D.C. performed the data collection and analysis. K.J.N. coordinated data analysis and interpreted the results. Z.Z., Q.Z., K.J.N., L.C., Y.Y. and D.C. wrote the paper.
[Figure 1.](srep31008-f1){#f1}
[Figure 2.](srep31008-f2){#f2}
[Figure 3.](srep31008-f3){#f3}
[Figure 4.](srep31008-f4){#f4}
[Figure 5. Normalization constants and exponents were calculated using ordinary least squares regression. Solid line is the OLS regression line (*r*^2^ = 0.979).](srep31008-f5){#f5}
[Figure 6. The inset plot shows the frequency distribution of the mean absolute percentage error (MAPE) based on Eqn (13). The mean and S.E. values are presented.](srep31008-f6){#f6}
Digital Wildfires: respond now at the Digital Catapult!
12th January 2016 (Free)
The Digital Wildfires project (University of Oxford) and the CaSMa project (University of Nottingham) are organizing a one-day workshop to discuss the impact of (provocative) social media content, the (responsible) use of social media data, and the balance between concerns over the harms caused by social media posts and the right to freedom of speech.
Our showcase workshop brings together researchers and key stakeholders from government, law enforcement, commerce, education and civil society to foster debate on these important questions. The workshop will include:
• speaker presentations
• a discussion roundtable
• a youth panel
• a keynote address by Baroness Beeban Kidron, founder of the iRights campaign for children and young people
• viewing of artwork produced on the theme of Digital Wildfires
• opportunities for networking and debate
• lunch and refreshments provided.
Preliminary Programme
Digital Wildfire: respond now at the Digital Catapult!
- 10.15 – 10.40
Arrival and coffee
- 10.40 – 10.50
Welcome – Marina Jirotka, University of Oxford
- 10.50 – 11.10
– Introduction to Digital Wildfire project – Marina Jirotka, University of Oxford
– Introduction to CaSMa project – Ansgar Koene, University of Nottingham
- 11.10 – 12.15
Keynote address: Baroness Beeban Kidron – iRights
Introduced by Elvira Perez Vallejos, University of Nottingham
- 12.15 – 13.15
Lunch
- 13.15 – 15.45
Panel presentations: Order of presentations tbc.
– Dhiraj Murthy, Goldsmiths College
– Anna Jönsson, Kick it Out
– Iain Bourne, Information Commissioner’s Office
– Rob Procter, University of Warwick
– Marion Oswald, University of Winchester
– Carl Miller, Demos
– Tom Sorell, University of Warwick
Panels chaired by William Housley, Cardiff University and Bernd Stahl, De Montfort University
- 15.45 – 16.15
Coffee break
- 16.15 – 17.15
Discussion roundtable: The responsible governance of social media
– Paul Giannasi, Ministry of Justice
– Adam Edwards, Cardiff University
– Penny Duquenoy, University of Middlesex
– Gabrielle Guillemin, Article 19
Roundtable chaired by Tom Rodden, University of Nottingham
- 17.15 – 18.00
Youth panel: Panel chaired by Elvira Perez Vallejos, University of Nottingham
- 18.00 – 18.10
Closing remarks – Marina Jirotka, University of Oxford
- 18.10 – 19.00
Drinks and viewing session for youth panel entries and Digital Wildfire project artwork
The event is free to attend, but places are limited and it is necessary to register. If you would like to attend, please contact [email protected]
Details
- Date:
- 12th January 2016
- Cost:
- Free
- Event Category:
- Public
- Event Tags:
- Digital Rights, Informed consent, iRights, Policy, social media, Social Media Analysis, Workshop Event
Organisers
- Dr Helena Webb
- Dr Elvira Perez Vallejos
Venue
- Digital Catapult
-
101 Euston Road
London, NW1 2AJ United Kingdom + Google Map
- Phone:
- 0300 1233 101
- Website: https://casma.wp.horizon.ac.uk/event/860/
The purpose of this thesis is to study people from two different cultural clusters, the Anglo and Confucian clusters, within the context of a cross-cultural GMBA program. The goal is to compare the conflict management styles and stereotypes of these two cultural clusters and to analyze how both could be affected by exposure to the GMBA program. To that end, we first review the literature and previous studies on culture, stereotypes and conflict management styles. We then examine two opposing theories, the culture fit and cultural adaptation theories. The observation of the GMBA program and its students' conflict management styles and stereotypes revolves around whether the students learn from and adapt to the group from the other culture (cultural adaptation) or hold to their own cultural values without changing (culture fit). To run this observation, we created a case study around the NTU GMBA program, and we surveyed and interviewed its students in order to understand whether they adapted to each other and, in the case of adaptation, what allowed it to happen. | https://www.airitilibrary.com/Publication/alDetailPrint?DocID=U0001-2301201716510600 |
For the optimal design of social insurance policy, policymakers must consider how public insurance interacts with family dynamics. This column reveals that in Austria, the effect of husbands losing their jobs on wives entering the workforce is generally weak compared to other countries. This may be explained by traditional gender norms and the importance of the male breadwinner model.
Martin Halla, Julia Schmieder, Andrea Weber, 13 December 2018
Wolfgang Dauth, Sebastian Findeisen, Jens Südekum, Nicole Woessner, 19 September 2017
Recent research has shown that industrial robots have caused severe job and earnings losses in the US. This column explores the impact of robots on the labour market in Germany, which has many more robots than the US and a much larger manufacturing employment share. Robots have had no aggregate effect on German employment, and robot exposure is found to actually increase the chances of workers staying with their original employer. This effect seems to be largely down to efforts of work councils and labour unions, but is also the result of fewer young workers entering manufacturing careers.
Patrick Bennett, Amine Ouazad, 29 October 2016
A substantial body of literature finds significant effects of unemployment rates on crime rates. However, relatively little is known about the direct impact of individual unemployment on individual crime. This column examines the effect of job displacement on crime using 15 years of Danish administrative data. Being subject to a sudden and unexpected mass-layoff is found to increase the probability that an individual commits a crime. However, the findings stress the importance of policies targeting education and income inequality in mitigating crime.
Italo Colantone, Rosario Crinò, Laura Ogliari, 04 December 2015
Influential studies have shown that trade liberalisation is associated with substantial adjustment costs for workers in import-competing jobs. This column uses UK data to shed light on one such cost that has not been considered to date – subjective well-being. Import competition is found to substantially raise mental distress, through worsened labour market conditions and increased stress on the job. These findings provide evidence of an important hidden cost of globalisation. | https://voxeu.org/taxonomy/term/5966 |
Combine creativity and the visual to reinforce your speaker memory
by Bronwyn
Nobody likes a mental blank; not the speaker, not the audience.
As a speaker you really would like to remember all of the points that you so carefully researched and constructed and that you designed to work together to get your message across.
We all have our own ways of remembering our presentations and speeches. Some are based on our preferences for sound or images or body language. Some are simply based on what works for us.
I am working, at the moment, with a client who is moving away from a script to presenting in natural, unscripted language. She is a very creative person, especially in visual media – she creates amazing paintings and patterns, and rejuvenates a flagging spirit with what she calls "creative time", which might involve painting, or doodling, or setting significant messages in a beautiful surrounding. I made several suggestions based on using visuals so that she could remember her speech and create useful, unobtrusive notes, and it was like watching a light bulb glowing. She is now in her element.
So for those of you looking for ways to remember your speech or presentation and to create prompts for yourself, here are 5 ways that visuals could work for you.
1. I will call the first one “mind mapping”. This involves “mapping” the ideas for your speech. Usually people put the central message in the centre, perhaps surrounded by a circle or border. Then they connect the points of the speech to the main message as one would spokes to a wheel. From those points, then, further connections reach out to the supports for the points. You can use decorations, colours, pictures, whatever most represents the content of each part, and the connections between them and the order in which you will present them. This is a standard mind map.
You might, on the other hand prefer to draw waves that represent the more emotional flow of the speech, somewhat similar to Nancy Duarte’s spark lines, or curves that represent a new point, or a change in direction of the speech. So each wave or curve represents the points to be made, the climax of the point that will really hook audience response, or the points of the presentation that represent, say, problems and solutions. Write the main message beneath the waves, perhaps, and transition techniques between the waves.
I simply use the sort of note-taking skills I learned at uni – main message at the top, then a bullet-point system for points to be made, with further indented bullet points for their supports. I don't use this system for creating slides, incidentally – yuk, how boring that would be – but it's the way I learned to organise content and it works for me, in conjunction with other memory techniques.
So however you visually represent the flow of your speech or presentation, you can memorise that image, and the connections between its parts and that will be with you when you need to remember what to say next when you actually present. It will also be there with you, should you need to change the flow of the speech in response to the audience or the environment, and the logic will allow you to make the changes in a way that works for you and for the audience.
2. Visualisation. This is a technique used in many areas where performance is focused and adrenalin-driven, particularly in sports. And while it certainly involves the visual and imagery there are so many other aspects involved – training your subconscious to store what I call “muscle memory”, injecting positive emotion to reinforce the memories. For the purposes of this article, though, the visualisation involves using the mind’s eye or imagination to “watch” yourself as you present, your body language, how you appear on the “stage”, how you are interacting with the audience, and what you are saying. It also involves “seeing” how the audience is reacting to what you are saying, how the equipment is functioning, how you are using the particular setup in the room. I used it from the beginning, I think, of my speaking, long before the word became such a large part of our language, “visualising” successful presentation, and visualising overcoming the possible hurdles to successful presentation. It’s a way of committing the presentation to memory, and ensures that much of the presentation can be put into a state similar to auto-pilot while the front of your brain deals with interacting with, and customizing for, this particular audience.
3. For many people the simple act of writing something is a memory aid, but it can be combined with the visual memory. You can commit the look of the writing to memory, having simply written on a normal blank page. Or you can use the ubiquitous, totally indispensable sticky notes. Use colours, create patterns of shapes and colours, use diagrams on the notes, make diagrams with the notes, write a word or words for each point with points in one colour, supports in another, transitions in another. Lay out the whole speech, in point form, and that process alone may just be enough if you have a photographic memory. Or you can then transfer the whole thing onto a sheet or folder or series of cards to use as a prompt. Take a photograph of it, and use that. Your creativity comes into play, here, to learn, by trial and error, just what works best for you.
4. Maybe it is images that work for you. After all, ‘a picture paints a thousand words’, and our minds remember information better if that information is combined with an image. So you could use an image as a prompt. The storyboarding process that can be so useful for creating a PowerPoint presentation would work well here. One idea – one image, with maybe a word or two to reinforce the memory. Again – perhaps the creation of the storyboard will be enough, if the photographic system works for you, or you can use the board as a prompt. Or you can rehearse the presentation and when you know the points that will cause you grief, just use images for those. It’s all a matter of finding, through practice, what works for you.
5. And, of course, the logical extension of this thought is to use the PowerPoint slides themselves as a memory aid. I know people do this and can make it work for them. It takes practice. I watched a speaker use this system recently. She was a dynamic presenter, with a fluent presentation, and had the audience captivated. She had no remote for the slides, so had to ask or signal to the computer operator to advance them. When that operator made an error, the whole speech ground to a halt as the speaker had to wait to find out what was supposed to happen next. Her dynamism saved the day, but it was a glitch that could have been avoided.
Each of these 5 is a way of using images to remember your presentation. Each allows you to be creative in producing a visual to memorise and guide your presentation. Each will need your constant creative attention in honing its success, and maybe you will go even further and combine them.
Perhaps you are already using one or more of these or have your own way of using the sense of sight in memorising your speech, ensuring there are no blank moments and that it progresses as you dreamed it would. I would love you to share them in the comments below.
The operating system behind any great presentation/0 Comments/in books, presentation, presentations, public speaking, public speaking and business /by Bronwyn
I don’t think I’ve mentioned this before. So if you have been hiding under a rock for the last year or so and have missed this – it’s a great read – Jobs and Gallo are both speakers we can all model….
The Presentation Secrets of Steve Jobs
Carmine Gallo
“The Presentation Secrets of Steve Jobs reveals the operating system behind any great presentation and provides you with a quick-start guide to design your own passionate interfaces with your audiences.” Cliff Atkinson, author of Beyond Bullet Points and The Activist Audience => http://bit.ly/14Kp90g
Quick public speaking tip – Is your audience illiterate?/0 Comments/in audience in public speaking, PowerPoint, presentations, public speaking /by Bronwyn
Well, are they?
Probably not.
If your words are on the screen or sheet of paper, then let the audience read for themselves. This will have enormous impact, especially if your audience is used to presenters slavishly following the text on their visuals.
You are presenting your message verbally, and visuals are just that – images or groups of words that support your message. They are not the message itself. If necessary, you may have to explain this, first, because many audiences have been trained by presenters who cover their inadequacies by using their visuals as the message.
This may just be why you will make an impact if you can present without using this method. You will be different. You will be seen as so much more confident and competent as a person.
But underneath all of that is the fact that your audience is not illiterate. Don’t annoy them by reading to them what they can read themselves.
Research more than the content when you prepare your speech/0 Comments/in audience, audience in public speaking, presentations, public speaking, speech writing, speeches /by Bronwyn
When you start building a speech or presentation, the first thing you think of is the content. What will you say? How will you say it? What message do you want to communicate? And what do you want your audience to say or think or do differently? So you start researching that content – on the internet, at the library, with your friends and from the experts.
Content, however, is not the only thing you need to research if your speech or presentation is to be a success. If you want your audience to say or think or do something differently, you will need to know how to “pitch” your content to this particular audience.
Everything that you say or do in your presentation has to be geared to that audience… what they will be receptive to, what their triggers are, the language that they will respond to.
So in researching that presentation to write it, or prepare it, you will also need to research the audience.
Find out as much as you can – their age range, gender, income levels, dreams, needs, wants, culture. What are their likes and dislikes? What will excite them, offend them, unnerve them? What do they wear? What keeps them awake in the middle of the night?
You can gain much from a registration form, especially if you can design it yourself, or have a hand in designing it.
You can ask the event manager, or the person who hired you. You can research their company or organisation, talk to them and their friends and colleagues.
In your preparation routine, you can mingle with audience members before your speech.
Then you can use the information you have gained in constructing and presenting your speech. Use your knowledge of their interests and dreams, to choose your most persuasive stories, points and suggestions.
You will choose language that they understand, and that is not irritating or offensive to them, and subject matter to suit that audience – themes, supports, anecdotes all will be tailored to them.
One of the strongest engagement techniques in presentations is WIIFM (What’s in it for me?) and you need to be reminding your audience regularly of why they should keep listening to your presentation, and of just what they would gain from your suggestions (or lose by not following them).
I’m not sure whether researching the audience is more important than researching content. What do you think?
I do know that for the content to be effective, the research you do on your audience will be vital.
©2012 Bronwyn Ritchie
Please feel free to reproduce this article, but please ensure it is accompanied by this resource box.
Bronwyn Ritchie has 30 years’ experience speaking to audiences and training in public speaking – from those too nervous to say their own name in front of an audience to community groups to corporate executives. To take your public speaking to the next level, get free tips, articles, quotations and resources, at http://www.pivotalpublicspeaking.com
Is your audience switching off when you present data? – Part Two/0 Comments/in data presentation, PowerPoint, presentation, presentations, public speaking /by Bronwyn
Presenting data is a very difficult challenge. The first step is engaging the audience with a strong emphasis on why it is important for them to understand what is being presented. Nevertheless, they do need to be able to understand the data you present. While ensuring its relevance is understood is vital, it is equally vital that your audience understand each and every piece of data that you present, or they will just as surely switch off, and your outcome is lost.
Visuals are very useful here. Use pie graphs and bar charts; insert them into your slides if you are using slides. If you are using a whiteboard, draw as you tell the story or make the point. If you are using PREZi you can let the audience look at the data from different angles. The visual representation will reinforce your explanation and the point you are making.
If it is necessary to use graphs, diagrams and charts, make sure they are as simple as possible. While you probably want to impress with your understanding of complicated data, being able to simplify it will have far more of an impact, particularly in terms of getting your message across.
And make sure that everything about those visuals is clear. Sometimes it’s necessary to explain so that all the implications are clear as well. There may have been a very good reason for choosing the axes in the graph. There may have been a very good reason for choosing the increments that are used. While it may seem obvious to you, it may not be to the audience, and explaining it may make the data relationships clearer.
You can also add to the impact of the visuals. There may be a story behind the points on a graph. Each point is the intersection of two values, and maybe the relationship is reasonably clear. But if you can give the reason why this relationship exists, or maybe the history behind it, then it will be so much clearer. And if you can put a human face on it, with a human story, then the relationship and the point you are using it for will have so much more impact. If wages are going down and costs of living rising, for example, then a story about a family forced to live in a car will make the impact so much more real. Another way to add a human face, or a realistic face, is to use a graphic representing the actual item being quantified. This can be particularly useful in a bar graph. If the bar consists of pictures of dollar coins to represent money, or of groups of people to represent populations or groups, for example, again the impact is multiplied.
In the midst of all this, it is important to remember, still, that you are presenting points towards a persuasion of some kind. It can be useful to have the point you are making as the heading for the slide that contains the visuals.
And while the visuals should be as detailed as is necessary to make them understandable, too much detail will overwhelm. Remember the visuals only need to make a point, not necessarily present all the data. If all the data is necessary for later inspection and verification, put it in a handout, and leave the slides as simple as they can be.
Visuals are your greatest ally in presenting data. They can add impact and keep your audience engaged with the thread of your message. Your simplification and design of the material to support that message, and the thoughtful explanation you add to it, will support the success of your data presentation.
©2012 Bronwyn Ritchie
Please feel free to reproduce this article, but please ensure it is accompanied by this resource box.
Bronwyn Ritchie has 30 years’ experience speaking to audiences and training in public speaking – from those too nervous to say their own name in front of an audience to community groups to corporate executives. To receive her fortnightly free tips, articles, quotations and resources, subscribe now – it’s free! Visit http://www.pivotalpublicspeaking.com/ps_ezine.htm
The New Rules of Persuasive Speaking/0 Comments/in persuasion in public speaking, PowerPoint, presentation, presentations, public speaking /by Bronwyn
Carmine Gallo spoke to an audience of grad students at the UC Berkeley Haas School of Business. His topic, The New Rules of Persuasive Presentations, was pulled from his best-selling book The Presentation Secrets of Steve Jobs. Here’s an excerpt.
3 Ways to Use Prezi Big Pictures to Make a Big Impression/0 Comments/in presentations, public speaking, public speaking technology, speech structure /by Bronwyn
Welcome to this guest post from Jim Harvey. Jim helps speakers with his very practical approach, an approach he has developed for himself and his clients through years of research and experience. Enjoy his insights on creating the big picture with Prezi.
A big picture is what makes Prezis immediately stand out from all other presentations, and lets your audience know they’re in for a different type of presentation. Because of its zoom functions, Prezi allows you to put images at the heart of your presentation – even incorporating all of your information into one picture.
No matter how you’re structuring your presentation, there’s probably a way to incorporate a big picture which makes it easier to understand and more interesting to watch. Here are three big picture techniques I use when designing presentations for myself and my clients.
1. Set the scene
Pictures have the power to make us think and understand things which we’d need hundreds of words to convey. It might be a landscape photograph which reminds us of a place we love, or a diagram which shows us how a manufacturing process works. Sometimes one image can explain exactly what your presentation is about – making it the perfect backdrop to your introduction, or window into the subject you’re explaining.
In Prezi, a big picture has the power to set the mood of your entire presentation. You can begin with it filling the screen, giving exactly the message you want to begin with, and even structure the rest of the presentation in and around that image.
A Prezi with an Informative Big Picture
For this Prezi: http://prezi.com/ow8zo7rbkt7v/raise-the-rate/
2. Show the structure of your presentation
A big picture can act like a map – showing where your presentation is going, and giving context to each point you make. This makes your whole presentation work, because it shows how everything links together and relates to your overall message.
It’s a great approach to delivering both short and long presentations, and particularly useful if you’re building up a series of points, for example to argue “3 reasons why xyz”. At the end of the presentation your audience should be able to look at your big picture, and pick out the three reasons you’ve identified.
Prezi with a Clear Structure
(for this prezi: http://prezi.com/y3f0vwjfiayl/we-day/ )
3. Present in a different way
Prezi allows us to plan presentations in an entirely new way – instead of creating an inflexible path through the information in advance, you can simply decide how to structure your presentation on the day. We’ve used this method before by creating infographic type big pictures, which cover all of the information a client may like to know.
When we come to present, we deliver a short introduction and then ask the client, “what would you like to know?” In present mode, you can click anywhere in a Prezi and be taken to that point – from there you can follow a linear path or carry on moving around organically.
Prezi Made for Exploring Naturally
For prezi: http://prezi.com/xtthuex5lynq/prezi-faq/
Jim Harvey is a presentation skills coach and blogger. His aim is to help people to tell stories – about themselves and their products – better. Take a look at his presentation skills blog, or find out more about using Prezi.
Present like Steve Jobs/0 Comments/in presentations, public speaking, public speaking and business, video, videos /by Bronwyn
Apple CEO Steve Jobs was well known for his electrifying presentations. Communications coach Carmine Gallo discusses the various techniques Jobs uses to captivate and inspire his audience — techniques that can easily be applied to your next presentation.
You’re not a natural comedian? That’s fine …. here’s how to find humour to use in your presentations/0 Comments/in humour in public speaking, presentations, public speaking /by Bronwyn
Most audiences will respond to humour. You don’t need to be a comedian, or even a humorous speaker, if it is not your style. You can still use humour to engage with an audience and have them be comfortable with you and your presentation.
“But I’m just not funny and I’m hopeless at telling jokes” Yes, I know – me too. So where do we find humour to use in our speeches? There are three main places to find humour. They are readily available to you, and they are used by all successful speakers and comedians. Those places are life, jokes and situations. Let’s look at how to extract the humour from them.
The first place to find humour is to look around you – look at your life – look at everything within it. Look at the conversations that make people laugh. Use them. Or look at what worked to make people laugh and use that. When you find yourself laughing or even smiling, look at why. What made you smile? Yes, I know you have your own sense of humour, but it is your own sense of humour that will make the humour in your presentations authentic, strong and personal. Select from that, what you think will appeal to your audience and what will best support your points.
Seek out humour. Look at the internet – not to copy jokes (we’ll look at that in a minute) but, again to see what makes you laugh. What makes other people laugh? Go to the library. Look into magazines and ezines. Read humorous writers, go to comedy clubs, listen and watch radio and television. What works and what doesn’t and why? When you find out what works and what doesn’t and why, then you can go back to your own life and watch for those same things, what works, what doesn’t and why – those same conversations, those same situations. See the humour and how that humour can be used in your presentations.
When they are situations and conversations and events that have happened to you or around you or to those around you, they have so much more impact. They have all the added benefits that storytelling brings to a speech. They are authentic and not some joke that you are repeating and trying to twist to suit your point. And they are certainly not a joke that your audience has heard before.
Another source of humour is our own speaking experience. You will discover, as you speak, what people find humorous about you and your style. Sometimes you may make an aside or a throw away remark that was not intended to be humorous, but that makes people laugh. You may make a point using exaggerated body language and people laugh. You might create a situation with the audience or the stage that creates a laugh. Note it well, and use it again. Next time it will be deliberate, certainly, but you can make it look spontaneous if need be. If it works, keep it!
Other people’s jokes are a very dangerous source of material for your humour. Part of the danger lies in the way people use jokes. Some speakers, desperate to be humorous, plan to simply tell jokes to get a laugh, relax their audience and create engagement. If it is not your joke, you risk it falling flat. If it is just a joke on its own, you increase the risk because everything is riding on that joke being funny, you telling it well, the audience being in the mood for that sort of humour – all sorts of pitfalls. If, on the other hand, you choose to use the joke as a support for a point you are making, then you decrease the chances of failure. If worst comes to worst and your audience does not respond, you can just carry on as if it were a story and not necessarily a funny joke. If it succeeds then you have got double value from the joke in creating a memorable tag for the point you were making. You can find jokes in all of the places I mentioned above – the internet, the library, magazines, other comedians and so on. You can use quotations and crazy predictions. You can search in the area of the subject of your presentation or in the expertise of your audience. Just be very careful that the joke suits your audience and the occasion, that it suits your style and your sense of humour and that it suits the point you are trying to make.
The final source of humour is one that works really well. I will call it situational humour. Find humour in the situation you find yourself in, for this speech. You can use geographical humour – compare your home country with this country. Tell the story of something funny that happened here on this occasion, or on another occasion. Use the organisation or the people in the audience or the event. Research the history of the organisation and its culture. Find (appropriate) humour in that. Find humour in your relationship with someone in the audience – something funny that has happened or that the person said had happened. Turn someone’s idiosyncrasies into humour if it can be done respectfully. Use current events – in the world, the country, this town or this audience. All of these are particularly useful in your opening segments, and will help relax the audience and build engagement with them.
Microsoft does not allow this behavior and takes action on IPs that engage in it.
Occasionally, some of the IPs in our MX record may be out of service.
If you are a user looking for support with your account, please visit our end user support page.
However, submitting this information does not guarantee that any message you send to users of the services will be delivered.
Senders must not use namespace mining techniques against inbound email servers.
Alternatively, if a user adds your domain or email address to their "contacts" or their "safe-senders list" they will no longer see this notification.
In addition, senders who are on the Return Path Certification list or on a user's "safe senders" list typically have links and images within their messages enabled by default. An "allow list" is essentially a "free pass" which allows emails from certain senders to bypass junk email filters and other precautions. You can find out more about our filtering processes here, and about Return Path, who helps ensure the legitimacy of certain senders via their Return Path Certification program.
This is the practice of verifying email addresses without sending (or attempting to send) emails to those addresses.
This method is commonly used by malicious senders to generate lists of valid e-mail addresses to which they can send spam, phishing emails or malware.
Some of the deliverability issues are the result of sender-based software configurations.
I have nearly 9 years of experience as a performing songwriter & producer. I also have a degree in Music Production Recording Arts and specialize in mixing, mastering and all related disciplines. It is with this experience that I approach songwriting and production as an ever-evolving skill.
We were all created to create something.
Art is healing. For the creator and aesthete.
I don’t like to do things I don’t like to do. I love creating. Therefore I aspire to create for a living.
My goal is to educate, inspire, & innovate.
I am creating for all the would be geniuses that will never get the chance to display the talents they have been blessed with.
I feel that in creating anything, whether it be a business or a song, we must consider the legacy to be left after we are gone. How is what we do going to impact future creators? I will build an image centered around purpose and substance, as my label will be non-profit and a portion of all profits will directly benefit the community. By building a brand that is diverse and includes much more than just music, I can speak to people in a variety of ways by utilizing my distinct sound to promote individuality and creative expression while at the same time being an advocate for unity within creativity.
I'd love to hear about your project. Click the 'Contact' button above to get in touch.
Interview with Creative Producer & Engineer
Q: What are you working on at the moment?
A: I have four artists I am currently working with in some capacity whether it is just mixing and recording or whether it is full service, production, songwriting, consulting and marketing.
Q: What was your career path? How long have you been doing this?
A: My career path took me from performing on stage and making music in my garage to finally going to school after six years in the industry in order to learn all the nuances. I have now been making music for nine years.
Q: Can you share one music production tip?
A: The best tip I can give on music production is FINISH. Even if you don't like something, it is extremely important to finish the process in order to build good habits and put all of your skills to use. The best teacher is experience.
Q: What type of music do you usually work on?
A: The majority of tracks I work on are Hip Hop, R&B, Soul, and Singer-Songwriter compositions.
Q: What's your strongest skill?
A: My strongest skill is my attention to detail. I am very meticulous when it comes to finding the perfect sound for a particular track. I like to think of myself as a tailor of sound.
Q: What's your typical work process?
A: My typical workflow on a mix is to first go through and do all my editing and comping in order to get everything in the right place. After that I usually go through the entire song and adjust panning to give the song its very own unique stereo field. I will usually adjust the volume of each track as I move through panning as well. Next I usually create my Aux tracks and set the routing for all my sends. Often times I will start with a session template from a previous track in the same genre or sound. Once I've done this I begin with subtractive EQ, mixing backwards from the Aux tracks. Next I move on to compression, followed by adding effects such as reverb, delay, etc. Once I have a sound that I like I will usually dial it in with some additive EQ across key frequency spectrums before adjusting my final levels and adding mix bus compression depending on the track's needs. After everything is done I prefer to print the entire mix to its own track rather than bounce it traditionally, as I find the quality to be higher this way.
Q: Tell us about your studio setup.
A: My home studio setup is pretty simple. The whole rig runs off my MacBook Pro through my Saffire interface. I produce most of my tracks in Maschine and do the majority of my mixing in Pro Tools 10 and 11. I own a Numark turntable for sampling and a Sterling ST55 for scratch vocals. I also have unlimited access to the studios at Pinnacle College, where I graduated. There are three studios there with a plethora of outboard gear, which even affords me the ability to mix and record to tape.
Q: What other musicians or music production professionals inspire you?
A: Prince, Janelle Monae, Jidenna.
Q: Describe the most common type of work you do for your clients.
A: The most common work I do is a mix of full-range production services that run from beatmaking to mixing and even mastering. I often contribute to most projects I work on as a songwriter in some way, shape or form. I also offer consulting on the music business front and provide clients with publishing and licensing opportunities through a partner company.
CROSS REFERENCE TO RELATED APPLICATION
TECHNICAL FIELD
BACKGROUND
BRIEF SUMMARY
DETAILED DESCRIPTION
Definitions
Overview
Exemplary System Architecture
This application is a continuation application of U.S. application Ser. No. 13/922,750 filed Jun. 20, 2013, the entire contents of which are incorporated herein by reference.
The present description relates to determining and/or generating promotions based on promotion templates by a provider of goods, services, experiences and/or the like using a promotion and marketing service configured to illustrate or otherwise inform consumers of the availability of the promotion.
Merchants sell goods, services and/or experiences, also known as products, to consumers. The merchants, also known as providers, can often control the form of their product offers, the timing of their product offers, and the price at which the products will be offered. The provider may sell products at a brick-and-mortar sales location, a virtual online site, or both.
Promotions have been used as part of some retail strategies. Promotion techniques include providing instruments that result in rebates to potential consumers, but these techniques have several disadvantages. In this regard, a number of deficiencies and problems associated with the systems used to, among other things, provide and redeem promotions used by consumers have been identified. Through applied effort, ingenuity, and innovation, many of these identified problems have been solved by developing solutions that are included in embodiments of the present invention, some examples of which are described herein.
In general, example embodiments of the present invention provide herein systems, methods and computer readable storage media for facilitating the generation of promotions based on generated promotion templates by a promotion and marketing service for a provider of goods, services, experiences and/or the like in a simple and user-friendly manner. Among other things, embodiments discussed herein can be configured to generate a promotion from a provider's proposed terms, one or more determined terms and/or the like. Some embodiments may be configured to receive data corresponding to at least one provider characteristic, generate a promotion based on a promotion template, and transmit data corresponding to the promotion to an interface for the provider to review and accept.
Some embodiments discussed herein can be configured to aid a promotion and marketing service to normalize and/or categorize promotion templates, which promote the efficient creation of promotions for one or more providers. Some embodiments may be configured to aid a promotion and marketing service in defining provider characteristics, such as a provider service category, and/or other characteristics associated with a provider so as to assign certain promotion templates to certain services.
Other systems, methods, and features will be, or will become, apparent to one with skill in the art upon examination of the following figures and detailed description. It is intended that all such additional systems, methods, and features be included within this description, be within the scope of the disclosure, and be protected by the claims that follow.
Embodiments of the present invention now will be described more fully hereinafter with reference to the accompanying drawings, in which some, but not all embodiments of the inventions are shown. Indeed, embodiments of the invention may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will satisfy applicable legal requirements. Like numbers refer to like elements throughout.
As used herein, the terms “data,” “content,” “information” and similar terms may be used interchangeably to refer to data capable of being captured, transmitted, received, displayed and/or stored in accordance with various example embodiments. Thus, use of any such terms should not be taken to limit the spirit and scope of the disclosure. Further, where a computing device is described herein to receive data from another computing device, it will be appreciated that the data may be received directly from the another computing device or may be received indirectly via one or more intermediary computing devices, such as, for example, one or more servers, relays, routers, network access points, base stations, and/or the like. Similarly, where a computing device is described herein to send data to another computing device, it will be appreciated that the data may be sent directly to the another computing device or may be sent indirectly via one or more intermediary computing devices, such as, for example, one or more servers, relays, routers, network access points, base stations, and/or the like.
The principles described herein may be embodied in many different forms. Not all of the depicted components may be required, however, and some implementations may include additional, different, or fewer components. Variations in the arrangement and type of the components may be made without departing from the spirit or scope of the claims as set forth herein. Additional, different, or fewer components may be provided.
As used herein, the terms “provider,” “merchant,” and similar terms may be used interchangeably to refer to, but not limited to, a merchant, business owner, consigner, shopkeeper, tradesperson, vender, operator, entrepreneur, agent, dealer, organization or the like that is in the business of providing a good, service or experience to a consumer, facilitating the provision of a good, service or experience to a consumer and/or otherwise operating in the stream of commerce. For example, a provider may be in the form of a spa company that provides health and beauty services to a consumer.
In addition, as used herein, the term “promotion” may include, but is not limited to, any type of offered, presented or otherwise indicated reward, discount, coupon, credit, deal, incentive, media or the like that is indicative of a promotional value or the like that upon purchase or acceptance results in the issuance of an instrument that may be used toward at least a portion of the purchase of particular goods, services and/or experiences defined by the promotion. An example promotion, using a spa company as the example provider, is $25 for $50 toward spa services. In some examples, the promotion defines an accepted value (e.g., a cost to purchase the promotion), a promotional value (e.g., the value of the resultant instrument beyond the accepted value), a residual value (e.g., the value upon return or upon expiry of one or more redemption parameters), one or more redemption parameters and/or the like. For example, and using the spa company promotion as an example, the accepted value is $25 and the promotional value is $50. In this example, the residual value may be equal to the accepted value.
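The relationship among these three values can be sketched in code. The class and field names below are illustrative only, not part of any described embodiment; the residual-value rule follows the spa example above, where the residual value equals the accepted value.

```python
from dataclasses import dataclass

@dataclass
class Promotion:
    """Minimal, hypothetical model of the promotion values described above."""
    accepted_value: float     # cost to purchase the promotion, e.g. $25
    promotional_value: float  # value of the resulting instrument, e.g. $50

    def value_beyond_accepted(self) -> float:
        # The promotional value in excess of what the consumer paid.
        return self.promotional_value - self.accepted_value

    def residual_value(self) -> float:
        # In this example the residual value equals the accepted value,
        # i.e. what the consumer paid remains redeemable after expiry.
        return self.accepted_value

spa_deal = Promotion(accepted_value=25.0, promotional_value=50.0)
print(spa_deal.value_beyond_accepted())  # 25.0
print(spa_deal.residual_value())         # 25.0
```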
In addition, as used herein, the term “promotion and marketing service” may include, but is not limited to, a service that is accessible via one or more computing devices and is operable to provide example promotion and/or marketing services on behalf of one or more providers that are offering one or more instruments that are redeemable by consumers for goods, services, experiences and/or the like. The promotion and marketing service is further configured to illustrate or otherwise inform one or more consumers of the availability of one or more instruments in the form of one or more impressions. In some examples, the promotion and marketing service may also take the form of a redemption authority, a payment processor, a rewards provider, an entity in a financial network, a promoter, an agent and/or the like. As such, the service is, in some example embodiments, configured to present one or more promotions via one or more impressions, accept payments for promotions from consumers, issue instruments upon acceptance of an offer, participate in redemption, generate rewards, provide a point of sale device or service, issue payments to providers and/or otherwise participate in the exchange of goods, services or experiences for currency, value and/or the like.
As used herein, the term “instrument” may include, but is not limited to, any type of gift card, tender, electronic certificate, medium of exchange, voucher, or the like that embodies the terms of the promotion from which the instrument resulted and may be used toward at least a portion of the purchase, acquisition, procurement, consumption or the like of goods, services and/or experiences. In some examples, the instrument may take the form of tender that has a given value that is exchangeable for goods, services and/or experiences and/or a reduction in a purchase price of a particular good, service or experience. In some examples, the instrument may have multiple values, such as an accepted value, a promotional value and/or a residual value. For example, using the aforementioned spa company as the example provider, the instrument may take the form of an electronic indication in a mobile application that shows $50 of value to spend at the spa company. In some examples, the accepted value of the instrument is defined by the value exchanged for the instrument. In some examples, the promotional value is defined by the promotion from which the instrument resulted and is the value of the instrument beyond the accepted value. In some examples, the residual value is the value after redemption, the value after the expiry or other violation of a redemption parameter, the return or exchange value of the instrument and/or the like.
As used herein, the term “impressions” may include, but is not limited to, a communication, a display, or other perceived indication, such as a flyer, print media, e-mail, text message, application alert, mobile applications, other type of electronic interface or distribution channel and/or the like, of one or more promotions. For example, and using the aforementioned spa company as the example provider, an e-mail communication sent to consumers that indicates the availability of a $25 for $50 toward spa services promotion.
As discussed herein, a provider of goods, services, experiences and the like (e.g. a spa company that provides spa and health services and products) may engage with a promotion and marketing service for providing promotion and/or marketing services on behalf of the provider. For example, the promotion and marketing service may transmit to a number of consumers impressions associated with a promotion for a good, service, and/or the like provided by a provider (e.g., an e-mail indicating a consumer may purchase $40 worth of spa services from Acme Spa Company for $20). In addition, the promotion and marketing service may accept payments for the promotion from a consumer and issue a promotion instrument to a consumer for the payment. Accordingly, the consumer may present and redeem the promotion instrument to the provider in exchange for goods or services (e.g., the consumer may visit an Acme Spa Company location and obtain $40 worth of spa services by presenting the $20 for $40 promotion instrument). In exchange for providing the promotion and/or marketing service, the promotion and marketing service may retain a portion of the revenue received from the consumer and provide the provider with the remainder of the revenues (e.g. the marketing and promotion service may retain $5 of the $20 and provide Acme Spa Company with $15 of the $20 paid by the consumer for the instrument).
In some example embodiments, the method, apparatus and computer program product is configured to enable the registration of a merchant with the promotional system to enable the merchant to create and/or publish a promotion via the promotion and marketing service. In some examples, upon sign-in or identification, one or more attributes or characteristics associated with the merchant may be used to identify a set of promotion parameters for suggesting a promotion or identifying a promotion template for the merchant. For example, the attributes or characteristics may include, but are not limited to, the type of industry of the merchant, the type of products or services sold by the merchant, the size of the merchant, the location of the merchant, the sales volume of the merchant, reviews and ratings for the merchant, or the like. In some embodiments, the attributes or characteristics are a result of analytics that allow for generation of promotions that are ideal for the particular merchant's circumstances. For example, the attributes or characteristics may be used to identify optimal promotions and/or promotion templates for the particular merchant based on their exact location (e.g., the particular city street of the merchant as opposed to a wider range, such as a zip code), the merchant's exact products and services offered (e.g., pizzerias that only serve deep dish pizza, restaurants that become nightclubs after 11:00 pm), the merchant's price point (e.g., barbershops that charge more than $20 for a haircut), or the like. These merchant self-service indicators may be used to identify deal parameters that were used by other merchants that share one or more same or similar attributes or characteristics.
For example, after an initial registration and verification, attributes or characteristics associated with the newly registered merchant may be verified, such as by looking up the merchant in a merchant database or by receiving the attributes or characteristics directly from the merchant (e.g., via a fillable form). The identified attributes or characteristics may be cross-referenced with promotion offers from other merchants to identify deal offers that were successful for other merchants with the same or similar attributes or characteristics. Successful deal offers for merchants with similar attributes or characteristics may be used to generate a suggested promotion or to select a preferred promotion template for the newly registered merchant, and the newly registered merchant may confirm the suggested promotion to offer the promotion to consumers via the promotion and marketing system. The promotion and marketing system may also provide an interface allowing the merchant to edit or otherwise modify the suggested promotion before confirmation. Example embodiments of a system and method for merchant self-service are described further with respect to at least U.S. patent application Ser. No. 13/749,272 filed Jan. 24, 2013, which is herein incorporated by reference in its entirety.
Embodiments discussed herein may be configured to provide for generating a promotion based on a promotion template (e.g., a grammar defined by a sequence of variables that are concatenated to generate a particular promotion or promotion title) for a particular promotion provided to a consumer by the provider and/or the promotion and marketing service. According to some embodiments, a provider may have a number of provider characteristics or attributes, which may include the type of goods and/or services the provider provides (i.e., a category, a sub-category, a service category), the provider location (i.e., the city, neighborhood, state, a defined local area and/or the like), the provider name (e.g., Acme Spa Company), and/or the like. In some embodiments, a promotion and marketing service may define a provider characteristic for a provider, such as a provider's service category. For example, a promotion and marketing service may determine a provider (e.g., Acme Spa Company) that solely provides spa services would be categorized in a particular service category, such as “Spa Services-Massage,” rather than another particular service category, such as “Health and Beauty,” as this particular provider does not provide any hotel accommodations. In another embodiment, a promotion and marketing service may determine a number of primary service categories and sub-categories (i.e., a service taxonomy).
Various embodiments of the invention are directed to determining and/or generating a promotion, using a promotion template, based at least in part on a number of provider characteristics. Said differently, such embodiments are directed to generating a promotion that will advantageously maximize particular revenues, bookings, consumer purchases, promotion redemptions and/or the like. In some example embodiments, a promotion template may be selected to maximize return on investment (ROI). Example embodiments of a system and method for determining and providing provider ROI information are described further with respect to U.S. Provisional Patent Application 61/824,850 filed May 17, 2013 and U.S. patent application Ser. No. 13/841,347 filed Mar. 15, 2013, each of which is herein incorporated by reference in its entirety.
Embodiments may also be directed to determining a score and/or ranking for a particular promotion template. In this regard, one advantage that may be realized by some embodiments discussed herein is that determining a promotion template for a particular promotion may increase the efficiency of a promotion and marketing service in providing a number of promotions for a number of providers. Further, embodiments may advantageously provide for determining particular promotion templates that may produce increased consumer interaction with a particular provider.
Example embodiments may also be directed to the generation of promotion templates themselves. In such examples, the method, apparatus and computer program product may analyze one or more historical promotions (e.g., historical performance data) in each of the one or more services in the service taxonomy to generate promotion templates. In some examples, a corpus (e.g., one or more previously run promotions in each of the one or more defined services within the service taxonomy) of previous promotions may be analyzed and one or more regular expressions may be used to parse the corpus. As such, the method, apparatus and computer program product may generate optimized promotion templates for a particular promotion, promotion type, service, local area and/or the like. As described above, these generated templates may be analyzed and selected based on a provider's category, sub-category, service, location or the like.
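As a rough sketch of this corpus analysis, the snippet below applies a single hypothetical regular expression to historical promotion titles and emits a placeholder template for each title the expression can parse. The pattern, placeholder names, and function are illustrative assumptions, not a described implementation.

```python
import re

# Hypothetical pattern for titles shaped "$X for $Y worth of <service> at <provider>".
TITLE_RE = re.compile(
    r"^\$(?P<accepted>\d+) for \$(?P<promotional>\d+) worth of "
    r"(?P<descriptor>.+) at (?P<provider>.+)$"
)

def template_from_corpus(titles):
    """Return the set of placeholder templates derivable from the corpus titles."""
    templates = set()
    for title in titles:
        if TITLE_RE.match(title):
            # A title that parses contributes a template in which the parsed
            # values are replaced by placeholder variables.
            templates.add("${accepted} for ${promotional} worth of {descriptor} at {provider}")
    return templates

corpus = ["$20 for $40 worth of Spa Treatments at Acme Spa Services"]
print(template_from_corpus(corpus))
```

A real corpus would need many such patterns, one per title structure observed in each service of the taxonomy.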
The foregoing description applies the inventive concepts herein described to generate an exemplary promotion, based on a promotion template, for a promotion. This application is provided for ease of illustration and is not intended to limit the scope of the claimed subject matter. Indeed, as will be apparent to one of ordinary skill in the art in view of this disclosure, the inventive concept herein described may also be applied to other promotion characteristics.
FIG. 1 illustrates a system 100 including an example network architecture, which may include one or more devices and sub-systems that are configured to implement some embodiments discussed herein. For example, system 100 may include promotion template system 102, which may include, for example, a processor 150, a memory 152, a promotion template computing device 154, and a promotion template module 104. Promotion template module 104 can be any suitable network server and/or other type of processing device, such as a promotion template computing device 154. As discussed herein, the provider device 110A, 110M, the sales representative device 112A, 112Z, and/or the consumer device 114A, 114N may be any suitable mobile device, such as a cellular phone, tablet computer, smartphone, etc., or other type of mobile processing device that may be used for any suitable purpose.
Promotion template system 102 may be coupled to one or more of the provider devices 110A, 110M, sales representative devices 112A, 112Z, and/or consumer devices 114A, 114N (e.g., mobile devices) via a communications interface 152 that is configured to communicate with network 108. In this regard, network 108 may include any wired or wireless communication network including, for example, a wired or wireless local area network (LAN), personal area network (PAN), metropolitan area network (MAN), wide area network (WAN), mobile broadband network, or the like, as well as any hardware, software and/or firmware required to implement it (such as, e.g., network routers, etc.). For example, network 108 may include a cellular telephone, an 802.11, 802.16, 802.20, and/or WiMax network. Further, the network 108 may include a public network, such as the Internet, a private network, such as an intranet, or combinations thereof, and may utilize a variety of networking protocols now available or later developed including, but not limited to TCP/IP based networking protocols.
In some example embodiments, the promotion template system 102 may be configured to use one or more learning models to generate and test one or more promotion templates for each of the one or more services in a defined service taxonomy. In some examples, the promotion template system 102 may be configured to analyze one or more historical promotions, such that each of the one or more historical promotions is assigned a primary service. The promotion title of each of the one or more historical promotions is then normalized (e.g., based on the one or more normalization methods described herein). The normalized title for each of the promotions is then parsed to extract the promotional value, the accepted value, one or more benefits, connectors, editorial comments and/or the like. Such parsing may make use of generated regular expressions (e.g., a set of rules that match or otherwise identify one or more text structures within processed documents and are advantageously generalized so as to follow a standardized process to extract information or text from the processed documents; the set of rules is determined, in some examples, based on the different text components to be parsed) or other language processing means. The parsing may be verified, in some examples, by attempting to recreate the promotion title based on the parsed elements. In an instance in which the parsing is verified, the template, and its identified primary service, is stored for later use. As is described herein, a promotion template may then be selected for use in generating a promotion based on one or more factors further detailed hereinafter.
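The parse-then-recreate verification step might look like the following sketch, in which the regular expression, field names, and template string are hypothetical examples rather than the embodiment's actual rules:

```python
import re
from typing import Optional

# Hypothetical pattern for one normalized title shape.
TITLE_RE = re.compile(
    r"^\$(?P<accepted>\d+) for \$(?P<promotional>\d+) worth of "
    r"(?P<descriptor>.+) at (?P<provider>.+)$"
)

def parse_title(title: str) -> Optional[dict]:
    """Extract the accepted value, promotional value, descriptor and provider."""
    m = TITLE_RE.match(title.strip())
    return m.groupdict() if m else None

def verify_parse(title: str) -> bool:
    """Recreate the title from the parsed elements and compare it to the input."""
    parts = parse_title(title)
    if parts is None:
        return False
    rebuilt = "${accepted} for ${promotional} worth of {descriptor} at {provider}".format(**parts)
    return rebuilt == title.strip()

print(verify_parse("$20 for $40 worth of Spa Treatments at Acme Spa Services"))  # True
```

Only a title whose parse survives this round trip would have its template and primary service stored.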
In further examples, promotion templates may be generated and/or instantiated in such a way that they include meta-data that may be configured to aid in defining the use of the template in one or more clients or systems. For example, a given promotional value (e.g., “60 minute massage” or “what you get”) may comprise time meta-data (e.g., “60 minutes” that can be labeled as a “time”) that may be used in conjunction with a scheduling, calendaring or availability engine. A promotional value that identifies a quantity of a service to be performed, by contrast, may be used by an inventory engine or resource planning engine. Alternatively or additionally, other meta-data may identify particular promotion options, promotion constraints, promotion parameters and/or the like.
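A minimal sketch of attaching such meta-data follows, assuming a hypothetical labeling scheme in which a duration found in the promotional value is tagged so that a scheduling engine could consume it:

```python
import re

# Hypothetical: recognize a duration expressed in minutes, e.g. "60 minute massage".
TIME_RE = re.compile(r"(\d+)\s*minute", re.IGNORECASE)

def extract_metadata(promotional_value: str) -> dict:
    """Attach simple meta-data labels to a promotional value string."""
    meta = {}
    m = TIME_RE.search(promotional_value)
    if m:
        # Label the duration as time meta-data for a scheduling/availability engine.
        meta["time_minutes"] = int(m.group(1))
    return meta

print(extract_metadata("60 minute massage"))  # {'time_minutes': 60}
print(extract_metadata("three facials"))      # {}
```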
In further example embodiments, the promotion templates may be designed for use in multiple different languages or dialects (e.g., i18n/l10n or the like). In such cases, the promotion templates may be translated, instead of the promotion itself, from a first language to a second language, allowing for the construction of the promotion in the second language via a user interface or the like. Such a method results, in some examples, in a higher level of translation accuracy and readability when compared to a conversion of the promotion itself from the first language to the second language.
FIG. 2 illustrates a flow chart for an example embodiment of a method 200 of generating a promotion and/or a promotion title. In some example embodiments, and in order to generate such a promotion, a provider, a salesperson or the like may access a web-based interface via a provider device, such as a mobile device, a smartphone, a laptop, a mobile computing device, a tablet computing device, and/or the like, and provide information corresponding to a promotion the provider wishes to offer. Based on the provided data, promotion templates may be identified and displayed for the provider for instantiation. Alternatively or additionally, the promotion template system may dynamically instantiate the promotion template with recommended promotion terms.
Method 200 may begin at 202 and proceed to 204, where a promotion template system may receive provider characteristic data from a mobile device, such as a provider device 110A, via an interface, from a sales system and/or the like. For example, a provider may access a secure web-based provider interface and transmit provider characteristic data to the promotion template system. The provider may transmit provider characteristic data, such as registration information, provider name, provider service category, provider location, and/or the like via a provider device 110.
In some embodiments, the provider may initially make the provider characteristic data available by pre-registering as a provider with the promotion and marketing service. Accordingly, subsequent transactions may not require the provider to provide provider characteristic data to the promotion template system. In another embodiment, the provider may provide provider characteristic data to the promotion template system in real-time to support a generation of a promotion (i.e., if the provider has not pre-registered or if, for example, the provider is a mobile provider and changes locations from time to time, in an instance in which the provider offers multiple services, in an instance in which the provider has added new services and/or the like). In an example embodiment, a provider characteristic may include the name of the provider that provides a good, service, or experience to the consumer. Alternatively or additionally, provider information may be accessed via a provider database, may be generated by analyzing the provider name, based on characteristics of the provider determined using historical promotions, websites, rating services, social networks or the like. In some examples, provider information may be confirmed using one or more of the aforementioned methods.
At 206, the promotion template system may be configured to determine a service category based at least in part on the provider characteristic data. In this regard, a promotion template system may be configured to determine a service category based at least in part on data corresponding to a provider name or other indications otherwise determined in light of the provider characteristic data. For example, the provider name may include information corresponding to the provider service category (e.g., Acme Spa Company having the word “Spa” in the provider name).
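A simple keyword lookup illustrates how a service category might be inferred from a provider name; the keyword table and category labels below are hypothetical, not the service taxonomy the embodiment actually defines:

```python
# Hypothetical mapping from name keywords to service categories.
CATEGORY_KEYWORDS = {
    "spa": "Spa Services",
    "pizza": "Food and Drink",
}

def category_from_name(provider_name: str):
    """Return the first service category whose keyword appears in the name."""
    lowered = provider_name.lower()
    for keyword, category in CATEGORY_KEYWORDS.items():
        if keyword in lowered:
            return category
    return None  # no category could be inferred from the name alone

print(category_from_name("Acme Spa Company"))  # Spa Services
```

In practice the name is only one signal; as the surrounding text notes, a previously assigned category or other provider characteristic data may override or supplement it.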
Alternatively or additionally, and according to some embodiments, the promotion template system may be configured to determine a provider service category by the provider name. For example, a provider may have been previously assigned a provider service category based upon the provider's historical data, previous promotions, previous registrations and/or the like. As such, when a promotion template system receives provider characteristic data corresponding to the provider's name, the promotion template system may be configured to determine the value of the provider service category that was previously assigned to the provider upon pre-registration.
At 208, the promotion template system may be configured to select a particular promotion template. The promotion template is selected as a function of the service determined in block 206 and/or one or more additional provider characteristics. Determination and/or selection of a particular promotion template is further defined with respect to FIG. 4.
At 210, the promotion template system may be configured to define at least one connector term for the selected promotion template. For example, the connector term may include at least one of a basic connector term and a descriptor connector term. In this regard, a promotion template may consist of a sequence of variables that, when concatenated, will produce a promotion. As such, basic connector terms may include a number of words and/or phrases that may be used in a promotion and/or a promotion template. For example, a basic connector term may include words and/or phrases, such as “for”, “at”, “get”, “buy”, “worth of”, “of”, “off”, “spend”, and/or the like.
In some example embodiments, the promotion template system may be configured to define or otherwise implement a grammar (e.g., a set of structural rules that govern the composition of clauses, phrases and words in any given language). In some examples, the grammar defined by the template system may be configured to adapt one or more connectors or other words or phrases in the promotion template based on the defined or otherwise implemented grammar. For example, the use of “a/an” may depend on the subsequent word or phrase. Other such modifications of a promotion template by the promotion template system may also be performed, such as, but not limited to, modifying a word or phrase to be plural or singular, addition of punctuation, modification of syntax or word ordering and/or the like.
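For instance, the “a/an” adaptation could be sketched as a small helper. This first-letter heuristic is only illustrative (it would mishandle words such as “hour” or “unicorn”, where pronunciation, not spelling, governs the article) and is not the grammar the system itself defines:

```python
def indefinite_article(phrase: str) -> str:
    """Choose "a"/"an" from the following word's first letter (naive heuristic)."""
    return "an" if phrase.strip()[:1].lower() in "aeiou" else "a"

# The connector adapts to the phrase that follows it in the template.
print(indefinite_article("oil massage"), "oil massage")  # an oil massage
print(indefinite_article("haircut"), "haircut")          # a haircut
```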
In some embodiments, the connector term may include at least one descriptor connector term. In this regard, the promotion template system may be configured to define at least one descriptor connector term that may correspond to a particular provider service category or may include one or more editorial comments about the particular service category or provider. In some embodiments, a descriptor connector may correspond to multiple provider service categories. For example, a descriptor connector term may consist of a number of words and/or phrases that may be used to describe the promotion. According to some embodiments, the descriptor connector term may describe the provider service category. In some embodiments, the descriptor connector term may correspond to the goods, services, and/or experiences being provided by the provider in the promotion. For example, a promotion generated based on a promotion template for a provider, such as Acme Spa Company, may include a descriptor connector such as “spa services”, “spa products”, “renowned spa in the downtown area” and/or the like.
In some example embodiments, the descriptor connector term may be instantiated via one or more editorial descriptions that may have been previously determined to be, or otherwise have been learned to be, influential on a consumer's likelihood of purchasing a particular promotion. While such phrases or terms may be developed using supervised learning methods, such terms may also be generated based on user comments with respect to the provider, third party reviews, website or blog comments, social networking comments and/or the like.
At 212, the promotion template system may be configured to define at least one promotion parameter term. According to some embodiments, the promotion parameter term may include at least one of a promotional value and an accepted value. In this regard, the promotional value may correspond to the value of the goods, services, and/or experience the provider is providing the consumer in the promotion (i.e., “what you get”). The accepted value may correspond to the price of the promotion the consumer pays for the promotion value (i.e., “what you give”). For example, a promotion offering $40 worth of spa services for $20 would have a promotional value of $40 and an accepted value of $20.
At 214, a promotion template system may be configured to generate a promotion by instantiating the promotion template using, at least in part, provider characteristic data, such as the provider name and/or the provider service category. In some embodiments, the promotion template system may generate a promotion containing at least one connector term, at least one descriptor term, and at least one promotion parameter term.
By way of example, in conjunction with the steps described above, after receiving provider characteristic data, such as the provider name (e.g., Acme Spa Company), the promotion template system may determine a provider service category (e.g., Spa Services) that corresponds with the particular provider. The promotion template system may then generate a promotion that corresponds with the particular provider service category that consists of, for example, a number of connector terms, at least one descriptor term, and a number of promotion parameter terms. For example, the promotion template system may determine a number of connector terms, such as “for”, “at”, and “worth of”, that correspond to the particular provider service category.
In addition, the promotion template system may determine a number of promotion parameter terms, such as a promotional value and an accepted value, that correspond to the particular provider service category. In this regard, a promotion parameter system may determine a plurality of promotion parameter terms, such as a promotional value of $40 and an accepted value of $20. According to some embodiments, the promotion parameter system may determine a descriptor term (e.g., “Spa Treatments”) that corresponds with the particular provider service category (e.g., Spa Services). Accordingly, at 212, the promotion template system may be configured to generate a promotion by sequentially concatenating the plurality of connector terms, the descriptor term, the plurality of promotion parameter terms, and/or the provider name. For example, the promotion template system may be configured to generate a promotion, such as “$20 for $40 worth of Spa Treatments at Acme Spa Services.” Accordingly, the promotion includes the sequentially concatenated plurality of connector terms, a descriptor term, a plurality of promotion parameter terms, and/or the provider name. Specifically, the promotion includes, in order, an accepted value “$20”, a connector term “for”, a promotional value “$40”, a connector term “worth of”, a descriptor term “Spa Treatments”, a connector term “at”, and the provider name “Acme Spa Services.”
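The sequential concatenation described here can be sketched directly; the function and argument names are illustrative only:

```python
def instantiate_template(accepted, promotional, descriptor, provider):
    """Concatenate the terms in order: accepted value, connector, promotional
    value, connector, descriptor term, connector, provider name."""
    parts = [
        f"${accepted}",   # accepted value, i.e. "what you give"
        "for",            # basic connector term
        f"${promotional}",  # promotional value, i.e. "what you get"
        "worth of",       # basic connector term
        descriptor,       # descriptor term for the service category
        "at",             # basic connector term
        provider,         # provider name
    ]
    return " ".join(parts)

title = instantiate_template(20, 40, "Spa Treatments", "Acme Spa Services")
print(title)  # $20 for $40 worth of Spa Treatments at Acme Spa Services
```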
According to some embodiments, the promotion template system may be configured to generate multiple promotion options (e.g., values that a particular promotion template may use to complete its terms). For example, the promotion template system may be configured to generate a promotion for a first promotion option having a promotion value of $40 and a promotion price of $20 and a second promotion option having a promotion value of $100 and a promotion price of $50. Alternatively or additionally, the promotion template may have a first price for a first service and a second price for a second service (e.g., $100 for painting a single room, $150 for painting two rooms). One of ordinary skill in the art, in light of this disclosure, may appreciate the promotion template system may be configured to generate a plurality of promotions corresponding to a plurality of promotion options and/or a variety of promotion types.
FIG. 3 illustrates a flow chart for an example embodiment of a method 300 of generating a promotion from a provider-proposed promotion in a number of ways. Method 300 may start at 302. At 304, a promotion template system may receive data corresponding to a provider-proposed promotion. For example, a provider may access a web-based secure provider interface via a provider device, such as a mobile device, a smartphone, a laptop, a mobile computing device, a tablet computing device, and/or the like, and provide information corresponding to provider characteristics, such as the provider name, provider service category, and/or the like. In some embodiments, the provider may provide information corresponding to particular promotion parameters for a promotion the provider wishes to offer. Further still, a provider may provide information corresponding to a provider-proposed promotion the provider wishes to use for the particular provider-proposed promotion.
At 306, a promotion template system may be configured to determine if a provider-proposed promotion matches a past promotion. For example, a provider (e.g., Acme Spa Services) may wish to use a promotion and marketing service for a particular promotion (e.g., “$20 for $40 worth of Spa Treatments at Acme Spa Services”). As such, the provider may provide information corresponding to the provider-proposed promotion to the promotion template system. The promotion template system may receive data corresponding to the provider-proposed promotion, such as a provider-proposed promotion parameter, and determine that a past promotion corresponds to the provider-proposed promotion.
If the proposed promotion matches the past promotion, the promotion template system may be configured to generate the past promotion for use as a proposed promotion at 308. In this regard, if the promotion template system determines a past promotion (e.g., “$20 for $40 worth of Spa Treatments at Acme Spa Services”) matches the proposed promotion (e.g., “$20 for $40 worth of Spa Treatments at Acme Spa Services”), the promotion template system may generate the past promotion for use for the current proposed promotion.
If the provider-proposed promotion does not match a past promotion, the promotion template system may be configured to generate at least one proposed promotion from a promotion template for use at 310. For example, a provider-proposed promotion (e.g., “$20 for $40 worth of Spa Treatments at Acme Spa Services”) may not match a past promotion (e.g., “Get $100 worth of Spa Goods at Acme Spa Services for $75”). In this regard, the past promotional value (e.g., $100), the past accepted value (e.g., $75), and the descriptor term (e.g., “Spa Goods”) do not match at least one descriptor term, accepted value, and/or promotional value of the provider-proposed promotion.
As such, the promotion template system may be configured to extract and/or normalize a provider-proposed promotion for various connector terms, promotion parameter terms, and/or descriptor terms so as to determine if a provider-proposed promotion matches a past provider promotion. In this regard, a promotion template system may be configured to determine a provider-proposed promotion parameter, such as a promotional value, does not match a past promotion parameter. Accordingly, the promotion template system may be configured to determine various terms of a provider-proposed promotion and generate a proposed promotion from a plurality of promotion templates, as discussed herein. In some embodiments, the promotion template system may be configured to query a provider for data corresponding to promotion parameter terms and/or descriptor terms, as discussed in greater detail below.
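The extraction and matching steps described above can be sketched as follows. This is only an illustrative sketch: the title format, the regular expression, and the function names are assumptions, not the system's actual implementation.

```python
import re

# Hypothetical pattern for titles of the form
# "$20 for $40 worth of Spa Treatments at Acme Spa Services".
PROMO_PATTERN = re.compile(
    r"\$(?P<accepted>\d+)\s+for\s+\$(?P<promotional>\d+)\s+worth of\s+"
    r"(?P<descriptor>.+?)\s+at\s+(?P<provider>.+)"
)

def extract_terms(title):
    """Return (accepted value, promotional value, descriptor term) or None."""
    m = PROMO_PATTERN.match(title)
    if not m:
        return None
    return (int(m.group("accepted")),
            int(m.group("promotional")),
            m.group("descriptor").lower())  # normalize descriptor case

def matches_past_promotion(proposed, past_promotions):
    """True if the proposed promotion's extracted terms equal those of any past promotion."""
    terms = extract_terms(proposed)
    return terms is not None and any(
        extract_terms(p) == terms for p in past_promotions
    )
```

In this sketch a provider-proposed promotion matches only when the accepted value, promotional value, and normalized descriptor term all agree, mirroring the comparison described above.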
In some embodiments, the promotion template system may be configured to compare a provider-proposed promotion with a plurality of promotion templates that correspond to a particular provider service category. As such, according to some embodiments, the promotion template system may be further configured to receive data corresponding to the provider, such as provider characteristic data. Provider characteristic data may include a provider name, a provider service category and/or the like. Accordingly, the promotion template system may be configured to receive data corresponding to a provider-proposed promotion and compare the provider-proposed promotion data with promotion templates associated with a particular provider service category.
According to some embodiments, a service category may include one or more promotion templates. For example, a popular and/or highly-ranked service category, such as “Food and Drink” with a service “Food-Chinese” for example, may include multiple promotion templates for use in generating a promotion for a provider, while a less popular and/or less diverse service category may have a single promotion template associated with the service category. As such, the promotion template system may have a number of promotion templates for use in generating a promotion for providers associated with highly-ranked or popular service categories, while having a minimal or singular amount of promotion templates for use in generating a promotion for providers associated with lower-ranked or less diverse service categories.
Turning to FIG. 4, a method 400 of scoring and/or ranking a plurality of promotion templates is illustrated. The method may start at 402. At 404, a promotion template system may receive promotion data corresponding to at least one promotion parameter. For example, the promotion template system may receive promotion data corresponding to a promotion option selection. In this regard, a promotion may include a single option or multiple options. For example, a single option promotion may provide a consumer with a single promotion (e.g., “$20 for $40 worth of Spa Services at Acme Spa Company”). In another embodiment, a multiple option promotion may provide a consumer with multiple promotions to choose from (e.g., “$20 for $40 worth of Spa Services at Acme Spa Company; $50 for $100 worth of Massage Services at Acme Spa Company”). In some examples, multiple option promotions may comprise templates that define a promotion title that is inclusive of the multiple options and one or more editorial comments that define each option of the multiple options (e.g., title: “Discount Spa Services at High End Spa”; editorial comments: “$45 Swedish massage, $65 facials”). In some cases, instrument options may be considered.
At 406, the promotion template system may be configured to determine a promotional value for each of the promotion templates. For example, the promotion template system may be configured to determine at least one promotion parameter for each of the promotion option templates. In some embodiments, the promotion parameter may correspond to a promotional value (e.g., “60 minutes”, “$40”, etc.). In this regard, some promotions may provide an amount of service time as the promotional value for a promotion price (e.g., “$40 for 1 hour massage at Acme Spa Company”). In another embodiment, some promotions may provide a monetary-equivalent amount as the promotional value for a promotion price (e.g., “$40 for $80 worth of Spa Goods at Acme Spa Company”). As such, embodiments of the present invention may be configured to determine at least one promotion parameter for each of the promotion option templates.
In addition, some example embodiments may provide for the normalization and/or standardization of promotion parameters and/or promotion options for each of the promotion templates. For example, when a promotional value provides an amount of service time (e.g. “1 hour”), the promotion template system may be configured to normalize the promotional value for standardization across all promotion templates for a particular service category. In this regard, the promotion template system may be configured to normalize the promotional value to indicate the amount of service time in minutes (e.g., “$40 for 60 minute massage at Acme Spa Company”). Accordingly, when compared to another promotion template in a multiple promotion offer, the promotional values are standardized for use in generating promotions. For example, another promotion template within the same service category of “Spa Services” may include a promotion template for “$25 for 30 minute massage at Acme Spa Company”. As such, the promotion templates are standardized for use in generating promotions for a particular provider.
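The service-time normalization described above might look like the following sketch; the accepted unit strings and the function name are assumptions for illustration.

```python
import re

def normalize_service_time(value):
    """Convert a service-time promotional value (e.g. "1 hour", "30 minutes")
    to an integer number of minutes for standardized comparison."""
    m = re.match(r"(?P<amount>\d+)\s*(?P<unit>hour|hr|minute|min)s?", value.lower())
    if not m:
        raise ValueError(f"unrecognized service-time value: {value!r}")
    amount = int(m.group("amount"))
    # Hours are converted to minutes; minute values pass through unchanged.
    return amount * 60 if m.group("unit") in ("hour", "hr") else amount
```

With every template in a service category expressed in minutes, a “$40 for 1 hour massage” template and a “$25 for 30 minute massage” template can be compared directly, as in the example above.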
Further still, the promotion template system may be configured to determine a promotion score for each of the promotion templates at 408. In this regard, the promotion template system may determine a promotion score for a single-option promotion and/or a multiple option promotion for a provider. In some embodiments, the promotion score may be based, at least in part, on the at least one promotion parameter and a promotion quality metric.
For example, a promotion template system may receive provider-proposed promotion data corresponding to a promotion parameter and a promotion option selection, as previously stated at 404. In this regard, the promotion template system may receive provider-proposed promotion data corresponding to a multiple-option promotion having two promotional values (e.g., “$40 for $80 worth of Spa Goods; $80 for $175 worth of Spa Goods”). As such, the promotion template system may be configured to determine the provider-proposed promotion is a multiple option promotion that includes two promotional values of $80 and $175 for Spa Goods. Based on past promotion redemption rates, consumer purchases of promotions, consumer reviews of the provider, length of the promotion offer before selling out, and/or the like, the promotion template system may be configured to determine a promotion score for the specific promotion template that includes a multiple option promotion having two promotional values of $80 and $175 for Spa Goods.
In some example embodiments, the promotion template system may be configured to generate a score for a particular promotional value (e.g., what you get) to be included in the promotion. As such, the promotion template system may then be configured to provide not only a recommended promotion, but recommended promotion parameters as well. In one example embodiment, promotional value recommendations may be generated based on a learning model that continually analyzes historical promotions. For example, for each promotion option within each promotion, a primary service related to that particular option may be identified. The promotion title for the particular promotion may be normalized (e.g., using the normalization procedures described herein). The normalized title may be parsed to identify the promotional value portion, such as by using regular expressions or other natural language processing means. A promotion performance score or promotion quality metric (e.g., ebpm) may be accessed for that particular option of the promotion. The promotion performance score, the promotional value and the primary service may then be stored for use, such as the use described with respect to FIG. 4. Alternatively or additionally, actual value, descriptors and/or connectors may be discovered and ranked using the same or similar methodologies.
At 410, the promotion template system may be configured to provide a promotion score that corresponds to the at least one promotion parameter and/or promotion option selection to an interface so as to be displayed to a provider. In this regard, the promotion template system may determine that a multiple option promotion wherein the promotion values do not correspond by a factor of 2 (i.e., doubling the first promotion value of $80 provides $160, and not $175) is not successful, and thus is associated with a lower promotion score. The promotion template system may be configured to provide the lower promotion score to the provider via an interface configured to display the promotion score. In some embodiments, the promotion template system may be further configured to provide additional options (i.e., other proposed promotions) and those promotion templates' respective promotion scores to the provider via the interface.
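The factor-of-2 consistency check from the example above can be sketched as follows; the function name, the default factor, and the tolerance parameter are illustrative assumptions.

```python
def options_scale_consistently(values, factor=2, tolerance=0.0):
    """True if each successive promotional value in a multiple option
    promotion is `factor` times the previous one (within `tolerance`)."""
    return all(
        abs(b - a * factor) <= tolerance
        for a, b in zip(values, values[1:])
    )
```

For example, option values of $80 and $160 scale consistently, while $80 and $175 do not, and a promotion failing this check would be associated with a lower promotion score in this sketch.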
In some embodiments, the promotion score may be based at least in part on the frequency with which a particular promotion is used within a particular provider service category (e.g., a service in a service taxonomy). In some embodiments, the scoring for a particular provider service category may be defined by the following equation:
<math overflow="scroll"><mrow><msub><mi>score</mi><mrow><mi>k</mi><mo>,</mo><mi>g</mi></mrow></msub><mo>=</mo><mrow><mi>log</mi><mo>(</mo><mrow><mn>1</mn><mo>+</mo><mrow><msub><mrow><mo>{</mo><mrow><mi>average</mi><mo></mo><mstyle><mspace width="0.8em" height="0.8ex" /></mstyle><mo></mo><mi>ebpm</mi></mrow><mo>}</mo></mrow><mi>k</mi></msub><mo>×</mo><mrow><mo>(</mo><mfrac><msub><mi>N</mi><mrow><mi>g</mi><mo>,</mo><mi>k</mi></mrow></msub><msub><mi>M</mi><mi>k</mi></msub></mfrac><mo>)</mo></mrow></mrow></mrow></mrow></mrow></math>
wherein N_{g,k} is the frequency of a promotion in a group g (i.e., the promotional value) within the provider service category k, and M_k is the total number of promotions considered in a provider service category k. In addition,
<math overflow="scroll"><mrow><mrow><mrow><mi>average</mi><mo></mo><mstyle><mspace width="0.8em" height="0.8ex" /></mstyle><mo></mo><msub><mi>ebpm</mi><mi>k</mi></msub></mrow><mo>=</mo><mrow><mfrac><mn>1</mn><mrow><mo></mo><msub><mi>options</mi><mi>k</mi></msub><mo></mo></mrow></mfrac><mo></mo><msub><mi>Σ</mi><mrow><mi>i</mi><mo>∈</mo><msub><mi>options</mi><mi>k</mi></msub></mrow></msub><mo></mo><mstyle><mspace width="0.8em" height="0.8ex" /></mstyle><mo></mo><msub><mi>ebpm</mi><mi>i</mi></msub></mrow></mrow><mo>,</mo></mrow></math>
where options_k is the set of promotion options within a provider service category k (and |options_k| is their number). Further, ebpm_i = 1000*(option booking amount/engg impressions), which is calculated for each of the individual promotion options and is normalized by the total number of promotion impressions.
In another embodiment, the promotion score may be based at least in part on the total amount of the promotion value instead of the promotion value per promotion. According to some embodiments, the promotion score may be based at least in part on a total promotion value amount computed over a period of time.
Further still, in some embodiments, the promotion template system may be configured to determine a promotion score based at least in part on the frequency and/or probability of receiving a particular promotional value having a particular promotion value for any of the provider service groups. This may be defined by the following equation:
joint prob score_{i,k,g} = log(1 + {average ebpm}_k) × (N_{g,k}/M_k) × (N_{i,g,k}/M_k)
The joint prob score_{i,k,g} considers the joint probability of getting a particular good, service or experience in a promotion in a particular group g for a particular provider service category k.
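A minimal sketch of this joint-probability score, with illustrative argument names (N_{i,g,k} is the frequency of the particular good or service i within group g of category k):

```python
import math

def joint_prob_score(avg_ebpm_k, n_gk, n_igk, m_k):
    """joint prob score_{i,k,g}
       = log(1 + avg_ebpm_k) * (N_{g,k}/M_k) * (N_{i,g,k}/M_k)."""
    return math.log(1.0 + avg_ebpm_k) * (n_gk / m_k) * (n_igk / m_k)
```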
In another embodiment, the promotion template system may be configured to determine a promotion value score based at least in part on other ranking structures as discussed previously. For example, in some embodiments, a promotion value score may be determined by computing a trade-off score between the likelihood that a particular promotion value is used and its contribution to the total ebpm, as defined by the equation below.
<math overflow="scroll"><mrow><mrow><mi>tradeoff</mi><mo></mo><mstyle><mspace width="0.8em" height="0.8ex" /></mstyle><mo></mo><msub><mi>score</mi><mrow><mi>g</mi><mo>,</mo><mi>k</mi></mrow></msub></mrow><mo>=</mo><mrow><mrow><mi>α</mi><mo>×</mo><mrow><mo>(</mo><mrow><msub><mi>N</mi><mrow><mi>g</mi><mo>,</mo><mi>k</mi></mrow></msub><mo></mo><mstyle><mtext>/</mtext></mstyle><mo></mo><msub><mi>M</mi><mi>k</mi></msub></mrow><mo>)</mo></mrow></mrow><mo>+</mo><mrow><mrow><mo>(</mo><mrow><mn>1</mn><mo>-</mo><mi>α</mi></mrow><mo>)</mo></mrow><mo>×</mo><mrow><mo>[</mo><mrow><mi>log</mi><mo></mo><mrow><mo>(</mo><mrow><mn>1</mn><mo>+</mo><msub><mrow><mo>{</mo><mrow><mi>average</mi><mo></mo><mstyle><mspace width="0.8em" height="0.8ex" /></mstyle><mo></mo><mi>ebpm</mi></mrow><mo>}</mo></mrow><mi>k</mi></msub></mrow><mo>)</mo></mrow></mrow><mo>]</mo></mrow><mo></mo><mstyle><mtext>/</mtext></mstyle><mo></mo><mrow><munder><mo>∑</mo><mi>k</mi></munder><mo></mo><mrow><mi>log</mi><mo></mo><mrow><mo>(</mo><mrow><mn>1</mn><mo>+</mo><msub><mrow><mo>{</mo><mrow><mi>average</mi><mo></mo><mstyle><mspace width="0.8em" height="0.8ex" /></mstyle><mo></mo><mi>ebpm</mi></mrow><mo>}</mo></mrow><mi>k</mi></msub></mrow><mo>)</mo></mrow></mrow></mrow></mrow></mrow></mrow></math>
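The trade-off score above can be sketched as follows; passing the per-category average ebpm values as a list (so the normalizing sum over k can be computed) is an assumed data layout.

```python
import math

def tradeoff_score(alpha, n_gk, m_k, avg_ebpm_k, avg_ebpm_all_categories):
    """tradeoff score_{g,k} = alpha * (N_{g,k}/M_k)
       + (1 - alpha) * log(1 + avg_ebpm_k) / sum_k log(1 + avg_ebpm_k)."""
    # Normalizing denominator: summed over all provider service categories k.
    total = sum(math.log(1.0 + e) for e in avg_ebpm_all_categories)
    return alpha * (n_gk / m_k) + (1.0 - alpha) * math.log(1.0 + avg_ebpm_k) / total
```

Here alpha weights usage frequency against ebpm contribution: alpha = 1 ranks purely by how often the value is used, alpha = 0 purely by its normalized ebpm contribution.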
In other example embodiments, reinforcement learning models may be used to determine the ranking structure for particular promotion templates. The acceptance of the ranked/proposed values for quantity in one period could be considered as input for the following period. As such, various embodiments of the present invention may advantageously provide for scoring and/or ranking of various promotion templates for each of the provider service categories so as to generate promotions that provide a provider with the greatest benefits.
FIG. 6 illustrates a schematic block diagram of circuitry 600, some or all of which may be included in, for example, promotion template system 102 and/or provider device 110. As illustrated in FIG. 6, in accordance with some example embodiments, circuitry 600 may include various means, such as a processor 602, memory 604, communication module 606, input/output module 608 and/or promotion template module 610.
In some embodiments, such as when circuitry 600 is included in a promotion template system 102 and/or mobile device 110, promotion template module 610 may be included. As referred to herein, “module” includes hardware, software and/or firmware configured to perform one or more particular functions. In this regard, the means of circuitry 600 as described herein may be embodied as, for example, circuitry, hardware elements (e.g., a suitably programmed processor, combinational logic circuit, and/or the like), a computer program product comprising computer-readable program instructions stored on a non-transitory computer-readable medium (e.g., memory 604) that is executable by a suitably configured processing device (e.g., processor 602), or some combination thereof.
Processor 602 may, for example, be embodied as various means including one or more microprocessors with accompanying digital signal processor(s), one or more processor(s) without an accompanying digital signal processor, one or more coprocessors, one or more multi-core processors, one or more controllers, processing circuitry, one or more computers, various other processing elements including integrated circuits such as, for example, an ASIC (application specific integrated circuit) or FPGA (field programmable gate array), or some combination thereof. Accordingly, although illustrated in FIG. 6 as a single processor, in some embodiments, processor 602 comprises a plurality of processors. The plurality of processors may be embodied on a single computing device or may be distributed across a plurality of computing devices collectively configured to function as circuitry 600. The plurality of processors may be in operative communication with each other and may be collectively configured to perform one or more functionalities of circuitry 600 as described herein. In an example embodiment, processor 602 is configured to execute instructions stored in memory 604 or otherwise accessible to processor 602. These instructions, when executed by processor 602, may cause circuitry 600 to perform one or more of the functionalities of circuitry 600 as described herein.
Whether configured by hardware, firmware/software methods, or by a combination thereof, processor 602 may comprise an entity capable of performing operations according to embodiments of the present invention while configured accordingly. Thus, for example, when processor 602 is embodied as an ASIC, FPGA or the like, processor 602 may comprise specifically configured hardware for conducting one or more operations described herein. As another example, when processor 602 is embodied as an executor of instructions, such as may be stored in memory 604, the instructions may specifically configure processor 602 to perform one or more algorithms and operations described herein.
Memory 604 may comprise, for example, volatile memory, non-volatile memory, or some combination thereof. Although illustrated in FIG. 6 as a single memory, memory 604 may comprise a plurality of memory components. The plurality of memory components may be embodied on a single computing device or distributed across a plurality of computing devices. In various embodiments, memory 604 may comprise, for example, a hard disk, random access memory, cache memory, flash memory, a compact disc read only memory (CD-ROM), digital versatile disc read only memory (DVD-ROM), an optical disc, circuitry configured to store information, or some combination thereof. Memory 604 may be configured to store information, data, applications, instructions, or the like for enabling circuitry 600 to carry out various functions in accordance with example embodiments discussed herein. For example, in at least some embodiments, memory 604 is configured to buffer input data for processing by processor 602. Additionally or alternatively, in at least some embodiments, memory 604 may be configured to store program instructions for execution by processor 602. Memory 604 may store information in the form of static and/or dynamic information. This stored information may be stored and/or used by circuitry 600 during the course of performing its functionalities.
Communications module 606 may be embodied as any device or means embodied in circuitry, hardware, a computer program product comprising computer readable program instructions stored on a computer readable medium (e.g., memory 604) and executed by a processing device (e.g., processor 602), or a combination thereof that is configured to receive and/or transmit data from/to another device, such as, for example, a second circuitry 600 and/or the like. In some embodiments, communications module 606 (like other components discussed herein) can be at least partially embodied as or otherwise controlled by processor 602. In this regard, communications module 606 may be in communication with processor 602, such as via a bus. Communications module 606 may include, for example, an antenna, a transmitter, a receiver, a transceiver, network interface card and/or supporting hardware and/or firmware/software for enabling communications with another computing device. Communications module 606 may be configured to receive and/or transmit any data that may be stored by memory 604 using any protocol that may be used for communications between computing devices. Communications module 606 may additionally or alternatively be in communication with the memory 604, input/output module 608 and/or any other component of circuitry 600, such as via a bus.
Input/output module 608 may be in communication with processor 602 to receive an indication of a user input and/or to provide an audible, visual, mechanical, or other output to a user. Some example visual outputs that may be provided to a user by circuitry 600 are discussed in connection with the displays described above. As such, input/output module 608 may include support, for example, for a keyboard, a mouse, a joystick, a display, an image capturing device, a touch screen display, a microphone, a speaker, a RFID reader, barcode reader, biometric scanner, and/or other input/output mechanisms. In embodiments wherein circuitry 600 is embodied as a server or database, aspects of input/output module 608 may be reduced as compared to embodiments where circuitry 600 is implemented as an end-user machine (e.g., consumer device and/or merchant device) or other type of device designed for complex user interactions. In some embodiments (like other components discussed herein), input/output module 608 may even be eliminated from circuitry 600. Input/output module 608 may be in communication with memory 604, communications module 606, and/or any other component(s), such as via a bus. Although more than one input/output module and/or other component can be included in circuitry 600, only one is shown in FIG. 6 to avoid overcomplicating the drawing (like the other components discussed herein).
Promotion template module 610 may also or instead be included and configured to perform the functionality discussed herein related to facilitating the generation of a promotion and/or a promotion template, as discussed above. In this regard, the example processes and algorithms discussed herein can be performed by at least one processor 602 and/or promotion template module 610. For example, non-transitory computer readable storage media can be configured to store firmware, one or more application programs, and/or other software, which include instructions and other computer-readable program code portions that can be executed to control processors of the components of system 600 to implement various operations, including the examples shown above. As such, a series of computer-readable program code portions may be embodied in one or more computer program products and can be used, with a computing device, server, and/or other programmable apparatus, to produce the machine-implemented processes discussed herein.
Any such computer program instructions and/or other type of code may be loaded onto a computer, processor or other programmable apparatus's circuitry to produce a machine, such that the computer, processor or other programmable circuitry that executes the code may be the means for implementing various functions, including those described herein.
The illustrations described herein are intended to provide a general understanding of the structure of various embodiments. The illustrations are not intended to serve as a complete description of all of the elements and features of apparatus, processors, and systems that utilize the structures or methods described herein. Many other embodiments may be apparent to those of skill in the art upon reviewing the disclosure. Other embodiments may be utilized and derived from the disclosure, such that structural and logical substitutions and changes may be made without departing from the scope of the disclosure. Additionally, the illustrations are merely representational and may not be drawn to scale. Certain proportions within the illustrations may be exaggerated, while other proportions may be minimized. Accordingly, the disclosure and the figures are to be regarded as illustrative rather than restrictive.
The above disclosed subject matter is to be considered illustrative, and not restrictive, and the appended claims are intended to cover all such modifications, enhancements, and other embodiments, which fall within the true spirit and scope of the description. Thus, to the maximum extent allowed by law, the scope is to be determined by the broadest permissible interpretation of the following claims and their equivalents, and shall not be restricted or limited by the foregoing detailed description.
BRIEF DESCRIPTION OF THE DRAWINGS
Having thus described embodiments of the invention in general terms, reference will now be made to the accompanying drawings, which are not necessarily drawn to scale, and wherein:
FIG. 1 illustrates an example system in accordance with some embodiments discussed herein;
FIG. 2 illustrates a flow chart detailing a method of generating a promotion template according to an example embodiment;
FIG. 3 illustrates a flow chart detailing a method of generating a promotion according to an example embodiment;
FIG. 4 illustrates a method of scoring a plurality of promotion templates according to an example embodiment; and
FIG. 5 illustrates a block diagram of circuitry which may be included in a promotion template system and/or a mobile device according to an example embodiment.