Getting My architects in essex To Work
In the early days of software development, little thought was given to how the applications and systems we built were architected. There were several reasons for this: first, software development was new and the idea hadn't been considered; and second, we didn't yet understand how important architecture was to the cost of maintaining our applications and systems. On sober reflection, we probably should have foreseen the need for planned architecture and architects, since building software isn't substantially different from building any other structure, for instance buildings and bridges. We can't go back and undo the damage done by the lack of foresight that led to badly architected applications and systems, but as project managers we can avoid making this mistake in our next software development project.
Today most organizations whose core competencies include software development recognize the importance of architecture to their business and have satisfied this need by creating the role of architect and making this person responsible for the architecture of all the software applications and systems they build. Even organizations whose core competencies don't include software development, but who have invested heavily in IT, have created this role. These people may be referred to as the Chief Architect, Head Architect, or Strategic Architect. Wikipedia recognizes three different categories of architect depending on the scope of their responsibilities: the enterprise architect, who is responsible for all of an organization's applications and systems; the solution architect, who is responsible for the design of a system comprised of one or more applications and hardware platforms; and the application architect, whose responsibility is limited to one application. The category and number of architects will usually be constrained by the size of the organization and the number of applications and systems it supports. Regardless of what the organization you work for calls them, the software architect has a key role to play on your software project.
Your job as project manager of a software development project, where a software architect is in place, is to ensure that their work is properly defined and organized so that your project receives maximum benefit from their expertise. If the organization does not have an architect in place, you will have to identify someone on your team to fill that role. What is not acceptable is to plan the project without any acknowledgment of the need for, or importance of, the architect. This role requires as much knowledge of the system components as possible, including both software and hardware knowledge. It also requires deep technical knowledge of the technology being used, hardware and software alike, and strong analytical skills. The person (other than a software architect) who most likely possesses a similar skill set is a business or systems analyst. Depending on the size and complexity of the existing system, and of your project, existing skills may not be sufficient to meet your project's needs. There are plenty of training opportunities available, so choose one that most closely suits your needs and have your candidate attend. If your project has enough budget to pay for the training, fine. If not, keep in mind that the skills acquired by the trainee will be available to the organization after your project is completed, so your project should not have to bear the full cost of the training.
Now that you have a qualified software architect engaged for your project, you need to plan that person's tasks to take maximum advantage of their skills. I suggest engaging the architect as early in the project as possible so that they can influence the definition of the application or system being developed. The team that defines the business requirements for your project will be from the business side of the organization and will have deep knowledge of how the business runs but little knowledge of the existing systems and the technical features of the hardware and software that will deliver the solution. Having a software architect available during requirements-gathering exercises will help you define requirements that leverage existing system and solution platform strengths and avoid weaknesses. Leaving their input until a later phase exposes your project to the risk of re-engineering the solution to fit the existing architecture, or to avoid solution weaknesses, after the fact. Involve the software architect in requirements-gathering exercises as a consultant or SME (subject matter expert) who can point out risks in defining requirements and offer alternative solutions.
The key deliverable your architect is responsible for is the architectural drawing. This is not actually a single drawing but a combination of drawings and text. The drawings represent the various components of the system and their relationships to each other. The text describes data elements, relationships between the various architectural elements, and any standards developers must adhere to. The drawing may be a new one to represent a new system, or it may be an update of an existing drawing to reflect the changes your project makes to an existing system. The creation of the architectural drawing is the first design activity in your project schedule. The drawing is used in the same fashion that engineering staff and skilled tradespeople use the construction drawing of a building or bridge.
know more about architects London here. | https://eleusis.biz/getting-my-architects-in-essex-to-work/ |
---
abstract: 'We employ machine learning techniques to investigate the volume minimum of Sasaki-Einstein base manifolds of non-compact toric Calabi-Yau 3-folds. We find that the minimum volume can be approximated via a second order multiple linear regression on standard topological quantities obtained from the corresponding toric diagram. The approximation improves further after invoking a convolutional neural network with the full toric diagram of the Calabi-Yau 3-folds as the input. We are thereby able to circumvent any minimization procedure that was previously necessary and find an explicit mapping between the minimum volume and the topological quantities of the toric diagram. Under the AdS/CFT correspondence, the minimum volumes of Sasaki-Einstein manifolds correspond to central charges of a class of $4d$ $\mathcal{N}=1$ superconformal field theories. We therefore find empirical evidence for a function that gives values of central charges without the usual extremization procedure.'
author:
- 'Daniel Krefl$^{a}$ and Rak-Kyeong Seong$^{b}$'
bibliography:
- 'mybib.bib'
title: 'Machine Learning of Calabi-Yau Volumes'
---
Introduction
============
In recent years machine learning has become a cornerstone for many fields of science and it has been adopted more and more as a valuable toolbox. Machine learning has attracted much interest due to significant theoretical progress and due to increased availability of large amounts of data, computing power (GPUs) and easy to use software implementations of standard machine learning techniques.
Despite these developments, applications of machine learning techniques to mathematical physics have, to our knowledge, been limited. One of the reasons for this is that machine learning aims to empirically approximate the underlying probability density function of a given dataset. Making use of machine learning to identify hidden structures in datasets, structures that can teach us about new phenomena in string theory and mathematics, has not been systematically considered before.
This work aims to change the status quo and to provide evidence that machine learning can be used to discover hidden structures in large classes of gauge theories that are studied in theoretical physics as well as large classes of geometries that are studied in mathematics. Importantly, we illustrate that machine learning does not just provide an approximation of known functional relationships between physically and mathematically significant quantities, but also leads to discoveries of new functional relationships.
In particular, we concentrate on a class of $4d$ $\mathcal{N}=1$ supersymmetric gauge theories that live on the worldvolume of a stack of D3-branes probing toric Calabi-Yau 3-folds, characterized by convex lattice polygons known as toric diagrams [@Klebanov:1998hh; @Hanany:1997tb; @Hanany:1998it; @Franco:2005rj]. These theories are expected to flow at low energies to a superconformal fixed point.\
From a machine learning perspective, this work studies the minimum volumes of Sasaki-Einstein 5-manifolds. These are the base manifolds of the probed toric Calabi-Yau 3-folds. The minimum volume is of particular interest because under the AdS/CFT correspondence, it is expected to be related to the maximized $a$-function that gives the central charge of the $4d$ $\mathcal{N}=1$ superconformal field theory [@Gubser:1998vd; @Henningson:1998gx; @Intriligator:2003jj; @Butti:2005vn; @Butti:2005ps].
Using a large dataset of toric Calabi-Yau 3-folds, our aim is to train a machine learning model in such a way that it approximates a functional relationship between topological quantities of the toric Calabi-Yau 3-fold and the minimum volume of the Sasaki-Einstein base manifold. Such a functional relation would be of great use because it would circumvent the standard volume minimization procedure and highlight a direct relationship between topological quantities of the toric Calabi-Yau geometries and the central charges of the $4d$ superconformal field theories.
We ask whether for a given dataset consisting of minimal volumes $V_{min}$ and topological quantities $\mathcal T$ of the corresponding Calabi-Yau 3-folds, we can approximate from the data a mapping $F$ such that $$V_{min} \sim F(\mathcal T)\,.$$ Here, we use machine learning to model $F$. Mainly, we will consider three different machine learning models. These are a modification of usual linear regression, a convolutional neural network (CNN) and also a combination of both. In detail, a CNN model is a feed-forward neural network which includes additional convolutional layers.
We refer the reader to [@LeCun2015] for a basic introduction on CNN models and [@Schmidthuber] for a comprehensive reference list.\
Background {#sback}
==========
We concentrate on non-compact Calabi-Yau 3-folds $\mathcal{X}$ that are realized as affine cones over a complex base $X$. In particular, we focus on a special subclass of $\mathcal{X}$ where the base is a toric variety $X(\Delta)$, which is defined by a convex lattice polygon $\Delta$ known as the toric diagram. $\mathcal{X}$ can also be thought of as the real cone over a compact, smooth Sasaki-Einstein 5-manifold $Y$, whose metrics have been studied extensively for various classes of toric Calabi-Yau 3-folds. The Kähler metric of $\mathcal{X}$ has the form $$\ud s^2(\mathcal{X}) = \ud r^2 + r^2\, \ud s^2(Y)\,,$$ where $Y=\mathcal{X}|_{r=1}$.
When we talk of Calabi-Yau volumes, we actually refer to the volume function of the Sasaki-Einstein base $Y$, which takes the form $$\mathrm{vol}[Y] = \int_Y \ud\mu = \int_{r\leq 1} \omega^3\,,$$ where $\ud \mu$ is the Riemannian measure on the cone $\mathcal{X}$. $\omega$ is the Kähler form and is given by $$\omega = -\frac{1}{2}\,\ud (r^2 \eta) = \frac{i}{2}\,\partial\bar{\partial}\, r^2\,,$$ where $\eta$ is a global one-form on $Y$. Normalized under the volume of $S^5$, we denote the volume function of $Y$ as $$V(b;Y) := \frac{\mathrm{vol}[Y]}{\mathrm{vol}[S^5]}\,.$$ Note that $V(b;Y)$ is an algebraic number and is expressed in terms of Reeb vector components $b_{i=1,\dots,3}$, where $b_3=3$ [@Martelli:2005tp; @Martelli:2006yb]. For a given projective variety $X$, realized as an affine variety in $\mathbb{C}^k$, the Hilbert series is the generating function for the dimension of the graded pieces of the coordinate ring $\mathbb{C}[x_1,\dots,x_k]/\langle f_i\rangle$, where $f_i$ are the defining polynomials of $X$. The Hilbert series of $X$ takes the form of a rational function with the expansion, $$g(t;X) = \sum_{i=0}^{\infty} \dim_{\mathbb{C}}(X_i)\, t^i\,,$$ where the $i$th graded piece $X_i$ can be thought of as the number of algebraically independent degree $i$ polynomials on the variety $X$, with $t$ keeping track of the degree $i$.
![ (a) shows the toric diagram for $\mathbb{C}^3/\mathbb{Z}_3$ and the corresponding ideal triangulation into unit area triangles. (b) is the corresponding dual web-diagram with normal vectors to each boundary edge of the triangulation. Notice that we have 3-vectors, given that the original toric diagram is on a plane at height 1. []{data-label="ftorictriang"}](torictriang.pdf)
When $X$ is a toric variety defined by a convex lattice polygon $\Delta$, the Hilbert series for $X(\Delta)$ and the corresponding Calabi-Yau cone $\mathcal{X}$ can be obtained from the ideal triangulation of the toric diagram $\Delta$, $$g(t_i; \mathcal{X}) = \sum_{i=1}^{r}\prod_{j=1}^{n}\left(1-\vec{t}^{~\vec{u}_{i,j}}\right)^{-1}\,,$$ where the index $i=1,\dots,r$ runs over the unit triangles in the ideal triangulation and $j=1,2,3$ runs over the boundary edges of each such triangle [@Martelli:2005tp; @Benvenuti:2006qr]. $\vec{u}_{i,j}$ is a 3-dimensional outer normal to the edge $j$ of the associated unit triangle $i$, where $\vec{t}^{~\vec{u}_{i,j}}= \prod_{a}^3 t_a^{u_{i,j}(a)}$. Note that we are dealing with 3-vectors because the 2-dimensional toric diagram is on a plane at height 1. Using this formula, the Hilbert series for $\mathbb{C}^3/\mathbb{Z}_3$, whose toric diagram is shown in the figure above, is obtained as a sum of three such products, one for each unit triangle in the ideal triangulation.
The volume function can be derived directly from the Hilbert series of $\mathcal{X}$ via the limit $$V(b_i; Y) = \lim_{\mu\rightarrow 0}\, \mu^3\, g(t_i = \exp(-\mu b_i); \mathcal{X})\,.$$ The leading order in $\mu$ picked up by the above limit from the expansion of the Hilbert series was shown in [@Martelli:2005tp; @Martelli:2006yb] to be directly related to the volume of the Sasaki-Einstein base $Y=\mathcal{X}|_{r=1}$. For the $\mathbb{C}^3/\mathbb{Z}_3$ example, the volume function obtained in this way is a rational function of the Reeb vector components $b_1$ and $b_2$, where $b_3=3$.
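As a quick sanity check of this limit, the following sketch (ours, not from the paper) applies it symbolically to the simplest possible Hilbert series, that of flat $\mathbb{C}^3$, $g(t_i;\mathbb{C}^3)=\prod_{a=1}^{3}(1-t_a)^{-1}$, and recovers $1/(b_1b_2b_3)$; it only illustrates the mechanics of the limit, not the paper's normalization conventions.

```python
# Sketch: extract a volume function from a Hilbert series via the mu -> 0 limit.
import sympy as sp

mu = sp.symbols("mu", positive=True)
b1, b2, b3 = sp.symbols("b1 b2 b3", positive=True)

# Hilbert series of C^3, with t_a = exp(-mu*b_a); used only to test the limit formula
t1, t2, t3 = (sp.exp(-mu * b) for b in (b1, b2, b3))
g = 1 / ((1 - t1) * (1 - t2) * (1 - t3))

V = sp.limit(mu**3 * g, mu, 0)
print(sp.simplify(V))   # 1/(b1*b2*b3)
```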
Volume Minimization and the AdS/CFT correspondence
==================================================
The worldvolume theory on a stack of D3-branes probing Calabi-Yau 3-folds $\mathcal{X}$ is a $4d$ $\mathcal{N}=1$ supersymmetric gauge theory. It is expected that these theories flow at low energies to a superconformal fixed point. The superconformal R-charges of the theory are determined by a procedure known as $a$-maximization [@Intriligator:2003jj; @Butti:2005vn; @Butti:2005ps], which involves the maximization of the $a$-charge, $$a(R; Y) = \frac{3}{32}\left(3\, \mathrm{tr}\, R^3 - \mathrm{tr}\, R\right)\,.$$ $a$-maximization gives the value of the central charge at the conformal fixed point, which is by the AdS/CFT correspondence related to the volume minimum of the corresponding Sasaki-Einstein 5-manifold [@Gubser:1998vd; @Henningson:1998gx] under $$a(R; Y) = \frac{\pi^3 N^2}{4\, \mathrm{vol}[Y]}\,,$$ where the R-charge $R$ can be expressed in terms of Reeb vector components $b_i$. In other words, computing the minimum volume, $$V_{min} = \min_{\{b_i \,|\, b_3=3\}} V(b_i; Y)\,,$$ is equivalent, under the above relation, to computing the maximized value of $a(R;Y)$, which is the central charge of the $4d$ $\mathcal{N}=1$ superconformal field theory.
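For orientation, the minimization step that we would like to bypass can be carried out numerically along the following lines. This is a minimal sketch (ours); `volume_fn` is a stand-in for the explicit rational function $V(b_1,b_2,b_3=3;Y)$ obtained from the Hilbert series of a given toric diagram and is not defined here.

```python
# Sketch of the volume-minimization step that the trained models will replace.
import numpy as np
from scipy.optimize import minimize

def minimize_volume(volume_fn, b0=(1.0, 1.0)):
    """Minimize V(b1, b2, b3=3; Y) over the Reeb vector components b1, b2."""
    objective = lambda b: volume_fn(b[0], b[1], 3.0)
    result = minimize(objective, x0=np.asarray(b0), method="Nelder-Mead")
    return result.x, result.fun          # (minimizing (b1, b2), V_min)

# usage with a user-supplied volume function:
#   b_min, V_min = minimize_volume(my_volume_fn)
#   y = 1.0 / V_min                      # the regression/CNN target used below
```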
Data {#sdata}
====
Our aim is to train a neural network to compute the volume minimum directly as a function of toric data, circumventing the minimization procedure that has been so far necessary. The available input data for the machine learning models for a given toric Calabi-Yau 3-fold takes the following form $$(y, \hat{f})\,, \qquad \hat{f}=(f_1,f_2,f_3,\mathcal{D})\,,$$ where $y=1/V_{min}$ is the target inverse minimum volume, and $f_i$ are the three features $$f_1= I\,, \quad f_2=E\,, \quad f_3=V\,,$$ with $I$ being the number of internal lattice points, $E$ being the number of perimeter points and $V$ being the number of extremal corner points of the convex lattice polygon representing the toric diagram. Note that $2f_2 - 4$ is the Euler number of the corresponding toric variety [@He:2017gam]. In addition, we include the toric diagram itself as a square matrix $\mathcal D$, consisting of $0,1$ entries, where an entry of $1$ indicates the presence of an extremal vertex of the lattice polygon.
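As a concrete illustration of the three features, the following sketch (ours, not part of the paper) computes $I$, $E$ and $V$ for a convex lattice polygon given by its extremal vertices in order; the boundary count uses the standard gcd formula and the interior count follows from Pick's theorem.

```python
# Sketch: features f1 = I, f2 = E, f3 = V of a convex lattice polygon whose
# extremal vertices are given in (counter-)clockwise order.
from math import gcd

def polygon_features(vertices):
    edges = list(zip(vertices, vertices[1:] + vertices[:1]))
    # twice the area via the shoelace formula
    area2 = abs(sum(x1 * y2 - x2 * y1 for (x1, y1), (x2, y2) in edges))
    # lattice points on the perimeter: sum of gcd(|dx|, |dy|) over the edges
    E = sum(gcd(abs(x2 - x1), abs(y2 - y1)) for (x1, y1), (x2, y2) in edges)
    # Pick's theorem: area = I + E/2 - 1, hence I = (2*area - E + 2)/2
    I = (area2 - E + 2) // 2
    V = len(vertices)                     # number of extremal corner points
    return I, E, V

# Example: the toric diagram of C^3 (a unit triangle) gives I = 0, E = 3, V = 3.
print(polygon_features([(0, 0), (1, 0), (0, 1)]))
```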
We generate a class of toric Calabi-Yau’s whose toric diagrams originate from the toric diagram of the orbifold of the conifold of the form $\mathcal{C}/\mathbb{Z}_5\times \mathbb{Z}_5$, which is a lattice square with side-length 5. By consecutively cutting corners of this toric diagram, we generate 187,389 distinct toric diagrams. However, this set of toric diagrams exhibits a remaining $GL(2,\mathbb{Z})$ redundancy and hence certain toric diagrams from this set can be related to the same toric Calabi-Yau 3-fold. We therefore remove the $GL(2,\mathbb{Z})$ redundancy and further reduce the number of toric diagrams down to 15,151, which now establishes a set of distinct toric Calabi-Yau 3-folds. Using the integer rounded centroid of the convex lattice polygons, we re-center the toric diagrams. All the 15,151 re-centered toric diagrams then fit into a 7x7 lattice square, which we further embed into a 9x9 lattice square. Accordingly, $\mathcal{D}$ for our dataset is a 9x9 integer matrix with entries $0,1$ . In , we illustrate the distribution of the extremal vertices of all the 15,151 toric diagrams we use for our analysis.
![ The distribution of extremal vertices of toric diagrams for the set of 15,151 distinct toric Calabi-Yau 3-folds that are used as train and test sets for our machine learning models. []{data-label="ftoricdiadistr"}](toricdiadistr.pdf)
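The re-centering and embedding just described might be implemented as in the following sketch (our construction; the exact rounding and placement conventions used by the authors are not spelled out, so treat these details as assumptions).

```python
# Sketch: re-center a toric diagram at its integer-rounded centroid and embed its
# extremal vertices into a 9x9 binary matrix D.
import numpy as np

def embed_toric_diagram(vertices, size=9):
    verts = np.asarray(vertices, dtype=int)
    centroid = np.rint(verts.mean(axis=0)).astype(int)   # integer-rounded centroid
    shifted = verts - centroid                           # re-center at the origin
    D = np.zeros((size, size), dtype=int)
    offset = size // 2                                   # origin at the matrix center
    for x, y in shifted:                                 # assumes the shifted diagram fits
        D[y + offset, x + offset] = 1                    # mark an extremal vertex
    return D

# Example: a 5x5 lattice square with one corner cut (vertices chosen for illustration).
print(embed_toric_diagram([(0, 0), (5, 0), (5, 5), (1, 5), (0, 4)]))
```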
Using Hilbert series we compute the volume function $V(b_i;Y)$ for our dataset and minimize them to obtain $V_{min}$. Given the entire dataset with the minimized volumes, we identify 4 cases where the value of $y=1/V_{min}$ is much larger than the remaining dataset. In order to keep a non-distorted dataset, we remove these 4 cases ending up with a dataset of size 15,147. We also note that the minimum volumes are algebraic numbers, which can be irrational, and that we round the actual values for the machine learning model to 4 decimal points. This gives us 797 distinct values for the minimum volume under the chosen numerical precision, corresponding to the 15,147 distinct toric Calabi-Yau 3-folds. Finally, for each toric diagram in our set of Calabi-Yau 3-folds, we identify the features $f_i$ that lead us to 15,147 input data of the form shown in . An example is illustrated in .
![ An example of a data entry vector for a particular toric Calabi-Yau 3-fold in our dataset. (a) shows the toric diagram obtained by cutting corners of a 5x5 lattice square and (b) shows the corresponding extremal lattice points of the toric diagram embedded in a 9x9 input matrix. The full data vector for this toric Calabi-Yau 3-fold is summarized in (c). []{data-label="ftoricdatasetexample"}](toricdatasetexample.pdf)
As neural networks easily overfit, meaning that they tend to memorize the specific training set instead of learning the underlying hidden structure, we have to be very careful in data preparation and usage in order not to fool ourselves. We here follow the common approach to split the data into an independent train (75%) and test (25%) set, where the machine learning models are only trained on the train set. We do not use an additional verification set, as we will not perform extensive hyper-parameter optimization. It is important to note that we have constructed our dataset of 15,147 toric Calabi-Yau 3-folds in such a way that a split will give $GL(2,\mathbb{Z})$ independent train and test sets.
Multiple Linear Regression
==========================
![Illustration of the linear regression model. The features are weighted by the respective weights and added to form the output. The bias weight $w_0$ is illustrated as an additional feature, which is fixed to 1. []{data-label="fLRnetdia"}](LRnetdia.pdf)
Let us first see how well we can model the data of the three features given in [(\[classicalFeatures\])]{} via a simple multiple linear regression, [[*i.e.*]{}]{}, $$y^{(n)}\sim F(f^{(n)})=\sum_{i=1}^{k_f} \omega_i f_i^{(n)} +\omega_0\,,$$ where $\omega_i$ denotes the $i$th weight, and the dataset is given by $(y,f)$, with $f$ being the features for target $y$. The weight $\omega_0$ is usually referred to as the bias and can be viewed as the weight of an additional feature fixed to 1. Furthermore, we improve the modeling non-linearly by taking order 2 combinations of the $k_f=3$ original features, which yields $$\hat{k}_f=2 k_f+ \binom{k_f}{2}$$ new features of the form $$\hat{f} = (f_1,\ldots, f_{k_f}, f_1f_2,f_1 f_3,\ldots,f_1^2,\ldots,f_{k_f}^2)\,.$$ The figure above illustrates the setup of the linear regression model.
The optimization task is the usual mean least squares minimization of $L$, where $$L=\sum_{n=1}^N \left(y^{(n)}-F(f^{(n)})\right)^2\,.$$ Though one can solve for $\omega_i$ exactly due to the convexity of the optimization problem, we prefer here to solve iteratively via stochastic gradient descent using the Python package Keras [@Keras] (with Theano [@Theano] backend) and the Nadam optimizer in default settings running for 5000 epochs (with batch size 1000). Note that we will use the same software stacks and settings in the following section. The solution for the training set obtained via gradient descent reads (up to four digits)
$$\begin{aligned}
\omega_1&= 1.9574, & \omega_2&= 0.8522, & \omega_3&= -0.7658,\\
\omega_4&= -0.0138, & \omega_5&= -0.0020, & \omega_6&= -0.0104,\\
\omega_7&= -0.0120, & \omega_8&= -0.0523, & \omega_9&= -0.0478,\\
& & \omega_0&= 1.3637. & &
\end{aligned}$$
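The regression described above is straightforward to reproduce. The following is a minimal sketch written against Keras (the paper used a Theano backend); the 75/25 split, the use of scikit-learn helpers and all variable names are our own choices rather than the authors' exact script.

```python
# Sketch: order-2 multiple linear regression on the features (I, E, V).
# X has shape (N, 3) with rows (f1, f2, f3); y has shape (N,) with y = 1/V_min.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import PolynomialFeatures
from keras.models import Sequential
from keras.layers import Dense

def fit_linear_regression(X, y):
    # order-2 feature combinations: f1, f2, f3, f1^2, f1*f2, f1*f3, f2^2, f2*f3, f3^2
    X2 = PolynomialFeatures(degree=2, include_bias=False).fit_transform(X)
    X_tr, X_te, y_tr, y_te = train_test_split(X2, y, test_size=0.25)
    model = Sequential([Dense(1, input_shape=(X2.shape[1],), activation="linear")])
    model.compile(optimizer="nadam", loss="mse")            # Nadam, mean squared error
    model.fit(X_tr, y_tr, epochs=5000, batch_size=1000, verbose=0)
    eps = 1.0 - model.predict(X_te).ravel() / y_te          # percentage errors
    print(np.abs(eps).mean(), np.abs(eps).std(), np.abs(eps).max())
    return model
```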
One should note that the features $f^{(n)}$ are not unique in our dataset. In fact, we have only 645 unique feature combinations for all the toric diagrams in our test dataset. Hence, we expect that the solution Ansatz [(\[linearAnsatz\])]{} built on the three features models the statistical expectation $E(y)$ (mean). After categorizing the test dataset into 645 categories $\mathcal C_i$, we take the expectation $E(y^{\mathcal C_i})$ of the datasets in each category. Furthermore, we calculate the largest and smallest $y^{\mathcal C_i}$ in each category and order the categories according to their $E(y^{\mathcal C_i})$. Then we plot the minimum and maximum of each category as well as $F(f^{\mathcal C_i})$ against $E(y^{\mathcal C_i})$, leading to the plot in . Note that the diagonal of the figure corresponds to $E(y^{\mathcal C})$. We observe that the prediction of $F(f^{\mathcal C})$ (red curve) indeed seems to roughly approximate the mean of the $y$ values in each of the categories.
![The $x$-axis corresponds to $E(y^{\mathcal C})$ of the 645 categories. The red curve plots the prediction of $y$ via linear regression for the classes. The blue dots indicate the maximum $y$ taken for a class of values $\mathcal C$ and the green dots the minimum value.[]{data-label="LRplot1"}](plot1.pdf)
How well does this simple Ansatz actually model the true $y$, or rather the minimum volume $1/y$ of interest? We run the test set through the predictor $F$ and calculate the percentage errors (in $\times100\%$) $$\epsilon^{(n)}= 1-\frac{F(f^{(n)})}{y^{(n)}}\,.$$ We observe a maximum error $|\epsilon_{max}|\sim 0.132$ and $$E(|\epsilon|)\sim0.022\,, \qquad \sigma(|\epsilon|)\sim0.017\,,$$ where $\sigma(|\epsilon|)$ denotes the standard deviation of the distribution of the absolute value of the percentage errors. We also averaged the values over three independent runs.
Hence, the expected prediction error of the minimal volume is around $2.2\%$. This means that the linear regression Ansatz already yields a surprisingly good approximator to the minimal volume. Here we should make a remark regarding the order of feature combinations that was taken in . Taking order 2 combinations gives a significant improvement in reducing the error in comparison to taking the plain three features, as the extended linear regression model seems to be able to learn better the more extreme volumes at the tails of the distribution of the minimum volumes. Including in addition order 3 combinations does not seem to yield any further improvement but rather seems to lead to a worse result.
In general, in order to improve this method further, we need to introduce additional features which are able to distinguish between the individual members of a class $\mathcal C_i$. However, instead of hand-crafting new features, we try in the following section the more modern approach to let the approximator $F$ learn the appropriate features [*itself*]{} from the raw data.
WIDE AND CNN
============
![The wide and deep model. The toric diagram data $\mathcal D$ is fed into a convolutional layer and further processed in two fully connected layers. The outputs are linearly combined with the output of a linear regression on the features $\hat{f}$. []{data-label="fLRCNNnetdia"}](LRCNNnetdia.pdf)
The raw data in its purest form is the toric diagram itself, hence it behooves us to ask if we can learn additional information directly from the toric diagram, in order to minimize further the prediction error. Since we treat the toric diagram as a $9\times 9$ matrix viewed as an image, the canonical computer science approach is to invoke a CNN – the main tool for image recognition.
We couple the linear regression, as described in the previous section, with a CNN as follows. The output of the linear regression is added via a single ReLu unit (rectified linear, $\max(x,0)$) to the outputs of the CNN, [[*i.e.*]{}]{}, $$y^{(n)}\sim\max\left(\omega^{f}_1 F(f^{(n)}) + \sum_{i=1}^{o}\omega^{f}_{2i}\, m_i(\mathcal{D}^{(n)}) +\omega^{f}_0\,,\, 0\right),$$
where $m_i$ denotes the $i$th output of the CNN and $\omega^f$ are the weights of the final layer. We use the ReLu unit for the final output because the minimum volume has to be positive. Note that this kind of setup is also known as a wide and deep model.
The CNN is set up as follows. For the input layer we take a $2d$ convolutional layer consisting of 32 filters (size $3\times 3$) and linear activation. The filters are convolved against the input and each produces a $2d$ activation map. Hence, the layer learns spatially localized features of the inputs. For an illustration of the convolution layers see . This is followed by two relatively small fully connected layers (we use sizes 12 and 4) with $\tanh$ activation. Hence we have 4 outputs $m_i$ in the CNN part. The precise network architecture is not of utmost relevance, as the results appear to be relatively stable against modifications in the number of layers, units in each layer, etc. However, smaller networks with $\tanh$ activation functions in the dense layers generally seem to be preferred. The figure above illustrates the combined setup.
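A sketch of this wide and deep architecture in the Keras functional API is given below (our reconstruction; anything not quoted in the text, such as the way the two branches are concatenated before the final unit, is an assumption).

```python
# Sketch: wide and deep model combining a linear regression branch on the order-2
# features with a small CNN acting on the 9x9 toric diagram matrix D.
from keras.models import Model
from keras.layers import Input, Conv2D, Flatten, Dense, Concatenate

def build_wide_and_deep(n_wide_features=9, grid=9):
    # "deep" branch: CNN on the toric diagram
    img_in = Input(shape=(grid, grid, 1))
    x = Conv2D(32, (3, 3), activation="linear")(img_in)   # 32 filters of size 3x3
    x = Flatten()(x)
    x = Dense(12, activation="tanh")(x)
    m = Dense(4, activation="tanh")(x)                    # the 4 CNN outputs m_i
    # "wide" branch: linear regression on the feature vector
    feat_in = Input(shape=(n_wide_features,))
    f = Dense(1, activation="linear")(feat_in)            # F(f)
    # single ReLU unit combining both branches
    out = Dense(1, activation="relu")(Concatenate()([f, m]))
    model = Model(inputs=[feat_in, img_in], outputs=out)
    model.compile(optimizer="nadam", loss="mse")
    return model
```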
The complete setup is trained on the train set as in the previous section via stochastic gradient descent minimizing the mean squared error.
Using the trained network, the prediction of the volume minima for the independent test set exhibits the following errors averaged over three independent test runs, $$E(|\epsilon|) \sim 0.009 \,,\,\,\,\,\sigma(|\epsilon|) \sim 0.009 \,.$$ Hence, the expected prediction error is below $1\%$. The maximum observed error reads $|\epsilon_{max}|\sim 0.20$. The distribution of errors for one test run is plotted in .
![The $x$-axis corresponds to the minimum volume $V_{min}$ and the y-axis to the percentage error $\epsilon$ ($\times100\%$). The blue dots correspond to the errors between the predicted and target values for the coupled linear regression and CNN.[]{data-label="LRCNNerrors"}](linRegCNNerrors.pdf)
We conclude that adding the CNN yields a significant improvement in predictive power. Note that the few larger errors visible in the plot are due to the tails of the minimum volume distribution, which the model is not able to predict extremely well due to the lack of training data available at the extreme values.
Finally, let us consider the case of using just the CNN alone, without being coupled to a linear regression branch. The used CNN is identical to the one above.
We obtain on the test set $$E(|\epsilon|) \sim 0.010 \,,\,\,\,\,\sigma(|\epsilon|) \sim 0.014 \,,$$ and $|\epsilon_{max}|\sim 0.51$. Note that these values are averaged as well over three independent test runs. For illustration, the individual errors for one test run are plotted in .
![The $x$-axis corresponds to the minimum volume $V_{min}$ and the y-axis to the percentage error $\epsilon$ ($\times100\%$). The blue dots correspond to the errors between the predicted and target values for the pure CNN.[]{data-label="CNNerrors"}](CNNerrors.pdf)
In general, the machine learning models have greater difficulty in learning the tails of the minimum volume distribution. This might be due to lack of data in this regime (the data distribution restricted to our frame is not uniform). We observe that the combined setup of both linear regression and CNN performs better. This is because linear regression seems to stabilize the prediction of the tail sections of the minimum volume distribution due to its knowledge about the properties of the feature vector classes, which we discussed in the previous section.
Summary and Outlook
===================
In this work, we have demonstrated that machine learning techniques, in particular neural networks, can be a useful addition to the toolbox of researchers tackling formal questions in mathematics and physics. A necessary condition for using machine learning techniques is that at least some aspect of the question can be translated to a data science problem.
This work studied whether the minimum volume of Sasaki-Einstein base manifolds for toric Calabi-Yau 3-folds can be directly computed from topological quantities originating from toric geometry, replacing the usual minimization procedure that is necessary to identify the minimum volume. This question has aspects of a data science problem, [[*i.e.*]{}]{}, out of known topological data and the corresponding minimal volumes, can we (or rather the machine) learn a mapping between these quantities (that is, find an approximate functional relation)?
The answer seems to be affirmative. Even taking for the machine learning model just a linear combination of order 2 combinations of numbers for different characteristic vertices in the toric diagrams of the Calabi-Yau 3-folds, yields already a good universal approximation to the minimal volume of the corresponding Sasaki-Einstein manifolds. In addition, a CNN model, which learns new kinds of features of the toric diagram, further improves predictive power for the volume minimum.
It is surprising to see that the simple setups we are considering are able to predict the minimum volumes relatively well, effectively showing that the minimum volume is encoded in the toric diagrams of the Calabi-Yau 3-folds. This is an indication that the procedure of volume minimization can be avoided. In fact, we show that the volume minimum can explicitly be computed from the toric data, by using an underlying functional relation, which we have approximated in this work.
With this analysis, we have shown a working example of how a machine learning model can identify functional relationships between mathematically and physically interesting quantities in cases where such functional relationships were not known before.
Note that it would be interesting to refine our analysis by increasing the dataset of toric Calabi-Yau’s in such a way that the minimum volume distribution is more uniform. Furthermore, averaging over more test runs as well as increasing the rounding precision for irrational values for minimum volumes would be important improvements that we leave for future work.
We believe that there are other suitable problems which can be approached from a data science perspective, similar to what we have done in this work for the minimum volume of toric Calabi-Yau 3-folds. Approaching problems in this way might yield some novel insights, as well as hints to hidden and unexpected relations between physically and mathematically relevant quantities that have not been observed before.
For example, large datasets of both physical and mathematical significance exist in the context of $4d$ $\mathcal{N}=1$ theories related to toric Calabi-Yau 3-folds [@Klebanov:1998hh; @Hanany:1997tb; @Hanany:1998it; @Franco:2005rj], as well as to a recently discovered new class of $2d$ $(0,2)$ theories related to toric Calabi-Yau 4-folds [@Franco:2015tya]. Furthermore, interesting rich datasets exist in relation to so called complete intersection Calabi-Yaus (CICYs) [@Candelas:1987kf] characterized by configuration matrices that can be taken as inputs for machine learning models. Finally, large datasets exist in relation to hyperbolic 3-manifolds related to knots [@jones1985polynomial], which may exhibit hidden structures that could be discovered using again machine learning techniques. In a future work [@KreflSeongYau2017], we hope to shed light on these interesting problems.
We thank S.-T. Yau for related discussions, collaborations and encouragement to pursue this project. R.-K. S. also thanks the CERN Theory Group, where this project was initiated and the Center for Mathematical Sciences and Applications at Harvard University and the Yau Mathematical Sciences Center at Tsinghua University, for their hospitality.
| |
The Reflections program is a nation-wide, PTA-sponsored contest open to all students. Students may enter their work in the areas of photography, musical composition, visual arts and literature. This program encourages and gives children an outlet for creative expression. All participants receive an award of acknowledgement. The winning pieces go on to compete at the state and national PTA level.
It’s time to get creative!!! Students K – 5 are invited to explore their talents and express themselves in this year's contest by telling THEIR story through dance choreography, film production, literature, music composition, photography and visual arts. Entry forms and rules are online (click link) and in the main office.
Jennings School Deadline is TBD.
*Please submit student entry form that includes ARTIST STATEMENT with original work to qualify.
** Please write name on the back of work to help with the judging process.
All entries will be judged and the Jennings committee will choose 1st, 2nd and 3rd place winners from each category. The winning submissions will go on to compete at the state level and can even go on to the national PTA level! So start creating and good luck! | http://jenningsschoolpta.org/programs-events/in-school-enrichment/reflections/ |
Welcome to MetaDynamics
We are challenged by the demands of change on all fronts - cost reductions, quality improvements, restructuring, mergers, increased competition, technological innovations and talent upgrades.
Change increasingly exceeds our ability to manage it constructively and creatively. Our mechanistic models for solving today's problems often prove inadequate. And, our linear approaches leave us feeling ill prepared for the challenges of tomorrow.
To be successful in the midst of chaos, we must fundamentally transform the ways we think and behave.
The challenges of change demand that we develop our capacities to learn. We must learn how to learn, individually and collectively.
MetaDynamics is committed to helping you build these capacities. | http://www.metadynamics.biz/index.php/2-uncategorised/143-welcome-to-metadynamics?tmpl=component&print=1 |
What is the Historical Importance of Ajanta Caves is a question often asked by people who are not aware of the value of the paintings done on the walls and the ceilings of these caves thousands of years ago. Ajanta caves are a series of twenty nine caves in the district of Aurangabad in the state of Maharashtra in India. These historical monuments are brilliant masterpieces of Buddhist architecture and sculpture. These caves are famous all over the world and they have been declared as a World Heritage Site by UNESCO in 1983. This article attempts to explain the historical significance of Ajanta caves.
Historical Importance of Ajanta Caves
Caves are divided into Chaityas and Viharas
Located at a distance of 40km from Jalgaon city, Ajanta caves are fine examples of the Buddhist art and architecture. These caves were discovered in 1819 though they are believed to have been constructed between 2nd century BC and 4th century AD. The paintings on the walls depict different images of not just Lord Buddha, but also different goddesses and characters from inspiring Jataka Tales. However, the most impressive of the paintings and the sculptures remain those of Lord Buddha in different poses. These paintings beautifully depict various events in the life of Lord Buddha. All the caves are divided into two categories namely the Chaityas or the shrines and the Viharas or the monasteries. Chaityas were used to worship Lord Buddha while the Viharas were used by the Buddhist monks for their meditation. These monks also carried out their studies in these monasteries.
Paintings and sculptures depict events of life of Buddha
The 29 caves or temples in Ajanta reflect the Mahayana and Hinayana sects of Buddhism. These caves contain some of the best Buddhist art pieces found anywhere in the world. These caves remained in use for nearly nine centuries after which they were abandoned because of persecution of Buddhist monks in India. No one was aware of the existence of these caves until 1819 when Ajanta caves were again discovered. These caves are made by cutting rocks of granite along hillside. It is said that not only the caves but also the paintings and the sculptures are the handiwork of Buddhist, Hindu, and Jain monks who stayed and prayed inside the caves during this period.
These caves contain not just sculptures but also paintings and frescos
Ajanta caves are unique in the sense that they incorporate the three elements of visual arts namely paintings, frescos, and sculpture together. The fusion of these three art forms makes these caves very important for the lovers of art and architecture. One unique feature of depicting Buddha makes use of symbols such as his footprints or his throne. Under the Mahayana tradition of Buddhist art, one finds colorful frescos and murals of Lord Buddha and sculptures showing not only Buddha, but other Bodhisattvas also. These caves also reflect the morals and values that were regarded highly in those times with the help of scenes of everyday life. Artists have made use of Jataka tales to depict the incarnations of Buddha in his previous lives. There are also inscriptions that contain names of princes and kings who donated generously to these Buddhist monks. In general, Ajanta caves reflect the brilliant Buddhist art that rose and flourished during the reigns of Chalukya and Rashtrakuta rulers.
The magnificent Buddhist art in Ajanta caves had a great influence in the development of art and architecture in India.
| https://pediaa.com/what-is-the-historical-importance-of-ajanta-caves/ |
How OpenMDAO Represents Variables
In general, a numerical model can be complex, multidisciplinary, and heterogeneous. It can be decomposed into a series of smaller computations that are chained together by passing variables from one to the next.
In OpenMDAO, we perform all these numerical calculations inside a Component, which represents the smallest unit of computational work the framework understands. Each component will output its own set of variables. Depending on which type of calculation you’re trying to represent, OpenMDAO provides different kinds of components for you to work with.
A Simple Numerical Model
In order to understand the different kinds of components in OpenMDAO, let us consider the following numerical model that takes x as an input:
The Three Types of Components
In our numerical model, we have three variables: x, y, and z. Each of these variables needs to be defined as the output of a component. There are three basic types of components in OpenMDAO:
IndepVarComp : defines independent variables (e.g., x)
ExplicitComponent : defines dependent variables that are computed explicitly (e.g., z)
ImplicitComponent : defines dependent variables that are computed implicitly (e.g., y)
The most straightforward way to implement the numerical model would be to assign each variable its own component, as below.
No. | Component Type | Inputs | Outputs
--- | --- | --- | ---
1 | IndepVarComp | | x
2 | ImplicitComponent | x, z | y
3 | ExplicitComponent | y | z
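A minimal sketch of this first decomposition in OpenMDAO is shown below. The component classes and methods follow the standard OpenMDAO API, but the governing equations of the example model are not reproduced in this page, so the residual used for y and the expression for z are placeholder equations of our own.

```python
# Sketch of the three-component decomposition from the table above.
import openmdao.api as om

class ImplicitY(om.ImplicitComponent):
    def setup(self):
        self.add_input("x")
        self.add_input("z")
        self.add_output("y")
        self.declare_partials("*", "*", method="fd")      # finite-difference partials
    def apply_nonlinear(self, inputs, outputs, residuals):
        # residual R(y; x, z) = 0 defines y implicitly (placeholder equation)
        residuals["y"] = outputs["y"] ** 2 + inputs["x"] * outputs["y"] - inputs["z"]

class ExplicitZ(om.ExplicitComponent):
    def setup(self):
        self.add_input("y")
        self.add_output("z")
        self.declare_partials("*", "*", method="fd")
    def compute(self, inputs, outputs):
        outputs["z"] = inputs["y"] + 2.0                  # placeholder explicit expression

prob = om.Problem()
model = prob.model
model.add_subsystem("indep", om.IndepVarComp("x", 1.0))   # component 1: defines x
model.add_subsystem("comp_y", ImplicitY())                # component 2: solves for y
model.add_subsystem("comp_z", ExplicitZ())                # component 3: computes z
model.connect("indep.x", "comp_y.x")
model.connect("comp_z.z", "comp_y.z")
model.connect("comp_y.y", "comp_z.y")
model.nonlinear_solver = om.NewtonSolver(solve_subsystems=False)  # converges the y-z loop
model.linear_solver = om.DirectSolver()
prob.setup()
prob.run_model()
print(prob.get_val("comp_y.y"), prob.get_val("comp_z.z"))
```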
Another way that is also valid would be to have one component compute both y and z explicitly, which would mean that this component solves the implicit equation for y internally.
No. | Component Type | Inputs | Outputs
--- | --- | --- | ---
1 | IndepVarComp | | x
2 | ExplicitComponent | x | y, z
Both ways would be valid, but the first way is recommended. The second way requires the user to solve y and z together, and computing the derivatives of y and z with respect to x is non-trivial. The first way would also require implicitly solving for y, but an OpenMDAO solver could converge that for you. Moreover, for the first way, OpenMDAO would automatically combine and assemble the derivatives from components 2 and 3. | https://openmdao.org/newdocs/versions/latest/basic_user_guide/single_disciplinary_optimization/component_types.html |
Example:
To find the distances between cities in Canada and other useful information such as average speed, driving time, recommended breaks, fuel consumption and fuel price, type the names of the localities in the fields above - FROM Regina TO Kelvington - and then press the ENTER key or click the DISTANCE button.
You can enter an exact address, but please note that some information may not be available.
Change the route for Canada
After generating the route Regina - Kelvington could be changed by simply dragging the line with the mouse. The change can be applied to any intermediate points of that route as well as the points of departure and arrival.
Once modified, all the other calculations for that route are automatically recalculated:
average speed Regina - Kelvington
driving time Regina - Kelvington
recommended break Regina - Kelvington
fuel consumption Regina - Kelvington
fuel price Regina - Kelvington.
Adjustment of fuel consumption and fuel price for Regina - Kelvington
You can modify the values to match the average consumption of your vehicle, and the fuel price can likewise be changed to any desired value.
Calculation of fuel cost and total fuel consumption will be automatically recalculated without having to press any other button or key. | https://canada.distancesonline.com/Regina/Kelvington |
The American Institute of Architects (AIA) and the Association of Collegiate Schools of Architecture (ACSA) are pleased to announce the second annual joint AIA/ACSA INTERSECTIONS Research Conference dedicated to the INTERSECTION of Education, Research and Practice. This 2.5-day virtual conference will include dynamic presentations of current research and keynotes and sessions offering new ideas, models for practice, and challenging our profession in addressing the critical issue of climate and community. The conference builds on the eight-year partnership between AIA and ACSA toward these objectives. The focus of the INTERSECTIONS programs is intended to strengthen the INTERSECTION between academia and design practice, especially when it comes to research and innovation, focused on community strategies.
Attendees will hear about new discoveries and innovations at all scales and gain an increased awareness of research happening in both academia and practice, which will inform their work and teaching. The conference will foster opportunities for new partnerships, explore interdisciplinary opportunities, find sources of funding and collaborations. It will be a chance for both established researchers as well as those looking to enhance their research capabilities, or update their knowledge, with keynotes, sessions, breakouts, workshops and networking events.
Overview
Call for Abstracts
As we find ourselves a year (+) into a world pandemic, what has changed? What issues and silver linings have emerged to change how we learn, work, live and play? What can we as architects, academics, researchers & design leaders do to improve our world, environment, our cities, our buildings & our communities? How can we take a cue from the call for social justice which has been elevated during a time of COVID to address our communities of need, our communities of color, and a legacy of redlining and environmental racism? How can we use and share this knowledge to impact the critical issue of Climate change?
Architects are called on to see the bigger picture beyond singular buildings, to ensure the buildings and developments they design have a long-lasting beneficial impact on our cities, regions, towns, and environment. The shifting economic, political, and climatic landscape has left many communities and neighborhoods challenged and changed. How do architects process these varied and interdependent impacts and revise their approaches to building and urban/rural design? Successful community projects can rally communities to improve, rebuild, and restore their cities and towns. How can design affect social change? What role can architects and designers play in addressing community needs; such as:
- affordable housing demand
- public space, open space, community investment
- institutional buildings (health, education, government, etc.)
- projects that spur local economic development and the associated impacts,
- buildings and urban interventions that mitigate exposure to adverse conditions (pollution, heat, noise, flooding, etc.)
Where can research fill gaps in applied design work and spur innovation? How can architects enrich academic and community endeavors? And how can academia and practice collaborate to build stronger, more resilient and equitable communities?
At this conference, we look to share the latest research and innovations, and examine the role research plays in advancing architectural practice and education within our communities. Speakers and presenters will share how their research is generated in practice, at universities and in partnerships with business, industry, and government.
Keynote Speakers
Opening Plenary
Jelani Cobb
Jelani Cobb, Columbia University’s Ira A. Lipman Professor of Journalism and a long-time staff writer at The New Yorker. The conference will examine the legacy of our built environment and its communal implications. Cobb’s writings on race, history, justice, and politics earned him the 2015 Hillman Prize for opinion and analysis journalism. He is the author of Substance of Hope: Barack Obama and the Paradox of Progress, To the Break of Dawn: A Freestyle on the Hip Hop Aesthetic, and The Devil & Dave Chappelle and Other Essays. | https://www.acsa-arch.org/conference/2021-aia-acsa-intersections-research-conference-communities/ |
Los Angeles Lakers: Why signing Malik Monk was a great move
The Los Angeles Lakers made another great move during this free agency period by signing now-former Charlotte Hornets guard, Malik Monk. Monk is a very young and underrated player that has seemed to grow and blossom a bit more each year that he has been in the league.
First off, I love his size. At six foot three and weighing 200 pounds, Monk is lanky and light. There is something about his style of play and the way he moves that really gets eyes on him.
Of course, there have been things about him that many have pointed their fingers at, such as the fact that he was the 11th overall pick in the 2017 NBA Draft. He averaged only 6.7 points per game in his rookie season.
To be quite honest, he was only 19 years old during his rookie season, and we know that many young players his age have struggled a bit as they try to get their footing in the NBA.
MUST-READ: Ranking the day one signings by potential impact
Things take time as does maturity and growth as well as getting better seasoned for today’s basketball especially at the NBA level. In his second season, Monk improved to 8.9 points per game and then last season, he broke into double digits as he averaged 10.3 points per game.
This past season was Monk’s best as he averaged a career-high 11.7 points per game, shot a career-high 40% from three-point range and tied his career-high field goal percentage at 43%.
Things seem to be getting better for him as his numbers continue to increase and his performance improves as his maturity has been shown to be getting better. It also didn’t help his situation that he played for four losing teams in his four seasons with the Charlotte Hornets.
Malik Monk will bring a lot to the Los Angeles Lakers.
A pair of young and fresh legs is one of the most important things. He can shoot the three-point shot and he also has some nice moves when he penetrates and can jump. A very athletic player that is still coming into his own and finding himself. Monk is great at finishing a fast break and plays really well in the open court.
He will fit very well into the Lakers offense and now with the addition of Russell Westbrook, the team will have a more up-tempo and attack style that will help Monk very much as he will be looked at to be a finisher, which is something he does very well, or occasionally pull up and nail a three.
Malik Monk plays with electricity and a vibrancy that will fit very well into the Lakers’ style and will be great for the fans. It is something the team needed. Think of Monk sort of like a spark plug. I can see him coming off the bench to provide the Lakers with some much-needed juice.
This is one of the better moves they have made this offseason in my opinion as they have acquired a player who hustles and plays with heart. It will do him a lot of good to be surrounded by many veterans as well as loads of talent this team now has.
His game will only improve as he shall look to shine and be an integral part of a team that will be one of the teams that have a legit chance at winning it all. Plus, he will also be playing for a very smart coach in Frank Vogel whose experience will mean a lot for his overall growth. | https://lakeshowlife.com/2021/08/03/los-angeles-lakers-fre-eagency-malik-monk/ |
---
abstract: 'We investigate products of certain double cosets for the symmetric group and use the findings to derive some multiplication formulas for the $q$-Schur superalgebras. This gives a combinatorialisation of the relative norm approach developed in [@DG]. We then give several applications of the multiplication formulas, including the matrix representation of the regular representation and a semisimplicity criterion for $q$-Schur superalgebras. We also construct infinitesimal and little $q$-Schur superalgebras directly from the multiplication formulas and develop their semisimplicity criteria.'
address:
- 'J.D., School of Mathematics and Statistics, University of New South Wales, Sydney NSW 2052, Australia'
- 'H.G., School of Science, Huzhou University, Huzhou, China'
- 'Z.Z., College of Science, Hohai University, Nanjing, China'
author:
- 'Jie Du, Haixia Gu$^\dagger$ and Zhongguo Zhou'
title: 'Multiplication formulas and semisimplicity for $q$-Schur superalgebras'
---
[^1] [^2]
Introduction
============
The beautiful Beilinson–Lusztig–MacPherson construction [@BLM] of quantum $\mathfrak{gl}_n$ has been generalised to the quantum affine $\mathfrak{gl}_n$ [@DDF; @DF], to the quantum super $\mathfrak{gl}_{m|n}$ [@DG], and partially to the other classical types [@BKLW; @FL] and affine type $C$ [@FLLLW], in which certain coideal subalgebras of quantum $\mathfrak{gl}_n$ (or affine ${\mathfrak{gl}}_n$) are used to form various quantum symmetric pairs associated with Hecke algebras of type $B/C/D$ or affine type $C$. A key step of these works is the establishment of certain multiplication formulas in the relevant $q$-Schur algebras or Hecke endomorphism algebras. These formulas were originally derived by geometric methods. When the geometric approach is not available in the super case, a super version of the Curtis–Scott relative norm basis [@J; @DU], including a detailed analysis of the explicit action on the tensor space, is used in deriving such formulas; see [@DG; @DGW; @DGW2]. However, it is natural to expect the existence of a direct Hecke algebra method involving only the combinatorics of symmetric groups.
In this paper, we will develop such a method. The multiplication formulas require to compute certain structure constants associated with the double coset basis, a basis defined by the double cosets of a symmetric group. Since a double coset can be described by a certain matrix with non-negative integer entries, our first step is to find formulas, in terms of the matrix entries, of decomposing products of certain double cosets into disjoint unions of double cosets. We then use the findings to derive the multiplication formulas in $q$-Schur superalgebras; see Theorem \[KMF\] and Corollary \[KMFcor\]. This method simplify the calculation in [@DG §§2-3] using relative norms.
The multiplication formulas result in several applications. The first one is the matrix representation of the regular representations over any commutative ring $R$; see Theorem \[KeyMF\]. When the ground ring $R$ is a field, we establish a criterion for the semisimplicity of $q$-Schur superalgebras (see Theorem \[thmqss\]), generalising a quantum result of Erdmann and Nakano to the super case and a classical super result of Marko and Zubkov [@mz2] (cf. [@DN; @EN]) to the quantum case. Finally, we introduce the infinitesimal and little $q$-Schur superalgebras directly from the multiplication formulas (Theorem \[6.1\], Corollary \[little\]). We also determine semisimple infinitesimal $q$-Schur superalgebras and semisimple little $q$-Schur superalgebras (Theorem \[thmiqs\]).
It should be interesting to point out that, unlike the traditional methods used in [@DNP; @DFW], our definitions do not involve quantum enveloping algberas or quantum coordinate algebras and the semisimplicity proof is also independent of the representation theory of these ambient quantum groups or algebras. We expect that this combinatorial approach will give further applications to various $q$-Schur superalgebras of other types in the near future.
[**Acknowledgement.**]{} We thank the referee for several helpful comments.
$q$-Schur superalgebras
=======================
Let $W={\mathfrak S}_{\{1,2,\ldots,r\}}$ be the symmetric group on $r$ letters and let $S=\{s_k\mid 1\leq k<r\}$ be the set of basic transpositions $s_k=(k,k+1)$. Denote the length function with respect to $S$ by $\ell:W\to\mathbb{N}$.
Let $R$ be a commutative ring with 1 and let $q\in R^\times$. The Hecke algebra $\mathcal{H}_R=\sH_R(\fS)$ is a free $R$-module with basis $\{T_w\mid w\in W\}$ and the multiplication defined by the rules: for $s\in S$, $$T_wT_s=\left\{\begin{aligned} &T_{ws},
&\mbox{if } \ell(ws)>\ell(w);\\
&(q-1)T_w+q T_{ws}, &\mbox{otherwise}.
\end{aligned}
\right.$$ The Hecke algebra over $R=\sZ:=\mathbb Z[\up,\up^{-1}]$ and $q=\up^2$ is simply denoted by $\sH$.
Let $W_\la$ denote the parabolic subgroup of $W$ associated with $\la=(\lambda_1,\lambda_2,\cdots,\lambda_N)\in\La(N,r)$ where $\La(N,r)=\{\la\in{\mathbb N}^N\mid |\la|:=\sum_i\la_i=r\}$. Then $W_\la$ consists of permutations that leave invariant the following sets of integers $${\mathbb N}_1^\la=\{1,2,\cdots,\lambda_1\},{\mathbb N}_2^\la=\{\lambda_1+1,\lambda_1+2,\cdots,\lambda_1+\lambda_2\},\cdots.$$
Let $\sD_\la:=\mathcal{D}_{W_\la}$ be the set of all shortest coset representatives of the right cosets of $W_\la$ in $W$. Let $\mathcal{D}_{\lambda\mu}=\mathcal{D}_\lambda\cap\mathcal{D}^{-1}_{\mu}$ be the set of the shortest $W_\lambda$-$W_\mu$ double coset representatives.
For $\la,\mu\in\La(N,r)$ and $d\in\mathcal{D}_{\la\mu}$, the subgroup $\fS_\la^d\cap
\fS_\mu=d^{-1}\fS_\la d\cap \fS_\mu$ is a parabolic subgroup associated with a composition which is denoted by $\la d\cap\mu$. In other words, we define $$\label{ladmu}
\fS_{\la d\cap\mu}=\fS_\la^d\cap \fS_\mu.$$ The composition $\la d\cap\mu$ can be easily described in terms of the following $N\times N$-matrix $A=(a_{i,j})$ with $a_{i,j}=|{\mathbb N}^\la_i\cap d({\mathbb N}^\mu_j)|$: if $\nu^{(j)}=(a_{1,j},a_{2,j},\ldots,a_{N,j})$ denotes the $j$th column of $A$, then $$\label{ladmu}
\la d\cap\mu=(\nu^{(1)},\nu^{(2)},\ldots,\nu^{(N)}).$$ Putting $\jmath(\la,d,\mu)=\big(|{\mathbb N}^\la_i\cap d({\mathbb N}^\mu_j)|\big)_{i,j}$, we obtain a bijection $$\label{jmath}
\jmath:\{(\la,d,\mu)\mid \la,\mu\in\La(N,r),d\in\sD_{\la\mu}\}\longrightarrow M(N,r),$$ where $M(N,r)$ is the set of all $N\times N$ matrices $A=(a_{i,j})$ over $\mathbb N$ whose entries sum to $r$, i.e., $|A|:=\sum_{i,j}a_{i,j} =r$.
For $A\in M(N,r)$, if $\jmath^{-1}(A)=(\la,d,\mu)$, then $\la,\mu\in\La(N,r)$ and $$\label{ro co}
\la=\ro(A):=(\sum_{j=1}^Na_{1,j},\ldots,\sum_{j=1}^Na_{N,j})\,\text{ and }\,\mu=\co(A):=(\sum_{i=1}^Na_{i,1},\ldots,\sum_{i=1}^Na_{i,N}).$$
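For example, take $N=2$, $r=3$, $\la=(2,1)$, $\mu=(1,2)$ and $d=1$. Then ${\mathbb N}_1^\la=\{1,2\}$, ${\mathbb N}_2^\la=\{3\}$, $d({\mathbb N}_1^\mu)=\{1\}$ and $d({\mathbb N}_2^\mu)=\{2,3\}$, so that $$A:=\jmath(\la,1,\mu)=\begin{pmatrix}1&1\\0&1\end{pmatrix}\in M(2,3),\qquad \la d\cap\mu=(1,0,1,1),$$ with $\ro(A)=(2,1)=\la$, $\co(A)=(1,2)=\mu$ and $\fS_{\la d\cap\mu}=\fS_\la\cap\fS_\mu=1$.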
For the definition of $q$-Schur superalgebra, we fix two nonnegative integers $m,n$ and assume $R$ has characteristic $\neq2$. We also need the [*parity function*]{} $$\label{parity}
\widehat{h}=\begin{cases}0,&\text{if }1\leq h\leq m;\\1,&\text{if }m+1\leq h\leq m+n.
\end{cases}$$ A composition $\la$ of $m+n$ parts will be written $$\la=(\lambda^{(0)}|\lambda^{(1)})=(\lambda^{(0)}_1,\lambda^{(0)}_2,\cdots,\lambda^{(0)}_m|\lambda^{(1)}_1,
\lambda^{(1)}_2,\cdots,\lambda^{(1)}_n)$$ to indicate the “even” and “odd” parts of $\la$. Let $$\Lambda(m|n,r):=\La(m+n,r)
=\bigcup_{r_1+r_2=r}(\La(m,r_1)\times\La(n,r_2)).$$
For $\lambda=(\lambda^{(0)}\mid\lambda^{(1)})\in\Lambda(m|n,r)$, we also write $$\label{notation}
\fS_\la=
\fS_{\la^{(0)}}\fS_{\la^{(1)}}\cong \fS_{\la^{(0)}}\times \fS_{\la^{(1)}},$$ where $\fS_{\lambda^{(0)}}\leq{\mathfrak S}_{\{1,2,\ldots,|\lambda^{(0)}|\}}$ and $\fS_{\lambda^{(1)}}\leq{\mathfrak S}_{\{|\lambda^{(0)}|+1,\ldots,r\}}$ are the even and odd parts of $\fS_\la$, respectively.
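For example, if $m=n=1$ and $r=3$, then $\La(1|1,3)=\{(3|0),(2|1),(1|2),(0|3)\}$ and, for $\la=(2|1)$, we have $\fS_\la=\fS_{\{1,2\}}\times\fS_{\{3\}}$ with even part $\fS_{\la^{(0)}}=\fS_{\{1,2\}}$ and odd part $\fS_{\la^{(1)}}=\fS_{\{3\}}=1$.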
Denote the Hecke algebra associated with the parabolic subgroup $W_\la$ by $\sH_\la$, which is spanned by $T_w,w\in W_\la$. The elements in $\sH_{\la}$ $$\xy_\la:=x_{\la^{(0)}}y_{\la^{(1)}},\;\yx_\la:=y_{\la^{(0)}}x_{\la^{(1)}},$$ where, for $i=0,1$, $$x_{\lambda^{(i)}}=\sum_{w\in \fS_{\lambda^{(i)}}}T_w,\qquad y_{\lambda^{(i)}}=\sum_{w\in
\fS_{\lambda^{(i)}}}(-q)^{-\ell(w)}T_w$$ generate $\sH_\la$-modules $R\xy_\la$, $R\yx_\la$. Define the “tensor space” (cf. [@DR (8.3.4)]) $$\label{Tmnr}
\fT_R(m|n,r)=\bigoplus_{\lambda\in\Lambda(m|
n,r)}\xy_\la\mathcal{H}_{R}.$$ By the definition in [@DR], the endomorphism algebra $$\sS_R(m|n,r )=\End_{\mathcal{H}_R}(\fT_R(m|
n,r ))$$ is called a $q$-[*Schur superalgebra*]{} whose $\mathbb Z_2$-graded structure is given by $$\sS_R(m|n,r )_i=\bigoplus_{\la,\mu\in\La(m|n,r)\atop |\la^{(1)}|+|\mu^{(1)}|\equiv i(\text{mod}2)}\Hom_{\sH_R}(\xy_\la\sH_R,\xy_\mu\sH_R)\qquad(i=0,1).$$ We will use the notation $\sS(m|n,r)$ to denote the $\up^2$-Schur superalgebra over $\sZ$.
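For example, for $m=n=1$ and $r=2$, the three elements of $\La(1|1,2)$ give $$\xy_{(2|0)}=x_{(2)}=1+T_{s_1},\qquad \xy_{(1|1)}=1,\qquad \xy_{(0|2)}=y_{(2)}=1-q^{-1}T_{s_1},$$ so that $\fT_R(1|1,2)=(1+T_{s_1})\sH_R\oplus\sH_R\oplus(1-q^{-1}T_{s_1})\sH_R$ and $\sS_R(1|1,2)$ is the algebra of $\sH_R$-endomorphisms of this module.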
We now describe a characteristic-free basis for $\sS_R(m|n,r )$.
For $\la,\mu\in\Lambda(m|n,r)$, let $$\label{Dcirc}
\mathcal{D}^\circ_{\la\mu}=\{d\in\mathcal{D}_{\la\mu}\mid
\fS^d_{\la^{(0)}}\cap \fS_{\mu^{(1)}}=1,\fS^d_{\la^{(1)}}\cap
\fS_{\mu^{(0)}}=1\}.$$ This set is the super version of the usual $\sD_{\la\mu}$. We need the following subsets of the $(m+n)\times(m+n)$ matrix ring $M_{m+n}(\mathbb N)$ over $\mathbb N$: $$\label{M(m|n)}
\aligned
M(m|n,r)&=\{\jmath(\la,d,\mu)\mid\la,\mu\in\La(m|n,r),d\in\sD_{\la\mu}^\circ\},\\
M(m|n)&=\bigcup_{r\geq0}M(m|n,r)\subseteq M_{m+n}(\mathbb N).\endaligned$$
Following [@DR (5.3.2)], define, for $\lambda,\mu\in\Lambda(m|n,r)$ and $d\in\mathcal{D}^\circ_{\lambda\mu}$, $$\label{double coset}
T_{\fS_\lambda d \fS_\mu}:=\xy_\lambda T_dT_{\sD_{\nu}\cap W_\mu}=T_{\sD_{\nu'}\cap W_\la}T_d\xy_\mu,$$ where $\nu=\la d\cap\mu$, $\nu'=\mu{d^{-1}}\cap\la$, and $T_D=\sum_{w_0\in D_0,w_1\in D_1}T_{w_0}(-q)^{-\ell(w_1)}T_{w_1}$ for any $D\subseteq W_\eta$ ($\eta=\la$ or $\mu$) with $D_i=D\cap W_{\eta^{(i)}}$ (cf. [@DR (5.3.2)]). The element $T_{\fS_\lambda d \fS_\mu}$ is used to define an $\mathcal{H}_R$-module homomorphism $\phi_{\la\mu}^d$ on $\fT_R(m|n,r)$: $$\phi_{\la\mu}^d(\xy_\alpha h)=\delta_{\mu,\alpha}T_{\fS_\lambda d
\fS_\mu}h, \forall \alpha\in\Lambda(m|
n,r),h\in\mathcal{H}_R.$$
The first assertion of the following result is given in [[@DR 5.8]]{}, while the last assertion for the nonquantum case was observed in [@HKN §3.1]. Write $\phi_A:=\phi^d_{\lambda\mu}$ if $A=\jmath(\la,d,\mu)$.
\[DR5.8\] The set $\{\phi_A\mid
A\in M(m|n,r)\}$ forms an $R$-basis for $\sS_R(m|n,r )$. Hence, $\sS_R(m|n,r )\cong \sS(m|n,r )\otimes_{\sZ} R$. Moreover, there is an $R$-algebra isomorphism $$\sS_R(m|n,r)\cong\sS_R(n|m,r).$$
We only need to prove the last assertion. The Hecke algebra $\sH_R$ admits an $R$-algebra involutory automorphism $\varphi$ sending $T_s$ to $-qT_s^{-1}=(q-1)-T_s$ for all $s\in S$. Since $\varphi(x_\la)=q^{\ell(w_{0,\la})}y_\la$, where $w_{0,\la}$ is the longest element in $W_\la$ (see, e.g., [@DDPW (7.6.2)]), we have $\varphi(\xy_\la)=\varphi(x_{\la^{(0)}}y_{\la^{(1)}})=q^{\ell(w_{0,\la^{(0)}})-\ell(w_{0,\la^{(1)}})}y_{\la^{(0)}}x_{\la^{(1)}}$. If we denote by $(\xy_\la\sH_R)^\varphi$ the module obtained by twisting the action on $\xy_\la\sH_R$ by $\varphi$, i.e., $(\xy_\la h)*h'=(\xy_\la h)\varphi(h')$ for all $h,h'\in\sH_R$, then the map $$\Phi_\la:(\xy_\la\sH_R)^\varphi\rightarrow \yx_\la\sH_R,\xy_\la h\mapsto\varphi(\xy_\la h)$$ is an $\sH_R$-module isomorphism. These $\Phi_\la$ induce an $\sH_R$-module isomorphism $\Phi: \fT_R(m|n,r)^\varphi\longrightarrow\fT_R(n|m,r).$ Now the required isomorphism follows.
Decomposing products of double cosets
=====================================
Throughout the section, let $W$ be the symmetric group and let $n,r$ be positive integers. We also fix the following notation in this section: $$\label{notn1}
\left\{\begin{aligned}
M&=(m_{ij})\in M(n,r)\text{ with }\jmath^{-1}(M)=(\la,d,\mu),\; d_M:=d,\\
\nu_M&:=\la d\cap\mu=(m_{1,1},m_{2,1},\cdots,m_{n,1},\cdots,m_{1,n},m_{2,n},\cdots,m_{n,n}),\\
\sigma_{i,j}&=\sum_{k=1}^{j-1}\sum_{h=1}^nm_{h,k}+\sum_{k\leq i,l\geq j}m_{k,l},\\
M^+_{h,k}&=M+E_{h,k}-E_{h+1,k}, \text{ if }m_{h+1,k}\geq1,\\
M^-_{h,k}&=M-E_{h,k}+E_{h+1,k},\text{ if }m_{h,k}\geq1.
\end{aligned}\right.$$ Moreover, to any sequence $(a_1,a_2,\ldots,a_n)$, we associate its partial sum sequence $(\widetilde a_1,\widetilde a_2,\ldots,\widetilde a_n)$ with $\widetilde a_i=a_1+\cdots+a_i$. Thus, $\widetilde\la_i=\la_1+\cdots+\la_i$ and $\widetilde m_{i,j}$ is the partial sum at the $(i,j)$-position of $\nu_M$. We also note that $\sigma_{i,j}=\widetilde \mu_{j-1}+m_{i,j}^\corner,$ where $m_{i,j}^\corner=\sum_{k\leq i,l\geq j}m_{k,l}$. In particular, $\sigma_{i,1}=m_{i,1}^\corner=\widetilde{\lambda}_{i}.$ The following result will be proved at the end of the section.
\[prod coset\] Maintain the notation in \eqref{notn1} with $\lambda=(\lambda_1,\cdots,\lambda_n)$ and, for $1\leq h< n$, let $\lambda^{[h^\pm]}:=\la\pm\bse_h\mp\bse_{h+1}=\ro(M^\pm_{h,k})$, where $\bse_i=(\delta_{1,i},\ldots,\delta_{n,i})$. Then $$\aligned
(\fS_{\lambda^{[h^+]}} 1\fS_\lambda)(\fS_\lambda d_M\fS_\mu)&=\bigcup_{k\atop m_{h+1,k}\geq1}\fS_{\lambda^{[h^+]}} d_{M^+_{h,k}}\fS_\mu,\\
(\fS_{\lambda^{[h^-]}} 1\fS_\lambda)(\fS_\lambda d_M\fS_\mu)&=\bigcup_{k\atop m_{h,k}\geq1}\fS_{\lambda^{[h^-]}} d_{M^-_{h,k}}\fS_\mu.\\
\endaligned$$
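For example, take $n=2$, $r=3$ and $M=\left(\begin{smallmatrix}1&0\\1&1\end{smallmatrix}\right)$, so that $\la=(1,2)$, $\mu=(2,1)$ and $d_M=1$. For $h=1$ we have $\la^{[1^+]}=(2,1)$, both columns satisfy $m_{2,k}\geq1$, and $$M^+_{1,1}=\left(\begin{smallmatrix}2&0\\0&1\end{smallmatrix}\right),\quad d_{M^+_{1,1}}=1,\qquad M^+_{1,2}=\left(\begin{smallmatrix}1&1\\1&0\end{smallmatrix}\right),\quad d_{M^+_{1,2}}=s_2.$$ The first decomposition then reads $$(\fS_{(2,1)}1\fS_{(1,2)})(\fS_{(1,2)}1\fS_{(2,1)})=\fS_{(2,1)}1\fS_{(2,1)}\cup\fS_{(2,1)}s_2\fS_{(2,1)}=\{1,s_1\}\cup\{s_2,s_1s_2,s_2s_1,s_1s_2s_1\},$$ both sides being all of $\fS_3$; the lengths also agree with Lemma \[d\_M\] below, e.g. $\ell(d_{M^+_{1,2}})=\ell(d_M)+m_{2,1}=1$.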
We first describe a standard reduced expression for $d_M$. If $m_{i,j}=0$, or $m_{i,j}>0$ but $\sigma_{i-1,j}=\widetilde{m}_{i-1,j}$ (i.e., $m_{i-1,j+1}^\llcorner=0$), set $w_{i,j}=1$; if $m_{i,j}>0$ and $\sigma_{i-1,j}>\widetilde{m}_{i-1,j}$, let $$\label{wij}
\begin{aligned}
w_{i,j}=\,&(s_{\sigma_{i-1,j}}s_{\sigma_{i-1,j}-1}\cdots s_{\widetilde{m}_{i-1,j}+1})\\
&(s_{\sigma_{i-1,j}+1}s_{\sigma_{i-1,j}}\cdots s_{\widetilde{m}_{i-1,j}+2})\cdot\cdots\cdot\\
&(s_{\sigma_{i-1,j}+m_{i,j}-1}s_{\sigma_{i-1,j}+m_{i,j}-2}\cdots s_{\widetilde{m}_{i,j}})
\end{aligned}$$ and $w^+_{i,j}=s_{\sigma_{i-1,j}+1}s_{\sigma_{i-1,j}+2}\cdots s_{\sigma_{i-1,j}+m_{i,j}}w_{i,j}$ (and $w^+_{i,j}=1$ if $m_{i,j}=0$). Note that we may rewrite $w^+_{i,j}$ as $$\label{w+ij}
\begin{aligned}
w^+_{i,j}=\,&s_{\sigma_{i-1,j}+1}(s_{\sigma_{i-1,j}}s_{\sigma_{i-1,j}-1}\cdots s_{\widetilde{m}_{i-1,j}+1})\\
&s_{\sigma_{i-1,j}+2}(s_{\sigma_{i-1,j}+1}s_{\sigma_{i-1,j}}\cdots s_{\widetilde{m}_{i-1,j}+2})\cdot\cdots\cdot\\
& s_{\sigma_{i-1,j}+m_{i,j}}(s_{\sigma_{i-1,j}+m_{i,j}-1}s_{\sigma_{i-1,j}+m_{i,j}-2}\cdots s_{\widetilde{m}_{i,j}}).
\end{aligned}$$ For example, if $M=\left(\begin{smallmatrix}1&3&2\\2&1&1\\1&0&2\end{smallmatrix}\right)$ then $(\sigma_{ij})=\left(\begin{smallmatrix}6&9&10\\10&11&11\\13&13&13\end{smallmatrix}\right)$, $(\widetilde m_{ij})=\left(\begin{smallmatrix}1&7&10\\3&8&11\\4&8&13\end{smallmatrix}\right)$, and $w_{2,1}=(s_6s_5\cdots s_2)(s_7s_6\cdots s_3)=\left(\begin{smallmatrix}2&3&4&5&6&7&8\\7&8&2&3&4&5&6\end{smallmatrix}\right)$, $w_{3,1}=s_{10}s_9\cdots s_4=\left(\begin{smallmatrix}4&5&6&7&8&9&10&11\\11&4&5&6&7&8&9&10\end{smallmatrix}\right)$, and $w_{2,2}=s_9s_8=\left(\begin{smallmatrix}8&9&10\\10&8&9\end{smallmatrix}\right)$, $w_{3,2}=1$, then $w_{2,1}w_{3,1}w_{2,2}w_{3,2}=\left(\begin{smallmatrix}1&2&3&4&5&6&7&8&9&10&11&12&13\\1&7&8&11&2&3&4&9&5&6&10&12&13\end{smallmatrix}\right),$ which is $d_M$.
\[d\_M\] Let $M$, $d_M$ and $M^+_{h,k}$ be given as in . Then a reduced expression of $d_M$ is of the form $$d_M=(w_{2,1}w_{3,1}\cdots w_{n,1})(w_{2,2}w_{3,2}\cdots w_{n,2})\cdots(w_{2,n-1}w_{3,n-1}\cdots w_{n,n-1}).$$ If $m_{h+1,k}\geq1$, then $$\begin{aligned}
d_{M^+_{h,k}}&=(w'_{2,1}w'_{3,1}\cdots w'_{n,1})(w'_{2,2}w'_{3,2}\cdots w'_{n,2})\cdots(w'_{2,n-1}w'_{3,n-1}\cdots w'_{n,n-1}),
\end{aligned}$$ where $w'_{i,j}=w_{i,j}$ for all $i,j$, except that $w'_{h+1,j}=w^+_{h+1,j}$ for $j<k$ and $$\label{bulcir}
\begin{aligned}
w'_{h,k}=w^\bullet_{h,k}:=\,&w_{h,k}(s_{\sigma_{h-1,k}+m_{h,k}}s_{\sigma_{h-1,k}+m_{h,k}-1}\cdots s_{\widetilde{m}_{h,k}+1}),\\
w'_{h+1,k}=w^\circ_{h+1,k}:=\,&(s_{\sigma_{h,k}+1}s_{\sigma_{h,k}}\cdots s_{\widetilde{m}_{h,k}+2})(s_{\sigma_{h,k}+2}s_{\sigma_{h,k}+1}\cdots s_{\widetilde{m}_{h,k}+3})\cdots\\
&(s_{\sigma_{h,k}+m_{h+1,k}-1}s_{\sigma_{h,k}+m_{h+1,k}-2}\cdots s_{\widetilde{m}_{h+1,k}}).
\end{aligned}$$ In particular, $\ell(d_{M^+_{h,k}})=\ell(d_M)+\sum_{j<k}m_{h+1,j}-\sum_{j>k}m_{h,j}.$
\[dmw\] (1) We display the factors $w_{i,j}$ of $d_M$ through a matrix notation: $$\label{d_Mm}
d_M={\left(\begin{array}{cccc}
w_{2,1}&w_{2,2}&\cdots&w_{2,n-1}\\
w_{3,1}&w_{3,2}&\cdots&w_{3,n-1}\\
\vdots&\vdots&\cdots&\vdots\\
w_{n,1}&w_{n,2}&\cdots&w_{n,n-1}
\end{array}\right),}$$ where $d_M$ is simply a product of the entries down column 1, then down column 2, and so on. Note that $w_{i,j}=1$ whenever $m_{i,j}=0$ or $m^\llcorner_{i-1,j+1}=0$.
\(2) Note that a product of the form $s_{h-1}s_{h-2}\cdots s_{k}$ for $h>k$ is in fact the cycle permutation $h\to h-1\to\cdots\to k+1\to k\to h$. Thus, each $w_{i,j}$ is a product of cycle permutations. Note also that the largest number permuted (or moved) by the partial column product $w_{2,j}w_{3,j}\cdots w_{h,j}$ is $\sigma_{h-1,j}+m_{h,j}$.
\[reflection\]
- For any non-negative integers $k,i,h$ with $0<k\leq i<h<r$, $$s_i(s_{h}s_{h-1}\cdots s_{k})=(s_{h}s_{h-1}\cdots s_{k})s_{i+1}.$$ Hence, for $0<k\leq i< h_1<h_2<\cdots<h_l<r$, $$\begin{aligned}
s_i&(s_{h_1}s_{h_1-1}\cdots s_{k})(s_{h_2}s_{h_2-1}\cdots s_{k+1})\cdots (s_{h_l}s_{h_l-1}\cdots s_{k+l-1})\\
=&(s_{h_1}s_{h_1-1}\cdots s_{k})(s_{h_2}s_{h_2-1}\cdots s_{k+1})\cdots (s_{h_l}s_{h_l-1}\cdots s_{k+l-1})s_{i+l}.
\end{aligned}$$
- With the notation given in and , if $\sigma_{h-1,j}+m_{h,j}<l<\sigma_{h,j}$ and $l\geq\widetilde m_{h,j}+1$, then $$s_l(w_{2,j}w_{3,j}\cdots w_{n,j})=(w_{2,j}w_{3,j}\cdots w_{n,j})s_{l+\sum_{i=h+1}^nm_{i,j}}.$$
- For any $1<k\leq n$, if $0<x\leq m_{h,k}$ and assume $\sum_{j=1}^{k-1}m_{h,j}+x<\la_h$, then $$\begin{aligned}
s_{\sigma_{h-1,1}+\sum_{j=1}^{k-1}m_{h,j}+x} &(w_{2,1}\cdots w_{n,1})\cdots (w_{2,k-1}\cdots w_{n,k-1})\\
=\,&(w_{2,1}\cdots w_{n,1})\cdots (w_{2,k-1}\cdots w_{n,k-1})s_{\sigma_{h-1,k}+x}.\end{aligned}$$
The proof for the first two assertions is straightforward. We now prove (3).
Consider the product $\Pi_t$ of the first $t$ columns of $d_M$: $$\Pi_t=(w_{2,1}\cdots w_{h-1,1}w_{h,1}w_{h+1,1}\cdots w_{n,1})\cdots\cdots(w_{2,t}\cdots w_{h-1,t}w_{h,t}w_{h+1,t}\cdots w_{n,t}).$$ We claim, for all $t<k$, that $$\label{*}
s_{\sigma_{h-1,1}+\sum_{j=1}^{k-1}m_{h,j}+x}\cdot\Pi_t=\Pi_t\cdot s_{\sigma_{h-1,t+1}+\sum_{j=t+1}^{k-1}m_{h,j}+x}.$$ Thus, taking $t=k-1$ gives the assertion (3).
We prove \eqref{*} by induction on $t$. If $t=1$, then $x>0$ implies $$l=\sigma_{h-1,1}+\sum_{j=1}^{k-1}m_{h,j}+x>\sigma_{h-1,1}+m_{h,1}.$$
s_l (w_{2,1}\cdots w_{h,1})=(w_{2,1}\cdots w_{h,1})s_l.$$
Now we consider $s_l (w_{h+1,1}\cdots w_{n,1})$. Assume $w_{h+1,1}\neq 1$ (and so $m_{h+1,1}>0$). Since $k>1$ and $\widetilde m_{h,1}+1\leq l=\sigma_{h-1,1}+\sum_{j=1}^{k-1}m_{h,j}+x<\sigma_{h-1,1}+\la_h= \sigma_{h,1}$, by (2), $s_l w_{h+1,1}=w_{h+1,1}s_{l+m_{h+1,1}}$ and, by an inductive argument as above, $$\label{co1}
\begin{aligned}
s_l w_{h+1,1}w_{h+2,1}\cdots w_{n,1}
=w_{h+1,1}w_{h+2,1}\cdots w_{n,1}s_{l+\sum_{i=h+1}^nm_{i,1}}.
\end{aligned}$$ But $l+\sum_{i=h+1}^nm_{i,1}=\sigma_{h-1,2}+\sum_{j=2}^{k-1}m_{h,j}+x$. This proves \eqref{*} for $t=1$.
Suppose now $t>1$ and \eqref{*} is true for $t-1$. That is, assume $$\begin{aligned}
s_{\sigma_{h-1,1}+\sum_{j=1}^{k-1}m_{h,j}+x}&(w_{2,1}\cdots w_{n,1})\cdots (w_{2,t-1}\cdots w_{n,t-1})\\
=\,&(w_{2,1}\cdots w_{n,1})\cdots (w_{2,t-1}\cdots w_{n,t-1})s_{\sigma_{h-1,t}+\sum_{j=t}^{k-1}m_{h,j}+x}.
\end{aligned}$$
Since $\sigma_{h-1,t}+\sum_{j=t}^{k-1}m_{h,j}+x>\sigma_{h-1,t}+m_{h,t}$ and $$\sigma_{h,t}=\sigma_{h-1,t}+\sum_{j=t}^nm_{h,j}>\sigma_{h-1,t}+\sum_{j=t}^{k-1}m_{h,j}+x\geq\widetilde{m}_{h,t}+1,$$ applying (2) with $l=\sigma_{h-1,t}+\sum_{j=t}^{k-1}m_{h,j}+x$ gives $$\aligned
s_l(w_{2,t}\cdots w_{h,t}w_{h+1,t}\cdots w_{n,t})&=(w_{2,t}\cdots w_{h,t})s_l(w_{h+1,t}\cdots w_{n,t})\\
&=(w_{2,t}\cdots w_{h,t}w_{h+1,t}\cdots w_{n,t})s_{l+\sum_{i=h+1}^nm_{i,t}},
\endaligned$$ where $$l+\sum_{i=h+1}^nm_{i,t}=\sigma_{h-1,t}+\sum_{j=t}^{k-1}m_{h,j}+x+\sum_{i=h+1}^nm_{i,t}=\sigma_{h-1,t+1}+\sum_{j=t+1}^{k-1}m_{h,j}+x.$$ This proves \eqref{*} for $t$ and, hence, (3).
\[mainw\] For $0<x\leq m_{h,k}$, $l=\sigma_{h-1,1}+\sum_{j=1}^{k-1}m_{h,j}$ with $l+x<\sigma_{h,1}$, we have $$\begin{aligned}
&s_{l+x}d_M={\left(\begin{array}{ccccccc}
w_{2,1}&\cdots&w_{2,k-1}&w_{2,k}&w_{2,k+1}&\cdots&w_{2,n-1}\\
\vdots&\vdots&\cdots&\vdots&\vdots&\cdots&\vdots\\
w_{h-1,1}&\cdots&w_{h-1,k-1}&w_{h-1,k}&w_{h-1,k+1}&\cdots&w_{h-1,n-1}\\
w_{h,1}&\cdots&w_{h,k-1}&w^*_{h,k}&w_{h,k+1}&\cdots&w_{h,n-1}\\
w_{h+1,1}&\cdots&w_{h+1,k-1}&w_{h+1,k}&w_{h+1,k+1}&\cdots&w_{h+1,n-1}\\
\vdots&\vdots&\cdots&\vdots&\vdots&\cdots&\vdots\\
w_{n,1}&\cdots&w_{n,k-1}&w_{n,k}&w_{n,k+1}&\cdots&w_{n,n-1}\\
\end{array}\right),}
\end{aligned}$$ where $w^*_{h,k}=s_{\sigma_{h-1,k}+x}w_{h,k}$. In particular, $s_{l+1}s_{l+2}\cdots s_{l+m_{h,k}}d_M$ can be expressed by the same matrix with $w^*_{h,k}=w^+_{h,k}$, the element defined in \eqref{w+ij}.
The next result is the key to establish the decomposition in Theorem \[prod coset\] and the multiplication formulas in Theorem \[KMF\].
\[case1\] Maintain the notation as given in \eqref{notn1} and Theorem \[prod coset\], and let $a=\sum_{j=1}^{k-1}m_{h+1,j}$ and $b=\sum_{j=k+1}^nm_{h,j}$.
- If $m_{h+1,k}\geq 1$ then, for $\lambda^+=\la^{[h^+]}=\lambda+\bse_h-\bse_{h+1}$ and $0\leq p< m_{h+1,k}$, $$\label{xx}
\aligned
s_{\widetilde{\lambda}_h+1}s_{\widetilde{\lambda}_h+2}\cdots s_{\widetilde{\lambda}_h+a+p}d_M&
=s_{{\widetilde\lambda}_h^+-1}s_{{\widetilde\lambda}_h^+-2}\cdots s_{{\widetilde\lambda}_h^+-b}d_{M^+_{h,k}}(s_{\widetilde{m}_{h,k}+1}\cdots s_{\widetilde{m}_{h,k}+p})\\
&=s_{\widetilde{\lambda}_h}s_{\widetilde{\lambda}_h-1}\cdots s_{\widetilde{\lambda}_h-b+1}d_{M^+_{h,k}}(s_{\widetilde{m}_{h,k}+1}\cdots s_{\widetilde{m}_{h,k}+p}).
\endaligned$$
- If $m_{h,k}\geq 1$ then, for $\lambda^-=\la^{[h^-]}=\lambda-\bse_h+\bse_{h+1}$ and $q=m_{h,k}-p$ with $0< p\leq m_{h,k}$ (so $0\leq q<m_{h,k}$), $$\begin{aligned}
s_{\widetilde{\lambda}_h-1}s_{\widetilde{\lambda}_h-2}\cdots s_{\widetilde{\lambda}_{h}-b-q}d_M
&=s_{\widetilde{\lambda}_h^-+1}s_{\widetilde{\lambda}_h^-+2}\cdots s_{\widetilde{\lambda}_h^-+a}d_{M^-_{h,k}}(s_{\widetilde{m}_{h,k}-1}s_{\widetilde{m}_{h,k}-2}\cdots s_{\widetilde{m}_{h,k}-q})\\
&=s_{\widetilde{\lambda}_h}s_{\widetilde{\lambda}_h+1}\cdots s_{\widetilde{\lambda}_h+a-1}d_{M^-_{h,k}}(s_{\widetilde{m}_{h,k}-1}s_{\widetilde{m}_{h,k}-2}\cdots s_{\widetilde{m}_{h,k}-q}).\\
\end{aligned}
$$
Here every product of the $s_i$’s is regarded as 1 if its “length” is 0.
We only prove (1); (2) follows from (1) by a similar argument. We first assume that $p=0$. In this case, we want to prove $$\label{p=0}
s_{\widetilde{\lambda}_h+1}s_{\widetilde{\lambda}_h+2}\cdots s_{\widetilde{\lambda}_h+a}d_M=
s_{\widetilde{\lambda}^+_h-1}s_{\widetilde{\lambda}^+_h-2}\cdots s_{\widetilde{\lambda}^+_h-b}d_{M^+_{h,k}}.$$
Since $a=m_{h+1,1}+\cdots+m_{h+1,k-1}$, repeatedly applying Corollary \[mainw\] (with $h$ replaced by $h+1$, noting $m_{h+1,k}>0$) yields $$\label{sdm}
\begin{aligned}
s_{\widetilde{\lambda}_h+1}s_{\widetilde{\lambda}_h+2}\cdots s_{\widetilde{\lambda}_h+a}d_M={\left(\begin{array}{ccccccc}
w_{2,1}&\cdots&w_{2,k-1}&w_{2,k}&\cdots&w_{2,n-1}\\
\vdots&\vdots&\cdots&\vdots&\cdots&\vdots\\
w_{h,1}&\cdots&w_{h,k-1}&w_{h,k}&\cdots&w_{h,n-1}\\
w^+_{h+1,1}&\cdots&w^+_{h+1,k-1}&w_{h+1,k}&\cdots&w_{h+1,n-1}\\
w_{h+2,1}&\cdots&w_{h+2,k-1}&w_{h+2,k}&\cdots&w_{h+2,n-1}\\
\vdots&\vdots&\cdots&\vdots&\cdots&\vdots\\
w_{n,1}&\cdots&w_{n,k-1}&w_{n,k}&\cdots&w_{n,n-1}\\
\end{array}\right).}
\end{aligned}$$ (Note that, if $k=1$, then $a=0$ and so the left-hand side above equals $d_M$. Note also that $w_{h+1,j}^+=1$ if $m_{h+1,j}=0$.) By comparing this with the “matrix” of $d_{M^+_{h,k}}$, we now show that multiplying $d_{M^+_{h,k}}$ by $s_{\widetilde{\lambda}^+_h-1}s_{\widetilde{\lambda}^+_h-2}\cdots s_{\widetilde{\lambda}^+_h-b}$ on the left will turn the product $w^\bullet_{h,k}w^\circ_{h+1,k}$ into $w_{h,k}w_{h+1,k}$.
If $b=0$, then $\sigma_{h,k}=\sigma_{h-1,k}+m_{h,k}$ and so $w^\bullet_{h,k}w^\circ_{h+1,k}=w_{h,k}w_{h+1,k}$ (cf. Lemma \[d\_M\]). This proves \eqref{p=0} in this case. Assume now $b>0$. Observe that, for $\la^+=\ro(M^+_{h,k})$, $\widetilde{\lambda}^+_h-\sum_{j>k}m_{h,j}=\widetilde{\lambda}_{h-1}+\sum_{j=1}^km_{h,j}+1$. Let $l=\widetilde{\lambda}_{h-1}+\sum_{j=1}^{k-1}m_{h,j}$ and $1\leq x\leq m_{h,k}$. Then $l+x<l+x+m_{h+1,k}\leq\la_{h+1}$. By Lemma \[reflection\](3), $$\label{k-1column}
s_{l+x}\Pi^+_{k-1}=\Pi^+_{k-1}s_{\sigma_{h-1,k}+x},$$ where $\Pi^+_{k-1}$ is the product of the first $k-1$ columns of $d_{M^+_{h,k}}$. By Lemma \[d\_M\] applied to ${M^+_{h,k}}$ and noting \eqref{k-1column}, $$\label{***}
\begin{aligned}
s_{\widetilde{\lambda}^+_h-1}s_{\widetilde{\lambda}^+_h-2}\cdots &s_{\widetilde{\lambda}^+_h-\sum_{j>k}m_{h,j}}d_{M^+_{h,k}}\\
=\;&\Pi^+_{k-1}\cdot s_{\sigma_{h,k}}s_{\sigma_{h,k}-1}\cdots s_{\sigma_{h-1,k}+m_{h,k}+1}\\
&(w_{2,k}\cdots w_{h-1,k}w^\bullet_{h,k}w^\circ_{h+1,k} w_{h+2,k}\cdots w_{n,k})\\
&\cdots\cdots\\
&(w_{2,n-1}\cdots w_{h,n-1}w_{h+1,n-1} w_{h+2,n-1}\cdots w_{n,n-1}).\\
\end{aligned}$$
Since the smallest number permuted by $s_{\sigma_{h,k}}s_{\sigma_{h,k}-1}\cdots s_{\sigma_{h-1,k}+m_{h,k}+1}$ is $\sigma_{h-1,k}+m_{h,k}+1$, while the largest number permuted by $w_{2,k}\cdots w_{h-1,k}w_{h,k}$ is $\sigma_{h-1,k}+m_{h,k}$, it follows that $s_{\sigma_{h,k}}s_{\sigma_{h,k}-1}\cdots s_{\sigma_{h-1,k}+m_{h,k}+1}$ commutes with $w_{2,k}\cdots w_{h-1,k}$ and $w_{h,k}$. Thus, $$\begin{aligned}
&\quad\;s_{\sigma_{h,k}}s_{\sigma_{h,k}-1}\cdots s_{\sigma_{h-1,k}+m_{h,k}+1}w^\bullet_{h,k}w^\circ_{h+1,k}\\
&=w_{h,k}(s_{\sigma_{h,k}}\cdots s_{\sigma_{h-1,k}+m_{h,k}+1})
s_{\sigma_{h-1,k}+m_{h,k}}s_{\sigma_{h-1,k}+m_{h,k}-1}\cdots s_{\widetilde{m}_{h,k}+1}w^\circ_{h+1,k}\\
&=w_{h,k}(s_{\sigma_{h,k}}s_{\sigma_{h,k}-1}\cdots s_{\widetilde{m}_{h,k}+1})w^\circ_{h+1,k}\\
&=w_{h,k}w_{h+1,k}.
\end{aligned}$$ Hence, $s_{\widetilde{\lambda}^+_h-1}s_{\widetilde{\lambda}^+_h-2}\cdots s_{\widetilde{\lambda}^+_h-\sum_{j>k}m_{h,j}}d_{M^+_{h,k}}
=\;$ the left-hand side of \eqref{p=0}, proving the $p=0$ case.
Assume now $p>0$. Then one can easily prove by Corollary \[mainw\] that $$s_{l+1}\cdots s_{l+p}d_M=d_Ms_{\widetilde{m}_{h,k}+1}s_{\widetilde{m}_{h,k}+2}\cdots s_{\widetilde{m}_{h,k}+p}.$$ Now the required formula follows from \eqref{p=0}.
[*Proof of Theorem \[prod coset\].*]{} Set $D^+_h=\diag(\lambda-\bse_{h+1})+E_{h,h+1}$. Then $\ro(D^+_h)=\lambda^{[h^+]}$, $\co(D^+_h)=\lambda$, and $$\nu':=\nu_{D^+_h}=(\lambda_1,\lambda_2,\cdots,\lambda_h,1,\lambda_{h+1}-1,\lambda_{h+2},\cdots,\lambda_n).$$ Note that in this case $d_{D^+_h}=1$. Observe that $$\label{nu'la}
\mathcal{D}_{\nu'}\cap\fS_\lambda=\{1,s_{\widetilde{\lambda}_h+1},s_{\widetilde{\lambda}_h+1}s_{\widetilde{\lambda}_h+2},\cdots,s_{\widetilde{\lambda}_h+1}s_{\widetilde{\lambda}_h+2}\cdots s_{\widetilde{\lambda}_h+\lambda_{h+1}-1}\}.$$ Putting $d_i=s_{\widetilde{\lambda}_h+1}s_{\widetilde{\lambda}_h+2}\cdots s_{\widetilde{\lambda}_h+i}$ for $0\leq i\leq \lambda_{h+1}-1$, the left hand side becomes $\bigcup_i\fS_{\lambda^{[h^+]}} d_id_M\fS_\mu$. Since $\lambda_{h+1}=\sum_{k; m_{h+1,k}\geq1}m_{h+1,k}$, the first decomposition follows from Proposition \[case1\](1). The second decomposition can be proved similarly.
Regular representation of the $q$-Schur superalgebra
====================================================
We now use Proposition \[case1\] to derive certain multiplication formulas in $\sS(m|n,r)$ and the matrix representation of the regular representation. For any integers $0\leq t\leq s$, define Gaussian polynomials in $\sZ=\mathbb Z[\up,\up^{-1}]$ by $$\left[\!\!\left[s\atop t\right]\!\!\right]=\left[\!\!\left[s\atop t\right]\!\!\right]_\bsq=\frac{[\![s]\!]^!}{[\![t]\!]^![\![s-t]\!]^!},$$ where $[\![r]\!]^{!}:=[\![1]\!][\![2]\!]\cdots[\![r]\!]$ with $[\![i]\!]=1+\bsq+\cdots+\bsq^{i-1}$ ($\bsq=\up^2$). Define $[r]^!$ similarly with $[i]=\frac{\up^i-\up^{-i}}{\up-\up^{-1}}$.
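For example, $[\![2]\!]=1+\bsq$, $[\![3]\!]=1+\bsq+\bsq^2$ and $$\left[\!\!\left[4\atop2\right]\!\!\right]=\frac{[\![3]\!][\![4]\!]}{[\![2]\!]}=(1+\bsq+\bsq^2)(1+\bsq^2)=1+\bsq+2\bsq^2+\bsq^3+\bsq^4.$$ Note also that $[\![i]\!]=\up^{i-1}[i]$, so the two kinds of quantum integers differ only by a power of $\up$.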
For $\lambda\in\Lambda(m|n,r)$, let $\mathcal{P}_{W_\lambda}$ denote the [*super*]{} Poincaré polynomial $$\label{superP}
\mathcal{P}_{W_\lambda}=\sum_{w_0\in W_{\la^{(0)}},w_1\in W_{\la^{(1)}}}(\bsq)^{\ell(w_0)}(\bsq^{-1})^{\ell(w_1)}.$$
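For example, for $\la=(2|2)\in\La(1|1,4)$ we have $W_{\la^{(0)}}\cong W_{\la^{(1)}}\cong\fS_2$ and $$\mathcal{P}_{W_\la}=(1+\bsq)(1+\bsq^{-1})=\bsq^{-1}(1+\bsq)^2,$$ while $\mathcal{P}_{W_\la}=1$ whenever every part of $\la$ is at most $1$.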
For $1\leq h\leq m+n$, define $\dq_h,\ddq_h,\up_h$ by $$\begin{cases}
\dq_h=1, &\ddq_h=\bsq, \quad\;\;\up_h=\up, \quad \text{ if }1\leq h\leq m;\\
\dq_h=-\bsq^{-1},&\ddq_h=-1, \quad \up_h=\up^{-1},\text{ if }m<h\leq m+n,
\end{cases}$$ and let $\bsq_h=\up_h^2$. Recall the basis $\{\phi_A\}_{A\in M(m|n,r)}$ given in Lemma \[DR5.8\].
\[KMF\] For any $A=(a_{i,j})\in M(m|n,r)$ and $1\leq h< m+n$, let $D_h^+,D_h^-$ be the matrices defined by the conditions that $D_h^+-E_{h,h+1}, D_h^--E_{h+1,h}$ are diagonal and $\co(D_h^+)=\co(D_h^-)=\ro(A)$, and assume $D_h^+, D_h^-\in M(m|n,r)$. Then the following multiplication formulas hold in $\sS(m|n,r)$: $$\aligned
(1)\quad&\phi_{D^+_h}\phi_A=\sum_{k\in[1,m+n]\atop a_{h+1,k}\geq 1}\dq_{h+1}^{\sum_{j<k}a_{h+1,j}}\ddq_h^{\sum_{j>k}a_{h,j}}[\![a_{h,k}+1]\!]_{\bsq_h}\phi_{A^+_{h,k}};\\
(2)\quad&\phi_{D^-_h}\phi_A=\sum_{k\in[1,m+n]\atop a_{h,k}\geq 1}\dq_h^{\sum_{j>k}a_{h,j}}\ddq_{h+1}^{\sum_{j<k}a_{h+1,j}}[\![a_{h+1,k}+1]\!]_{\bsq_{h+1}}\phi_{A^-_{h,k}}.\endaligned$$ [(Here $[1,m+n]=\{1,2,\ldots,m+n\}$.)]{}
We only prove (1). The proof of (2) is symmetric.
Let $\lambda=\ro(A)$, $\mu=\co(A)$, $d=d_A$ and $W_\nu=\fS^d_\lambda\cap \fS_\mu=W_{\nu^{(0)}}\times W_{\nu^{(1)}}$, where $W_{\nu^{(i)}}=W^d_{\lambda^{(i)}}\cap W_{\mu^{{(i)}}}$ for $i=0,1$. Then $\lambda=\co(D^+_h)$, $\la^{[h^+]}=\ro(D^+_h)=\lambda+\bse_h-\bse_{h+1}$, and $\jmath(\la^{[h^+]},1,\lambda)=D^+_h$.
Putting $W_{\nu'(h)}=W_{\la^{[h^+]}}\cap W_\la$, we see from \eqref{nu'la} that $$\mathcal{D}_{\nu'(h)}\cap W_\la=\{1,s_{\widetilde{\lambda}_h+1},s_{\widetilde{\lambda}_h+1}s_{\widetilde{\lambda}_h+2},
\cdots,s_{\widetilde{\lambda}_h+1}\cdots s_{\widetilde{\lambda}_h+\lambda_{h+1}-1}\}.$$ Since $\mathcal{D}_{\nu'(h)}\cap W_\la\subseteq W_{\la^{(1)}}$ whenever $h\geq m$, the element $T_{\mathcal{D}_{\nu'(h)}\cap W_\la}$ used in \eqref{double coset} can be written as $T_{\mathcal{D}_{\nu'(h)}\cap W_\la}=\sum_{w\in \mathcal{D}_{\nu'(h)}\cap W_\la}(\dq_{h+1})^{\ell(w)}T_w.$
By definition, to compute $\phi_{D^+_h}\phi_A$, it suffices to write $\phi_{D^+_h}\phi_A(\xy_{\mu})$ as a linear combination of some $T_{W_\xi d'W_\mu}$, where $\xi=\la^{[h^+]}$. We compute this within $\sS_{\mathbb Q(\up)}(m|n,r)$: $$\begin{aligned}
\phi_{D^+_h}\phi_A(\xy_{\mu})&=\phi^1_{\xi,\lambda}\phi^d_{\lambda,\mu}(\xy_\mu)=\phi^1_{\xi,\lambda}(T_{W_\lambda d W_{\mu}})\\
&=\phi^1_{\xi,\lambda}(\xy_{\lambda} T_d T_{\mathcal{D_\nu}\cap W_\mu})\mbox{ (by \eqref{double coset})}\\
&=T_{W_\xi W_\lambda}T_d T_{\mathcal{D_\nu}\cap W_\mu}=(\mathcal{P}_{W_\nu})^{-1}T_{W_\xi W_\lambda}T_d \xy_\mu\\
&=(\mathcal{P}_{W_\nu})^{-1}\xy_\xi T_{\mathcal{D}_{\nu'(h)}\cap W_\lambda}T_d\xy_\mu\\
&=(\mathcal{P}_{W_\nu})^{-1}\sum_{w\in \mathcal{D}_{\nu'(h)}\cap W_\lambda}\xy_\xi ({\dq_{h+1}}^{\ell(w)}T_w) T_d \xy_\mu.\\
\end{aligned}$$
Note that $d=d_A\in \mathcal{D}_{\lambda\mu}$. If $a_{h+1,k}>0$ and $w_p:=s_{\widetilde{\lambda}_h+1}s_{\widetilde{\lambda}_h+2}\cdots s_{\widetilde{\lambda}_h+\sum_{j=1}^{k-1}a_{h+1,j}+p}$ for some $0\leq p<a_{h+1,k}$, then by Proposition \[case1\](1), we have $$w_pd=s_{{\widetilde\lambda}_h}s_{{\widetilde\lambda}_h-1}\cdots s_{{\widetilde\lambda}_h-\sum_{j=k+1}^{m+n}a_{h,j}+1}d^+(s_{\widetilde{a}_{h,k}+1}\cdots s_{\widetilde{a}_{h,k}+p}),$$ where $d^+=d_{A^+_{h,k}}$. Clearly, $\sum_{j<k}a_{h+1,j}=\ell(w_p)-p$. If we put $Q_{h+1,k}=\dq_{h+1}^{\sum_{j<k}a_{h+1,j}}$, then $$\aligned
\sum_{p=0}^{a_{h+1,k}-1}&{\dq_{h+1}}^{\ell(w_p)}T_{w_p} T_d=Q_{h+1,k}
T_{\widetilde{\lambda}_h}T_{\widetilde{\lambda}_h-1}\cdots T_{\widetilde{\lambda}_h-\sum_{j>k}a_{h,j}+1}T_{d^+}\\
&\cdot(1+\dq_{h+1}T_{\widetilde{a}_{h,k}+1}+\cdots+\dq_{h+1}^{a_{h+1,k}-1}T_{\widetilde{a}_{h,k}+1}\cdots T_{\widetilde{a}_{h,k}+a_{h+1,k}-1}).
\endaligned$$ Thus, $$\begin{aligned}
&\sum_{w\in \mathcal{D}_{\nu'}\cap W_\lambda}\xy_\xi (\dq_{h+1}^{\ell(w)}T_w T_d) \xy_\mu\\
=&\sum_{k\in[1,m+n]\atop a_{h+1,k}\geq 1} Q_{h+1,k} \xy_{\xi}
T_{\widetilde{\lambda}_h}T_{\widetilde{\lambda}_h-1}\cdots T_{\widetilde{\lambda}_h-\sum_{j>k}a_{h,j}+1}T_{d^+}\\
&\cdot(1+(\dq_{h+1})T_{\widetilde{a}_{h,k}+1}+\cdots+(\dq_{h+1})^{a_{h+1,k}-1}T_{\widetilde{a}_{h,k}+1}\cdots T_{\widetilde{a}_{h,k}+a_{h+1,k}-1})\xy_\mu.
\end{aligned}$$ Since $$\begin{aligned}
&(1+(\dq_{h+1})T_{\widetilde{a}_{h,k}+1}+\cdots+(\dq_{h+1})^{a_{h+1,k}-1}T_{\widetilde{a}_{h,k}+1}\cdots T_{\widetilde{a}_{h,k}+a_{h+1,k}-1})\xy_\mu\\
&=(1+\dq_{h+1}\ddq_k+\cdots+(\dq_{h+1}\ddq_k)^{a_{h+1,k}-1})\xy_\mu\\
&=[\![a_{h+1,k}]\!]_{\dq_{h+1}\ddq_k}\xy_\mu
\end{aligned}$$ and $$\xy_\xi T_{\widetilde{\lambda}_h}T_{\widetilde{\lambda}_h-1}\cdots T_{\widetilde{\lambda}_h-\sum_{j>k}a_{h,j}+1}
=\ddq_h^{\sum_{j>k}a_{h,j}}\xy_\xi,$$ it follows that $$\begin{aligned}
\phi_{D^+_h}\phi_A(\xy_\mu)&=\mathcal{P}_{W_\nu}^{-1}\sum_{a_{h+1,k}\geq1}Q_{h+1,k}\ddq_h^{\sum_{j>k}a_{h,j}}
[\![a_{h+1,k}]\!]_{\dq_{h+1}\ddq_k}\xy_\xi T_{d^+}\xy_\mu\\
&=\sum_{a_{h+1,k}\geq1}\frac{\mathcal{P}_{W_{\nu''}}}{\mathcal{P}_{W_\nu}}Q_{h+1,k}
\ddq_h^{\sum_{j>k}a_{h,j}}[\![a_{h+1,k}]\!]_{\dq_{h+1}\ddq_k}T_{W_\xi d^+W_\mu}\\
&=\sum_{a_{h+1,k}\geq1}\frac{\mathcal{P}_{W_{\nu''}}}{\mathcal{P}_{W_\nu}}Q_{h+1,k}
\ddq_h^{\sum_{j>k}a_{h,j}}[\![a_{h+1,k}]\!]_{\dq_{h+1}\ddq_k}
\phi_{A^+_{h,k}}(\xy_{\mu}),
\end{aligned}$$ where $\nu''=\nu_M$ with $M=A^+_{h,k}$ or $W_{\nu''}=W_\xi^{d^+}\cap W_\mu$. Hence, noting $$\frac{\mathcal{P}_{W_{\nu''}}}{\mathcal{P}_{W_\nu}}=\frac{[\![a_{h,k}+1]\!]_{\bsq_k}^![\![a_{h+1,k}-1]\!]_{\bsq_k}^!}
{[\![a_{h,k}]\!]_{\bsq_k}^![\![a_{h+1,k}]\!]_{\bsq_k}^!}=\frac{[\![a_{h,k}+1]\!]_{\bsq_k}}{[\![a_{h+1,k}]\!]_{{\bsq_k}}},$$ we obtain $$\begin{aligned}
\phi_{D^+_h}\phi_A=\sum_{k\atop a_{h+1,k}\geq1}\dq_{h+1}^{\sum_{j<k}a_{h+1,j}}\ddq_h^{\sum_{j>k}a_{h,j}} \frac{[\![a_{h,k}+1]\!]_{\bsq_k} [\![a_{h+1,k}]\!]_{\dq_{h+1}\ddq_k}}{[\![a_{h+1,k}]\!]_{{\bsq_k}}} \phi_{A^+_{h,k}}.
\end{aligned}$$ It remains to prove that $$\label{str const}
\frac{[\![a_{h,k}+1]\!]_{\bsq_k} [\![a_{h+1,k}]\!]_{\dq_{h+1}\ddq_k}}{[\![a_{h+1,k}]\!]_{{\bsq_k}}}=[\![a_{h,k}+1]\!]_{\bsq_h}.$$ This can be seen in cases. For example, if $h< m$ and $k\leq m$ (resp., $h>m$ and $k>m$), then $\dq_{h+1}=1$, $\ddq_k=\bsq$ (resp., $\dq_{h+1}=-\bsq^{-1}$, $\ddq_k=-1$), and so $\bsq_k=\bsq_h$ (resp., $\dq_{h+1}\ddq_k=\bsq_h$). Hence, $$\frac{[\![a_{h,k}+1]\!]_{\bsq_k} [\![a_{h+1,k}]\!]_{\dq_{h+1}\ddq_k}}{[\![a_{h+1,k}]\!]_{{\bsq_k}}}=[\![a_{h,k}+1]\!]_{\bsq_h}.$$ When $h\leq m$ and $k>m$, or $h>m$ and $k\leq m$, we must have $a_{h,k}+1=a_{h+1,k}=1$. Thus, $ [\![a_{h,k}+1]\!]_{\bsq_k}= [\![a_{h,k}+1]\!]_{\dq_{h+1}\ddq_k}=[\![a_{h+1,k}]\!]_{{\bsq_k}}=1= [\![a_{h,k}+1]\!]_{\bsq_h}.$ Finally, when $h=m$ and $k\leq m$, we have $\bsq_h=\bsq_k$ and $\dq_{h+1}\ddq_k=-\bsq^{-1}\bsq=-1$. But $a_{h+1,k}=a_{m+1,k}=1$, forcing $ [\![a_{h+1,k}]\!]_{\dq_{h+1}\ddq_k}=[\![a_{h+1,k}]\!]_{{\bsq_k}}=1$. Hence, $$\frac{[\![a_{h,k}+1]\!]_{\bsq_k} [\![a_{h+1,k}]\!]_{\dq_{h+1}\ddq_k}}{[\![a_{h+1,k}]\!]_{{\bsq_k}}}=[\![a_{h,k}+1]\!]_{\bsq_h},$$ proving and, hence, formula (1).
If $n=0$, then $\sS(m|0,r)$ is the usual $\bsq$-Schur algebra which is defined in [@BLM] as a convolution algebra of the $m$-step flags of an $r$-dimensional space. Similar multiplication formulas are obtained in loc. cit. by counting intersections of certain orbits. Observe that, for $h<m$, $$\dq_{h+1}^{\sum_{j<k}a_{h+1,j}}\ddq_h^{\sum_{j>k}a_{h,j}}=\bsq^{\sum_{j>k}a_{h,j}},\qquad\dq_h^{\sum_{j>k}a_{h,j}}\ddq_{h+1}^{\sum_{j<k}a_{h+1,j}}=\bsq^{\sum_{j<k}a_{h+1,j}}.$$
\[KMFcor\] The multiplication formulas in Theorem \[KMF\] for $\mathcal S(m|0,r)$ coincide with the ones in [@BLM Lemma 3.4].
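To illustrate the formulas in this classical case, take $m=2$, $n=0$, $r=2$, $h=1$ and $A=\left(\begin{smallmatrix}0&1\\1&0\end{smallmatrix}\right)$, so that $D^+_1=A^+_{1,1}=\left(\begin{smallmatrix}1&1\\0&0\end{smallmatrix}\right)$. Theorem \[KMF\](1) has a single term (for $k=1$) and gives $$\phi_{D^+_1}\phi_A=\dq_{2}^{0}\,\ddq_1^{a_{1,2}}[\![a_{1,1}+1]\!]_{\bsq_1}\phi_{A^+_{1,1}}=\bsq\,\phi_{D^+_1},$$ which can be checked directly in the Hecke algebra: $\phi_A(\xy_{(1,1)})=T_{s_1}$, $\phi_{D^+_1}(\xy_{(1,1)})=1+T_{s_1}$ and $(1+T_{s_1})T_{s_1}=\bsq(1+T_{s_1})$.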
We now make a comparison of these new formulas with ones given in [@DG Lemma 3.1], derived through the relative norm method.
The $\sH$-module $\fT(m|n,r)$ is isomorphic to the tensor superspace $V(m|n)^{\otimes r}$ (over $\sZ$!) with an $\sH$-action defined in [@DG (1.0.10)]; see [@DR Proposition 8.3]. In fact, the endomorphism algebra of $V(m|n)^{\otimes r}$ has a relative norm basis $\{N_A\}_{A\in M(m|n,r)}$ acting on the right. Matrix transposing may turn the right action to a left action and result in a basis denoted by $\{\zeta_A\}_{A\in M(m|n,r)}$. The $\sH$-module isomorphism induces an algebra isomorphism (cf. [@DR Corollary 8.4] and [@DG2 Lemma 2.3]) $$\End_{\sH}(V(m|n)^{\otimes r})^{\text{\rm op}}\longrightarrow \sS(m|n,r),\zeta_A\longmapsto (-1)^{\widehat A}\phi_A,$$ where $\widehat{A}=\sum_{m<k<i\leq m+n,\\1\leq j<l\leq m+n}a_{i,j}a_{k,l}.$
\[coincide\]Let $$f^+_{h,k}(\bsq,A)=\dq_{h+1}^{\sum_{j<k}a_{h+1,j}}\ddq_h^{\sum_{j>k}a_{h,j}}, \quad f^-_{h,k}(\bsq,A)=\dq_h^{\sum_{j>k}a_{h,j}}\ddq_{h+1}^{\sum_{j<k}a_{h+1,j}}.$$ Then $$(-1)^{\widehat{D}^+_h+\widehat A+\widehat{A}^+_{h,k}} f^+_{h,k}(\bsq,A)=f_k(\bsq,A,h)\text{ and
}(-1)^{\widehat{D}^-_h+\widehat A+\widehat{A}^-_{h,k}} f^-_{h,k}(\bsq,A)=g_k(\bsq,A,h),$$ where $f_k(\bsq,A,h)$ and $g_k(\bsq,A,h)$ are defined in [@DG (3.0.1-2)]. In particular, rewriting the multiplication formulas in Theorem \[KMF\] in terms of the $\zeta$-basis results in the formulas in [@DG Lemma 3.1].
We have $$\label{Jie}
f^+_{h,k}(\bsq,A)=
\begin{cases}
\bsq^{\sum_{j>k}a_{h,j}}, &\text{ if }h<m;\\
(-1)^{\sum_{j<k}a_{m+1,j}}\bsq^{-\sum_{j<k}a_{m+1,j}+\sum_{j>k}a_{m,j}},&\text{ if }h=m;\\
(-1)^{\sum_{j<k}a_{h+1,j}+\sum_{j>k}a_{h,j}}\bsq^{-\sum_{j<k}a_{h+1,j}},&\text{ if }h>m.\\
\end{cases}$$ On the other hand (cf. [@DG Lemma 5.1]), for the choice of $+$ or $-$, $$\widehat{D}^\pm_{h}+\widehat{A}+\widehat{A}^\pm_{h,k}=\left\{
\begin{aligned}
&2\widehat{A} &\mbox{ if } h<m;\\
&\mp\sum_{i>m+1,j<k}a_{i,j}+2\widehat{A} &\mbox{ if } h=m;\\
&\mp\sum_{j>k}a_{h,j}\pm\sum_{j<k}a_{h+1,j}+2\widehat{A} &\mbox{ if } h>m.
\end{aligned}
\right.$$ Adjusting the right-hand side of \eqref{Jie} by the corresponding sign for the “$+$” case gives $f_k(\bsq,A,h)$. The “$-$” case is similar.
Theorem \[KMF\] and Corollary \[coincide\] give a new method to derive the key fundamental multiplication formulas given in [@DG Lemma 3.1].
By introducing the normalised basis $\{[A]\}_{A\in M(m|n,r)}$, where[^3] $$[A]=(-1)^{\widehat{A}}\up^{-d(A)}\phi_A\;\text{ with }\; d(A)=\sum_{i>k,j<l}a_{i,j}a_{k,l}+\sum_{j<l}(-1)^{\widehat{i}}a_{i,j}a_{i,l},$$ we may modify the formulas given in Theorem \[KMF\] to obtain further multiplication formulas for the $[\;\;]$-basis; cf. (the $p=1$ case of) [@DG Propositions 4.4&4.5].
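For example, for $m=n=1$ and $A=\left(\begin{smallmatrix}0&1\\1&0\end{smallmatrix}\right)\in M(1|1,2)$, the sum defining $\widehat A$ is empty and $d(A)=a_{2,1}a_{1,2}=1$, so $[A]=\up^{-1}\phi_A$; for a diagonal matrix both sums vanish, so $[\diag(\la)]=\phi_{\diag(\la)}$ acts as the identity on the summand $\xy_\la\sH$ and as zero on the other summands.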
\[KyMF\] Maintain the notation above and let $\epsilon_{h,k}=0$ for $h\neq m$, and $\epsilon_{m,k}=
\sum_{i>m,j<k}a_{i,j}$. The following multiplication formulas hold in $\sS_R(m|n,r)$:
- $[D^+_h][A]=\displaystyle\sum_{k\in[1,m+n] \atop a_{h+1,k}\geq 1}(-1)^{\epsilon_{h,k}}\up_h^{f^+_{h,k}}\overline{[\![a_{h,k}+1]\!]}_{\up_h^2}[A^+_{h,k}]$,\
where $f^+_{h,k}=\sum_{j\geq k}a_{h,j}-(-1)^{\widehat h+\widehat{h+1}}\sum_{j>k}a_{h+1,j}$;
- $[D^-_h][A]=\displaystyle\sum_{k\in[1,m+n]\atop a_{h,k}\geq 1}(-1)^{\epsilon_{h,k}}\up_{h+1}^{f^-_{h,k}}\overline{[\![a_{h+1,k}+1]\!]}_{\up_{h+1}^2}[A^-_{h,k}],$\
where $f^-_{h,k}=\sum_{j\leq k}a_{h+1,j}-(-1)^{\widehat h+\widehat{h+1}}\sum_{j<k}a_{h,j}$.
The first important application of the multiplication formulas above is a new realisation of the quantum supergroup ${\bf U}_\up(\mathfrak{gl}_{m|n})$; see the argument from [@DG §5] onwards and, in particular, see [@DG Definition 6.1, Theorem 8.4].
We now seek further applications of these multiplication formulas.
We will show below that the formulas provide enough information for the regular representation of the integral $q$-Schur superalgebra $\sS_R(m|n,r)$. We then use such a representation to determine the semisimplicity of $q$-Schur superalgebras and to construct infinitesimal and little ones without involving the quantum supergroup or quantum coordinate superalgebra.
We return to the general setting for $\sS_R(m|n,r)$ defined relative to a commutative ring $R$ and an invertible parameter $\ups\in R$ with $q=\ups^2$. By base change via $\sZ\to R$, $\up\mapsto\ups$, we may turn the multiplication formulas in $\sS(m|n,r)$ into similar formulas in $\sS_R(m|n,r)$. In fact, these formulas can be interpreted as the matrix representation of certain generators for $\sS_R(m|n,r)$ relative to the basis $\{[A]\}_{A\in M(m|n,r)}$.
Let $$M(m|n)^\pm=\{A=(a_{i,j})\in M(m|n)\mid a_{i,i}=0,
1\leq i\leq m+n\}.$$ For $A\in M(m|n)^{\pm}$ and $\bsj=(j_1,j_2,\cdots,j_{m+n})\in\mathbb{Z}^{m+n}$, define $$\label{Ajr}
A(\bsj,r)=\begin{cases}\sum_{\substack{\lambda\in\Lambda(m|n,r-|A|)}}(-1)^{\overline{A+\lambda}}\ups^{\lambda*\bsj}[A+\lambda],&\text{ if }|A|\leq r;\\
0,&\text{ otherwise,}\end{cases}$$ where $\lambda*\bsj=\sum_{i=1}^{m+n}(-1)^{\widehat{i}}\lambda_ij_i$ is the super (or signed) “dot product”, $A+\la=A+\diag(\la)$ and $\overline{M}=\sum_{\substack{m+n\geq i> m\geq k\geq1
\\m<j<l\leq m+n}}m_{i,j}m_{k,l}$ for a matrix $M$. We also let $1_\la=[\diag(\la)]$ for all $\la\in\La(m|n,r)$, the identity map on $\xy_\la\sH_R$. Then $1_\la[A]=\delta_{\la,\ro(A)}[A].$ For the zero matrix $O$, $\bse_i\in\La(m|n,1)$ and $p\geq1$, set $$\sck_i=O(\bse_i,r),\quad \sce_h^{(p)}=(pE_{h,h+1})(\mathbf0,r),\quad \scf_h^{(p)}=(pE_{h+1,h})(\mathbf0,r).$$ Note that $\sck_i=\sum_{\la\in\La(m|n,r)}\ups^{(-1)^{\widehat i}\la_i}1_\la$ and $\sce_m^2=0=\scf_m^2$.
Let $\sS_R^-$, $\sS_R^+$ be the subsuperalgebras of $\sS_R(m|n,r)$ generated respectively by $\scf_h^{(p)}$, $\sce_h^{(p)}$ for all $1\leq h<m+n$, $p\geq1$, and let $\sS_R^0$ be the subsuperalgebra spanned by all $1_\la$.
The first assertion of the following is [@DG Corollary 8.5].
\[KeyMF\] The $q$-Schur superalgebra $\sS_R=\sS_R(m|n,r)$ is generated by $\sck_i,$ $1_\la,$ $\sce_h^{(p)},\scf_h^{(p)}$ for all $1\leq h,i\leq m+n, h\not=m+n,$ $\la\in\La(m|n,r)$, $1\leq p\leq r$, and $\sS_R=\sS_R^+\sS_R^0\sS_R^-$. These generators have the following matrix representations relative to the basis $\{[A]\}_{A\in M(m|n,r)}$:
- $\sck_i[A]=\ups^{(-1)^{\widehat i}\ro(A)_i}[A]$, $1_\la[A]=\delta_{\la,\ro(A)}[A]$;
- $\sce_h^{(p)}[A]=\displaystyle\sum_{\substack{\nu\in\Lambda(m|n,p)\\\nu\leq
\row_{h+1}(A)}}\ups_h^{f^+_h(\nu,A)}\prod_{k=1}^{m+n}\overline{\left[\!\!\left[a_{h,k}+\nu_k\atop\nu_k\right]\!\!\right]}_{\ups_h^2}
[A+\sum_l\nu_l(E_{h,l}-E_{h+1,l})],$
- where $h\neq m$, $f^+_h(\nu,A)=\sum_{j\geq
t}a_{h,j}\nu_t-\sum_{j>t}a_{h+1,j}\nu_t+\sum_{t<t'}\nu_t\nu_{t'}$ and $\nu\leq\nu'$ means that $\nu_i\leq\nu_i'$ for all $i$;
- $\scf_h^{(p)}[A]=\displaystyle\sum_{\substack{\nu\in\Lambda(m|n,p)\\\nu\leq
\row_{h}(A)}}\ups_{h+1}^{f^-_h(\nu,A)}\prod_{k=1}^{m+n}\overline{\left[\!\!\left[a_{h+1,k}+\nu_k\atop\nu_k\right]\!\!\right]}_{\ups_{h+1}^2}
[A-\sum_l\nu_l(E_{h,l}-E_{h+1,l})],$
- where $h\neq m$ and $ f^-_h(\nu,A)=\sum_{j\leq
t}a_{h+1,j}\nu_t-\sum_{j<t}a_{h,j}\nu_t+\sum_{t<t'}\nu_t\nu_{t'}$;
- $\sce_m[A]=\displaystyle\sum_{k\atop a_{m+1,k}\geq 1}(-1)^{\sum_{i>m,j<k}a_{i,j}}\ups_m^{f^+_{m,k}(A)}\overline{[\![a_{m,k}+1]\!]}_{\ups_m^2}[A^+_{m,k}],$
- where $f^+_{m,k}(A)=\sum_{j\geq k}a_{m,j}+\sum_{j>k}a_{m+1,j}$;
- $\scf_m[A]=\displaystyle\sum_{k\atop a_{m,k}\geq 1}(-1)^{\sum_{i>m,j<k}a_{i,j}}\ups_{m+1}^{f^-_{m,k}(A)}\overline{[\![a_{m+1,k}+1]\!]}_{\ups_{m+1}^2}[A^-_{m,k}],$
- where $f^-_{m,k}(A)=\sum_{j\leq k}a_{m+1,j}+\sum_{j<k}a_{m,j}$.
The first assertion follows from [@DG Corollary 8.5] (cf. [@DG Theorem 6.3]). Now the relations in (0) are clear. Since $\sce_h^{(p)}[A]=\sce_h^{(p)}1_{\ro(A)}[A]$, $\scf_h^{(p)}[A]=\scf_h^{(p)}1_{\ro(A)}[A]$, and $\sce_h^{(p)}1_{\ro(A)}=(-1)^{\overline{D^+_{h,p}}}[D^+_{h,p}]$, $\scf_h^{(p)}1_{\ro(A)}=(-1)^{\overline{D^-_{h,p}}}[D^-_{h,p}]$, where the matrices $D^\pm_{h,p}\in M(m|n,r)$ are defined by the conditions that $\co(D^\pm_{h,p})=\ro(A)$ and $D^+_{h,p}-pE_{h,h+1}$, $D^-_{h,p}-pE_{h+1,h}$ are diagonal, (1) and (2) follow from [@DG Proposition 4.4][^4] and [@DG Lemma 5.1(1)], which tells us that $\overline{D^\pm_{h,p}}=0$. The remaining (3) and (4) follow from the $h=m$ case of Corollary \[KyMF\]; see [@DG Proposition 4.5].
Note that we have in $\sS_F(m|n,r)$ $$\label{[E,F]}
\sce_h\scf_k-(-1)^{\widehat h\widehat k}\scf_k\sce_h=\delta_{h,k}\frac{\sck_h\sck_{h+1}^{-1}-\sck_h^{-1}\sck_{h+1}}{\ups_h-\ups_h^{-1}}.$$
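For example, when $m=n=1$ and $h=k=1$, Theorem \[KeyMF\](0) shows that $\sck_1\sck_2^{-1}$ acts on every basis element $[A]$, $A\in M(1|1,r)$, by the scalar $\ups^{\ro(A)_1+\ro(A)_2}=\ups^r$, so the right-hand side of \eqref{[E,F]} acts as multiplication by $\frac{\ups^r-\ups^{-r}}{\ups-\ups^{-1}}$; this scalar reappears in the proof of Lemma \[ss1\] below.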
Semisimple $q$-Schur superalgebras
==================================
The most striking application of the multiplication formulas is the realisation of quantum $\mathfrak{gl}_n$ [@BLM] and of quantum super $\mathfrak{gl}_{m|n}$ [@DG]. We now use these formulas to construct certain modules from which we obtain a semisimplicity criterion for $q$-Schur superalgebras. [*From now on, let $F$ be a field of characteristic $\neq2$ and assume that $\ups\in F^\times$ and $q=\ups^2\neq1$.*]{} Since every simple $\sS_F(m|n,r)$-supermodule is also a simple $\sS_F(m|n,r)$-module (see e.g., [@DGW2 Proposition 4.1]), we will drop the prefix “super” in the sequel for simplicity.
We first determine the semisimplicity for $\sS_F(1|1,r)$ (see [@mz] for the $q=1$ case).
\[ss1\]Assume that $q\neq1$ is a primitive $l$-th root of unity.
- If $l \nmid r$ then $\sS_F(1|1, r)$ is semisimple and has exactly $r$ nonisomorphic irreducible modules, which are all two-dimensional.
- If $l \mid r$ then $\sS_F(1|1, r)$ is not semisimple and has exactly $r+1$ nonisomorphic irreducible modules, which are all one-dimensional.
Let $\sS_F=\sS_F(1|1,r)$. We first observe that $$M(1|1,r)=\{A_a,A_b^+,A_c^-,A_d^\pm\mid a\in[0, r], b,c\in[0,r-1],d\in[0,r-2]\},$$ where $A_a,A_b^+,A_c^-,A_d^\pm$ denote respectively the following matrices $$\begin{pmatrix}
a & 0 \\
0 & r-a
\end{pmatrix},\;
\begin{pmatrix}
b& 1 \\
0 & r-b-1
\end{pmatrix},\;
\begin{pmatrix}
c & 0 \\
1 & r-c-1
\end{pmatrix},\;
\begin{pmatrix}
d & 1 \\
1 & r-d-2
\end{pmatrix}.$$ Note that $1_a:=1_{(a,r-a)}=[A_a]$ and $\sum_{a=0}^r1_a$ is the identity element. So $$\sS_F=\bigoplus_{a=0}^{r}\sS_F1_{a}\quad\text{and}\quad\dim\sS_F=4r.$$ Since $\sS_F1_a$ is spanned by $[A]$ with $\co(A)=(a,r-a)$, it follows that $$\aligned
\sS_F1_0&=\text{span}\{1_0,[A_0^+]\}, \quad\sS_F1_r=\text{span}\{1_r,[A_{r-1}^-]\},\\
\sS_F1_a&=\text{span}\{1_a,[A_a^+],[A_{a-1}^-],[A_{a-1}^\pm]\}, \forall a\in[1,r-1].
\endaligned$$
By Theorem \[KeyMF\](3)&(4), we have $$\aligned
&\sce_1[A_0^+]=0,\;\scf_1[A_0^+]=\ups^{-(r-1)}[\![r]\!]_{q}1_0,\;\sce_11_0=[A_0^+],\;\scf_11_0=0,\;\\
&\scf_1[A_{r-1}^-]=0,\;\sce_1[A^-_{r-1}]=\ups^{r-1}[\![r]\!]_{q^{-1}}1_r, \;\sce_11_r=0,\; \scf_11_r=[A_{r-1}^-].\\
\endaligned$$ If $l\nmid r$, then $\ups^{-(r-1)}[\![r]\!]_{q}=\ups^{r-1}[\![r]\!]_{q^{-1}}\neq0$ in $F$, and we see easily that $L(1):=\sS_F1_0$ is irreducible. Similarly, $L(r):=\sS_F1_r$ is irreducible if $l\nmid r$.
If $l\mid r$, then $L(1)$ is indecomposable and $[A_0^+]$ spans a submodule $\overline L(1)$ of $L(1)$. Let $\overline{L}(0)=L(1)/\overline{L}(1)$. Similarly, $[A_{r-1}^-]$ spans a submodule $\overline{L}(r-1)$. Let $\overline{L}(r)=L(r)/\overline{L}(r-1)$.
For $a\in[1,r-1]$, applying Theorem \[KeyMF\] again yields $$\label{pence}
\aligned
(1)\quad&\sce_1[A_a^+]=0,\; \scf_1[A_{a}^+]=\ups^{-(r-1)}[\![r-a]\!]_q1_a+[A_{a-1}^\pm],\\
(2)\quad&\scf_1[A_{a-1}^-]=0,\; \sce_1[A_{a-1}^-]=\ups^{r-1}[\![a]\!]_{q^{-1}}1_a-[A_{a-1}^\pm],\\
(3)\quad&\sce_1[A_{a-1}^\pm]=\ups^{r-1}[\![a]\!]_{q^{-1}}[A_a^+],\;\sce_11_a=[A_a^+],\\
(4)\quad&\scf_1[A_{a-1}^\pm]=-\ups^{-(r-1)}[\![r-a]\!]_q[A_{a-1}^-],\;\scf_11_a=[A_{a-1}^-].
\endaligned$$ Let $$L(a+1)=\text{span}\{[A_a^+], \scf_1[A_{a}^+]\}\text{ and }L(a)=\text{span}\{[A_{a-1}^-], \sce_1[A_{a-1}^-]\}.$$ If $l\nmid r$, we claim that $\sS_F1_a=L(a+1)\oplus L(a)$ is a direct sum of irreducible submodules. Indeed, $[\![a]\!]_{q^{-1}}$ and $[\![r-a]\!]_q$ cannot both be zero in this case. So $L(a+1)\cap L(a)=0$, forcing $\sS_F1_a=L(a+1)\oplus L(a)$ as vector spaces. Since, by \eqref{[E,F]}, $$\label{trump}
\sce_1\scf_1[A_{a}^+]=(\sce_1\scf_1+\scf_1\sce_1)[A_{a}^+]=\frac{\sck_1\sck_2^{-1}-\sck_1^{-1}\sck_2}{\ups-\ups^{-1}}
[A_{a}^+]=\frac{\ups^r-\ups^{-r}}{\ups-\ups^{-1}}
[A_{a}^+],$$ and $\frac{\ups^r-\ups^{-r}}{\ups-\ups^{-1}}\neq0$, every nonzero element in $L(a+1)$ generates $L(a+1)$. Hence, $L(a+1)$ is an irreducible submodule. Likewise, $L(a)$ is a submodule. This proves that $\sS_F1_a$ is semisimple for all $a\in[1,r-1]$. Hence, $\sS_F$ is semisimple.
Assume now $l\mid r$. Then, by \eqref{trump}, $\sce_1(\scf_1[A_{a}^+])=0$. On the other hand, $\scf_1^2=0$ implies $\scf_1(\scf_1[A_{a}^+])=0.$ Thus, $\scf_1[A_{a}^+]$ spans a submodule $\overline{L}(a)$ of $L(a+1)$. Similarly, $\sce_1[A_{a-1}^-]$ spans a submodule $\overline{L}(a)'(\cong \overline{L}(a))$ of $L(a)$. Moreover, (cf. [@mz Theorem 1]) $$\overline{L}(a+1)\cong L(a+1)/\overline{L}(a),\qquad
\overline{L}(a-1)\cong L(a)/\overline{L}(a)'.$$ Hence, $\overline{L}(a), 0\leq a\leq r,$ form a complete set of all irreducible $\sS_F$-modules.
The classification of irreducible modules for $\sS_F(1|1,r)$ in the semisimple case is consistent with a classification given in [@DR Theorem 7.5].
\[ss2\]With the same assumption on $l$ as in Lemma \[ss1\], the superalgebras $\sS_F(2|1, r)$ and $\sS_F(1|2, r)$ are not semisimple for all $r\geq l.$
By Lemma \[DR5.8\], it suffices to consider $\sS_F=\sS_F(2|1,r)$. Let $e=1_{(r,0,0)}$. Then, for $P=\sS_Fe$, $\End_{\sS_F}(P)\cong F$ and so $P$ is an indecomposable $\sS_F$-module. We now show the existence of a proper submodule of $P$ if $r\geq l$. Observe that $P$ is spanned by all $[A]$ with $\co(A)=(r,0,0)$. Such $A$ will be written as $A_{a,b,c}$ where $(a,b,c)^t$ is the first column of $A$. We have two cases to consider.
[**Case 1.**]{} If $r=al+b$ with $0\leq b\leq l-2$ (i.e., $l\nmid r+1$), then $b+1<l$ and $\scf_1^{(b+1)}e=[A_{al-1,b+1,0}]\in P$. We now claim that $[A_{al-1,b+1,0}]$ is a maximal vector in the sense that $\sce_h^{(p)}[A_{al-1,b+1,0}]=0$ for all $h=1,2$ and $p\geq1$. This is clear if $h=2$ since all $a_{h+1,k}=a_{3,k}=0$. Also, by Theorem \[KeyMF\](1), we have $\sce_1^{(p)}[A_{al-1,b+1,0}]=0$ for $p>b+1$ and, for $p\leq b+1<l$, $$\sce_1^{(p)}[A_{al-1,b+1,0}]=\frac{\sce_1^{p-1}}{[p]_\ups^!}\sce_1[A_{al-1,b+1,0}]=\frac{\ups^{al-1}[\![al]\!]_{q^{-1}}}{[p]_\ups^!}\sce_1^{p-1}[A_{al,b,0}]=0.$$ By the claim, we see that $P':=\sS_F[A_{al-1,b+1,0}]=\sS_F^-[A_{al-1,b+1,0}]$ is a proper submodule of $P$ since $e\not\in P'$.
[**Case 2.**]{} If $r=al-1$ (and so $a\geq2$), then by Theorem \[KeyMF\], $\scf_2(\scf_1^{(l)}e)=\scf_2[A_{r-l,l,0}]=[A_{r-l,l-1,1}]\in P.$ Now, since $r-l+1=(a-1)l$, we have $\sce_1[A_{r-l,l-1,1}]=\ups^{r-l}[\![r-l+1]\!]_{q^{-1}}[A_{r-l+1,l-2,1}]=0$ and $\sce_2[A_{r-l,l-1,1}]=\ups^{l-1}[\![l]\!]_{q^{-1}}[A_{r-l,l,0}]=0$. Hence, $\sce_h^{(p)}[A_{r-l,l-1,1}]=0$ for all $h=1,2$ and $p<l$. Similarly, by Theorem \[KeyMF\](1), $\sce_h^{(p)}[A_{r-l,l-1,1}]=0$ for $h=1,2$ and $p\geq l$. This proves that $\sS_F[A_{r-l,l-1,1}]=\sS^-_F[A_{r-l,l-1,1}]$ is a proper submodule of $P$.
Combining the two cases, we conclude that $\sS_F$ is not semisimple whenever $r\geq l$.
The following result is the quantum analogue of a result of F. Marko and A.N. Zubkov [@mz2], which is stated in the abstract.
\[thmqss\]Let $F$ be a field containing elements $q\neq0,1$ and $\ups=\sqrt{q}$. Then the $q$-Schur superalgebra $\sS_F(m|n, r)$ with $m,n\geq1$ is semisimple if and only if one of the following holds:
- $q$ is not a root of unity;
- $q$ is a primitive $l$th root of unity and $r<l;$
- $m=n=1$ and $q$ is an $l$th root of unity with $l\nmid r.$
The first two conditions imply that $\sH_F$ is semisimple and so is $\sS_F$. The semisimplicity under (3) follows from Lemma \[ss1\]. We now show that, if all three conditions fail, then $\sS_F$ is not semisimple. By Lemmas \[DR5.8\]&\[ss1\], it suffices to consider the case where $m\geq2$, $n\geq1$ and $l\leq r$.
Consider the subset $$\La(m|n,r)' =\{\la\in\La(m|n,r)\mid \la^{(0)}=(\la_1,\la_2,0,\ldots,0),\la^{(1)}=(\la_{m+1},0,\ldots,0)\}$$ and let $f=\sum_{\la\in \La(m|n,r)' }1_\la$ and $e=1_{(r,0,\ldots,0)}$. Then $ef=e=fe$ and it is clear that there is an algebra isomorphism $\sS_F(2|1,r)\cong f\sS_F(m|n,r)f$. By identifying the two algebras under this isomorphism, we see that there is an $f\sS_F(m|n,r)f$-module isomorphism $\sS_F(2|1,r)1_{(r,0,0)}\cong f\sS_F(m|n,r)e$. This $f\sS_F(m|n,r)f$-module is indecomposable, but not irreducible, by Lemma \[ss2\]. Since $\sS_F(m|n,r)e$ is indecomposable and its image $f\sS_F(m|n,r)e$ under the “Schur functor” is indecomposable, but not irreducible, we conclude that $\sS_F(m|n,r)e$ is not irreducible (see [@Gr (6.2g)]). Hence, $\sS_F(m|n,r)$ is not semisimple.
Semisimple $q$-Schur algebras have been classified by K. Erdmann and D. Nakano [@EN Theorem(A)]. By Corollary \[KMFcor\], we may also use this new approach to get their result; see Appendix A.
Infinitesimal and little $q$-Schur superalgebras
================================================
We now give another application of the multiplication formulas. We first construct certain subsuperalgebras of the $q$-Schur superalgebra $\sS_R(m|n,r)$ over the commutative ring $R$ in which $q=\ups^2\neq1$ is a primitive $l$-th root of unity. (So $l\geq2$.)
Let $\fks_R(m|n,r)$ be the $R$-submodule spanned by all $[A]$ with $A\in M(m|n,r)_l$, where $$M(m|n,r)_l=\{(a_{i,j})\in M(m|n,r)\mid a_{i,j}<l\;\forall i\neq j\}.$$ We have the following super analogue of the infinitesimal $q$-Schur algebras (cf. [@CGW]).
\[6.1\] The $R$-submodule $\mathfrak{s}_R(m|n,r)$ is a subsuperalgebra generated by $\sce_h,\scf_h,1_\la$ for all $1\leq h<m+n$, $\la\in\La(m|n,r)$.
Let $\fks'_R(m|n,r)$ be the subalgebra generated by $[aE_{h,h+1}+D]$ and $[bE_{h+1,h}+D']$, where $D,D'$ are diagonal matrices with $aE_{h,h+1}+D,bE_{h+1,h}+D'\in M(m|n,r)_l$ and $0\leq a,b<l$. Observe from the multiplication formulas in Theorem \[KeyMF\] that if $A\in M(m|n,r)_l$ then $\sce_h^{(a)}[A]=[aE_{h,h+1}+D][A]$ and $\scf_h^{(b)}[A]=[bE_{h+1,h}+D'][A]$, for some $D,D'$, are linear combinations of $[B]$ with $B\in M(m|n,r)_l$. This implies that $\fks'_R(m|n,r)\subseteq \fks_R(m|n,r)$. Now, by the triangular relation [@DG Theorem 7.4]: $$\label{tri}
\prod_{i\leq h<j}^{(\leq_2)}[a_{j,i}E_{h+1,h}+D_{i,h,j}]\prod_{i\leq h<j}^{(\leq_1)}[a_{i,j}E_{h,h+1}+D_{i,h,j}]=(-1)^{{\overline}{A}}[A]+\text{lower terms},$$ an inductive argument on the Bruhat order on $M(m|n,r)$ shows that every $[A]$ with $A\in M(m|n,r)_l$ belongs to $\fks'_R(m|n,r)$. Hence, $\fks_R(m|n,r)= \fks'_R(m|n,r)$ is a subalgebra and, hence, a subsuperalgebra. From the argument above, we see easily that $\sce_h,\scf_h,1_\la$ can be generators.
By [@DGW Corollary 8.4], $\fks_R(m|n,r)$ is isomorphic to the infinitesimal $q$-Schur superalgebra defined in [@CGW §3] by using quantum coordinate superalgebra.
We now construct a subsuperalgebra $\fku_R(m|n, r)$. Let $\mathbb Z_l:=\mathbb Z/l\mathbb Z$ and let $\bar\;:\mathbb Z\to \mathbb Z_l$ be the quotient map. Extend this map to $M(m|n,r)$ and $\Lambda(m|n,r)$ by applying it to the entries. Thus, we may identify the image $\overline{M(m|n,r)}$ with the following set: $$\overline{M(m|n,r)}=\{A^\pm+\diag(\overline{\partial}_A)\mid A\in M(m|n,r)_l\}=\overline{M(m|n,r)_l},$$ where $A^\pm$ is obtained by replacing the diagonal of $A$ with 0’s and $\partial_A \in \mathbb Z^{m+n}$ is the diagonal of $A$ (i.e., $A=A^\pm+\diag(\partial_A)$). For $A=A^\pm+\diag(\overline{\partial}_A)\in \overline{M(m|n,r)}$, define $${\overline}{\xi}_A=\sum_{\lambda\in\Lambda(m|n,r-|A^\pm|)\atop {\overline}\lambda=\overline{\partial}_A}[A^\pm+\diag(\lambda)]=
\sum_{\lambda\in\Lambda(m|n,r-|A^\pm|)\atop {\overline}\lambda=\overline{\partial}_A}\xi_{A^\pm+\diag(\lambda)},$$ and let ${\overline}{1}_\la={\overline}{\xi}_{\diag(\la)}$. Note that every ${\overline}{\xi}_A$ is a homogeneous element with respect to the super structure on $\sS_R(m|n,r)$.
We now have the super analogue of the [*little*]{} $q$-Schur algebra introduced in [@DFW].
\[little\]The subsuperspace $\fku_R(m|n,r)$ of $\fks_R(m|n,r)$ spanned by ${\overline}{\xi}_A$ for all $A\in \overline{M(m|n,r)}$ is a subsuperalgebra with identity $\sum_{x\in{\overline}{\La(m|n,r)}}{\overline}{1}_{\diag(x)}$ and generated by $\sce_h,\scf_h,{\overline}1_{\la}$ for all $1\leq h<m+n,\la \in{\overline}{\La(m|n,r)}$.
In this case, with a proof similar to that for Theorem \[6.1\], we see that $\fku_R(m|n,r)$ is the subalgebra generated by ${\overline}{\xi}_{aE_{h,h+1}+D}$ and ${\overline}{\xi}_{bE_{h+1,h}+D'}$, where $D,D'$ are diagonal matrices with $aE_{h,h+1}+D,bE_{h+1,h}+D'\in {\overline}{M(m|n,r)}$. Note that by taking the sum of the triangular relations for every $A^\pm+\diag(\lambda)$ with $\overline{\la}=\overline{\partial}_A$, we obtain the required triangular relation for ${\overline}{\xi}_A$’s (cf. the proof of [@DG Theorem 8.1]). The last assertion is clear as every ${\overline}{\xi}_{aE_{h,h+1}+D}$ or ${\overline}{\xi}_{bE_{h+1,h}+D'}$ has the form $\sce_h^{(a)}\bar1_\la$ or $\scf_h^{(b)}\bar1_\la$.
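For example, for $m=n=1$, $r=4$ and $l=2$, the set $\overline{\La(1|1,4)}$ has exactly two elements and $$\overline{1}_{\overline{(2|2)}}=1_{(0|4)}+1_{(2|2)}+1_{(4|0)},\qquad \overline{1}_{\overline{(3|1)}}=1_{(3|1)}+1_{(1|3)},$$ whose sum is the identity element of $\fku_R(1|1,4)$.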
We end the paper with the following semisimplicity criteria for the infinitesimal/little $q$-Schur superalgebras; compare the nonsuper case [@DFW2 §7] and [@Fu].
\[thmiqs\] The superalgebra $\fks_F(m|n, r)$ or $\fku_F(m|n, r)$ with $m,n\geq1$ is semisimple if and only if one of the following holds:
- $r<l;$
- $m=n=1,l\nmid r.$
We first look at the “infinitesimal” case. We observe that, if $r<l$ or $m=n=1$, then $\fks_F(m|n, r)=\sS_F(m|n,r)$. The “if” part is clear. Conversely, suppose $\fks_F(m|n, r)$ is semisimple. Since $\fks_F(1|1, r)=\sS_F(1|1,r)$, its semisimplicity forces $l\nmid r$. Assume $m\geq 2, n\geq1$ and $l\leq r$. By the proof of Lemma \[ss2\], we see that $\fks_F(2|1, r)e$ ($e=1_{(r,0,0)}$) is indecomposable and contains the proper submodule $\fks_F(2|1, r)[A_{al,b,0}]$ if $l\nmid r+1$, or $\fks_F(2|1, r)[A_{r-l,l-1,1}]$ if $l\mid r+1$. Hence, we can use the Schur functor argument to conclude $\fks_F(m|n, r)$ is not semisimple unless $r<l$.
We now look at the “little” case. If $r<l$, then $\fku_F(m|n, r)=\sS_F(m|n,r)$ is semisimple. If $m=n=1$ and $l\nmid r$, then the simple module $L(a)$ constructed in the proof of Lemma \[ss1\] remains irreducible when restricted to $\fku_F(m|n, r)$. This is seen from the last assertion of Corollary \[little\]. Thus, $\fks_F(m|n, r)$ as an $\fku_F(m|n, r)$-module is semisimple. As a $\fku_F(m|n, r)$-submodule of $\fks_F(m|n, r)$, $\fku_F(m|n, r)$ is semisimple. Conversely, suppose that conditions (1) and (2) both fail. Then $r\geq l$. If one of $m$ and $n$ is greater than 1, then $\fku_F(m|n,r)$ is not semisimple. To see this, it is enough to show that $M=\fks_F(2|1, r)e$ as an $\fku_F(2|1, r)$-module is indecomposable. Indeed, suppose $M=M_1\oplus M_2$ where $M_i$ are nonzero $\fku_F(2|1, r)$-submodules. Then, for any $\la\in\La(2|1,r)$, $1_\la M_1$ and $1_\la M_2$ cannot both be non-zero since $\dim1_\la M=1$. This shows that $M_i$ is a direct sum of some $1_\la M$. Hence, $M_i$ is an $\fks_F(2|1, r)$-module, contrary to the fact that $M$ is an indecomposable $\fks_F(2|1,r)$-module. If $m=n=1$, then $l\mid r$. In this case, $\fku_F(1|1, r)$ is clearly non-semisimple as $\fku_F(1|1, r){\overline}1_0$ is indecomposable, but not irreducible.
A Theorem of Erdmann–Nakano
===========================
\[semqsch\] Let $F$ be a field of characteristic $p\geq0$ containing elements $q\neq0,1$ and $\ups=\sqrt{q}$. Then the $q$-Schur algebra $\sS_F(m, r)$ is semisimple if and only if one of the following holds:
- $q$ is not a root of unity;
- $q$ is a primitive $l$th root of unity and $r<l;$
- $m=2, p=0, l=2$ and $r$ is odd;
- $m=2, p\geq 3, l=2$ and $r$ is odd with $r<2p+1.$
If $q$ satisfies (1) or (2), then $\sS_F(m, r)$ is clearly semisimple. Suppose now that $q$ is a primitive $l$th root of unity and $r\geq l>1$. By Corollary \[KMFcor\], an argument similar to those given in the proofs of Lemma \[ss2\] and Theorem \[thmqss\] shows that both $\sS_F(m,r)1_{(r,0,\cdots,0)}, m\geq 3$, and $\sS_F(2,r)1_{(r,0)}, l\nmid r+1$, are indecomposable but not irreducible. In particular, both $\sS_F(2,l)$ and $\sS_F(2,l+1)$ are not semisimple if $l\geq3$. Since tensoring an $\sS_F(2,r)$-module with the determinant representation gives an $\sS_F(2,r+2)$-module, we see that $\sS_F(2,r)$ is not semisimple for all $r\geq l\geq 3$. Hence, a semisimple $\sS_F(m,r)$ forces $m=2,l=2$ and $2|r+1.$ It remains to determine the semisimplicity of $\sS_F(2,r)$ when $r\geq l=2$ and $r$ odd (and so $2|r+1$). We claim that, for $r\geq l=2$ with $r$ odd, $\sS_F(2,r)$ is semisimple if and only if either $p=0$ or $p\geq 3$ but $r<2p+1$. Indeed, $\sS_F(2,r)$ is semisimple if and only if all $q$-Weyl modules $\Delta(\lambda), \lambda\in\Lambda^+(2,r)$, are irreducible. For $\lambda=(\lambda_1,\lambda_2) \in\Lambda^+(2,r),$ if $x_\lambda\in \Delta(\lambda)$ is a highest weight vector, then $\Delta(\lambda)$ has a basis $
x_\lambda, \scf_1 x_\lambda, \scf_1^{(2)}x_\lambda,\cdots,
\scf_1^{(\lambda_1-\lambda_2)}x_\lambda$ and, for $1\leq a\leq \lambda_1-\lambda_2,$ we have $$\label{ss22}
\sce_1^{(a)}\scf_1^{(a)}x_\lambda =\sum_{s=0}^a \scf_1^{(a-s)}
{\left[ \lambda_{1}-\lambda_{2}; 2s-2a \atop s
\right]}_\ups \sce_1^{(a-s)}x_\lambda = {\left[
\lambda_{1}-\lambda_{2} \atop a \right]}_\ups x_\lambda.$$ Thus, the irreducibility of $\Delta(\la)$ is equivalent to $
\prod_{0\leq a\leq
\lambda_{1}-\lambda_{2}} {\left[ \lambda_{1}-\lambda_{2}
\atop a \right]}_\ups \neq 0.
$ Since $r=\lambda_{1}+\lambda_{2}$ is odd and $l=2$, we see that $\lambda_{1}-\lambda_{2}$ is also odd and $
{\left[ \lambda_{1}-\lambda_{2} \atop a
\right]}_\ups= {\left( \frac{\lambda_{1}-\lambda_{2}-1}2 \atop a_1
\right)} {\left[ 1 \atop a_0 \right]}_\ups, $ where $a=2a_1+a_0$ with $a_0=0,1.$ Obviously, ${\left[ 1 \atop
a_0 \right]}_\ups=1.$ Thus, if $p=0$ or $p\geq 3$ but $r<2p+1$ then ${\left(
\frac{\lambda_{1}-\lambda_{2}-1}2 \atop a_1 \right)}\neq 0$ for all $(\la_1,\la_2) \in\Lambda^+(2,r)$ and $1\leq a\leq \lambda_1-\lambda_2$. Hence, $\sS_F(2,r)$ is semisimple in this case. Conversely, if $r\geq 2p+1,$ choose $\la$ so that $\lambda_{1}-\lambda_{2}=2p+1$ and $a=3$. Then $ {\left[\lambda_{1}-\lambda_{2} \atop 3
\right]}_\ups={\left(
\frac{\lambda_{1}-\lambda_{2}-1}2 \atop 1 \right)}={\left( p \atop 1
\right)}=0.$ Hence, $\Delta(\lambda)$ is not simple in this case and so $\sS_F(2,r)$ is not semisimple.
[99]{}
A.A. Beilinson, G. Lusztig, R. MacPherson, [*A geometric setting for the quantum deformation of $GL_n$*]{}, Duke Math. J. [**61**]{} (1990), 655-677.
H. Bao, J. Kujawa, Y. Li, W. Wang, [*Geometric Schur duality of classical type*]{}, Transf. Groups, to appear.
X. Chen, H. Gu, J. Wang, [*Infinitesimal and little $q$-Schur superalgebras*]{}, Comm. Algebra, to appear.
B. Deng, J. Du, Q. Fu, [*A Double Hall Algebra Approach to Affine Quantum Schur–Weyl Theory*]{}, LMS Lecture Note Series, [**401**]{}, CUP, 2012.
B. Deng, J. Du, B. Parshall, J. P. Wang, *Finite Dimensional Algebras and Quantum Groups*, Mathematical Surveys and Monographs, Vol. 150, Amer. Math. Soc., Providence, R.I. (2008).
S. Doty, D. Nakano, [*Semisimple Schur algebras*]{}, Math. Proc. Camb. Phil. Soc. [**124**]{} (1998), 15–20.
S. Doty, D. Nakano, K. Peters, [*Infinitesimal Schur algebras*]{}, Proc. London Math. Soc. [**72**]{} (1996), 588–612.
J. Du, [*The modular representation theory of $q$-Schur algebras*]{}, Trans. Amer. Math. Soc. [**329**]{} (1992), 253–271.
J. Du, Q. Fu, [*Quantum affine $\mathfrak{gl}_n$ via Hecke algebras*]{}, Adv. Math. [**282**]{} (2015), 23–46.
J. Du, Q. Fu, J. Wang, [*Infinitesimal quantum $\mathfrak{gl}_n$ and little $q$-Schur algebras*]{}, J. Algebra [**287**]{} (2005) 199–233.
J. Du, Q. Fu, J. Wang, [*Representations of little $q$-Schur algebras*]{}, Pacific J. Math. [**257**]{} (2012), 343–378.
J. Du, H. Gu, *A realization of the quantum supergroup $\mathbf U(\mathfrak{gl}_{m|n})$*, J. Algebra [**404**]{} (2014), 60–99.
J. Du, H. Gu, *Canonical bases for the quantum supergroup $\mathbf U(\mathfrak{gl}_{m|n})$*, Math. Z. [**281**]{} (2015), 631–660.
J. Du, H. Gu, J. Wang, *Irreducible representations of $q$-Schur superalgebra at a root of unity*, J. Pure Appl. Algebra [**218**]{} (2014), 2012–2059.
J. Du, H. Gu, J. Wang, *Representations of $q$-Schur superalgebras in positive characteristics*, J. Algebra [**481**]{} (2017), 393–419.
J. Du, H. Rui, *Quantum Schur superalgebras and Kazhdan–Lusztig combinatorics*, J. Pure Appl. Algebra, [**215**]{} (2011), 2715–2737.
H. El Turkey, J. Kujawa, *Presenting Schur superalgebras*, Pacific J. Math. [**262**]{} (2013), 285–316.
K. Erdmann, D.K. Nakano, [*Representation type of $q$-Schur algebras*]{}, Trans. Amer. Math. Soc. [**353**]{} (2001), 4729–4756.
Z. Fan, C. Lai, Y. Li, L. Luo, W. Wang, [*Affine Flag Varieties and Quantum Symmetric Pairs*]{}, Memoirs Amer. Math. Soc., to appear.
Z. Fan, Y. Li, [*Geometric Schur duality of classical type, II*]{}, Trans. Amer. Math. Soc., Ser. B, [**2**]{} (2015), 51–92.
Q. Fu, [*Semisimple infinitesimal $q$-Schur algebras*]{}, Arch. Math. [**90**]{} (2008), 295–303.
J.A. Green, [*Polynomial representations of $GL_n$*]{}, 2nd ed., with an appendix on Schensted correspondence and Littelmann paths by K. Erdmann, J. Green and M. Schocker. Lecture Notes in Mathematics [**830**]{}, Springer, Berlin, 2007.
D. J. Hemmer, J. Kujawa, D. Nakano, [*Representation type of Schur algebras*]{}, J. Group Theory [**9**]{} (2006), 283–306.
L. Jones, *Centers of generic Hecke algebras*, Trans. Amer. Math. Soc. [**317**]{} (1990), 361–392.
F. Marko, A.N. Zubkov, [*Schur superalgebras in characteristic $p$*]{}, Algebra Represent. Theory [**9**]{} (2006), 1–12.
F. Marko, A.N. Zubkov, [*Schur superalgebras in characteristic p, II,*]{} Bull. London Math. Soc. [**38**]{} (2006), 99–112.
[^1]: $^\dagger$Corresponding author.
[^2]: The work was supported by a 2017 UNSW Science Goldstar Grant and the Natural Science Foundation of China (\#11501197, \#11671234). The third author would like to thank UNSW for its hospitality during his one-year visit and to thank the Jiangsu Provincial Department of Education for financial support.
[^3]: The element $[A]$ is denoted by $\xi_A$ in [@DG (4.2.1)].
[^4]: $D^+_{h,p}$, $D^-_{h,p}$ are denoted by $U_p$, $L_p$, respectively.
This service includes maintenance of green spaces, irrigation network, phytosanitary control, maintenance of the trees in green spaces and roadway landscaping, maintenance of urban furniture and of playground equipment.
• Shrubs, bushes, and vines.
• Urban furniture: Benches, trash cans, and posters.
• Playground areas: Revisions, repairs, painting, and maintenance of cushion pavement. | https://www.sorigue.com/en/landscape-and-environment/maintenance-preservation-plazas-green-space-sabadell |
Who Made the Bible?
(PART II)
Who made the Bible? We saw in Part I, that the Catholic Church infallibly chose the inspired books of Scripture and compiled them into what we know today as the Bible. The Bible canon was formally put together in the late 4th century, and this is the same Bible that Christians used for over 1,100 years and the same Bible that Catholics still use today.
Many Christian religions have accepted a false history which asserts that the earliest Christians were a Bible-only religion, and that they relied on the Bible as their only and final authority. This, however, is not historically tenable, especially considering there was no Bible until nearly the year 400. Moreover, the Bible itself never makes this claim. Nowhere in Scripture does it state that it is the only or final authority. Rather, it clearly teaches that Jesus started an authoritative Church (Mt. 16:18-19;18:15-18) that solves problems and speaks on issues of faith and morals (Acts 15:1-11). In fact, the Church existed for nearly 400 years before the Bible ever came to be.
History of the Bible
There are some Christian denominations today who mistakenly believe that the earliest Christians were “Bible Christians” who owned Bibles and carried them around. This is far from the truth. Bibles were extremely rare, monstrously expensive, and took a very long time to copy.
Making the Bible: The Bible was copied on vellum, and over 400 animals had to be killed for their skins just to make a single copy of the Bible (Book: Where We Got the Bible). Additionally, a single Bible could take a year or more to copy, perhaps up to three years. Thus, Bibles were rare and tremendously expensive. They would cost a person up to three years’ wages, the price of a small house. And people wonder why the Catholic Church chained Bibles to the pulpits. That’s the reason – so the precious Word of God would not be stolen, and so people would always have His Word.
Copying the Bible: Over the centuries, Catholic monks and friars – even nuns and bishops – painstakingly copied the Scriptures, word by word, line by line, book by book. Many dedicated their entire lives to the copying and preservation of Scripture. Throughout the Middle Ages, barbarian invasions would sack, pillage, and burn villages – including churches. Catholic monks would rescue the Scriptures when they could, and begin the copying process again. The Bible exists today because Catholics loved the word of God, preserved it, and passed it on down through the centuries. No one can claim that the Catholics did not love Scripture. In fact, they learned it, quoted it, preached it, and used it in their writings. They also brought the stories alive through dramas, plays, and other creative means.
When Bibles became more numerous, people still did not own them because nine-tenths of the Roman Empire was illiterate and could not read. People were illiterate for much of history until the universities were started – by the Catholic Church. Therefore, to launch a “Bible-only” religion, or to start a religion based on a book that most people couldn’t afford, couldn’t read, and couldn’t own, would have been a very bad idea. That is why Jesus started a teaching and preaching church (Mt. 28:19). It wasn’t until the invention of the printing press that Bibles were mass produced, less expensive, and more accessible to all.
What about Catholics burning Bibles? And what about Martin Luther rescuing the Scriptures from the grip of the Catholic Church and successfully delivering them to the common people?
First, the Scriptures weren’t kept from Luther or the people, and he did not have to unearth them from some dusty dungeon somewhere. In fact, Luther was commissioned by his superiors not only to study the Scriptures, but to preach on them as well. Even Zwingli, a fellow Protestant Reformer, calls Luther out on the myth that he himself started:
“You are unjust in putting forth the boastful claim of pulling the Bible from beneath the dusty benches of the schools. You forget that we have gained a knowledge of the Scriptures through the translations of others. You are very well aware, with all of your blustering, that previous to your time, there were a host of scholars who, in Biblical knowledge and philosophical attainments, were incomparably your superiors” (Quoted in The Facts about Luther, 191).
And, let’s remind the world that Luther himself also admitted the following:
“We concede – as we must – that so much of what they [the Catholic Church] say is true: that the papacy has God’s Word and the office of the Apostles, and that we have received Holy Scriptures, Baptism, the Sacraments, and the pulpit from them. What would we know of these if it were not for them?” (From Luther’s Works, Vol. 24, quoted in Crossing the Tiber, 54).
Consequently, Martin Luther, and indeed, all Christian denominations received the Holy Scriptures from the Catholic Church. This is a plain fact.
Another myth inevitably arises here. Many people mistakenly believe that Luther was the first person to print a Bible on the printing press and to put it in the vernacular (languages other than Latin – the “common tongue” of the people) so it could be understood. This is grossly false. It is a historical fact that the very first Bible to come off the printing press was the Catholic version of the Bible. Moreover, the Church had many versions in the vernacular long before Luther’s own version in 1520.
There were exactly 104 editions of the Bible in Latin before 1520, 27 versions in German (Luther’s own tongue) – 9 before Luther was even born, 40 editions in Italian, 18 versions in French, and numerous editions in Spain, Hungary, Denmark, Norway, and other countries. As Henry Graham states, there were 198 versions of Scripture in the language of the people long before Luther’s version saw the light of day (Where We Got the Bible – I would highly recommend this book for more information on this subject).
Lastly, I would state that it’s a shame Luther ever got involved. He removed seven books of the Old Testament and personally rejected multiple books of the New Testament (Revelation, James, Hebrews, etc.). In fact, Luther’s own version of Scripture was such a hack-job that there were as many as 30 errors per page. This is why nobody uses his version today. The Catholic Church sometimes burned completely erroneous versions of “Scripture” such as this – because they weren’t really Scripture – and the Church wanted to preserve the true Word of God. It would be the equivalent of burning the Jehovah’s Witnesses Bible, in which they have intentionally changed many passages of Scripture to fit their own pre-conceived beliefs.
St. Jerome, a Catholic in the 4th century, is famous for saying that “Ignorance of Scripture is ignorance of Christ.” Catholics agree! Even the Catholic Church today states, “For this reason, the Church has always venerated the Scriptures as she venerates the Lord’s Body” (Catechism of the Catholic Church, Article 3). In conclusion, let it never be said that the Catholic Church is against the Bible. It is a fact of history that she compiled it, preserved it, memorized it, preached it, nourished her people with it, and gave it to the world. Anyone who loves the Bible should thank the Catholic Church. | http://www.catholicbryan.org/blog/who-made-the-bible2/
RELATED APPLICATIONS

This application is a continuation (divisional) application of the U.S. patent application with Ser. No. 12/973,798 by Nguyen et al., filed on Dec. 20, 2010, entitled “High Performance Carbon Nanotube Energy Storage Device,” which claims priority to U.S. Provisional Patent Application No. 61/288,788, filed on Dec. 21, 2009, entitled “Capacitor using Carbon Nanotube Electrode,” by Nguyen et al. and having attorney docket number WIND-P001R, both of which are hereby incorporated by reference in their entirety.

GOVERNMENT INTERESTS

The inventions described herein were made by non-government employees, whose contributions were made in performance of work under an Air Force contract, and are subject to the provisions of Public Law 96-517 (35 U.S.C. §202). These inventions were made with Government support under contract FA9453-09-M-0141 awarded by the Air Force. The Government has certain rights in these inventions.

FIELD OF THE INVENTION

Embodiments of the present invention are generally related to carbon nanotubes (CNTs), electrodes, and energy storage.

BACKGROUND OF THE INVENTION

As technology has advanced, the need for energy to power technology has increased rapidly. The ability to store energy to power devices has also become increasingly important. One area of an increasing amount of research for energy storage is capacitors with carbon nanotubes (CNTs). The CNTs are typically grown with use of a metal catalyst layer. The metal catalyst layer is difficult to control during deposition. The metal catalyst layer adds to the cost of manufacturing the capacitor. Unfortunately, the metal catalyst layer typically remains after the growing of the CNTs and negatively impacts performance.

The resistance of the interface between the CNTs and the metal is often the dominant component of resistance in a capacitor. CNTs grown with a metal catalyst layer have a high interface resistance due to the metal catalyst layer that remains. The high interface resistance thereby negatively impacts performance. In particular, the high resistance results in poor power performance of the capacitor.

Amorphous carbon also negatively impacts performance. The growth of CNTs using typical processes results in amorphous carbon. The amorphous carbon reduces the accessibility of pores of the CNTs, which reduces the surface area thereby impacting performance of the CNTs.

Accordingly, a need exists to manufacture energy storage devices with reduced cost, reduced resistance, and better performance.

SUMMARY OF THE INVENTION

Embodiments of the present invention provide an energy storage device (e.g., capacitor) with cheaper manufacturing and enhanced performance (e.g., low resistance). Embodiments of the present invention include directly growing carbon nanotubes (CNTs) on a metal substrate comprising a metal catalyst or coated with metal catalyst. The CNTs are grown directly on the metal substrate without depositing a catalyst layer. Amorphous carbon is removed from the CNTs thereby improving the performance of the energy storage device.

In one embodiment, the present invention is implemented as a method for forming a portion of an energy storage device. The method includes accessing a metal substrate and forming a plurality of carbon nanotubes (CNTs) directly on the metal substrate. The metal substrate may comprise a metal catalyst or be coated with a catalyst. The plurality of CNTs may be grown directly on the metal substrate without a catalyst layer. The plurality of CNTs may be formed via chemical vapor deposition (CVD). In one embodiment, the plurality of CNTs is substantially vertically aligned. The method further includes removing amorphous carbon from the plurality of CNTs and coupling the plurality of CNTs to an electrolytic separator. In one embodiment, the amorphous carbon is removed via a process involving water.

In another embodiment, the present invention is implemented as a method of forming a capacitor. The method includes forming a first plurality of carbon nanotubes (CNTs) on a first metal substrate and removing amorphous carbon from the first plurality of carbon nanotubes (CNTs). The first plurality of CNTs may be grown on the first metal substrate without the addition of a catalyst layer. The first plurality of CNTs may be substantially vertically aligned. The method further includes forming a second plurality of carbon nanotubes (CNTs) on a second metal substrate and removing amorphous carbon from the second plurality of CNTs. In one embodiment, the first metal substrate and the second metal substrate comprise a metal catalyst. In another embodiment, the first metal substrate and the second metal substrate are coated with a metal catalyst. The first plurality of CNTs and the second plurality of CNTs may then be coupled to a membrane (e.g., electrolytic separator).

In yet another embodiment, the present invention is an energy storage device. The device includes a first metal substrate, a second metal substrate, and an electrolytic separator. In one embodiment, the first metal substrate comprises a metal catalyst. In another embodiment, the first metal substrate is coated with a metal catalyst. The device further includes a plurality of carbon nanotubes (CNTs) coupled to the first metal substrate, the second metal substrate, and the electrolytic separator. The plurality of CNTs may be substantially vertically aligned. A first portion of the plurality of CNTs is grown directly on the first metal substrate and a second portion of the plurality of CNTs is grown directly on the second metal substrate. In one embodiment, the plurality of CNTs is grown directly on the first metal substrate without a catalyst layer. Amorphous carbon has been removed from the plurality of CNTs. The amorphous carbon may be removed by a process involving water.

DETAILED DESCRIPTION OF THE INVENTION

Reference will now be made in detail to various embodiments in accordance with the invention, examples of which are illustrated in the accompanying drawings. While the invention will be described in conjunction with various embodiments, it will be understood that these various embodiments are not intended to limit the invention. On the contrary, the invention is intended to cover alternatives, modifications, and equivalents, which may be included within the scope of the invention as construed according to the appended Claims. Furthermore, in the following detailed description of various embodiments in accordance with the invention, numerous specific details are set forth in order to provide a thorough understanding of the invention. However, it will be evident to one of ordinary skill in the art that the invention may be practiced without these specific details. In other instances, well known methods, procedures, components, and circuits have not been described in detail so as not to unnecessarily obscure aspects of the invention.

Exemplary Energy Storage Devices and Methods for Manufacturing Energy Storage Devices
FIGS. 1-3 show diagrams of exemplary production stages of a portion of an energy storage device, in accordance with one embodiment of the present invention. Referring to FIG. 1A, a metal substrate 102a is selected. Metal substrate 102a may be a metal alloy which may be a variety of alloys comprising a metal catalyst 108a including Fe, Ni, or Co or any other metal or combination of metals that have the capability to support growth of carbon nanotubes. An example is FeCrAl alloys, Kanthal (e.g., mainly iron, chromium (20-30%) and aluminium (4-7.5%)), Nichrome®, available from the Driver-Harris Company of Morristown, N.J. (e.g., 80% nickel and 20% chromium, by mass), or stainless steel.
Referring to FIG. 1B, a metal substrate 102b is selected. Metal substrate 102b may be metal (e.g., Fe, Ni, Co, Al) or a metal foil (e.g., comprising Al and/or Cr). In one embodiment, metal substrate 102b may be coated or deposited (e.g., via a continuous process) with catalyst 108b.
Referring to FIG. 2, carbon nanotubes (CNTs) 104 are formed or grown directly on metal substrate 102. CNTs 104 are highly porous in structure and characterized by sizeable fraction of mesopores and high useable surface area. CNTs 104 are chemically stable and inert. CNTs 104 are electrically conductive. It is noted that metal substrate 102 of FIG. 2 comprises catalyst (e.g., catalyst 108a or catalyst 108b) which is not shown.
In one embodiment, CNTs 104 are grown with a thermal chemical vapor deposition (CVD) process. For example, the CVD process may be performed with hydrocarbons (e.g., ethylene, any CHx-based hydrocarbon, or other carbon source) at a temperature greater than 600° C. and in an environment with reduced oxygen concentration. CNTs 104 are grown directly on the surface of metal substrate 102 without metal catalyst deposition. In one embodiment, CNTs 104 are multi-walled tower-like structures grown directly on the metal substrate. In another embodiment, CNTs 104 are single-walled tower-like structures grown directly on the metal substrate. In yet another embodiment, CNTs 104 are a combination of both single-walled and multi-walled tower-like structures grown directly on the metal substrate.
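As a rough illustration of how the growth conditions just described might be captured in a process recipe, the sketch below encodes only the parameters named above (a hydrocarbon feedstock such as ethylene, a growth temperature above 600° C., a reduced-oxygen environment, and no deposited catalyst layer); the field names and the 650° C. setpoint are placeholder assumptions, not values from the application.

```python
# Illustrative only: a minimal recipe record for the catalyst-free CVD growth
# step described above. Only the feedstock, the >600 degC growth temperature,
# and the reduced-oxygen requirement come from the text; everything else is assumed.
cvd_recipe = {
    "carbon_source": "ethylene",        # any CHx-based hydrocarbon per the text
    "growth_temperature_c": 650,        # must exceed 600 degC (assumed setpoint)
    "oxygen_reduced": True,             # grow in a reduced-oxygen environment
    "catalyst_layer_deposited": False,  # CNTs grow directly on the metal substrate
}

def recipe_is_consistent(recipe: dict) -> bool:
    """Check a recipe against the constraints stated in the description above."""
    return (
        recipe["growth_temperature_c"] > 600
        and recipe["oxygen_reduced"]
        and not recipe["catalyst_layer_deposited"]
    )

assert recipe_is_consistent(cvd_recipe)
```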
The direct growth of CNTs 104 without using a catalyst layer removes the problems of high interface resistance and a catalyst layer which remains on the substrate. Embodiments of the present invention thus have no catalyst impurities impacting the interface resistance. Embodiments thus have minimal electrical resistance at the interface between the CNTs and the metal substrates thereby improving the performance of the energy storage device. The direct growth of the CNTs on the metal substrate further eliminates the need to use a binding material which reduces unnecessary weight of inactive materials.
In one embodiment, CNTs 104 are in a vertical alignment configuration. CNTs 104 may be in a variety of configurations including horizontal, random, disorder arrays, CNTs with other materials, or other alignments, etc. For example, CNTs 104 may be in a vertical tower structure (e.g., perpendicular to the metal surface). In another embodiment, the CNTs resemble a random network with a low degree of structural alignment in the vertical direction.
In one embodiment, a plasma-based treatment (e.g., via O2 plasma) of the CNT towers 104 is performed to impart hydrophilic character to the CNTs for better wetting by an electrolyte. This allows more ions from the electrolytes to access the pores of the CNT electrodes which increases the charge density at the Helmholtz layer.
During the growth of CNTs 104, CNTs 104 may develop amorphous carbon 106. Amorphous carbon 106 occupies the spaces between CNTs 104 and thus renders CNTs 104 less porous, thereby impacting performance of CNTs 104 (e.g., as an electrode). In one embodiment, control of the growth temperature substantially reduces amorphous carbon impurities.
Referring to FIG. 3, a cleaning process is applied to CNTs 104 and amorphous carbon 106 is removed (e.g., partially or fully) from CNTs 104, thereby producing a portion of an energy storage device 110. In one embodiment, water vapor at high temperature is used to remove amorphous carbon 106 from CNTs 104. The cleaning process used may be a process described in U.S. Pat. No. 6,972,056 by Delzeit et al., which is incorporated herein by reference.
In one embodiment, a continuous water treatment process is used for purification of carbon nanotube collector electrodes for the removal of impurities including amorphous carbon. The process may include a wet inert carrier gas stream (e.g., Ar or N2) and may include an additional dry carrier gas stream. The wet inert carrier gas stream and the additional dry carrier gas stream can be mixed to control the water concentration. Water may be added using a bubbler, membrane transfer system, or other water infusion method. Water vapor can be introduced in the process chamber at an elevated temperature in the range of 50-1100° C. The process chamber is at a temperature in the range of 50-1100° C. Water treatment increases the electrode porosity thereby increasing the accessibility of pores and allows use of CNTs in applications for high electrode surface area. The increased surface area increases the performance or enhances the capacitance of an energy storage device in accordance with embodiments of the present invention. For example, water treatment may result in an increase of specific capacitance values of about three times for water treated CNT electrodes.
FIGS. 4-6 show diagrams of exemplary production stages of an energy storage device, in accordance with one embodiment of the present invention. Referring to FIG. 4, two portions of an energy storage device 210a-b are formed (e.g., as described herein) and membrane 206 is selected. Portions of energy storage device 210a-b include metal substrates 202a-b and CNTs 204a-b. Metal substrates 202a-b may be coated with a catalyst or be a metal alloy comprising a metal catalyst. CNTs 204a-b have been grown directly on metal substrates 202a-b and have amorphous carbon removed. Membrane 206 may be a porous separator comprising a variety of materials including polypropylene, Nafion, Celgard or Celgard 3400 available from Celgard LLC of Charlotte, N.C.
Referring to FIG. 5, CNTs 204a-b are coupled to membrane 206. In one embodiment, CNTs 204a-b and metal substrates 202a-b are coupled to membrane 206 via a clamp assembly (e.g., clamp assembly 408).
Referring to FIG. 6, CNTs 204a-b may be submersed in electrolyte 208 which may be a liquid or gel, or CNTs 204a-b may be surrounded by a specific gas, air, or vacuum. Electrolyte 208 can be a variety of electrolytes including aqueous electrolytes (e.g., Sodium sulphate (Na2SO4), Potassium hydroxide (KOH), Potassium chloride (KCl), Sulfuric acid (H2SO4), Magnesium chloride (MgCl2), etc.), nonaqueous electrolyte solvents (e.g., Acetonitrile, Propylene carbonate, Tetrahydrofuran, Gamma-butyrolactone, Dimethoxyethane), and solvent free ionic liquids (e.g., 1-ethyl-3-methylimidazolium bis(pentafluoroethylsulfonyl)imide (EMIMBeTi), etc.).
Electrolyte 208 may include a variety of electrolyte salts used in solvents including Tetraalkylammonium salts (e.g., Tetraethylammonium tetrafluoroborate ((C2H5)4NBF4), Methyltriethylammonium tetrafluoroborate ((C2H5)3CH3NBF4), Tetrabutylammonium tetrafluoroborate ((C4H9)4NBF4), Tetraethylammonium hexafluorophosphate ((C2H5)4NPF6)), Tetraalkylphosphonium salts (e.g., Tetraethylphosphonium tetrafluoroborate ((C2H5)4PBF4), Tetrapropylphosphonium tetrafluoroborate ((C3H7)4PBF4), Tetrabutylphosphonium tetrafluoroborate ((C4H9)4PBF4)), and lithium salts (e.g., Lithium tetrafluoroborate (LiBF4), Lithium hexafluorophosphate (LiPF6), Lithium trifluoromethylsulfonate (LiCF3SO3)).
With reference to FIG. 7, exemplary flowchart 300 illustrates example computer controlled processes used by various embodiments of the present invention. Although specific blocks are disclosed in flowchart 300, such blocks are exemplary. That is, embodiments are well suited to performing various other blocks or variations of the blocks recited in flowchart 300. It is appreciated that the blocks in flowchart 300 may be performed in an order different than presented, and that not all of the blocks in flowchart 300 may be performed.
FIG. 7 shows an exemplary flowchart 300 of a process for manufacturing an energy storage device, in accordance with embodiments of the present invention. Process 300 may be operable for manufacturing an electrochemical double layer capacitor (EDLC).
At block 302, a first plurality of carbon nanotubes (CNTs) (e.g., CNTs 204a) are formed on a first metal substrate (e.g., metal substrate 202a). As described herein, the CNTs may be formed directly on the metal substrate.
At block 304, amorphous carbon is removed from the first plurality of CNTs. As described herein, the amorphous carbon may have been removed via a water treatment process. At block 306, a first wire is coupled to the first metal substrate.
At block 308, a second plurality of carbon nanotubes (CNTs) (e.g., CNTs 204b) are formed on a second metal substrate (e.g., metal substrate 202b). As described herein, the CNTs may be formed directly on the metal substrate.
At block 310, amorphous carbon is removed from the second plurality of CNTs. As described herein, the amorphous carbon may have been removed via a water treatment process. At block 312, a second wire is coupled to the second metal substrate.
At block 314, the first plurality of CNTs and the second plurality of CNTs are coupled to a membrane (e.g., electrolytic separator). At block 316, electrolyte is added. The electrolyte may be a variety of electrolytes, as described herein.
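Since flowchart 300 is described above as an example of computer controlled processing, the blocks can also be read as a simple sequential control script. The sketch below is a hypothetical rendering of that sequence in Python; the function names and return values are placeholders standing in for fabrication actions, not code disclosed in the application.

```python
# Hypothetical sketch of process 300 as a sequence of steps (blocks 302-316).
# Each helper below is a stand-in for a real fabrication action.

def form_cnts(substrate):            # blocks 302 / 308: grow CNTs directly on metal
    return f"CNTs on {substrate}"

def remove_amorphous_carbon(cnts):   # blocks 304 / 310: e.g. water-treatment cleaning
    return f"{cnts} (cleaned)"

def attach_wire(substrate):          # blocks 306 / 312: couple an electrical lead
    return f"wire->{substrate}"

def build_capacitor():
    electrodes = []
    for substrate in ("first metal substrate", "second metal substrate"):
        cnts = form_cnts(substrate)            # 302 / 308
        cnts = remove_amorphous_carbon(cnts)   # 304 / 310
        attach_wire(substrate)                 # 306 / 312
        electrodes.append(cnts)
    assembly = {"electrodes": electrodes,
                "separator": "electrolytic membrane"}              # 314
    assembly["electrolyte"] = "aqueous or ionic-liquid electrolyte"  # 316
    return assembly

print(build_capacitor())
```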
FIG. 8 shows a block diagram of an exemplary energy storage device, in accordance with one embodiment of the present invention. In one embodiment, device assembly 400 may be an electrochemical double layer capacitor (EDLC). Device assembly 400 may have an operating voltage of 0.05V or greater. Embodiments of the present invention support fast charging time, high power delivery, and high energy density.
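As general context for why operating voltage matters to energy density, a capacitor's stored energy follows the standard relation E = 0.5 × C × V². The snippet below simply evaluates that textbook formula; the capacitance and voltage numbers are placeholders, not figures from the application.

```python
def edlc_energy_joules(capacitance_f: float, voltage_v: float) -> float:
    """Standard stored-energy relation for a capacitor: E = 0.5 * C * V**2."""
    return 0.5 * capacitance_f * voltage_v ** 2

# Placeholder numbers for illustration only.
print(edlc_energy_joules(capacitance_f=10.0, voltage_v=2.5))  # 31.25 J
```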
Device assembly 400 comprises two CNT electrodes 404a-b separated by an electrolytic membrane 406. In one embodiment, CNT electrodes 404a-b may be larger than 1×1 cm2 area on a metal substrate or metal foil coated with a catalyst and can be manufactured in a roll-to-roll fashion. CNT electrodes 404a-b may be manufactured in any continuous processing of electrode materials. CNT electrodes 404a-b may be formed with or without water treatment and from substrates with or without an additional catalyst.
Electrical leads are attached to the assembly prior to affixing the clamp assembly 408. Electrical leads 410 (e.g., thin metal wires) contact the back of the collectors 402a-b (e.g., metal substrates 202a-b) to provide electrical contact. The device assembly 400 is then submerged in a container of electrolyte (e.g., electrolyte solution including solvated ions) (not shown), as described herein. Electrical leads 410 are fed out of the solution to facilitate capacitor operation.
Clamp assembly 408 holds electrodes 404a-b in close proximity while the electrolytic membrane 406 maintains an appropriate electrode separation and at the same time keeps the volume of device assembly 400 to a minimum. In one embodiment, clamp assembly 408 is a high-density polyethylene (HDPE) assembly.
In one embodiment, device assembly 400 is a parallel plate capacitor with two vertically aligned multi-walled CNT tower electrodes 404a-b, an electrolytic membrane 406 (e.g., Celgard or polypropylene), and conventional aqueous electrolytes (e.g., 45% sulfuric acid or KOH).
Device assembly 400 may be operable for a variety of applications including replacement for batteries and other energy storage devices, consumer electronics (e.g., cellular telephones, cameras, computers, PDAs (personal digital assistants), smartphones, pagers, and charging devices), motor vehicles (e.g., for electric/hybrid vehicles, for capturing energy wasted during the operation of motor vehicles, such as braking, and for driving motors, lights, instrumentation, etc.), smart grids (e.g., for electricity delivery to homes, commercial buildings and factories), cold-starting assistance, catalytic converter preheating, delivery vans, golf carts, go-carts, uninterruptable power supplies (UPSs) for computers, standby power systems, copy machines (e.g., accelerating warm up mode and minimizing standby mode), car stereo amplifiers, etc.
Thus, embodiments of the present invention provide an energy storage device (e.g., capacitor) with cheaper manufacturing and enhanced performance (e.g., low resistance). Embodiments of the present invention include directly growing carbon nanotubes (CNTs) on a metal substrate comprising a metal catalyst or coated with metal catalyst. The CNTs are grown directly on the metal substrate without depositing a catalyst layer. Amorphous carbon is removed from the CNTs thereby improving the performance of the energy storage device.
The foregoing descriptions of specific embodiments of the present invention have been presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the invention to the precise forms disclosed, and many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the invention and its practical application, to thereby enable others skilled in the art to best utilize the invention and various embodiments with various modifications as are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the claims appended hereto and their equivalents.
BRIEF DESCRIPTION OF THE DRAWINGS
Embodiments of the present invention are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which like reference numerals refer to similar elements.
FIG. 1A shows a diagram of an exemplary production stage of a portion of an energy storage device, in accordance with one embodiment of the present invention, wherein a substrate capable of supporting growth of carbon nanotubes is selected.

FIG. 1B shows a diagram of an exemplary production stage of a portion of an energy storage device, in accordance with one embodiment of the present invention, wherein a metal substrate is selected.

FIG. 2 shows a diagram of an exemplary production stage of a portion of an energy storage device, in accordance with one embodiment of the present invention, wherein carbon nanotubes are formed or grown directly on a metal substrate.

FIG. 3 shows a diagram of an exemplary production stage of a portion of an energy storage device, in accordance with one embodiment of the present invention, wherein cleaning and removal procedures are performed.

FIG. 4 shows a diagram of an exemplary production stage of an energy storage device, in accordance with one embodiment of the present invention, wherein portions of an energy storage device are formed and a membrane is selected.

FIG. 5 shows a diagram of an exemplary production stage of an energy storage device, in accordance with one embodiment of the present invention, wherein CNTs are coupled to a membrane.

FIG. 6 shows a diagram of an exemplary production stage of an energy storage device, in accordance with one embodiment of the present invention, wherein CNTs are submersed in an electrolyte.

FIG. 7 shows an exemplary flowchart of a process for manufacturing an energy storage device, in accordance with embodiments of the present invention.

FIG. 8 shows a block diagram of an exemplary energy storage device, in accordance with one embodiment of the present invention.
- Please carry a valid photo id for verification of the ticket holder's name along with your e-ticket.
- No refund on a purchased ticket is possible, even in case of any rescheduling.
- Unlawful resale (or attempted unlawful resale) of a ticket would lead to seizure or cancellation of that ticket without refund or other compensation.
- Alcohol will be served to guests above the legal drinking age (LDA) and on display of valid age proof.
- Organizers reserve the right to perform security checks on invitees/members of the audience at the entry point for security reasons.
- Organizers or any of its agents, officers, employees shall not be responsible for any injury, damage, theft, losses or cost suffered at or as a result of the event of any part of it.
- Parking near or at the event premises is at the risk of the vehicle owner.
- Consumption and sale of illegal substances are strictly prohibited.
- Professional cameras, any form of recording instruments, arms, and ammunition, eatables, bottled water, beverages, alcohol are not allowed from outside the festival.
- Organizers are not responsible for any negative effects of food items or drinks consumed in the venue by festival guests; We encourage guests to drink responsibly and in moderation.
- First aid/medical facilities will be provided, however, the organizers do not take any responsibility for any problems arising.
- The holder of this ticket hereby grants organizers the right to use, in perpetuity, all or any part of the recording of any tape made of the holder's appearance on any channel for broadcast in any and all media globally, and for advertising, publicity and promotion relating hereto.
- The organizers reserve the right without refund or other recourse, to refuse admission to anyone who is found to be in breach of these terms and conditions including, if necessary, ejecting the holder/s of the ticket from the venue after they have entered the ground.
- Consumption of any drugs is prohibited.
- Tickets once booked cannot be exchanged or refunded.
- Venue/Organiser rules apply. | https://www.eventshigh.com/detail/bangalore/f8ab7d8581e32ba5a709b7244b325e67-kount-down-2020-new-year?src=ecbox&cmode=override |
This masters course focuses on the relationship between psychological variables and biomedical conditions. It deals with the response to physical illness of patients in the healthcare system. Our students develop professional skills and skills in research methodology. The programme will appeal to individuals interested in health or counselling psychology and to those who have a background in a healthcare setting.
This MSc provides you with the knowledge and skills relevant to understanding how psychology is applied to the care of the physically ill and how it can maximise the effectiveness of health care delivery. During the course you will study counselling skills; chronic illness and its management; working with patients in the health care system; health-related behaviour, addiction and treatment; public health, health promotion and behaviour change interventions; stress; and research methods.
This course is intended for those with or without a degree in psychology but without the Graduate Basis for Chartership (GBC) with the British Psychological Society (BPS) (normally obtained through completing a psychology degree in the UK). Those with GBC may prefer to apply for our BPS accredited MSc Health Psychology, which includes a placement.
Sign up now to receive more information about studying at Middlesex University London.
We focus on supporting your future employability by helping you develop a range of professional, research and transferable skills (e.g. communication skills for working with clients and skills related to smoking cessation, health promotion and health behaviour change).
Coursework includes case studies, health behaviour journal, designing health promotion materials, laboratory reports, research proposals and essays.
This module aims to introduce students to the discipline of health psychology, setting it apart from other related disciplines. Students will be introduced to the concept of health and the main theories/concepts relating to the psychosocial determinants of health/illness, including stress and health behaviours. They will also be introduced to the applications of health psychology to health promotion and in particular, to behaviour change, including designing and evaluating interventions.
The aim of this module is to provide students with an introduction to the main schools of psychological therapy, their theoretical origins and how the theory is applied in practice. The module will also introduce students to the basic principles of communication skills that form the foundation of all counselling and therapy. Finally, the module will familiarise students with the role of counselling and therapy within all areas of applied psychology.
This module aims to introduce students to the physiological processes involved in the onset and progression of a variety of acute and long-term conditions. The process of health care delivery, from symptom perception through consultation to treatment/management from the perspective of both clients and health professionals will be discussed. The health care needs and experiences of clients across the lifespan will be considered.
The aim of this module is to provide postgraduate students with the research skills and expertise, from theory to implementation, required by areas in Applied Psychology. The module is designed to fulfil training requirements identified in the National Occupational Standards for Applied Psychologists (Key Roles 2 and 3) by offering a comprehensive, in-depth and systematic account of a range of skills in quantitative and qualitative research strategies, and the use of SPSS software in statistical analysis as applicable to the course syllabus. A variety of teaching methods and assessment will be employed with the aim of inspiring and challenging each student, whilst promoting independent learning and a critical appreciation of the research process. Students will engage in laboratory classes, workshops, lectures/seminars, tutorials, group work, and practical sessions on SPSS and qualitative data analysis. Ultimately the aim is to train students to develop, implement and maintain personal and professional standards and ethical research practice in Applied Psychology.
This module aims to provide students with an opportunity for an in-depth, advanced study in a specific area of applied psychology, pertinent to the degree for which they are registered, guided by, but largely independent of, tutor support. Students are encouraged to apply appropriate principles of empirical research to an issue of their choice within the subject area of their degree registration. Students will be guided to present their research study in the form of a written journal article, using appropriate styles and conventions.
On this module, students will develop a critical understanding of current research evidence and perspectives on psychological trauma and its effects. The impact of trauma on different groups and at different stages of the lifespan will be reviewed. Models of intervention for psychological trauma will be critically examined and the current debates around ameliorating factors and developmental outcomes will be explored.
You can find more information about this course in the programme specification. Optional modules are usually available at levels 5 and 6, although optional modules are not offered on every course. Where optional modules are available, you will be asked to make your choice during the previous academic year. If we have insufficient numbers of students interested in an optional module, or there are staffing changes which affect the teaching, it may not be offered. If an optional module will not run, we will advise you after the module selection period when numbers are confirmed, or at the earliest time that the programme team make the decision not to run the module, and help you choose an alternative module.
You will attend interactive lectures – including talks by speakers from the NHS, the public health sector, academia and industry - and workshops where you will take part in discussions, role-play and problem-solving exercises and group work. Practical work will include keeping logbooks and submitting a dissertation, which will allow you to specialise in a particular area.
The course also aims to develop your communication, research, numeracy, teamwork and critical thinking skills, and our extensive facilities include three computer laboratories and a psychophysiology laboratory.
You will be assessed on the basis of your dissertation and research reports, essays and a variety of other types of coursework. These will include critical reviews, health behaviour diaries, psychophysiology laboratory worksheets, logbooks, case studies, presentations and posters.
There are strong employment prospects for Psychology graduates and salaries in this field are excellent. The range of professional skills that psychology graduates develop ensures that they are highly valued across the economy.
After completion of the masters programme, students may work in the health service, public health, organisations, and academia. Work may include helping people to manage and cope with illnesses such as diabetes, pain, cancer, stroke, coronary heart disease etc; health promotion in communities, schools or the workplace; designing and delivering interventions for weight loss, smoking cessation, stress management, improving uptake of screening for cancers etc; research and teaching.
In addition, graduates may also pursue further postgraduate training and/or study and those who have the Graduate Basis for Chartership with the British Psychological Society may, for example, pursue clinical training.
The Psychology Department hosts a range of state-of-the-art facilities and equipment used for both teaching and research purposes.
Across the department there is a broad range of expertise in neuroscience and related disciplines, and specialised equipment includes a new 128-electrode electroencephalogram system (EEG, BioSemi) and Transcranial Magnetic Stimulation equipment (TMS, MagStim).
Psychology teaching and research resources available to staff and students also include eye-tracking (Tobii) and use of the Biopac System to record various psychophysiology measures such as ECG, heart rate and blood pressure, electrodermal activity (EDA), respiratory rate, and pulmonary function, as well as a cold pressor testing kit.
Specialist psychology laboratory cubicles offer a place for students to conduct individual projects, and there are two large Apple Mac labs specifically adapted for psychology teaching.
Alfonzo Pezzella
MSc Psychology, Health and Wellbeing (was MSc Applied Clinical Health Psychology) graduate
I chose Middlesex University for my undergraduate study because I was interested in the research areas of staff having seen their profiles online. I was also impressed by the number of psychology experts in the department. I then decided to continue on to a postgraduate degree so that I would have the opportunity to mature and acquire more knowledge on the subject.
Middlesex was one of the few universities to offer my chosen MSc course and the staff are knowledgeable and the facilities are amazing. The staff expertise is evident in their research areas and publications.
Being a Middlesex alumnus, I was entitled to a discount towards the course fees. Postgraduate fees are very high today, but I would encourage students to further their studies as it gives them that extra knowledge and can offer an advantage when applying for a job. | https://www.mdx.ac.uk/courses/postgraduate/psychology-health-wellbeing?tab=facilities |
Received: 17 April 2003 / Accepted: 19 June 2003
I present optical observations of the Blue Compact Dwarf Galaxy UM 462. The images of this galaxy show several bright compact sources. A careful study of these sources has revealed their nature as young Super Star Clusters. The ages determined from the analysis of the stellar continuum are between a few and a few tens of Myr. The total star formation taking place in the clusters is about 0.05 . The clusters seem to be located at the edges of two large round-like structures, possibly shells originated in a previous episode of star formation. The sizes of the shells compare well with the ages of the clusters. Evidence for the presence of an evolved underlying stellar population is found. | https://www.aanda.org/articles/aa/abs/2003/35/aah4470/aah4470.html
NETSUITE INC's gross profit margin for the second quarter of its fiscal year 2016 is essentially unchanged when compared to the same period a year ago. The company has grown sales and net income significantly, outpacing the average growth rates of competitors within its industry. NETSUITE INC has average liquidity. Currently, the Quick Ratio is 1.06 which shows that technically this company has the ability to cover short-term cash needs. The company's liquidity has decreased from the same period last year.
During the same period, stockholders' equity ("net worth") has remained unchanged from the same quarter last year. Together, the key liquidity measurements indicate that it is relatively unlikely that the company will face financial difficulties in the near future.
|Income Statement|Q2 FY16|Q2 FY15|
|---|---|---|
|Net Sales ($mil)|230.77|177.28|
|EBITDA ($mil)|-16.67|-25.2|
|EBIT ($mil)|-31.74|-35.84|
|Net Income ($mil)|-37.74|-32.29|

|Balance Sheet|Q2 FY16|Q2 FY15|
|---|---|---|
|Cash & Equiv. ($mil)|407.7|385.65|
|Total Assets ($mil)|1189.04|1063.68|
|Total Debt ($mil)|286.03|279.04|
|Equity ($mil)|313.99|315.78|

|Profitability|Q2 FY16|Q2 FY15|
|---|---|---|
|Gross Profit Margin|70.08|72.64|
|EBITDA Margin|-7.22|-14.21|
|Operating Margin|-13.75|-20.22|
|Sales Turnover|0.71|0.61|
|Return on Assets|-11.54|-10.3|
|Return on Equity|-43.7|-34.72|

|Debt|Q2 FY16|Q2 FY15|
|---|---|---|
|Current Ratio|1.27|1.42|
|Debt/Capital|0.48|0.47|
|Interest Expense|3.8|3.52|
|Interest Coverage|-8.36|-10.18|

|Share Data|Q2 FY16|Q2 FY15|
|---|---|---|
|Shares outstanding (mil)|80.92|78.97|
|Div / share|0.0|0.0|
|EPS|-0.47|-0.41|
|Book value / share|3.88|4.0|
|Institutional Own %|n/a|n/a|
|Avg Daily Volume|1311325.0|601897.0|
Valuation
SELL. This stock’s P/E ratio is negative, making its value useless in the assessment of premium or discount valuation, only displaying that the company has negative earnings per share. For additional comparison, its price-to-book ratio of 28.07 indicates a significant premium versus the S&P 500 average of 2.82 and a significant premium versus the industry average of 7.52. The price-to-sales ratio is well above both the S&P 500 average and the industry average, indicating a premium. Upon assessment of these and other key valuation criteria, NETSUITE INC seems to be trading at a premium to investment alternatives within the industry.
- Price/Earnings (N: NM, Peers: 59.58): Neutral. The absence of a valid P/E ratio happens when a stock can not be valued on the basis of a negative stream of earnings. N's P/E is negative, making this valuation measure meaningless.
- Price/Cash Flow (N: 75.58, Peers: 23.28): Premium. The P/CF ratio, a stock's price divided by the company's cash flow from operations, is useful for comparing companies with different capital requirements or financing structures. N is trading at a significant premium to its peers.
- Price/Projected Earnings (N: 153.41, Peers: 33.35): Premium. A higher price-to-projected earnings ratio than its peers can signify a more expensive stock or higher future growth expectations. N is trading at a significant premium to its peers.
- Price to Earnings/Growth (N: NA, Peers: 0.85): Neutral. The PEG ratio is the stock's P/E divided by the consensus estimate of long-term earnings growth. Faster growth can justify higher price multiples. Ratio not available.
- Price/Book (N: 28.07, Peers: 7.52): Premium. A higher price-to-book ratio makes a stock less attractive to investors seeking stocks with lower market values per dollar of equity on the balance sheet. N is trading at a significant premium to its peers.
- Earnings Growth (N: -21.98, Peers: 27.93): Lower. Elevated earnings growth rates can lead to capital appreciation and justify higher price-to-earnings ratios. However, N is expected to significantly trail its peers on the basis of its earnings growth rate.
- Price/Sales (N: 10.41, Peers: 5.94): Premium. In the absence of P/E and P/B multiples, the price-to-sales ratio can display the value investors are placing on each dollar of sales. N is trading at a significant premium to its industry.
- Sales Growth (N: 31.50, Peers: 5.44): Higher. A sales growth rate that exceeds the industry implies that a company is gaining market share. N has a sales growth rate that significantly exceeds its peers. | https://www.thestreet.com/r/ratings/reports/analysis/N.html
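The multiples quoted above are simple quotients of market price against the per-share figures in the tables. The sketch below reproduces that arithmetic in Python; because the report does not quote a share price, one is backed out from the stated price-to-book ratio purely for illustration.

```python
# Reproduce the valuation arithmetic from the figures quoted above (Q2 FY16).
book_value_per_share = 3.88      # from the share-data table
price_to_book = 28.07            # from the valuation section
eps = -0.47                      # from the share-data table

# Back out an implied share price from P/B (approximation for illustration only).
implied_price = price_to_book * book_value_per_share
print(round(implied_price, 2))   # ~108.91

# A negative EPS makes the P/E ratio meaningless, as noted above.
pe_ratio = implied_price / eps if eps > 0 else None
print(pe_ratio)                  # None -> reported as "NM" (not meaningful)
```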
UMaine Extension Diagnostic and Research Laboratory: AGRICULTURE
Maine’s agricultural economy is the largest in New England with over 8,100 farms and 1.3 million acres of farmland. The management of insects, plant diseases, and other pests is an integral part of the production of every crop in Maine, yet effective, safe management of these pests can be a challenge. The new UMaine Extension Diagnostic and Research Laboratory will be an invaluable asset to Maine’s farms and its increasing number of new farmers.
- The current Insect and Plant Disease Diagnostic Lab has been vital in the early detection and management of emerging agricultural pests, including spotted wing Drosophila which resulted in a $3.7 million cost to the Maine blueberry industry in 2012.
- The lab’s coordination and association with the University of Maine Cooperative Extension Potato IPM program have resulted in a nearly $12,000,000 yearly impact on Maine’s $500 million potato industry.
- The new lab will enhance the pest monitoring, disease forecasting, and educational outreach University of Maine Cooperative Extension provides to Maine’s farming community.
- With over 120,000 head of livestock and over 1.5 million head of poultry, Maine’s farms face distinct threats from preventable diseases, such as salmonella and listeria.
- Through the existing Animal Diagnostic Lab’s work on parasite, bacteria, and disease prevention, the facility has an estimated $18,000,000 impact on Maine’s dairy, poultry, and sheep industries.
- The new lab will allow the animal diagnostic program to expand its work on large mammals including horses, cattle, and wildlife, increasing its impact on Maine’s agricultural and natural resource-based economies. | https://extension.umaine.edu/newlab/agriculture/
Keyworth Pharmacy, based in Keyworth South Nottingham, is offering a hard-working and enthusiastic person an opportunity to start their career in the pharmaceutical health sciences sector as an Apprentice Counter Dispensary Assistant.
Main duties that you will be trained to do include:
- Assist in the sale of over the counter medicines
- Complete prescription receipt and collection
- Ordering, receiving and storing pharmaceutical stock
- Liaising with customers in all areas of sales, including specialised products providing a highly personalised approach
- Managing stock levels, replenishing and cleaning sales areas
- Receive and store incoming supplies, verify quantities against orders and inform supervisor of stock needs and shortages
- Assisting future sales and maximum profits, by the analysing of seasonal trends and product selection
- Processing payments of various kinds, using the till, including handling of credit/debit cards, cheques and accounts
- Assisting in the reconciliation of the till at the end of each shift/or following day if requested to do so by the manager
- Ensuring standards for quality, customer service and health and safety are met
- Maintaining awareness of market trends and advertising, updating sales display areas
- Dealing with sales as and when required, serve customers showing high standards of customer care at all times, providing a helpful and friendly service, in order to maximise sales
- To utilise specialist product knowledge when required
- To maintain a clean and tidy working environment
- To complete compulsory training as required
- To carry out other duties which naturally fall within the reasonable expectations of the role
This apprenticeship is work based learning therefore you will be working at the employer’s address and will not need to attend college, except for initial assessments.
How to apply for this vacancy
You can apply for this vacancy through the National Apprenticeship Service. If you require any assistance with applying for this vacancy please contact us on 0115 945 7260 or email the apprenticeship team.
Application deadline: 01/03/2020
Possible start date: 02/03/2020
Skills required
- Good communicator
- Work accurately
- Attention to detail
Qualifications required
GCSE's grades A*-D/9-3 (or Functional Skills Level 1 equivalent) in maths & English (desirable).
If you do not hold these grades, then you will undertake apprenticeship assessments at college to determine the current level you are working at.
Candidate requirements
Apprentices are paid for their normal working hours and training that's part of their apprenticeship (usually one day per week). For more information please visit: https://www.gov.uk/national-minimum-wage-rates
Please note that anyone under the age of 18 will only work a maximum of 40 hours per week. Any role which exceeds these hours on advert will be amended accordingly to fall in line with the legislation.
Please note that if the successful applicant is found prior to the closing date the vacancy maybe withdrawn early.
This apprenticeship is work based learning therefore most of the time you will be working at the employers address. Depending on your training you will only need to attend college in Nottingham for initial assessments, enrolment.
Training provided
You will need to complete the following qualifications to achieve the full framework:
- BTEC Pharmaceutical Science Level 2
- Diploma in Pharmacy Service Skills Level 2 QCF
- Employer Rights and Responsibilities
- Personal Learning and Thinking Skills
- Functional Skills Level 1 or 2 maths if required
- Functional Skills Level 1 or 2 English if required
Future progression
Potential of progression onto a level 3 dispensing technician role, dependent upon the performance.
Additional questions for candidates
During the application process you will be asked the following additional question(s) below. To prepare yourself, have a think about how you might answer them.
First question
Please state what personal qualities and skills you may have that you feel make you a good candidate for this apprenticeship.
Second question
For the purposes of shortlisting your application against travel time from your home address to the employer, please state whether or not you drive and have your own car.
Employer information
Keyworth Pharmacy provides a range of private and NHS services. They pride themselves on providing a wide range of innovative, high quality services and products to meet the needs of their customers.
Address details:
5
The Square
Nottingham
NG12 5JT
- Reference number: VAC001611014
- Working week: Monday - Friday, 9am - 6pm (1 hour lunch break between 1 - 2pm), 40 hours per week
- Expected duration: 15 months
- Apprenticeship level: Intermediate
- Related course: Health Pharmacy Services - Intermediate Apprenticeship (Level 2)
- Salary: £166.00 (weekly)
At the end of last year, when I was in the midst of preparing my ‘Celebration of Learning’ speech, little did I think I would be writing a similar message again this year…the continuing obstacle, which ’shall not be named’, has again tried to create havoc in the lives of our Balmain community and beyond.
This year, as we are all so well aware, remote learning was a part of our lives for over a quarter of the school year, with certain restrictions still continuing to the end of Term 4, such as the inability to celebrate the way we normally would. It certainly hasn’t been easy, and not a simple walk in the (5km LGA) park; however, I personally choose to see the gifts that the pandemic has brought us.
The topsy-turvy world we have navigated, and continue to navigate this year, has been a positive experience...out of adversity, our students have developed such a variety of skills: resilience, patience, perseverance, independence, determination, problem-solving, time management, grit and, of course, using the Google Classroom platform. Our students have learnt to get along with siblings, spend more quality family time at home, and learn new skills, such as cooking, gardening and playing new instruments. As they say, ‘practice makes perfect’ and, second time around, we have all seemingly quickly and easily slid back into the life of Google Classroom and Zoom! Our Kindergarten students have experienced remote learning for the very first time (and hopefully something that won’t need to happen again next year!) and have also developed brand-new technological skills!
This time around, with the additional gift of the 5km restrictions, a number of our families got out to experience their ‘5km Slice of Heaven’, including myself. This is something that I know I have taken for granted in the past, yet it has been eye-opening to re-experience a love and wonder for nature again.
A ‘Celebration of Learning’ occurs before the actual (this time, digital) event. Achievement and success come in all shapes and sizes, and are sometimes quite obvious – competing and winning at different levels, such as sport; learning to play an instrument and performing solo and as part of the band in front of an audience; achieving exemplary results in tests and receiving awards; as well as standing at the front of lines and receiving weekly awards.
Sometimes, however, achievement is not obvious but happens very subtly – having the courage to ‘have a go’ at something new, being able to decode an unknown word, reading with more fluency, having that ‘aha!’ moment when the maths concept makes sense, learning how to make a friend, making fewer mistakes in spelling over time, working with others in a group…the list is endless.
After reading through and signing off on 361 Semester 2 reports recently, it quickly became apparent to me that there are currently 361 students at Balmain Public School, from Kindergarten to Year 6, who have all experienced individual successes and achievements this year. These reports are definitely all a celebration...hopefully you’ve taken the time to celebrate with your children.
They say that…“It takes a village to raise a child”…this African proverb certainly rings true at Balmain Public School; we have a fabulous community working together, to ensure the success of each and every member of our Balmain village.
Firstly, I'd like to acknowledge the hard work, dedication and passion of all of Balmain's teachers and staff, who ensure that the Department of Education’s goal of knowing, valuing and caring for each student, is undeniably upheld. Thank you to each and every one of you, who help make such a positive difference in our children's education.
The next special group of people is our P&C - your collective enthusiasm and dedication with fundraising for the benefit of all our children and school is truly commendable! Thank you so much for your ongoing support!
Together, we continue in partnership to deliver a quality education to each and every student at Balmain Public School - that all our students make a strong start in life and education - that all students are engaged and challenged and continue to learn - that all young people have a strong foundation in literacy and numeracy, as well as deep content knowledge - that all students are confident in their ability to learn, adapt and be responsible citizens - that all our students will be well-prepared for the future that lies ahead.
This year has certainly been unique and a moment in time to embrace and treasure...'a picture paints a thousand words', as they say...enjoy the highlight reel of '2021 - THE YEAR THAT WAS'...
The algorithm used by Google to rank web pages could be used to order an ecosystem’s species to help analyse the consequences of their extinction.
That was the conclusion of American scientists Stefano Allesina and Mercedes Pascual in a paper they published earlier this month in the journal PLoS Computational Biology.
The Google algorithm, PageRank, ranks the importance of web pages on the basis of the number and importance of other web pages that link to it.
PageRank underlies Google’s eponymous search engine.
Allesina and Pascual were inspired by PageRank to develop a similar algorithm, which ranks a species’ importance to the survival of other species in an ecosystem on the basis of the number and importance of other species it “points” to in the system’s food-web.
This information could then be used to help forecast the effects of a species’ extinction on the extinction risk of other species.
In their paper, Googling Food Webs, Allesina and Pascual explain that “consequences of species losses such as secondary extinctions are difficult to forecast because species are not isolated, but interact instead in a complex network of ecological relationships,” and note this is a “pressing problem given current human impacts on the planet”.
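To make the idea concrete, here is a minimal sketch of a generic PageRank-style ranking applied to a toy food web. It is an illustration only, not Allesina and Pascual's actual algorithm (their method adapts PageRank to ecological networks, for example by adding a root node for primary producers); the species, link directions and damping value below are invented for the example.

```python
# Generic PageRank-style power iteration over a small, made-up food web.
def pagerank(links, damping=0.85, iterations=100):
    """links maps each node to the list of nodes it points to."""
    nodes = list(links)
    n = len(nodes)
    rank = {node: 1.0 / n for node in nodes}
    for _ in range(iterations):
        new_rank = {node: (1.0 - damping) / n for node in nodes}
        for node, targets in links.items():
            if targets:
                share = damping * rank[node] / len(targets)
                for target in targets:
                    new_rank[target] += share
            else:  # dangling node: spread its rank evenly
                for target in nodes:
                    new_rank[target] += damping * rank[node] / n
        rank = new_rank
    return rank

# An edge A -> B here means "A depends on B for food", so rank accumulates
# on species that many other species ultimately rely upon.
food_web = {
    "hawk": ["songbird", "mouse"],
    "songbird": ["insect"],
    "mouse": ["grass"],
    "insect": ["grass"],
    "grass": [],
}

for species, score in sorted(pagerank(food_web).items(), key=lambda kv: -kv[1]):
    print(f"{species:9s} {score:.3f}")
```

In this toy graph the basal resource ("grass") ends up with the highest score, echoing the intuition that losing a species many others depend on has the largest knock-on effect.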
Upon learning of the paper, geeky website slashdot.org promptly exploded with snarky comments that PageRank’s only relevance to the biologists’ algorithm is that, like PageRank, it is an application of a Markov process—which mathematicians have known about for ages.
Salient thinks the slashdot-ers are just jealous that they haven’t written a paper that made the front page of the BBC News website (or the pages of Salient).
Currently, many smart materials exhibit single or multifunctional capabilities that are being effectively exploited in various engineering applications, but these are only a hint of what is possible. Newer classes of smart materials are beginning to display the capacity for self-repair, self-diagnosis, self-multiplication, and self-degradation. Ultimately, what will make them practical and commercially viable are control devices that provide sufficient speed and sensitivity. While there are other candidates, piezoelectric actuators and sensors are proving to be the best choice.
Piezoelectric Actuators: Control Applications of Smart Materials details the authors' cutting-edge research and development in this burgeoning area. It presents their insights into optimal control strategies, reflecting their latest collection of refereed international papers written for a number of prestigious journals.
Piezoelectric materials are incorporated in devices used to control vibration in flexible structures. Applications include beams, plates, and shells; sensors and actuators for cabin noise control; and position controllers for structural systems such as the flexible manipulator, engine mount, ski, snowboard, robot gripper, ultrasonic motors, and various types of sensors including accelerometers, strain gages, and sound pressure gages.
The contents and design of this book make it useful as a professional reference for scientists and practical engineers who would like to create new machines or devices featuring smart material actuators and sensors integrated with piezoelectric materials. With that goal in mind, this book:
Describes the piezoelectric effect from a microscopic point of view
Addresses vibration control for flexible structures and other methods that use active mount
Covers control of flexible robotic manipulators
Discusses application to fine-motion and hydraulic control systems
Explores piezoelectric shunt technology
This book is exceptionally valuable as a reference for professional engineers working at the forefront of numerous industries. With its balanced presentation of theory and application, it will also be of special interest to graduate students studying control methodology.
Publisher: Taylor & Francis Inc
ISBN: 9781439818084
Number of pages: 280
Weight: 522 g
Dimensions: 235 x 156 x 20 mm
Tutor profile: Jose M.
Questions
Subject: Writing
What are some ways to make my writing quicker and easier to read?
How can I write more concisely?
Subject: Psychology
What is the fundamental attribution error theory?
The fundamental attribution error (FAE) is the tendency to disregard situational factors for others' behavior while over-emphasizing personality-based explanations for that behavior. For example, if an employee comes to work late, many of their coworkers might label that person as irresponsible (personal factor) and may not consider that this person had a family emergency that caused them to be late (situational factor).
Subject: Education
What is Vygotsky’s sociocultural theory of cognitive development?
Vygotsky’s sociocultural theory puts a strong emphasis on the environment and social factors contributing to cognitive learning. This contrasts with Piaget’s theory of cognitive development, where the emphasis is more on individual factors and children advance from one developmental “stage” to another.
Aliases for MIR124-3 Gene
External Ids for MIR124-3 Gene
- HGNC: 31504
- Entrez Gene: 406909
- Ensembl: ENSG00000207598
- miRBase: hsa-mir-124-3
Previous HGNC Symbols for MIR124-3 Gene
- MIRN124A3
- MIRN124-3
Previous GeneCards Identifiers for MIR124-3 Gene
- GC20P061284
- GC20P061812
- GC20P063179
Summaries for MIR124-3 Gene
Entrez Gene Summary for MIR124-3 Gene
microRNAs (miRNAs) are short (20-24 nt) non-coding RNAs that are involved in post-transcriptional regulation of gene expression in multicellular organisms by affecting both the stability and translation of mRNAs. miRNAs are transcribed by RNA polymerase II as part of capped and polyadenylated primary transcripts (pri-miRNAs) that can be either protein-coding or non-coding. The primary transcript is cleaved by the Drosha ribonuclease III enzyme to produce an approximately 70-nt stem-loop precursor miRNA (pre-miRNA), which is further cleaved by the cytoplasmic Dicer ribonuclease to generate the mature miRNA and antisense miRNA star (miRNA*) products. The mature miRNA is incorporated into a RNA-induced silencing complex (RISC), which recognizes target mRNAs through imperfect base pairing with the miRNA and most commonly results in translational inhibition or destabilization of the target mRNA. The RefSeq represents the predicted microRNA stem-loop. [provided by RefSeq, Sep 2009]
GeneCards Summary for MIR124-3 Gene
MIR124-3 (MicroRNA 124-3) is an RNA Gene, and is affiliated with the miRNA class. Diseases associated with MIR124-3 include Breast Cancer and Pediatric Ependymoma. Among its related pathways are MicroRNAs in cancer and Alzheimers Disease.
Our blogger discusses an association of microbleeds with cerebral blood flow and cognitive impairment.
Cognitively normal elderly individuals with cerebral microbleeds experience chronic cortical hypoperfusion and may be at risk of cognitive decline, a recent cross-sectional study reports.1
Cerebral microbleeds are a common finding on magnetic resonance imaging in older individuals. Microbleeds have been detected in both healthy individuals and those with brain disease, such as symptomatic cerebral amyloid angiopathy and Alzheimer’s disease. Limited data demonstrate the role of microbleeds in predicting progression to dementia and severity of cognitive impairment.2 If microbleeds and cognitive impairment are indeed associated, what is the nature of this association?
To answer the question, Nicholas Gregg and colleagues2 measured the number and location of microbleeds, cerebral blood flow, and cortical β-amyloid, among other characteristics, in 55 cognitively normal individuals (average age 86.8 years) and analyzed these data to reveal any possible associations.
Study participants with cortical microbleeds had 25% reduction in resting-state cerebral blood flow, compared to participants with other microbleeds (subcortical, cerebellar, and brainstem), P=0.0003. Cerebral blood flow in participants with all microbleeds and without microbleeds did not differ (P=0.022; P<0.00625 was considered statistically significant).
Clearly, cortical microbleeds are different from microbleeds located in other brain regions. This observation aligns with past reports that link the location of microbleeds to the form of vasculopathy and even to deficits in the specific cognitive domains with which the affected brain region is affiliated.
Although all study participants were identified as cognitively normal, participants with cortical microbleeds had a trend towards cognitive impairment (45% vs 19% of participants with and without cortical microbleeds, respectively, had nonzero global Clinical Dementia Rating scale score, P=0.12). These findings complement reports of decreased cerebral blood flow in patients with microbleeds diagnosed with cerebral amyloid angiopathy or Alzheimer disease.3 Individuals with cortical microbleeds regardless of cognitive deficit experience chronic cerebral hypoperfusion, which may promote cortical neurodegeneration. Cognitive deficit may be a consequence of neurodegeneration in these individuals.
Burden of β-amyloid and presence or number of microbleeds were not associated in the present study (P=0.6). Possible reasons for the lack of the association are old age and high prevalence of β-amyloid positivity in this cohort, authors speculate. Study participants with cortical microbleeds had met the Boston Criteria for possible or probable cerebral amyloid angiopathy, and authors noted “the potential of resting-state [cerebral blood flow] measured by arterial spin-labeled MRI to be a marker of [cerebral amyloid angiopathy].”
It is possible that these individuals with microbleeds, hypoperfusion of cerebral cortex, and a trend toward cognitive deficit have asymptomatic pathology of small cortical vessels. Presymptomatic diagnosis of progressive diseases like cerebral amyloid angiopathy using resting-state blood flow and other markers is important. For example, it can help understand progression of disease, develop early therapies, and select patients for clinical trials of therapies and vaccines. Hence, larger studies of markers of vascular disease are much needed. These studies should aim to clarify the association of microbleeds with cerebral blood flow and cognitive impairment.
1. Gregg NM, et al. Incidental cerebral microbleeds and cerebral blood flow in elderly individuals. JAMA Neurol. 2015 Jul 13. [Epub ahead of print]
2. Martinez-Ramirez S, et al. Cerebral microbleeds: overview and implications in cognitive impairment. Alzheimers Res Ther. 2014;6:33.
3. Doi H, et al. Analysis of cerebral lobar microbleeds and a decreased cerebral blood flow in a memory clinic setting. Intern Med. 2015;54(9):1027-1033.
Many contemporary societies suffer from low fertility for two reasons: (1) the desired number of children has declined and stabilized at a relatively low level between 2.0 and 3.0 [1: 201-207], near replacement level fertility, and the realized fertility is below replacement level because (2) most people cannot achieve their desired number of children [2: 12-19]. If we consider these as serious problems and try to raise fertility to replacement level, what resources should we mobilize? Furthermore, what obstacles to such policies could we anticipate? This study addresses these issues by focusing on the economic aspects of the work-life balance (WLB) and universal child benefit (UCB) policies in Japan.
This study used a model of people's expectations about their future equivalent incomes, measured as the household income divided by the square root of the number of household members. Suppose an unmarried person earns income s without any family responsibilities. He or she expects life in a household consisting of x children and m adults with an expected equivalent income y(x) = s(wm + bx) / sqrt(m + x), where w denotes the effect of WLB and other adults' contributions and b denotes the child benefit per child. Both w and b are measured in units of s. We assume that m = 2 to focus on households comprising a couple with children.
Analyses of the function y(x) found limited effects of WLB. Even under fully achieved WLB (w = 1), the equivalent income y(x) exceeded s only where x = 1, if b = 0. If UCB (b) is low, y(x) decreases as x increases, regardless of the size of w. In addition, w cannot be very large under the current conditions in Japan, where a majority of young unmarried women do not want to pursue fulltime careers [3: 62, 162]. Therefore, WLB policies are not promising mechanisms to raise fertility.
In contrast, UCB improved the equivalent income of parents. High UCB (b > 0.54) lets y(x) exceed s and increase monotonically, even with a small effect of WLB (w = 0.6). The effect was strong enough for policymakers to pursue UCB as a fertility booster.
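As a quick numerical check of these two claims, the sketch below evaluates the equivalent-income formula given above for the parameter settings quoted in the text (m = 2; w and b expressed as fractions of the unmarried income s). It is an illustrative calculation only, not the authors' code.

```python
from math import sqrt

def equivalent_income(x, w, b, s=1.0, m=2):
    """y(x) = s(wm + bx) / sqrt(m + x), as defined in the abstract."""
    return s * (w * m + b * x) / sqrt(m + x)

for label, w, b in [("full WLB, no UCB (w=1, b=0)", 1.0, 0.0),
                    ("modest WLB, high UCB (w=0.6, b=0.54)", 0.6, 0.54)]:
    ys = [round(equivalent_income(x, w, b), 3) for x in range(5)]
    print(label, ys)

# full WLB, no UCB:      [1.414, 1.155, 1.0, 0.894, 0.816] -> falls below s beyond one child
# modest WLB, high UCB:  [0.849, 1.005, 1.14, 1.261, 1.372] -> exceeds s from x = 1 and keeps rising
```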
High UCBs are controversial in contemporary Japan because they violate some fundamental normative beliefs, such as reproductive egalitarianism and the belief that parents have primary responsibility for maintaining their children. In contrast, the WLB policies are conservative and can coexist with those beliefs. High levels of UCB can be developed if we find ways to overcome the normative constraints and constitute a new family system in which most parents would come from a specific subpopulation of the overall society, bear large numbers of children, and take no (or secondary) economic responsibility for their children.
(See http://tsigeto.info/15y for details)
Keywords: ideology, family policy, Japan
Questions/comments are welcome.
Tohoku Univ / School of Arts and Letters / Applied Japanese Linguistics / TANAKA Sigeto
The philosophy of the School of Sciences and Engineering is that the education offered should be of the highest level so that our students are enabled to acquire the knowledge and skills and cultivate the abilities which will help them to succeed in their careers, enrich their lives, and successfully face the difficulties but also take advantage of the opportunities which they will meet.
Our Vision
The vision of the School of Sciences and Engineering in a summarized form is to:
- Prepare good science graduates for postgraduate studies and to enter the workforce
- Strive for excellence in teaching and learning and provide a dynamic learning environment
- Promote research and education for research (especially in the MSc, PhD research degrees)
- Collaborate effectively with the academic and industry communities nationally and internationally
- Make available to society our science resources and human expertise
Our Departments and Programs
The School of Sciences and Engineering is one of Cyprus’ few resources of education in the fields of science and engineering. Organizationally, our School consists of two departments, namely: Computer Science, and Engineering.
The Department of Computer Science offers programs in Computer Science (BSc, MSc, PhD) and in Data Science (BSc, MSc). The MSc in Data Science is offered through Distance Learning. The MSc in Computer Science is offered both face-to-face and through Distance Learning.
The Department of Engineering offers programs in the disciplines of Computer Engineering (BSc), Electrical Engineering (BSc, MSc, PhD), Civil & Environmental Engineering (BSc), Mechanical Engineering (BSc), Oil & Gas Engineering (BSc) and Oil, Gas and Energy Engineering (MSc, PhD).
The School is also affiliated to the University of Nicosia Research Foundation (UNRF) which deals with scientific and technological research.
Our Success
Our success is evidenced through the acknowledged satisfaction of the companies and organizations which have employed our graduates. Also, the success of students who obtained their foundation (Bachelor degree) from us and went on to pursue graduate studies (Masters and Doctoral degrees) overseas is further proof of the quality of our programs.
Our People
A dynamic force for excellence within Cyprus’ private university educational system, our dedicated, full time faculty amount to 32 academics – 31 are teaching research faculty (PhD holders) and 1 is special teaching faculty. Additionally, we employ over 50 Adjunct faculty made up of professionals, recognized experts in their respective fields.
Our Resources
The university library collection of over 95,000 volumes, more than 1,000 print journals and 20,000 electronic journals, and numerous online resources supports basic research. The university’s sizeable investments in its specialized computer labs, Computer Science Lab, Engineering Computer Labs, Radio-Communications Lab, Electrical & Computer Engineering Lab, Electrical & Photovoltaics Lab, Civil Engineering lab, Oil & Gas engineering lab, Mechanical Engineering lab, Air-Conditioning & Refrigeration Lab, Mechanical Technology Lab, Automotive Technology Lab, Welding Lab, and Chromatography lab, along with a networked learning environment and multi-media classrooms give state-of-the-art infrastructure and support to our students.
Our Students
The School of Sciences and Engineering has over 550 students. The School’s student body is diverse and multicultural and reflects, to some extent, the demographics of Cyprus and the location of the island; 18% of the student body is female and 82% is male; 56% of the students are international coming from 58 different countries, with strong European (Greek), Middle Eastern, and African representation, and 44% are Cypriot. This cross-cultural mix provides a rich base of views and practices to enhance and expand students’ perspectives for multinational scientific and technological environments.
Description:
As a Sales Associate, you will be an expert in delivering excellent customer service and a memorable shopping experience to maximise sales. You will also ensure that you have the most up-to-date product knowledge, that stock loss risks are minimised, and that the brand is represented to the required standard.
As a Visual Merchandiser, your role will be to deliver and maintain exemplary standards of visual merchandising including promotion, recommendations, and implementation. You will work with the Store Managers and their teams to lay out effective store and window displays within the company and brand guidelines.
You will define, design, and implement a creative visual merchandising strategy through outfit building, implementing appealing and eye-catching visual displays, as per Brand Guidelines, that lead the customer through the entire store.
Job Requirements:
- At least 2 years’ retail visual merchandising experience
- Good planning and organizational skills
- The ability to apply sound brand principles to projects and campaigns
- Computer literate.
- Qualifications in Visual Merchandising or Art & Design are an advantage.
For my senior capstone project, I created a book about the Hungarian Revolution of 1956. While this event has long been forgotten in the contemporary American’s mind, yet looking back, one can see many themes, such as democracy, and freedom, that remain extremely relevant today. Through this project, I wanted to show the importance of this event to the contemporary viewer by emphasizing the emotional arc—from hope to betrayal—felt by the Hungarian people, and the false promises given to Hungary by the West, especially by the United States.
In the book, which is structured chronologically, I highlighted text from interviews of Hungarian refugees to provide a personal glimpse into the Revolution, and supplemented those with reportage and historical documents from the period. I used documentary photographs from the Revolution, and emphasized moments in some photographs with a paint roller texture. This brings to mind the hand-printing of protest posters throughout history, and adds energy to sometimes static images. As for color, I chose to use red and green to draw upon the symbolism they hold in the Hungarian flag—red symbolizing strength and shedding blood for the fatherland, and green symbolizing hope, both key concepts of the Revolution.
I decided to deboss the type on the cover to speak to the concealment of the Revolution by the Soviets, and the lack of memory for such a historically important event. The almost-there element of the text also speaks to the mirage of freedom felt by the Hungarians—something that was felt for a short period of time and then taken away. As for the binding, I used a linking stitch, and left the spine uncovered to represent an event that was never completely realized. It also allows one to see the interior of the book structure, paralleling how the mask of complacency was thrown off in Hungary during the Revolution, and the true spirit of Hungarian society was seen.
I hope that this book will help people understand an important historical event, and think about their responsibility as a global citizen, and the values that they, and the country they live in, aim to uphold.
The potential of the cell and gene therapy field is constrained by the current costs and logistics of harvesting a patient’s cells, processing them at a centralised location and delivering them back to the patient. To create change, both technology and manufacturing-model innovation are required. Analytics, real-time monitoring, and data are going to be essential in reducing CoGs and improving access to and commercial viability of therapies.
However, there is a serious lack of purpose-built bioprocessing technologies for cell and gene therapy manufacturing. Those that are available are mostly based on legacy MAbs technologies that have not changed in decades. When it comes to real-time, remote monitoring sensors and analytics, there is very little available to the cell and gene therapy industry.
Analytics deserves its time in the spotlight for a few reasons. First is a recent string of highly publicized, late-stage review issues from the FDA that were mostly due to CMC-related causes. Iovance and Sarepta’s cases were linked to potency assays; could these have been avoided with the effective application of analytics to better characterise the product through critical quality attributes (CQAs)?
So, what’s preventing developers from designing mechanistically relevant assays?
This report examines the above question, focusing on:
- The identification and measurement of appropriate CQAs
- The value of fit-for-purpose analytics technologies
- What the future of CGT manufacturing analytics should look like
This report has been produced in partnership with Bio-Techne.
It is necessary to update the current system of levying charges for railway infrastructure to reflect the current market situation, to take into account the new transport strategy established by the European Union, and to reflect legislative changes in relation to Directive 2012/34/EU on creating a single European railway area. International trends aim to categorize tracks according to technical parameters and parameters based on shipment time, as well as many others. The existence of different opinions on the amount of and the way to charge fees for railway infrastructure has led the authors of this academic paper to create a plan for a system of charges to be derived from the current production factors as well as from a number of specific new factors used in countries neighboring the Czech Republic.
DOI: https://doi.org/10.14311/APP.2017.11.0049
Minimal uptake of sterile drug preparation equipment in a predominantly cocaine injecting population: implications for HIV and hepatitis C prevention
OBJECTIVE: To identify factors associated with using sterile drug injection equipment by injection drug users (IDUs).
METHODS: 275 IDUs were recruited from syringe exchange programs in Montreal, Canada in 2004-2005. A structured, interviewer-administered questionnaire collected information about demographics, drug injection practices, self-reported HIV and hepatitis C virus (HCV) status, and harm reduction behaviours. Logistic regression was used to model variables in relation to the use of sterile syringes, containers, filters, and drug preparation water.
RESULTS: Sterile syringes, containers, filters, and water were used for at least half of injecting episodes by 95%, 23%, 23%, and 75% of subjects, respectively. In multivariate analysis, users of sterile syringes had higher odds of being older and injecting alone, and were less likely to report problems obtaining sterile syringes and requiring or providing help with injecting. Using sterile filters was associated with having at least high school education, injecting heroin, and injecting alone. In addition to the factors associated with filters, users of sterile containers were more likely to be HCV-negative and older. Using sterile water was associated with daily injecting and being HCV-negative.
CONCLUSIONS: Improving the uptake of sterile drug preparation equipment among IDUs could be aided by considering drug-specific risks, such as drug of choice and injecting context, while reinforcing existing messages on safer injecting. The association between sterile equipment use and HCV-negative status may be representative of an established subgroup of safer injectors who have remained free of infection because of consistent safe injecting practices.
I'm a photographer based on the border of Cornwall and Devon.
Moving to Devon from the big smoke as a child, I suddenly found myself surrounded by beautiful countryside. This new-found freedom allowed me to appreciate the great outdoors, which in turn started to inspire and influence me into my adult years.
I have enjoyed pursuing my passion in photography in the incredible landscapes that surround me, and developed my skills as a professional photographer.
Family generations, including my father and grandfather, whom I helped as a child on building sites in London, had a big influence on my first career choice. Working for years in the building trade has given me the knowledge to understand the processes involved in the thought and creation of buildings.
My appreciation for architectural beauty is something that has grown with my skill as a professional photographer. My passion is to capture the art within the architecture.
Capturing landscapes is another passion in my life; this is when I get to enjoy the most unsociable hours, with dawn and dusk being the most magical times of the day to shoot.
Astrophotography is my latest adventure and is becoming one of my favourite moments in time to capture images and test out new techniques.
What I really love about photography is being able to preserve the magic and emotion of the moment in time.
I now live in Bude, Cornwall with my family and travel around the UK to work with clients. Being able to go back home to the Devon and Cornwall countryside that first inspired my passion as a boy.
The department of Agricultural Economics began its activities with the establishment of the Faculty. The Master’s program was opened in 1997. There are two major fields of study: Agricultural Management and Agricultural Policy and Extension. Agricultural Management deals with planning and analysis of agricultural enterprises, marketing of agricultural products, cooperatives, and financing. Agricultural Policy and Extension deals with internal and external impacts of the policies applied in agriculture, agricultural extension, and socio-economic aspects of rural development. The department offers a graduate program that leads to an M.S. degree in agricultural economics. Current research in the department includes agricultural marketing, contract farming, and international trade. Students who graduate from the department can work anywhere agricultural engineers are employed. In addition, they can work as planners in investment-related institutions, as financial managers at banks providing agricultural loans, and they are especially preferred as managers at agribusiness and exporting firms.
History
The department of Agricultural Economics was established in 1992 as a part of the Faculty of Agriculture. The Master’s program was opened in 1997. There are two major fields of study: Agricultural Management and Agricultural Policy and Extension. The academic staff includes 1 professor, 2 associate professors, 2 assistant professors, 2 research assistants with PhDs, and 1 research assistant.
Mission
To educate its students in the areas of agricultural economics, to train them to work efficiently in agribusiness firms and agricultural public institutions, and to carry out regional and national research in the field of agricultural economics.
Vision
To establish an educational system that follows global changes simultaneously, has an established core infrastructure and high quality standards, prepares its students for both professional and social life, and carries out research that addresses the problems of the region and the country.
Purpose
To conduct research in related fields, to offer undergraduate and graduate programs, to publish research results, to carry out extension, and to organize scientific activities. In that context, the purpose can be summarized as follows: to conduct research at international, national, regional, and farm levels to provide solutions to the economic and social problems of agriculture; to approach national and local issues from the perspective of macro and micro scales and suggest policies; to educate students in the areas of farm management, organization, integration, and international relations, covering modern production techniques and economic principles; and to collaborate with national and international organizations in the areas of research and education.
Physical Infrastructure
There is a documentation center which hosts a seminar room equipped with a computer and projector. In addition, students can utilize the computer lab and library located at the Faculty.
Highest Quality Data, Shortest Possible Timelines.
Clinical decision support, Key Performance Indicators (KPIs), Data Management (DM)
Besides the traditional data management approach, Exom utilizes advanced analytics for the examination of data using sophisticated techniques and tools, such as predictive analytics, forecasting, and data mining, to discover deeper insights into clinical trial data, make predictions, and generate recommendations. Exom’s innovative data management approach provides all study stakeholders with valuable insights at their fingertips.
Performing trend review and data analytics, and comparing day-to-day safety and operational triggers across sites and countries, helps medical monitors and centralized data monitors to optimize trial performance and support the monitoring strategy. Thus, the medical monitor can efficiently help to ensure subject data are medically congruent and sound within and across subjects, leading to improved patient safety.
PREVENTION OF RADIATION-INDUCED RETINOPATHY WITH AMIFOSTINE IN WISTAR ALBINO RATS.
To evaluate the radioprotective efficacy of amifostine on irradiated mature rat retina. A total of 108 Wistar albino rats were categorized into 3 groups, namely, apoptosis (n = 48), acute effects (n = 40), and late changes in retinal cell layers (n = 20). Each group was further subcategorized into 4 arms: control, amifostine (A), radiotherapy + placebo (RT), and RT + A arms, respectively. Intraperitoneal amifostine (260 mg/kg) was administered to the A and RT + A arms 30 minutes before irradiation. The control and A groups were sham-irradiated, whereas a single dose of 20 Gy whole-cranium irradiation was delivered to the RT and RT + A arms. Apoptosis was assessed 8, 12, and 18 hours after irradiation. An electron microscope was used 2 weeks after irradiation for evaluation and scoring of early morphologic changes in the retina. Late effects were assessed and scored accordingly by using both the electron and the light microscope at Week 10. In the acute phase, although no notable change was seen at 8 hours, a significant increase in apoptosis was detected at 12 hours in the RT arm (P = 0.029). Comparative analyses between the groups at 3 different time points displayed a higher apoptotic rate in the RT group than in the RT + A group (P = 0.008). Similarly, comparisons between groups for late effects on the basis of electron microscopic findings revealed lower scores in the RT + A than in the RT arm (P < 0.001). This study suggested a potential radioprotective role for amifostine on mature rat retina by reducing radiation-induced apoptosis in retinal cells. These results form a basis for such preclinical investigations and call for future clinical studies.
Abstract
Most private grasslands in the Great Plains are managed with the goal to optimize beef production, which tends to homogenize rangeland habitats. The subsequent loss of vegetation heterogeneity on private lands is detrimental to ecosystem function. However, conservation planners should understand the factors that lead to variation in management of rangelands. We used a mail survey targeted to ranchers in counties with intact rangeland in North Dakota, South Dakota, and Nebraska in 2016 to examine factors predicted to be related to attitudes about strategies leading to heterogeneity such as innovativeness and low risk aversion, and intended behaviors associated with creation of heterogeneity. We used survey questions and a set of relevant scales to examine predictors of behavioral intentions for rangeland management and conservation. Attitudes about fire and prairie dogs, two strategies that create heterogeneity, were largely negative, and ranchers with positive attitudes about fire and prairie dogs and higher perceived behavioral control of their ranch and surrounding landscapes had greater intention to engage in heterogeneity-promoting behaviors. Social norms were also important in predicting intended behaviors and attitudes. Our research suggests that heterogeneity of grasslands may remain low unless land managers understand the importance of spatial and temporal heterogeneity and recognize prescribed fire and prairie dogs, and other burrowing colonial mammals, as principal drivers of ecological processes on rangelands. Conservation organizations may find success by modeling management tools, reducing the perceived effort producers must make to adopt behaviors that support heterogeneity, and by developing programs that work to change social norms around fire and prairie dogs.
Perception of object trajectory: parsing retinal motion into self and object movement components.
A moving observer needs to be able to estimate the trajectory of other objects moving in the scene. Without the ability to do so, it would be difficult to avoid obstacles or catch a ball. We hypothesized that neural mechanisms sensitive to the patterns of motion generated on the retina during self-movement (optic flow) play a key role in this process, "parsing" motion due to self-movement from that due to object movement. We investigated this "flow parsing" hypothesis by measuring the perceived trajectory of a moving probe placed within a flow field that was consistent with movement of the observer. In the first experiment, the flow field was consistent with an eye rotation; in the second experiment, it was consistent with a lateral translation of the eyes. We manipulated the distance of the probe in both experiments and assessed the consequences. As predicted by the flow parsing hypothesis, manipulating the distance of the probe had differing effects on the perceived trajectory of the probe in the two experiments. The results were consistent with the scene geometry and the type of simulated self-movement. In a third experiment, we explored the contribution of local and global motion processing to the results of the first two experiments. The data suggest that the parsing process involves global motion processing, not just local motion contrast. The findings of this study support a role for optic flow processing in the perception of object movement during self-movement.
As an equity research analyst, Glenn reviewed stocks to provide insight and recommendations to firms, traders, and institutional and individual investors. In this aspect, he worked with TheStreet.com, Worldly Investor, InsiderTrader.com, and Cantone Research.
As the Vice President of market intelligence for Citi, a position he has held for over eight years, he focuses on the American depositary receipts (ADR) marketplace. Working in the ADR field, Glenn analyzes the securities of non-U.S. companies to provide accurate information and advance Citi's confidence in decisions, market opportunities, development, and penetration. Glenn also worked with the Canadian mass media firm Thomson Reuters as a director of strategic research. Here he worked on guiding the collection of the broad knowledge which is a mainstay for the Reuters firm.
His research goes beyond analyzing the financial statements of the companies he covers. He also speaks with the management, vendors, and distributors of those companies. His research allows him to distill vast amounts of information and facts into concise, informative, and easy-to-read articles. Glenn states his favorite part of his work as a financial writer is when he helps his readers save money and also make money.
Glenn writes for both print and online publications and brokerage firms including Advanced Trading Magazine, Investopedia, Registered Representative Magazine, RealMoney.com, TheStreet.com, Prudential Securities, and WorldlyInvestor.com. Several trade publications and venues, including Business Insider, CNN, Forbes, Investor's Business Daily, and The Washington Times, have quoted his work.
Glenn earned his Master of Business Administration from Monmouth University.
Glenn holds the Financial Industry Regulatory Authority (FINRA) designation, as well as Series 6, 7, 24, and 63 licenses.
a. Field of the Invention
This invention relates to a means and method of mass spectrometry, and in particular, a means and method of single event time-of-flight mass spectrometry for analysis of a specimen material.
b. Problems in the Art
The benefits of, and the need and desire for, determining the constituent make-up of compositions and materials are well known in the art. A number of different methods and instrumentation set-ups are used to attempt to analyze materials. In general these methods are either unreliable, marginally accurate, extremely costly, or require significant amounts of time for gathering of data from the instrumentation and/or scientific manpower for interpretation of results.
One method which is fairly accurate and reliable, but costly and time consuming, is mass spectrometry. The cost and time requirements of most mass spectrometers are prohibitive for small or economical applications.
One well known type of mass spectrometry is time-of-flight mass spectrometry. With this method ions are created in packets which are accelerated, drift through a space where the masses with different velocities are separated, and then detected. One of the methods for creating the packets of ions is by bombarding a solid specimen with a pulse of ions. In turn, charged ion particles are emitted directly from the solid specimen as packets of ions which are subsequently accelerated, separated and detected. The time-of-flight from their emission-to-detection is then utilized to compute their mass, which thereafter can be converted to a determination of the composition of the particle, and thus the composition of the specimen.
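For orientation, the sketch below applies the standard time-of-flight relation implied by this description: an ion of mass m and charge q accelerated through a potential V drifts at velocity v = sqrt(2qV/m), so its flight time over a drift length L is t = L/v, and the measured time can be inverted to recover the mass. This is a generic textbook calculation, not taken from the patent; the drift length, accelerating voltage and ion mass are made-up example values.

```python
from math import sqrt

E_CHARGE = 1.602176634e-19   # elementary charge, C
AMU = 1.66053906660e-27      # atomic mass unit, kg

def flight_time(mass_amu, drift_length_m=1.0, accel_voltage=20_000, charge_state=1):
    """Flight time of an ion accelerated through accel_voltage volts."""
    m = mass_amu * AMU
    v = sqrt(2 * charge_state * E_CHARGE * accel_voltage / m)
    return drift_length_m / v

def mass_from_time(t_s, drift_length_m=1.0, accel_voltage=20_000, charge_state=1):
    """Invert a measured drift time back to a mass in amu."""
    v = drift_length_m / t_s
    return (2 * charge_state * E_CHARGE * accel_voltage / v**2) / AMU

t = flight_time(100.0)  # a singly charged 100 amu ion over a 1 m drift tube
print(f"flight time: {t * 1e6:.2f} microseconds")
print(f"mass recovered from that time: {mass_from_time(t):.1f} amu")
```

Because flight time scales with the square root of mass, heavier ions arrive later, which is the basis for separating each packet into its constituent masses.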
Mass spectrometers used for determining the constituent make-up of solid specimens can cost in the range of one-half million dollars. Time-of-flight mass spectrometers used for these purposes cost in the range of $300,000, and to date have not had high mass resolution.
There is therefore a real need for a mass-determining analytical method and instrumentation which can be used for a variety of types of specimens, including those with high mass compositions, which is simple in design, which takes significantly less time for information gathering, and which is significantly less costly than present systems.
It is therefore a primary object of the present invention to present a means and method of single event time-of-flight mass spectrometry which solves or improves over the problems and deficiencies in the art.
Another object of the present invention is to obtain factual information which accurately defines the structure of a specimen, a type of information not available to date from mass spectrometers.
Another object of the present invention is to provide a means and method as above described which has increased resolution, dynamic range, and accuracy over conventional mass spectrometry methods.
A further object of the present invention is to provide a means and method as above described which is significantly less costly than other mass spectrometry methods.
A further object of the present invention is to provide a means and method as above described which requires significantly less time to produce useful results.
Another object of the present invention is to provide a means and method as above described which uses conventional equipment, and is economical, reliable, and efficient.
These and other objects, features, and advantages of the present invention will become more apparent with reference to the accompanying specification and claims.
Almond milk is a wonderful soothing drink that builds strength, immunity, and grounded energy. Its cool, heavy sweetness creates a sensation of stability and calm in the body, while offering easily accessible energy. For those suffering from lactose...
As the most widely grown tree nut in the world, these crunchy little nuggets are highly revered for building strength and intelligence across many ancient cultures. Originally native to the Levant and to Northern Africa as far west as Morocco, almonds...
In Norse mythology, apples are said to provide eternal youthfulness. Apples appear in many religious traditions, including the Bible, often as a forbidden fruit. Apples originated in Western Asia, where their wild ancestor still grows today. There are...
Apples are crunchy and leave a rough feeling on the roof of the mouth, both signs of astringency. Astringency is drying. Thus raw apples provoke Vata and constipation. Cooked apples and apple sauce are more Vata friendly. They are not astringent....
The telecommunication industry is at the forefront of the digital transformation. The telecom industry has been making a huge shift in terms of technologies. The emergence of these digital transformation technologies allows millions of people around the world to experience new ways of accessing information, as well as carrying out interaction and communication, which is said to have long-term implications for human lifestyles.
This is a 3-day classroom training course that covers the basics of Cloud Computing, SDN, NFV, IoT and M2M. It is ideal for engineers working in network planning, provisioning and operations who want an overview of the concepts and definitions in the Cloud Computing, SDN/NFV and M2M/IoT arena. The course focuses mainly on new emerging technologies and their benefits to service providers. After completing this course, the audience will have gained knowledge of these emerging technologies and be able to show their business benefits.
Telecom & IT managers, engineers from network planning, provisioning and operational teams, and other knowledge seekers
Cloud Computing
Introduction to Cloud Computing
Telecom operators & Cloud markets
Traditional Models vs Cloud Computing
Application of Cloud Computing in Telecom Operators
Cloud Deployments Models-Private, Community, Public, Hybrid
Traditional Data Centre
Virtualization & Multi-Tenanting
Service Grids & Autonomic Computing
Types of Cloud Services
SaaS, PaaS and IaaS
XaaS Payment Models
Examples of XaaS Services & Providers
Pros: Cost, Scalability, Resilience, Collaboration
Cons: Security, Availability, Trust, De- skilling
The Future of Cloud Computing
Internet of Things (IoT)
Introduction to Internet of Things (IoT)
Applications & Use cases
IoT Business Modes
IoT standards & Requirements
Functionalists and structure
IoT enabling technologies
IoT Architecture
Major component of IoT
IoT Hardware & sensors
Role of wired and wireless communication
IoT protocols
IoT services and applications
IoT Security
Cloud Computing and the Internet of Things
IoT Cloud Platforms (Standalone/Cloud)
Challenges of adapting the concepts
Software Defined Networking (SDN)
Introduction to SDN
SDN Flavors
Introduction to TCP/IP Model
Introduction to Data Centre Model
How does OpenFlow work?
This article offers a critical ethnography of the reproduction of elites and inequalities through the lenses of class and gender. The successful transfer of wealth from one generation to the next is increasingly a central concern for the very wealthy. This article shows how the labor of women from elite and non-elite backgrounds enables and facilitates the accumulation of wealth by elite men. From covering “the home front” to investing heavily in their children’s future, and engaging non-elite women’s labor to help them, the elite women featured here reproduced not just their families, but their families as elites. Meanwhile, the affective and emotional labor of non-elite women is essential for maintaining the position of wealth elites while also locking those same women into the increasing inequality they help to reproduce.
A gendered ethnography of elites
Women, inequality, and social reproduction
Luna Glucksberg
Elite ethnography in an insecure place
The methodological implications of “studying up” in Pakistan
Rosita Armytage
Based on ethnographic research conducted with the wealthiest and most powerful business owners and politicians in urban Pakistan from 2013 to 2015, this article examines the particular set of epistemological and interpersonal issues that arise when studying elite actors. In politically unstable contexts like Pakistan, the relationship between the researcher and the elite reveals shifting power dynamics of class, gender, and national background, which are further complicated by the prevalence of rumor and the exceptional ability of elite informants to obscure that which they would prefer remain hidden. Specifically, this article argues that the researcher’s positionality, and the inversion of traditional power dynamics between the researcher and the researched, can ameliorate, as well as exacerbate, the challenges of undertaking participant observation with society’s most powerful.
The Specificities of French Elites at the End of the Nineteenth Century
France Compared to Britain and Germany
Christophe Charle
Thanks to a comparison of the social and educational characteristics of elites in France, Germany and the UK at the end of the nineteenth century, this contribution shows the specificities of the French case: a mixture of persistent traditional elites, akin to British and German ones, and the growing domination of a more recent economic and meritocratic bourgeoisie pushing for liberalism and democracy. Nevertheless, evolutions in the same direction as France are also perceptible in the two monarchies and give birth to a new divergence when, after WWI, the democratization of elites went faster in the UK and Germany than in France, where the law bourgeoisie remained dominant and blocked the reforms called for by the more popular or petit bourgeois groups present in the political parties on the left.
Elites and their Representation
Multi-Disciplinary Perspectives
Jean-Pascal Daloz
The term “elite” was introduced in the seventeenth century to describe commodities of an exceptional standard and the usage was later extended to designate social groups at the apex of societies. The study of these groups was established as part of the social sciences in the late nineteenth century, mainly as a result of the work of three sociologists: Vilfredo Pareto, Gaetano Mosca and Roberto Michels. The core of their doctrine is that at the top of every society lies, inevitably, a small minority which holds power, controls the key resources and makes the major decisions. Since then, the concept of elite(s) has been used in several disciplines such as anthropology, history or political science, but not necessarily in reference to this “classical elite theory.” The concept is strongly rejected, however, by many “progressive” scholars—precisely because of its elitist denotation.
Introduction
Ethnographic engagements with global elites
Paul Robert Gilbert and Jessica Sklair
Anthropological interest in critical studies of class, system, and inequality has recently been revitalized. Most ethnographers have done this from “below,” while studies of financial, political, and other professional elites have tended to avoid the language of class, capital, and inequality. This themed section draws together ethnographies of family wealth transfers, philanthropy, and private sector development to reflect on the place of critique in the anthropology of elites. While disciplinary norms and ethics usually promote deferral to our research participants, the uncritical translation of these norms “upward” to studies of elites raises concerns. We argue for a critical approach that does not seek political purity or attempt to “get the goods” on elites, but that makes explicit the politics involved in doing ethnography with elites.
Eric Godelier
Today there is a fascination with a new category of elites: the globalized management businessman. The notion of "elite" refers here to a group of people believed to be more competent in a particular field than others; Jack Welch (GE) and Bill Gates (Microsoft) are among the best-known examples. The members of this social group have their own perception of reality and they also have a distinct class identity, recognizing themselves as separate from and superior to the rest of society. Newcomers are socialized and co-opted by the group on the basis of internal criteria established by the existing group members. Therefore group members are more or less interchangeable and may move from one institution—in this case a corporation—to another within the group. Whether defined as heterogeneous or homogeneous, this group utilizes cultural mythologies that serve to legitimize its status and power: these are the focus of this article.
Trouble in Para-sites
Deference and Influence in the Ethnography of Epistemic Elites
Paul Robert Gilbert
Through his enduring efforts to interrogate the regulative ideals of fieldwork, George Marcus has empowered doctoral students in anthropology to rethink their ethnographic encounters in terms that reflect novel objects and contexts of inquiry. Marcus' work has culminated in a charter for ethnographic research among 'epistemic communities' that requires 'deferral' to these elite modes of knowing. For adherents to this programme of methodological reform, the deliberately staged 'para-site' – an opportunity for ethnographers and their 'epistemic partners' to reflect upon a shared intellectual purpose – is the signature fieldwork encounter. This article draws on doctoral research carried out among the overlapping epistemic communities that comprise London's market for mining finance, and reviews an attempt to carve out a para-site of my own. Troubled by this experience, and by the ascendant style of deferent anthropology, I think through possibilities for more critical ethnographic research among epistemic elites.
Jean-Pascal Daloz
This article is an extension of my book on The Sociology of Elite Distinction. In that work, I sought to offer a discussion of the merits and limits of the major models of interpretation dealing with social distinction when confronted with empirical realities in a large number of environments. Here, I propose some reflections on the way historians have been using these sociological models. Although universalistic propositions were often developed, I argue that most grand theories were typical products of their time and of the societies respectively taken into consideration. The question therefore arises as to what extent their (retrospective) use by historians seeking a conceptual apparatus is always pertinent. It is concluded that many theoretical models are valuable provided we do not see them as "reading grids" that could be systematically applied, but rather as analytical tools which are more or less operational according to the contexts studied.
"The Harder the Rain, the Tighter the Roof"
Evolution of Organized Crime Networks in the Russian Far East
Tobias Holzlehner
Organized crime is not a new phenomenon in Russia; however, organized crime in contemporary Russia differs significantly, in quality as well as in quantity, from its predecessors. Using the Russian Far East, especially the city of Vladivostok, as a case study, this article sketches the evolution of organized crime in the region during the last 20 years. Tracing interconnections between various criminal groups through time, the article shows that quick reactions to new market opportunities were essential for successful illegal entrepreneurship. Powerful local elites have emerged and monopolized particular sectors of the industry (especially the fishing and shipping business). The case studies illustrate the interlinkages between organized crime structures, big business, and the political aspirations of powerful individuals. This article is a proposition to move beyond the economic paradigm in organized crime research and to focus more intensively on the multiple functions organized crime groups carry out in contemporary Russia.
Shadowing the Bar
Studying an English Professional Elite
Justine Rogers
Once the most easily recognizable status profession, the barristers' profession or the Bar is now faced with new regulatory demands, sources of competition and commercial pressures and can, to some extent, be regarded as a contested elite. With methodology at the core of the analysis, this paper addresses the complexities of identifying and studying an historically elite group, especially when, during the research, one is being gently socialized into the ways of the group. In the process, this paper illuminates many of the norms, rituals, and social and psychological dynamics of the Bar, a group aware of its changing position and the threats and opportunities this poses. | https://www.berghahnjournals.com/search?q=%22elites%22 |
Momma always made meals ahead of time. You would always know when you were about to enjoy the same dish several nights a week for a month, because she'd be in the kitchen among gallon-size tubs of ingredients with the vacuum sealer at her side. When Momma cooked ahead, she cooked a single dish with the enthusiasm of a head chef: an entire freezer shelf would be chock-full of vacuum-sealed baggies of whatever recipe had caught her whim that week. Some of her favorites included (what I fondly call American) tamales, mu shu pork rolls, fried rice, and stroganoff. Of course, looking back, I am sure Momma's meal obsessions were more driven by the need to take advantage of some extra-hot deal in the grocer's meat department. Today, I use her techniques as inspiration to plan entire weeks of meals.
Have you ever prepped a large number of meals/portions at once? What meal would you like to have on hand for busy nights?
What Daughter Says: A vacuum sealer can truly be your secret weapon in the kitchen!
Pin this then print the Complete 7 Meals In 70 Minutes Shopping List and Preparation Instructions! | http://www.mommatoldmeblog.com/2014/12/7-meals-in-70-minutes-with-ziploc.html |
Arrival in Ahmedabad – a World Heritage City. Proceed towards Bhuj. Visit LLDC Museum in Ajrakhpur on the way to feel the vibrancy of the colourful handicrafts of Kutch District.
LLDC stands for ‘Living and Learning Design Centre’. The LLDC museum hosts the glorious heritage of Kutch and is a tribute to the brilliant artisans of the Kutch District. The museum has three galleries, studios and a library. Kutch is home to 12 tribes, such as the Ahir, Rabari and Maghwal, that are indigenous communities of the region. The mission of the LLDC museum is to train, educate, support and promote the traditional crafts and communities of the Kutch region. The biggest attraction of the museum is the textiles and the different embroidery styles of the locals, which are revered around the world. Tourists can see the different art and craft artefacts and also learn how they are created here. A few more galleries featuring pottery, metal, wood and stone crafts are also slated to be opened. ‘The Living Embroideries of Kutch’ – the first show by the LLDC museum – was seen by more than 30,000 people in just its first couple of years. There are also craft shops where visitors can purchase indigenous craft articles, and a café where tourists can experience the local cuisine. The museum also hosts striking sculptures and installations, such as the metal installation created in the artificial pond.
Reach Bhuj – a city which rose from the ashes after the 2001 devastating Earthquake. Visit Swaminarayan Temple in Bhuj and overnight at Bhuj. | https://www.theworldgateway.com/Holidays/saurashtra-tour-with-splendid-kutch?package_id=ZGZg |
Report - Copperleaf C55 Value-Based Portfolio Management
Historically, Project Portfolio Management (PPM) tools were used by programme and project managers in large organisations to manage project requests, prioritise and select projects, predict and manage resource demands, and manage the ongoing execution of selected projects.
In the early 2000s, more senior audiences for PPM solutions began to emerge. Senior managers and executives sitting above the project and programme management departments wanted assurance that the portfolio of projects being managed aligned with the organisation’s overall strategic objectives.
For larger and more complex organisations, greater visibility of project work was needed to improve resource utilisation and capacity planning across the enterprise. This has led to a broadening of the PPM mandate and the emergence of new PPM tools.
Building on research from Gartner's "Maximizing Value and Avoid Waste by Managing PPM Tool Proliferation in Your Enterprise", Copperleaf's report discusses key PPM learnings and potential solutions for utilities.
The business value of PPM is also expanding. In addition to the traditional benefits outlined above, the new breed of PPM allows organisations to realise the following benefits:
- Strategic alignment of project portfolios
- Optimisation of portfolio value
- Improved resource planning and management
- Improved collaboration across organisational divisions
- Visibility and oversight on project performance
- Improved project management office processes and execution performance
Donna Keck, Lead Engineer, Business Planning/LRP Support at Duke Energy, says of PPM: "The ability to collect and manage your investments in a portfolio management approach is really something that we use to reinforce accountability and ownership for the investments.”
With a clear business case, Copperleaf expands on the potential of more holistic PPM tools. Looking at the use cases of its own solution, Copperleaf C55 Decision Analytics, key learnings from this report include how PPM can help you: | https://www.engerati.com/smart-infrastructure/article/energy-investment/report-copperleaf-c55-value-based-portfolio |
A few weeks till assessment – will I fail?
I’m feeling really stressed and exhausted with only a few weeks to go until the end of semester and the assessment period. I don’t think I’m coping well and I’m worried that I’m going to fail. I’m procrastinating heaps! When I do sit down to study I don’t know where to start and I can’t concentrate. It feels out of control and I’ve become irritable with others. Help!
Thanks for your question. The end of semester can be a stressful time for many students. When we are under stress it is vital that we take good care of our physical and emotional selves. It’s important not to overlook having regular meals and a regular sleep routine – try going to bed at the same time each night and getting up at the same time in the morning; incorporate some regular exercise into your daily routine – this can help with better sleep.
It would probably be a worthwhile investment of your time to work out your study priorities and to design a timetable for yourself to help get you through the next few weeks – think about chunking study tasks with breaks for exercise, meals, relaxation and social events. Be reasonable with your expectations of yourself and make sure there is plenty of room for flexibility in your timetable. This can help with a sense of control and bring some clarity and focus when you know which tasks you need to attend to. It might also help reduce your irritability with others. Turn off social media and minimise distractions so you are not disrupted; use them as incentives – look at them when you have worked for a while or completed a task. Mindfulness exercises can be helpful in the practice of shifting attention away from worries and difficult feelings, and can help you engage in the present moment.
These suggestions aren’t going to take away all the stress associated with the end of semester but they can help you to feel a greater sense of control, and reduce worry. Counselling and Psychological Services (CAPS) is also available to assist. The service is free of charge, confidential and counsellors are experienced in helping students manage similar situations to what you describe. | https://blogs.unimelb.edu.au/ask-counselling/2014/05/29/a-few-weeks-till-assessment-will-i-fail/ |
Do You Habitually Over-Commit On Delivery Dates And Performance Levels?
You may pride yourself on setting tight schedules and high performance levels, but have you looked at what it has done to your life? Targets that are unrealistically high, paired with unreal schedules, wreak havoc on the people who are expected to deliver the products or services.
You may not think of yourself as being unrealistic, but how do you feel about your work schedule? Are you constantly working to keep up, or are you doing great work and still having time for a family or personal life?
If you are always under the gun as far as work schedules, it is possible that you are doing something wrong. If a project is carefully planned, the outcome should be a working system or product delivered in a timely manner.
When the schedule is arbitrarily decided on a management whim rather than on solid experience, the product quality generally suffers. As the scheduled end date nears, features are dropped from the product so that the delivery date will be met. When management provides bonuses for meeting delivery dates, product quality inevitably suffers. Key features are left untested or omitted and the project is turned over to Customer Service personnel for delivery and patching in the field. When that happens, you get a delivery that isn’t a delivery because it is a protracted repair and patch job in the field.
With Spiritual Rescue Technology it is quite easy to predict this kind of behavior and to remedy it so that it does not continue to occur. No one in their right mind would commit to a schedule or a specification that they could not meet, but in the presence of overwhelming demands, it becomes easy to lose sight of the penalties for misrepresenting your ability to deliver.
Adding to the sense of urgency are worries that someone else will get the order if you don’t step up to the challenge and close the deal. I am quite familiar with this kind of pressure as I spent many years working 60-70 hour weeks so I would be first in line for the next big project. I had no idea what was driving me at the time, but my family life suffered because of my work schedules.
With SRT it is fairly easy to locate the source of the tendency to overcommit. It is almost always spiritual in nature and comes from earlier failures to deliver on your part or on the part of others. You will experience it as a counter-intention to telling the customer what the project will take in terms of time and resources. There will be fears of loss of prestige or income if you tell the customer what the job will take. These fears undermine your resolution to be truthful and you end up compromising your integrity. Now you have promised something that you do not actually believe you can deliver and your torment begins. You take it out on yourself by working extra hours and making excuses when you consistently fail to meet the customer’s expectations which you set.
With SRT, you locate and remove the impulses to stretch the truth and you operate from a position of certainty. Once you have established your certainty, you can work with the customer to give them certainty and a realistic appraisal of the risks involved. You also give yourself the ability to move the goalposts for the project as it becomes evident that vital steps are taking longer than expected.
When you are working from realistic estimates, you are able to see when partial deliveries are possible so that you can have additional time to complete the final requirements. Your schedules are predictable and when changes are required, they can be anticipated well in advance.
The secret is to locate all of the sources of upset and counter-intention before making commitments to a customer, then you will be able to negotiate from strength. If your proposal does not meet the customer’s needs you will be able to discuss changes intelligently and propose alternative solutions. If your solutions are not acceptable to the customer, you will be able to end off knowing that you have provided the best solution that would work for your benefit.
If you have handled all of the counter-intention on your side, you will have made the best offer you could make. If the prospective customer is looking for more than you can provide, you are better off without that business.
When you know what you can provide and how long it takes to do things, there is no good reason to commit to doing more unless there is a suitable financial reward. There are businesses that routinely do rush jobs, but they are structured for this activity and charge accordingly. There is a personal cost for this type of activity, and I am not prepared to discuss it here, other than to say that this kind of rush business is hard on the individuals involved.
Those of you who pride yourselves on your special abilities should take a good look at work situations where stress is the normal mode of operation. Stress is never a good addition to the work scene. It usually means that there is a lack of fairness involved and important data is being hidden. | https://caring-communication.com/SRTHOME/?p=1510 |
Holiday in Pembrokeshire, Wales, UK.
Pembrokeshire is one of the most beautiful areas located in the Wales territorial region of the United Kingdom. It is a very popular holiday and tourist destination due to its magnificent and picturesque landscapes, open access to the ocean, beautiful recreational areas and tranquility. There are numerous activities for recreation within Pembrokeshire both for adults as well as for children. Its moderate climate, national parks, beaches, coastal path and the diverse availability of recreational activities make it a unique place to enjoy a well-deserved holiday.
One of the most sought after and highly recommended activities is to walk, jog or run along the Pembrokeshire Coast Path, which is identified as one of the best 15 walking trails within England and Wales. Also recognized as one of Britain’s National Trails, the Coast Path encompasses a wide variety of interesting landscapes suitable for all types of individuals. These range from flat terrain to hilly descents, steep limestone cliffs, undulating terrain across the bay and even glacial valleys during the winter. Visitors can find numerous coastal towns along the coast path, guaranteeing magnificent views across the Atlantic Ocean at any time of the year. Over 180 miles of terrain are available for exploration, covering more than 50 beaches and 14 harbours from Cardigan all the way to Saundersfoot.
The second most recommended travel spot within Pembrokeshire is its National Park. Regarded as one of Britain’s most breathtaking places, the Pembrokeshire National Park covers almost a third of the entire landmass of Pembrokeshire, making it one of the very few entirely coastal national parks in the world. Activities for children as well as for adults are widely available at the park, such as crab catching, time travel, bat walks, rockpool safaris and sightseeing. It is truly a phenomenal park, full of natural features that will certainly take anyone’s breath away.
Cycling within Pembrokeshire is a very enjoyable activity. Many tourists from England, Europe and other parts of the world come to Pembrokeshire to cycle due to its overall tranquility, the absence of vehicle traffic, breathtaking views and amazing cycling trails that let cyclists experience different types of terrain. Another main advantage of cycling within Pembrokeshire is the ability to get to know and explore the entire region by bike, making the holiday more effective and highly rewarding in the end.
Even though these are the three main attractions to explore in Pembrokeshire, there are other major attractions and activities to do while on holiday in Pembrokeshire. Arts and crafts activities are available for children, boat trips, gardens, galleries, powerboats, shopping, historic sites, museums and churches. Pembrokeshire also contains numerous restaurants, cafes, tearooms, bakeries and delicatessen available for tourists and visitors. Various gastronomic entrees can be eaten at these locations, along with local seafood dishes, national British dishes as well as gourmet entrees from all over Europe.
Finding accommodation for your holiday in Pembrokeshire is a very simple task with many lodging options available. One of the most common and sought after forms of accommodation are the holiday cottages. These holiday cottages offer a cozy, comfortable and secure form of lodging that is also highly affordable. One of the best holiday cottages present in Pembrokeshire is the Poppit Sands holiday cottage. Located within the sandy beach of Poppit Sands, the holiday cottage is directly situated on the beach offering numerous amenities for all guests including both adults and children.
The Poppit Sands holiday cottage encompasses a large lounge area and kitchen. The lounge area has a TV with HD satellite channels and a DVD player. The sofa bed can be converted into a double bed. A bathroom with a shower cubicle is also available for guests. The Poppit Sands holiday cottage can accommodate up to 6 guests at a time. It is truly a very comfortable, enjoyable and tranquil cottage within a very beautiful area of the beach in Pembrokeshire.
Come and enjoy the beautiful sunset views of Pembrokeshire, with many activities to enjoy during your holiday season. Beautiful, picturesque views across all of its varied landscapes can bring long-lasting memories that will stay with you for the rest of your life. Beyond these wonderful attractions in Pembrokeshire, you can also choose the best and most affordable level of accommodation in town with the Poppit Sands holiday cottage: a comfortable place to lounge, relax and enjoy your holiday in Pembrokeshire. | https://www.poppit-sands.co.uk/holiday-in-pembrokeshire-wales-uk/
PROBLEM TO BE SOLVED: To provide an asset operation management system and an asset operation management method that allow a customer's financial assets to be operated under a plurality of operation styles without undue effort, while reducing the system development period and development cost.
SOLUTION: This asset operation management system 10, which divides a customer's financial assets into a plurality of portions and operates each under a different operation style, is provided with: a trade instruction management system 20 that manages trade instructions for each operation style; an account management system 30 that manages balance information for each customer's account in account units; and an in-account portfolio management system 50 that manages the portfolio constructed in each section inside the account, the sections being formed by dividing each customer's account by operation style. By linking the three systems 20, 30 and 50, one system is constructed as a whole.
COPYRIGHT: (C)2006,JPO&NCIPI | |
Q:
Why does RDD.foreach fail with "SparkException: This RDD lacks a SparkContext"?
I have a dataset (as an RDD) that I divide into 4 RDDs by using different filter operators.
val RSet = datasetRdd.
flatMap(x => RSetForAttr(x, alLevel, hieDict)).
map(x => (x, 1)).
reduceByKey((x, y) => x + y)
val Rp:RDD[(String, Int)] = RSet.filter(x => x._1.split(",")(0).equals("Rp"))
val Rc:RDD[(String, Int)] = RSet.filter(x => x._1.split(",")(0).equals("Rc"))
val RpSv:RDD[(String, Int)] = RSet.filter(x => x._1.split(",")(0).equals("RpSv"))
val RcSv:RDD[(String, Int)] = RSet.filter(x => x._1.split(",")(0).equals("RcSv"))
I sent Rp and RpSv to the following function calculateEntropy:
def calculateEntropy(Rx: RDD[(String, Int)], RxSv: RDD[(String, Int)]): Map[Int, Map[String, Double]] = {
RxSv.foreach{item => {
val string = item._1.split(",")
val t = Rx.filter(x => x._1.split(",")(2).equals(string(2)))
.
.
}
}
I have two questions:
1- When I loop over RxSv as:
RxSv.foreach{item=> { ... }}
it collects all items from all of the partitions, but I want only the partition I am currently in. If you suggest using the map function instead: I don't want to change anything on the RDD.
So when I run the code on a cluster with 4 workers and a driver, the dataset is divided into 4 partitions and each worker runs the code. But, for example, when I use the foreach loop as specified in the code, the driver collects all data from the workers.
2- I have encountered a problem with this code
val t = Rx.filter(x => x._1.split(",")(2).equals(abc(2)))
The error :
org.apache.spark.SparkException: This RDD lacks a SparkContext.
It could happen in the following cases:
(1) RDD transformations and actions are NOT invoked by the driver, but inside of other transformations;
for example, rdd1.map(x => rdd2.values.count() * x) is invalid because the values transformation and count action cannot be performed inside of the rdd1.map transformation. For more information, see SPARK-5063.
(2) When a Spark Streaming job recovers from checkpoint, this exception will be hit if a reference to an RDD not defined by the streaming job is used in DStream operations. For more information, See SPARK-13758.
A:
First of all, I'd highly recommend caching the first RDD using the cache operator.
RSet.cache
That will avoid scanning and transforming your dataset every time you filter for the other RDDs: Rp, Rc, RpSv and RcSv.
Quoting the scaladoc of cache:
cache() Persist this RDD with the default storage level (MEMORY_ONLY).
Performance should increase.
Secondly, I'd be very careful using the term "partition" to refer to a filtered RDD since the term has a special meaning in Spark.
Partitions say how many tasks Spark executes for an action. They are hints for Spark so you, a Spark developer, could fine-tune your distributed pipeline.
The pipeline is distributed across cluster nodes with one or many Spark executors, according to the partitioning scheme. If you decide to have one partition in an RDD, then once you execute an action on that RDD, you'll have one task on one executor.
The filter transformation does not change the number of partitions (in other words, it preserves partitioning). The number of partitions, i.e. the number of tasks, is exactly the number of partitions of RSet.
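To make this concrete, here is a minimal, self-contained sketch (the sample data, partition count, master URL and object name are illustrative assumptions, not taken from the question) showing that filter keeps its parent's partition count:
import org.apache.spark.sql.SparkSession

object FilterPartitioningDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("filter-partitioning-demo")
      .master("local[4]")
      .getOrCreate()
    val sc = spark.sparkContext

    // Build a small pair RDD with an explicit number of partitions.
    val rSet = sc.parallelize(
      Seq(("Rp,a,x", 1), ("Rc,b,y", 2), ("RpSv,c,x", 3), ("RcSv,d,z", 4)),
      numSlices = 4)

    // filter is a narrow transformation: it preserves the parent's partitioning.
    val rp = rSet.filter { case (key, _) => key.split(",")(0) == "Rp" }

    println(rSet.getNumPartitions) // 4
    println(rp.getNumPartitions)   // still 4: filtering did not repartition

    spark.stop()
  }
}
So each of the four filtered RDDs in the question still has as many partitions, and therefore spawns as many tasks, as RSet itself.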
1- When I loop over RxSv it collects all items of the partitions, but I want only the partition I am in
You are. Don't worry about it as Spark will execute the task on executors where the data lives. foreach is an action that does not collect items but describes a computation that runs on executors with the data distributed across the cluster (as partitions).
If you want to process all items at once per partition use foreachPartition:
foreachPartition Applies a function f to each partition of this RDD.
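As a minimal sketch, assuming RxSv is the RDD[(String, Int)] defined in the question (the body below is placeholder per-record work, not the asker's entropy logic):
// Runs once per partition, on the executor that holds that partition.
// `records` is a local Iterator over only this partition's elements, so any
// per-partition setup (connections, buffers, ...) happens once per partition.
RxSv.foreachPartition { records =>
  records.foreach { case (key, count) =>
    // placeholder for real per-record work
    println(s"$key -> $count")
  }
}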
2- I have encountered a problem with this code
In the following lines of the code:
RxSv.foreach{item => {
val string = item._1.split(",")
val t = Rx.filter(x => x._1.split(",")(2).equals(string(2)))
you are executing the foreach action, which in turn uses Rx, an RDD[(String, Int)]. This is not allowed (and if it were possible, it should not have compiled).
The reason for the behaviour is that an RDD is a data structure that just describes what happens with the dataset when an action is executed and lives on the driver (the orchestrator). The driver uses the data structure to track the data sources, transformations and the number of partitions.
An RDD as an entity is gone (i.e. it disappears) when the driver spawns tasks on executors.
And when the tasks run, nothing is available to help them know how to run the RDDs that are part of their work; hence the error. Spark is very cautious about this and checks for such anomalies before they can cause issues once tasks are executed.
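One common workaround, sketched below, is to avoid referencing Rx inside RxSv's closure at all: materialise a grouped view of Rx on the driver, broadcast it, and look records up in the broadcast value. This is only an illustrative rework under the assumption that Rx is small enough to collect; the helper names are hypothetical, the original return type is simplified to Unit, and the entropy computation itself is elided.
import org.apache.spark.rdd.RDD

// Hypothetical reworking of calculateEntropy; not the asker's final code.
def calculateEntropy(Rx: RDD[(String, Int)], RxSv: RDD[(String, Int)]): Unit = {
  val sc = Rx.sparkContext

  // Group Rx by its third CSV field once, bring it to the driver, broadcast it.
  val rxByField: Map[String, Seq[(String, Int)]] =
    Rx.groupBy(_._1.split(",")(2))
      .mapValues(_.toSeq)
      .collect()
      .toMap
  val rxBroadcast = sc.broadcast(rxByField)

  RxSv.foreach { case (key, _) =>
    val field = key.split(",")(2)
    // Replaces the illegal Rx.filter(...) call inside the closure.
    val matching = rxBroadcast.value.getOrElse(field, Seq.empty)
    // ... entropy computation over `matching` would go here ...
  }
}
When Rx is too large to collect and broadcast comfortably, the usual alternative is to restructure the computation as a join between the two RDDs on the shared key, which keeps everything distributed.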
| |
Tanya Franklin, M.D., M.S.P.H.
An alumna of the School of Public Health and Information Sciences' Clinical Investigation Sciences program and of the UofL School of Medicine, where she now serves on the faculty, Tanya Franklin, M.D., M.S.P.H., assistant professor of Obstetrics, Gynecology, and Women's Health, recently received an honor from the Gold Humanism Honor Society.
The organization, comprised of medical students, residents and physicians, presented Franklin with the Fitzbutler Award for Humanism. The members of the Society, who strive to focus on patient-centered medical care by modeling the qualities of integrity, excellence, compassion, altruism, respect and empathy, praised Franklin for her compassion in practice.
As an OBGYN physician, Franklin says it is important to be comfortable talking about taboo sexual topics, noting that patients may have no one else with whom to discuss important health-related concerns.
“We need to think outside the box when working with patients, a few extra minutes of listening to a person goes a long way,” Franklin said during a presentation to student members of the society.
“It is my goal to treat every patient with respect and dignity, while providing excellent care,” she said. | http://louisville.edu/sphis/news/alumna-honored-for-compassionate-care |
FORT WAYNE, Ind. (WANE) — The Fort Wayne Fire Department welcomed a new class of recruits Monday morning.
The department’s 93rd recruit class is made up of 17 men and one woman, as well as two guest recruits from the Huntington Fire Department.
The class officially began training at the Public Safety Academy on Monday. The recruits will participate in 20 weeks of training before their graduation day on Oct. 28, 2021.
Participants will obtain the following seven certifications after completing the rigorous, hands-on training: | https://www.wane.com/news/local-news/new-recruits-start-journey-to-become-fort-wayne-firefighters/ |
Will a $10-million reward solve the world's biggest art heist?
Photos:These 9 stolen artworks are still missing
"The Storm on the Sea of Galilee" (1633) by Rembrandt – Rembrandt's "The Storm on the Sea of Galilee" was one of 13 artworks stolen from Boston's Isabella Stewart Gardner Museum in 1990, which still haven't been found. Check out the gallery for other valuable stolen artworks that authorities have yet to track down.
"Nativity with St. Francis and St. Lawrence" (1609) by Caravaggio – Caravaggio's "Nativity with St. Francis and St. Lawrence" was stolen in 1969 from a church in Palermo by members of Cosa Nostra. It has never been recovered. Its theft prompted the foundation of the world's first dedicated art recovery police unit, called Tutela Patrimonio Culturale, or the Division for the Protection of Cultural Heritage. A mafia informant claimed that the Caravaggio was damaged in an earthquake and fed to pigs, but one hopes this is not the case.
"Portrait of a Young Man" (1514) by Raphael – "Portrait of a Young Man" is one of an estimated 5 million cultural heritage objects thought to have changed hands illegally during the Second World War. The masterpiece was taken from the Czartoryski Museum in Krakow in 1939, and destined for Hitler's home in Berlin. There it hung until 1945, when a Nazi official, Hans Frank, moved all of the paintings from Hitler's home to Wawel Castle in Krakow. It has not been seen since.
"Auvers-sur-Oise" (1879-1882) by Paul Cezanne – Oxford University's Ashmolean Museum was burgled of its only Cezanne painting on Dec. 31, 1999. The sound of the break-in was masked because it was timed during a New Year's Eve fireworks display.
"Charing Cross Bridge, London" (1901) by Claude Monet – Among many works stolen from the Rotterdam Kunsthal in October 2012, one can find Monet's "Waterloo Bridge, London" and "Charing Cross Bridge, London." The mother of one of the thieves claimed to have burned the stolen paintings in an attempt to hide the evidence, but hope remains that this is not the case.
"The Poor Poet" (1839) by Carl Spitzweg – Hitler's favorite painting was "The Poor Poet" by Carl Spitzweg, a somewhat kitschy romantic painting that has one of art theft's most bizarre and serpentine stories. It was famously stolen in 1976 by the performance artist Ulay, who took it from the National Gallery in Berlin and hung it on the wall in the home of a poor, immigrant Turkish family as part of what he called a "political action." He immediately phoned the museum and turned himself in, explaining that he did this as a form of political protest. The painting was returned, but it was stolen again in 1989 (not by Ulay), and it has never been recovered.
The Ghent Altarpiece (1432) by Jan van Eyck – The Ghent Altarpiece is the most frequently stolen artwork in history, having been stolen (all or in part) six times over a period of more than 600 years. Of the twelve panels that comprise the enormous altarpiece, one is still missing. Referred to as the "Righteous Judges" panel, it was stolen from the cathedral of St. Bavo in Ghent, Belgium in 1934. The theft was designed by Arsene Goedetier, a middle-aged stockbroker active in the cathedral community. He was not the actual thief, but designed the theft based on the plot of one of his favorite books, "The Hollow Needle" by Maurice LeBlanc. After many false leads and a protracted, failed attempt to ransom the panel back to the bishopric, it remains missing.
Stolen antiquities of the Middle East – Thousands of illegally excavated archaeological objects have emerged from conflict zones in the Middle East over the last few years, with ISIS most overtly financing their activities through illicit trade in antiquities. While it is difficult to trace the sales of individual objects excavated in ISIS-occupied territories, it is thought that millions have been made through this dark art trade.
1727 Stradivari violin – A 1727 violin by famed luthier Antonio Stradivari was stolen in October 1995 from 91-year-old violinist Erica Morini's New York apartment. Stradivarius instruments have a habit of being stolen and are each worth in the low millions; this one is valued at $3 million.
Noah Charney is an international best-selling author and professor of art history.
(CNN)Last month, the board of the Isabella Stewart Gardner Museum in Boston issued a statement that they would double the longstanding reward for the return of artworks stolen from their premises back in 1990. They are now offering a cool $10 million, but with a time limit: the deal is only good until Dec. 31.
This is the latest chapter in an epic saga of the biggest art theft in peacetime history. Thirteen artworks, valued at between $300 million and $500 million (if sold legitimately on the open market), were lifted from the museum during an 81-minute window on the night after the St. Patrick's Day revels in 1990.
But while this reward doubling has made headlines, it is an act more of frustration and desperation than a sign of impending solution.
When the original $5 million reward was set, it stirred up many leads, almost all of them dead ends. Myriad theories have swirled around who was behind this crime, for surely it was some larger organized crime group, more elaborate than just the two thieves disguised as policemen who bluffed their way into the museum, tricking student security staff into opening the door without first checking with the police department.
The same criminals had tried another tactic some days prior to the theft, when one of their gang, posing as a mugging victim, frantically banged on the service entrance door to the museum, shouting for help. That night, with professional security staff on duty, the door was not opened, and it was noted that the alleged mugging victim was seen leaving amicably with his muggers later that night.
But the criminals were eventually successful, with works including Rembrandt's "The Storm on the Sea of Galilee," Manet's "Chez Tortoni" and Vermeer's "The Concert" headlining their haul.
Photos:The search for stolen masterpieces
The search for stolen masterpieces – On March 18, 1990, a pair of thieves disguised as Boston police officers entered the Isabella Stewart Gardner Museum and stole 13 priceless works of art. Twelve of the 13 pieces stolen are included in this gallery. Here you see one of five "Gouache" drawings by Edgar Degas.
The search for stolen masterpieces – Upon entering, the intruders handcuffed the security guards, bound them with duct tape and left them in the basement, authorities said at the time. Pictured here is another of Degas' "Gouache" drawings.
The search for stolen masterpieces – In less than 90 minutes, the bandits went through the museum's Dutch Room on the second floor and stole three Rembrandts, including the Dutch artist's only seascape, "Storm on the Sea of Galilee," along with Vermeer's "The Concert," five Degas drawings and other items, according to the museum's website. Pictured here: a third of Degas' "Gouache" drawings.
It was a complicated crime, too. There were details that suggested that the thieves knew exactly what they were looking for, that they had been instructed what to steal. They bypassed some works of equal or greater value (and perhaps more portable) than the art they took.
The crime was an odd balance of calculated and messy. The thieves tried to open a glass case to remove a Napoleonic battle flag, but failed to do so. Rather than smash the glass, they took the eagle-shaped fitment to the flag instead. But while they were tentative about smashing a vitrine, they did break glass that was covering a large painting that they appear to have considered stealing, but then decided was too unwieldy.
Tracking the haul
In 2013, on the 23rd anniversary of the Gardner art heist crime, the FBI held a press conference that sounded promising. They revealed some new information considered sensational by the media at the time.
As a professor specializing in the history of art crime, I get a lot of questions about this, the highest-profile art heist since the theft of the Mona Lisa. I know much about the soap opera that has been going on behind-the-scenes for many years, and I know how to read police press conferences and offers of reward.
While the press conference was interesting to update the general public, those of us in the know have been aware of all that was revealed, and for some time. In the press conference, couched in terms of an appeal for information, it was revealed that the investigation has shown that the Gardner works passed through several hands after the theft, that they were transported through Connecticut and Pennsylvania, and were offered for sale in Philadelphia.
This is useful information, as some theorists suggested that the works were destroyed, or had been shipped to Ireland, with IRA links to the theft. (The IRA were involved in quite a few major art thefts).
That the art was offered for sale means that it was not immediately brought to the secret sitting room of some Thomas Crown or Dr. No (who, in the first James Bond film, has a hideout decorated in copies of real stolen art), where it has since remained. The careful phrasing of the press conference demonstrated that, while progress has been made in terms of learning some of the backstory, the investigation is at a stand-still, and has been for years. Much is known, but not quite enough to recover the art, which seems to be intact and largely unharmed, all 13 pieces.
The risks of rewards
For years now, driven by desire for glory -- for the Gardner hoard is the Holy Grail for art detectives -- and probably by a healthy interest in the eye-opening reward, a number of prominent investigators, in addition to the FBI, have been searching for clues, and have made enormous strides.
There are several fine books written about the ins-and-outs of the case, but the general consensus is this: The thieves, and those who know where the art is hidden, are dead. It remains to be seen if anyone still living knows the hiding place of the loot. That's what the reward, and increased attempts at stirring public interest, aim to do.
Rewards, in the world of art theft, are sharp-handled swords. They can work well, or they can hurt the handler.
In 2008, a theft of gold statuary from the Museum of Anthropology at the University of British Columbia was solved thanks to the museum board posting a reward that was worth significantly more than the raw material value of the gold stolen. The thieves might have initially intended to melt their loot, thereby erasing the evidence. But the lure of the reward stayed their hand long enough for the police to catch them.
On the other hand, rewards can backfire. In 1975, 28 paintings were stolen from the Gallery of Modern Art in Milan. A reward was offered, the paintings were returned (by associates of the thieves) and paid (to associates of the thieves) and the art was placed back on display. Within months, thieves broke in again and stole 35 works, including many of the same paintings. It was likely the same thieves dipping into the same well twice. The fruits of this second theft have never been recovered.
Photos:Lost and found: Incredible works discovered
Rembrandt's drawing of a dog has been in the collection of the Herzog Anton Ulrich Museum in Braunschweig, Germany, since 1770, but was long thought to be the work of a different artist.
In 2016, a drawing attributed to Italian master Leonardo da Vinci was discovered in Paris, after a portfolio of works was brought to Tajan auction house for valuation by a retired doctor. It was valued at 15 million euros ($16 million).
The drawing also features sketches of light and shadows and notes on the back.
While researching for an episode of BBC's "Britain's Lost Masterpieces" series at the National Trust for Scotland's Haddo House collection in Aberdeenshire , art historian Bendor Grosvenor and a team of experts found a painting that could have been painted by artist Raphael.
In April 2016, a painting believed to be by Caravaggio was found in an attic in France. Experts said it could be worth $136 million.
The work was originally purchased for $25 dollars at the end of the 19th century. It could now be worth $26 million.
In 1911, Leonardo Da Vinci's "Mona Lisa" was stolen from the Louvre by an Italian who had been a handyman for the museum. The famous painting was recovered two years later.
A statue called "Young Girl with Serpent" by Auguste Rodin was stolen from a home in Beverly Hills, California, in 1991. It was returned after someone offered it on consignment to Christie's auction house. Rodin, a French sculptor considered by some aficionados to have been the father of modern sculpture, lived from 1840 until 1917. His most famous work, "The Thinker," shows a seated man with his chin on his hand.
Picasso's "La Coiffeuse" ("The Hairdresser") was discovered missing in 2001 and was recovered when it was shipped from Belgium to the United States in December 2014. The shipper said it was a $37 piece of art being sent to the United States as a Christmas present. The feds say it was actually a stolen Picasso, missing for more than a decade and worth millions of dollars.
Italy's Culture Ministry unveils two paintings by the French artists Paul Gauguin and Pierre Bonnard on April 2, 2014. The paintings, worth millions of euros, were stolen from a family house in London in 1970, abandoned on a train and then later sold at a lost-property auction, where a factory worker paid 45,000 Italian lire for them -- roughly equivalent to 22 euros ($30).
A Renoir painting finished in the 1800s, loaned to a museum, reported stolen in 1951 and then bought at a flea market in 2010 has to be returned to the museum, a judge ruled on January 10, 2014. The 5½-by-9-inch painting, titled "Landscape on the Banks of the Seine," was bought for $7 at a flea market by a Virginia woman. The estimated value is between $75,000 and $100,000.
Seven famous paintings were stolen from the Kunsthal Museum in Rotterdam, Netherlands, in 2012, including Claude Monet's "Charing Cross Bridge, London." The paintings, in oil and watercolor, include Pablo Picasso's "Harlequin Head," Henri Matisse's "Reading Girl in White and Yellow," Lucian Freud's "Woman with Eyes Closed" and Claude Monet's "Waterloo Bridge," seen here. Works by Gauguin and Meyer de Haan were also taken.
Eight months after Salvador Dali's "Cartel de Don Juan Tenorio" was stolen in a New York gallery, a Greek national was indicted on a grand larceny charge in 2013.
In 1473, Hans Memling's "The Last Judgment" was stolen by pirates and became the first documented art theft.
Among their many crimes, the Nazis plundered precious artworks as they gained power during World War II. "Adele Bloch-Bauer I," by Austrian artist Gustav Klimt, was confiscated from the owner when he fled from Austria.
Many works of art that were taken by the Nazis were never recovered. Others were returned after years of legal battles. "Christ Carrying the Cross," by Italian artist Girolamo de' Romani, was returned to his family in 2012.
"The Scream" was one of two Edvard Munch paintings that were stolen from the Munch Museum in Oslo, Norway, in 2004.
In 2007, Pablo Picasso's oil painting ''Portrait of Suzanne Bloch" was taken from the Sao Paulo Museum of Art. It was recovered two years later.
There is also the complicated issue of how to swap the stolen art for the reward without granting illegal amnesty to the thieves, and without appearing to be paying a ransom, which is illegal in many countries.
So, will the lure of $10 million, and a closing window of opportunity, suddenly shake the art out of the woodwork, and get results, when $5 million led to no tangible results? Anyone, aside from the thieves themselves, is eligible for the full reward, but only if all 13 objects are returned in acceptable condition.
The answer is likely no, for $5 million is already so robust a reward, so far beyond the amount that thieves could possibly get for such famous art on the black market (where experts estimated that stolen art, if a buyer can be found at all, goes for around 7-10% of its estimated legitimate auction value, with more famous works all but impossible to sell, full stop), that doubling it does not suddenly provide an incentive that had previously been absent.
I have no doubt that the art will eventually be found. But it will be a matter of luck, of stumbling on its hiding place at some unknown point in the future, of accidentally pricking oneself while wading through a haystack, and thereby finding the lost needle. | |
Q:
How do I use argc, *argv[] to compute in C?
I'm calculating the volume of a sphere with an r-meter radius using argc and *argv[]. I'm thinking that if I enter "./radius 2" on the command line, argv[1] would become "2", so that the code would be:
int main(int argc, char *argv[]){
float v;
v = 4.0 / 3.0 * 3.14 * argv[1] * argv[1] * argv[1];
printf("V = %f\n", v);
return 0;
}
but it seems that argv can't do the calculation.
What should I do?
A:
Command line arguments are passed to the program as strings - you'll need to use atof or strtod to convert the string representation of a value to its numeric equivalent.
#include <stdlib.h>
#include <stdio.h>
int main( int argc, char **argv )
{
    // All kinds of error checking omitted (e.g. verifying argc > 1)
    double input = strtod( argv[1], NULL );   // convert the string argument to a number
    double v = 4.0 / 3.0 * 3.14 * input * input * input;
    printf( "V = %f\n", v );
    return 0;
}
Unless you're really constrained on space, use double instead of float.
| |
Note: EPA no longer updates this information, but it may be useful as a reference or resource.
Please see www.epa.gov/nsr for the latest information on EPA's New Source Review program.
March 28, 1978 Jewell Coal and Coke Company - Applicability of Condition 2 of the Interpretative Ruling 23.13
|
THE TEXT YOU ARE VIEWING IS A COMPUTER-GENERATED OR RETYPED VERSION OF A
PAPER PHOTOCOPY OF THE ORIGINAL. ALTHOUGH CONSIDERABLE EFFORT HAS BEEN
EXPENDED TO QUALITY ASSURE THE CONVERSION, IT MAY CONTAIN TYPOGRAPHICAL
ERRORS. TO OBTAIN A LEGAL COPY OF THE ORIGINAL DOCUMENT, AS IT
CURRENTLY EXISTS, THE READER SHOULD CONTACT THE OFFICE THAT ORIGINATED
THE CORRESPONDENCE OR PROVIDED THE RESPONSE.
|
23.13
----------
SUBJECT: Jewell Coal and Coke Company - Applicability of Condition 2 of the Interpretative Ruling
FROM: Director
TO: Gordon M. Rapier, Director
This is in response to your request dated February 17, 1978, concerning Jewell Coal and Coke Company's planned construction of 33 new coke ovens and the applicability of EPA's Interpretative Ruling (IR) (in particular, Condition 2 of the Ruling).
Condition 2 of the IR requires that the owner or operator of the proposed new or modified major source (Jewell Coal & Coke) demonstrate that all existing sources owned or controlled by the applicant in the same Air Quality Control Region as the proposed source are in compliance with all applicable SIP requirements (or are in compliance with an expeditious schedule which is federally enforceable or contained in a court decree).
The 33 new coke ovens will be constructed ostensibly as replacements to Batteries 1 and 5. This closure will result in adequate emission offsets satisfying Conditions 3 and 4 of the IR. However, Batteries 2, 3, and 4 in plant 2 are currently operating in violation of the Virginia SIP. A schedule issued by Virginia has not been approved by EPA, nor has Jewell Coal and Coke signed a similar Consent Order initiated by Region III.
Although EPA has, in the past, suspended Condition 2 for replacement-type facilities, I have concluded that such a suspension for Jewell Coal and Coke is not warranted by the facts. By telephone on March 8, 1978, members of your staff informed DSSE that the new ovens have a rated capacity of 205,000 tons per year, while Batteries 1 and 5 had a rated capacity of 110,000 tons per year. Since the production capacity of the new ovens is in excess of the capacity of the shutdown batteries (by 95,000 tons per year), any new ovens which provide the production capacity increase are not replacement facilities and do not come within the limited exception from Condition 2.
Therefore, we concur with your recommendation that Condition 2 be complied with by Jewell Coal and Coke before any Section 51.18 new source review permit may be issued for the construction of the 33 new ovens. Any State delayed compliance order requiring compliance at the existing sources will not become effective for the purposes of satisfying Condition 2 until publication of a notice in the Federal Register approving the order after the appropriate proposal and public comment period.
I have also noted from your memo that you feel Condition 3 of the IR is satisfied due to the closure of Batteries 1 and 5. While Batteries 1 and 5 were closed on April 5, 1977, application for the permit to construct the new ovens was filed April 15, 1977. A strict interpretation of footnote 7 of the current IR would mean that the emission reductions associated with the closure of Batteries 1 and 5 could not be used to provide offsets consistent with the IR since these closures were not required by an enforcement action providing for the new source as a replacement for the shutdown. It is my understanding that although Batteries 1 and 5 were, at the time of the permit application, under State order to cease operation, this order did not provide for any new source as a replacement. Therefore, Jewell Coal and Coke would not normally be permitted to credit the decrease in allowable emissions provided by the shutdown of Batteries 1 and 5 as offsets for the emissions from the new ovens. I understand that this position is counter to communications you have had with the State and source.
I believe it is appropriate, upon full consideration of the facts, to suggest a possible alternative Agency position regarding these offsets which you may elect to adopt in this case, based on the facts and past history of the Jewell situation. This alternative involves a less restrictive approach than literal compliance with the IR's Condition 3, footnote 7, in that Jewell Coal and Coke would be excepted from the requirement that shutdowns prior to permit application can be used for offset credit only if required by an enforcement order requiring shutdown and replacement by a new source. Such an exception is based on and limited to the unique circumstances of the Jewell situation, including the cause of the shutdown and the close proximity in time between shutdown and permit application.
This approach would permit Jewell Coal and Coke to apply the decrease in emissions from the shutdown (approximately 633 tons/year) only to that portion of the allowable emissions from the new ovens which is related to the replacement capacity of the ovens (approximately 54% or 292 tons/year). Such a limitation on offset credit is required by footnote 7 and limits any exception to the IR to that discussed above. Jewell would thus be required to obtain offsets for that portion of the emissions for the new ovens which is related to the capacity increase (approximately 46% or 250 tons/year).
If you have any questions or comments please contact Rich Biondi (FTS 755-2564) or Jean Vernet (FTS) 755-7224) of my staff.
cc: Kent Berry, OAQPS
Date: February 17, 1978
The attached memorandum was magnafaxed to John Rasnic on 2-17-78 and a request was made for decision on the questions raised as soon as possible.
EPA will be meeting with State and Company officials on February 23 and 24 and we would appreciate at least a preliminary decision by this time.
Thank you.
EILEEN M. GLEN
SUBJECT: Jewell Coal & Coke Company - Applicability of DATE: FEB 17 1978
FROM: Gordon M. Rapier, Director Air & Hazardous Materials Division, 3AH00
TO: Edward E. Reich, Director Division of Stationary Source Enforcement, EN-341
Question: Should the proposed 33 new ovens be considered replacement facilities? If so, should Jewell Coal and Coke Company be exempted from Condition No. 2 of the December 21, 1976 Interpretative Ruling?
Discussion
Jewell Coal and Coke Company, Vansant, Virginia (hereinafter "the Company") operates a facility for the manufacture of metallurgical coke. The facility consists of two plants, Nos. 1 and 2, both of which violate current air pollution control regulations (particulate mass emissions and visible emissions).
Plant No. 1 consists of Batteries Nos. 1, 2, 3, 4 and 5. The ovens are modified beehive ovens (Mitchell ovens) and are unique to this facility. A flood in April 1977 caused the closing of Batteries Nos. 1 and 5 (86 ovens); they will not be reopened. Batteries Nos. 2, 3, and 4 are in operation manufacturing foundry coke and continue to violate the standards. These batteries were to have been shut down by June 30, 1977 but the Virginia State Air Pollution Control Board (VSAPCB) has twice extended that closing date and now, the batteries are expected to be in operation until at least April 30, 1978. On January 10, 1978, the Company submitted a Compliance Plan to the VSAPCB for the control of particulates and visible emissions at Plant No. 1 and requested its continued Operation thru July 1, 1979. The proposed plan is scheduled for completion by December 1978. The Board will review the Company's proposal at its February 6, 1978 meeting with final action taking place at the following Board meeting.
Plant No. 2 produces furnace coke. These ovens are also in violation of the State's particulate and visible emissions regulations. On October 21, 1977, the VSAPCB issued an Order calling for the compliance of all 45 ovens by December 1978. This schedule has not been approved by EPA and the Company has failed to sign a similar Consent Order initiated by us.
Construction of the 33 new ovens was approved by VSAPCB at its meeting on June 6, 1977. These ovens will be an extension to Plant No. 2 and will produce furnace coke. Permanent closure of Batteries Nos. 1 and 5 provides adequate emission offsets. However, approximately 72 more ovens would have to be closed before production capability of the closed ovens would equal that of the 33 new ovens.
EPA has not yet approved the offsets because Conditions Nos. 1 and 2 of the December 21, 1976 Interpretative Ruling have not been met. The VSAPCB permit, issued on June 6, 1977, references the Company's application which had specified that sheds would be built to control pushing emissions (LAER) but does not specifically require said sheds to be installed. Furthermore, although construction is proceeding on the ovens, the Company has not begun any work on the sheds. This leads us to believe the ovens will begin operation without emission controls. The Company's failure to satisfy Condition No. 2 is discussed above.
The Legal Branch, EPA, Region III has requested Headquarters approve a referral to the Department of Justice. This referral includes a Motion to Enjoin Jewell from operating Batteries #2, 3, and 4 of Plant No. 1. No action has yet been taken on this request.
By letter dated February 3, 1978, we have notified VSAPCB that certain problems exist with the construction permit (copy attached). We will be meeting with their staff to try to resolve these deficiencies as soon as possible.
EPA has never formally revised the December 21, 1976 Interpretative Ruling to allow for the waiver of Condition 2 because the new source has been designated a replacement facility. Page 6 of the December 19, 1977 proposed Emission Offset Interpretative Ruling states: "...The original intent was that such facilities (replacement) should be considered a major modification subject to the emission offset requirements, and the Ruling is revised to make this clear." Based on the proposed revision of the I.R., it is our recommendation that Condition 2 not be waived.
Your earliest response to the replacement/exemption question with respect to this facility will be appreciated. If you have any questions about this matter, please contact Ms. Eileen M. Glen at 215/597-9871. | https://archive.epa.gov/airquality/ttnnsr01/web/html/n23_13.html |
ALLISON ROSE OVENS is an Individual/Sole Trader based in or near Badgin, Balladong & Burges in Western Australia, Australia. ALLISON ROSE OVENS is a registered Australian Business Name with the Australian Business Number (ABN) 30580893550.
This business was first added to the ABN register on 28th September 2015 and has been trading for 4 years.
This business has been registered for Goods and Services Tax in Australia.
ALLISON ROSE OVENS last updated Australian Business Name information on 7th November 2019. | http://www.auscompanies.com/en/30580893550/OVENS-ALLISON |
Five Mile Prairie School is a homeschool partnership program where families are supported as the primary educators. A certificated teacher provides a traditional classroom experience once per week where students explore, grow and nurture their life-long love of learning. Success at FMP involves availability of the teaching parent to be at home to teach.
Mission Statement
To ensure all students learn at high levels.
Vision Statement
If we are on mission, our policies, programs, and practices at Five Mile Prairie School reflect a commitment to help all students learn at high levels. Students do their coursework with integrity, make satisfactory progress, and earn the high school credits needed for graduation.
Letter From the Principal:
Hello and thank you for visiting Five Mile Prairie School! We are a strong school community that takes great pride in partnering with a child’s most important teacher to provide a rich and empowering learning environment for your child.
We are excited to work with families who choose to do some or all of their children’s education from home. At Five Mile Prairie, we work closely with parents to provide quality educational opportunities and resources for those families. Our mission is to collaborate with parents to customize a child’s education in order to prepare students for life’s responsibilities, challenges and opportunities.
We actively partner with parents to provide a quality, academically challenging, safe, and supportive learning environment, while recognizing parental authority over the educational direction of their students. We are proud of our staff and their commitment to meeting each individual child’s needs.
We operate under the authority of the State of Washington and the laws pursuant to an alternative learning experience (ALE, WAC 392-121-182). Five Mile Prairie is a school that takes its responsibility for helping students reach their learning goals and objectives very seriously.
Our Family Handbook is provided to you in an effort to answer some of your questions and provide information on the operation of our school. However, if you have additional questions or feel your questions were not fully answered, please call us at 509-465-7700. We look forward to working with you and your child.
Nick Edwards, Principal
- Who is a good fit for Five Mile Prairie Secondary (6-12)? | https://mlo.mead354.org/fmp-6-12 |
We run a series of scripts on our server that continuously gather information from a variety of sources: public APIs, private APIs, and direct scraping of various sites.
We currently gather information from the following sources:
Steam
SteamCharts
IsThereAnyDeal
HowLongToBeat
ESRB & PEGI
Metacritic & OpenCritic
We currently only track stats for Steam games, but have plans to add stats for the following platforms (in roughly this order):
Nintendo Switch
Epic Games Store
Playstation
Xbox
GOG
Itch
(No current plans for mobile games)
We currently only track applications classified as "games" -- so no tools, demos, mods, bundles, DLC, editors, etc.
We run our server scripts continuously, updating the basic records for every game on every currently supported platform once per day.
The metrics we collect are simple facts directly provided by the API endpoints and pages we query (see Meet the Metrics for more details).
2. Contextualize information
Once we have the data, we present several views for properly contextualizing this information. These views were designed based on interviews with game industry professionals including developers, publishers, PR agencies, and generally anyone in this field with a strong opinion.
Generally speaking, we do not provide statistical estimates. Instead, for any given category of thing we might want to measure -- such as performance or critical acclaim -- we measure several related metrics that we can get at directly. By default we display these metrics as relative ranks (though the metrics can also be viewed in their raw form, or as percentiles). This means instead of saying a game has 14,386 of metric X, we say that for metric X, it is the #12 game on Steam. Our hypothesis is that various metrics will have some degree of correlation with the underlying insights our users are seeking to discover, and converting figures to relative ranks makes it easy to compare games directly, even if we can't say for certain what the absolute value of the underlying insight is (ie, "how many copies did this game sell?").
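As a rough illustration of that rank-based presentation, the sketch below converts raw metric values into relative ranks and percentiles. It is a minimal Python example; the function name, field names, and sample figures are hypothetical and are not taken from the actual GameDataCrunch pipeline.

```python
def rank_metric(raw_values):
    """Convert {game: raw metric value} into relative ranks (1 = highest)
    and percentiles, mirroring the rank-based display described above."""
    ordered = sorted(raw_values.items(), key=lambda kv: kv[1], reverse=True)
    n = len(ordered)
    results = {}
    for i, (game, value) in enumerate(ordered):
        rank = i + 1
        # Percentile of games this title outranks (100 = top of the chart).
        percentile = 100.0 * (n - rank) / (n - 1) if n > 1 else 100.0
        results[game] = {"raw": value, "rank": rank,
                         "percentile": round(percentile, 1)}
    return results

# Illustrative numbers only -- e.g. a follower-count style metric.
print(rank_metric({"Game A": 14386, "Game B": 203, "Game C": 5120}))
```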
3. Meet the Metrics
We track an ever-growing list of metrics, which are exhaustively catalogued in the glossary below:
Special thanks to all the following:
GameDiscoverCo
Patreon supporters
The Clark Tank
SteamDB
Steam250
SteamCharts
PlayTracker
HowLongToBeat
IsThereAnyDeal
Something wrong/broken/stupid? | https://www.gamedatacrunch.com/method |
Note that all information in this FAQ is compiled by community moderators Jennifer and Vainamoinen. We check our facts thoroughly and have a lot of Telltale experience on our backs, but this is not an 'official' source, we're just volunteers!
(1) What is Tales of Monkey Island?
Tales of Monkey Island is the fifth game in the Monkey Island series, originally created by LucasArts (originally known as Lucasfilm Games). The series follows Guybrush Threepwood, mighty pirate, and his wife Governor Elaine Marley, as they sail the Caribbean and fight the evil undead pirate LeChuck.
(2) Were the creators of Monkey Island involved in the game?
Ron Gilbert was involved in early brainstorming sessions, as the team worked out the plot and devised some puzzles. Dave Grossman was heavily involved, as he was the director of the season. The only one of the three people who created the original two Monkey Island games who was not involved was Tim Schafer, as he was busy running his own studio, Double Fine Productions. In addition to the original creators of the series, Tales of Monkey Island was co-designed by Chuck Jordan, the co-writer of the third Monkey Island game, The Curse of Monkey Island, and was co-designed and co-written by Michael Stemmle, the co-director of the fourth Monkey Island game, Escape from Monkey Island.
(3) What is gameplay like?
This game was released before the cinematic story games such as The Walking Dead, so the gameplay doesn't consist of choices and consequences. The gameplay consists of puzzles which are mostly solved through manipulating objects and talking to characters.
(4) For which platforms is Tales of Monkey Island available?
It is available for PC and Mac, PSN on PlayStation 3, and on iOS. It was available on WiiWare for Wii, but it is no longer possible to purchase digital games on the Wii platform.
(5) How much does Tales of Monkey Island cost?
The entire season (five episodes) can be purchased for PC and Mac for $20 on Steam, GOG.com, or the Telltale store. It's also $20 for PS3 at PSN. It is also available in individual episodes on iOS at $3 a piece for a total of $15. It is no longer available to purchase on WiiWare or at the Telltale store.
(6) How many episodes are there in Tales of Monkey Island?
There are five episodes. The episode titles are Launch of the Screaming Narwhal, The Siege of Spinner Cay, Lair of the Leviathan, The Trial and Execution of Guybrush Threepwood and Rise of the Pirate God.
(7) Will Tales of Monkey Island be ported to other platforms?
This is unlikely as Tales of Monkey Island is several years old now and Telltale mainly only ports their newer games to other platforms.
(8) Is there a bonus/free collector's DVD version for Tales of Monkey Island available from the Telltale store?
There was, but unfortunately the collector's DVD is now out of print.
(9) Is there a soundtrack CD available?
No soundtrack CD was ever produced for Tales of Monkey Island. Since Telltale has mostly moved on from physical merchandise, it seems unlikely that a soundtrack CD will be released in the future. | https://community.telltale.com/discussion/82687/monkey-island-unofficial-faq-please-read-before-posting |
It is difficult to overstate the importance of loss development factors and their impact on aspects of retained risk, whether through traditional insurance policies or captive structures, say Enoch Starnes and L. Michelle Bradley of SIGMA Actuarial Consulting Group.
There is a widespread and growing interest in the use of captive insurance as a tool for risk management, transfer, and financing. Naturally, this rising interest is accompanied by an influx of corporate decision-makers who may have little or no experience in the realm of insurance, especially with regard to risk retention issues.
While the SIGMA team has written in the past about the ways in which one might track loss development for captives, we believe an opportunity has presented itself to acquaint new entrants to the captive insurance landscape with one of the core concepts that must be understood when retaining risk at any level: loss development factors.
Loss development is a term often used by those in the insurance industry to describe the high-level activity of claim experience that occurs over time. The majority of claims in property casualty insurance tend to have a period of time in which their associated costs “develop” until they reach their ultimate value at closure. This timeframe can vary significantly depending on the risk being observed—property claims are typically reserved and paid over a relatively short amount of time, whereas workers’ compensation claims might take months or years to reach their ultimate value.
This high-level change is quantified through loss development factors, which measure the aggregate improvement or deterioration in groups of claims. Usually, the claims being measured are grouped based on the specific risk being analysed and the policy period in which they occurred or were reported. These factors drive a large portion of actuarial analytics, and the amounts used in a specific report might be based on industry benchmark data or claims experience unique to the entity being analysed. The ultimate values (or ultimate loss estimates) they produce usually drive the results of loss projections and estimates of required reserves, two of the most frequently used components of an actuarial report.
It is difficult to overstate the importance of loss development factors and their impact on numerous aspects of retained risk, whether that is through more traditional insurance policies or captive structures. As such, the remainder of this article will hopefully serve as a primer for those unfamiliar with their use by answering several common questions about loss development factors.
Question 1: How are loss development factors calculated and used?
As mentioned above, loss development factors are used to project the additional cost expected on claims associated with occurrence or reporting periods. While these factors quantify the late developing aspects of certain losses, they also account for losses that occurred during the period but are not reported until a later date. Both possibilities are often combined into a singular concept known as incurred but not reported (IBNR).
The first step of calculating a loss development factor is the construction of a loss development triangle, which organises loss data based on an associated policy period and a specific point in time, often referred to as an “evaluation date”. The next steps involve several mathematical calculations that measure the change in total claim costs for each policy period from one evaluation date to the next. While a detailed description of these calculations is outside the scope of this high-level article, numerous educational resources can be used to review this process.
Once calculated, loss development factors are applied elsewhere in an actuarial report to estimate the ultimate loss value of claims in specific policy periods. In recognition of the uncertainty inherent in such calculations, multiple “development methods” (and in many situations, other actuarial methods) are used to produce a range of estimates of ultimate loss values based on the various aspects of insurance claims (payments, reserves, etc). Consider below an example of a loss development factor being used to estimate ultimate losses in a period using incurred losses.
01/01/17-18 Incurred losses = $1,000,000
Incurred loss development factor = x 1.100
Estimated ultimate losses = $1,100,000
As a final note, consider that the process of calculating development factors isn’t limited exclusively to loss values, as they are often used to analyse claim counts and other quantifiable amounts as well.
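To make the mechanics above concrete, here is a minimal Python sketch that builds a small incurred-loss development triangle, derives age-to-age factors, and develops the least mature period to an ultimate estimate. All figures, the 12-month evaluation cadence, and the assumed tail factor are illustrative only and are not taken from the article.

```python
# Policy period -> incurred losses evaluated at 12, 24 and 36 months of maturity.
# All values and the assumed tail factor are illustrative only.
triangle = {
    "2017": [600_000, 900_000, 990_000],
    "2018": [650_000, 975_000],
    "2019": [700_000],
}

def age_to_age_factors(tri):
    """Average development from each evaluation age to the next."""
    max_len = max(len(v) for v in tri.values())
    factors = []
    for age in range(max_len - 1):
        ratios = [v[age + 1] / v[age] for v in tri.values() if len(v) > age + 1]
        factors.append(sum(ratios) / len(ratios))
    return factors  # e.g. [12-to-24 factor, 24-to-36 factor]

def loss_development_factor(factors, current_age_index, tail=1.0):
    """Cumulative factor: product of the remaining age-to-age factors and a tail."""
    ldf = tail
    for f in factors[current_age_index:]:
        ldf *= f
    return ldf

factors = age_to_age_factors(triangle)
latest_incurred = triangle["2019"][-1]          # most recent, least mature period
ldf = loss_development_factor(factors, current_age_index=0, tail=1.05)
print(f"Estimated ultimate losses for 2019: ${latest_incurred * ldf:,.0f}")
```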
Question 2: How can a captive develop unique loss development factors or find industry benchmark factors?
If a captive has sufficient historical data, the loss development triangles referenced above can be constructed allowing the calculation of loss development factors which are unique to the insured entity (or entities). Theoretically, the use of unique factors as opposed to industry averages produces a more accurate calculation of estimated ultimate losses.
Doing so requires multiple loss runs at evaluation dates in equally cadenced intervals, such as 12/31/17, 12/31/18, 12/31/19, etc.
Industry benchmark factors can be determined based on published loss development triangles from various organisations or rating bureaus. Often, this information is available only through a fee or subscription basis. Actuarial and other types of firms which have access to this information might use it as part of their own analytics, and this access (or lack thereof) could be an important piece in the decision of which third-party firms a captive partners with.
Question 3: What special considerations do captives have regarding loss development factors?
Most often, the following considerations should be reviewed for any entity seeking to enter the captive insurance space or better understand their existing captive:
- New captives should consider building loss development triangles in the time period shortly following captive inception. This year-to-year process can be much more efficient than trying to construct unique loss development triangles at a later date.
- Accounting Standards Update disclosures require some triangles and payment development patterns to be accumulated each year. The paid loss development disclosures, for example, should produce paid loss development information which could then be reused in other captive analytics.
- If a captive owner decides to add an insured entity, change the retention level for specific risks, or make other changes impacting the captive’s risk portfolio, the effects on loss development should be reviewed closely. This process can be done as part of or separately from standard actuarial reporting to ensure the actuarial team is involved in the early stages of understanding the potential impact.
Question 4: What are some common mistakes made when using loss development factors?
In our history as an actuarial consulting firm, we have seen several mistakes made by those reviewing or using loss development factors. The most common of these are:
- Using a factor for the wrong coverage. Loss development factors for workers’ compensation, for example, are very different from those used to analyse automobile liability or products liability, and using these factors interchangeably produces significant risk of over or under-estimating ultimate losses. For workers’ compensation, factors may even differ significantly by state. If unique, credible data is available, loss development factors by coverage should be developed, but in lieu of such data, an appropriate industry factor for that coverage may be used.
- Using a factor on the wrong type of loss. Paid loss development factors are often very different from incurred factors. More specifically, paid factors tend to be much larger, as payments almost always lag the reserving changes impacting incurred losses. Industry loss development factors based on paid losses and incurred losses are available and should be applied only to the appropriate type of loss.
- Selecting a loss development factor that does not properly reflect the coverage trigger. This type of mistake might involve the use of a claims-made loss development factor when the coverage being analysed is occurrence-based.
- Applying loss development factors to individual claims. The construction of loss development factors relies on the law of large numbers to produce reasonable estimates. That being the case, applying such factors to a single claim is not in line with the intended scope of their use.
Question 5: How does my captive’s retention change affect my historical loss development patterns?
We generally recommend that, as a starting point, loss development triangles should be constructed on an unlimited (or ground-up) basis. While there are certainly situations where triangles at specific retention levels should be considered and used, having unlimited triangles allows for a base line of unadjusted loss summaries through time. In general, losses at lower retention levels tend to have more predictable loss development patterns, relatively lower development at each evaluation, and a smaller tail.
Question 6: How do loss development factors impact the analysis of emerging risks?
Many emerging risks are of a low-frequency, high-severity nature. In these situations, loss development factors and the associated loss development methods may not be a reasonable approach for estimating ultimate losses. This is largely because of the difficulty in constructing credible loss development triangles, as losses may be highly volatile and claim history can be limited.
Another issue is that benchmark factors may not yet be available for emerging risks, as the types of firms which gather and analyse this data on a widespread scale don’t have enough information to produce reasonable results. As a result, other actuarial methods which do not rely on development patterns are normally relied upon.
Current examples of emerging risks matching this profile include cyber risk or breach liability. A company facing such risks may expect to experience a claim once every five to 10 years, and the associated severity of an event could easily be in the millions of dollars. In addition, there may be a significant lag in loss-producing events between their occurrence date, the date at which they become known, and the date at which they are eventually reported. Regardless of the impact on loss development, it is always worth noting that coverage triggers for emerging risk policies should be carefully reviewed prior to captive placement.
Conclusion
The concept of loss development is certainly more nuanced than can be covered in a single article, but a high-level comprehension should always be the objective of those involved in captive insurance. Its reach and impact are simply too great to be ignored or misunderstood, especially by those tasked with making large-scale decisions impacting their organisation.
That said, it is important to keep in mind that loss development is only one of a number of concepts surrounding actuarial analytics, and a focus should be maintained on the numerous ways in which actuaries and their work can help captive owners better understand their captive and accomplish their long-term goals relating to risk and finance.
Actuaries are always happy to help those using our reports get the most value possible from their contents, so potential captive owners new to this exciting industry should never hesitate to reach out for assistance.
Enoch Starnes is an actuarial consultant at SIGMA Actuarial Consulting Group.
L. Michelle Bradley is a consulting actuary at SIGMA Actuarial Consulting Group. | https://www.captiveinternational.com/contributed-article/loss-development-factors-a-primer-for-captives
8 June 2019 AD; Feast of Mary, Mediatrix of All Graces, St Medard and St Gildard
This article has few videos explaining Higgs Boson, as well as Z, W bosons.
The findings from the Equation of the Construction of the World are getting really interesting now, as we move into the Unknown. Unknown to the present science.
Bosons form a family, exactly as in the previous cases of the quarks and of the neutrinos and the electron, tau and muon. By analogy, there are the Z boson, the W+ and W- bosons, and the Higgs boson. These are four bosons, and there should be six, so either there is another type of Z boson and another type of Higgs boson (if they follow the rule of the six quarks), or there are three bosons plus three "relatives" of the particles, as in the case of the electron, tau and muon and the three neutrinos. In the second case the Higgs boson would be a fourth particle, so it does not add up. This means that there is another Z-boson-like particle as well as a Higgs-boson-like particle, bringing the total to six. The Z boson has a mixing-angle value called the Weinberg angle and sits in the first position; the W+ and W- occupy the second position (i.e. the second angle); and the Higgs boson belongs either with them or in the next part - the gravity particles - but assuming it is a weak-force boson it belongs here, at the third position (i.e. the third angle).
Here are links explaining this difficult topic in a little more detail: | https://luxdeluce.com/167-narrative-for-boson-mixing-angles.html
We all know the story: a group of clever attackers smuggles something harmless-looking past a city’s defenses, and by the time the trick is discovered, the damage has been done. That is essentially the Trojan horse. In the ancient world it was a sneaky way to get inside a city and sow chaos; the Greeks used it in their war against Troy, and it worked like a charm. The story highlights one of the dangers of technology used for malicious purposes: once it has been let inside, it can be hard to stop. That’s why it’s important not to rely on any one piece of technology too much; instead, make sure you have a well-rounded security strategy in place.
What is the trojan horse?
The Trojan horse was a ruse of infiltration and deception used by the ancient Greeks to gain the upper hand in their war against Troy. According to the legend, the Greeks built a huge hollow wooden horse, hid a band of soldiers inside it, and left it outside the city while pretending to sail away. The Trojans, taking the horse for a harmless offering, pulled it inside their walls. That night the hidden soldiers climbed out, opened the gates, and let the Greek army slip into the city undetected.
What are the risks of using the trojan horse?
Trojans are malicious software programs that infect computer systems without the user’s knowledge or consent. Once installed, trojans can be used to steal data, monitor activity, and/or plant spyware on a victim’s computer. Trojans have been used by criminals to launch attacks on various organizations for years, and they continue to be a major threat today.
There are a number of ways that trojans can damage your computer. For example, if a trojan downloads and installs malware on your system, it could steal information or install spyware in order to track your activity. Trojans can also be used to attack other computers on the same network, which can lead to widespread damage or even theft of sensitive information.
Trojans are often discussed alongside other kinds of malware, such as viruses and worms. Viruses spread from machine to machine through infected files, while worms travel through networks by replicating themselves until they reach their targets. Trojan horses are particularly dangerous because they disguise themselves as innocuous software programs such as Adobe Acrobat Reader or Microsoft Office 2007. Once installed, these Trojan horses can allow intruders access to your computer or steal sensitive information.
It is important to be aware of the risks associated with using trojans and take appropriate steps to protect yourself against them. Always use caution when clicking on links in messages or emails — even if they seem legitimate — and always install updates for programs you use regularly. Additionally, be sure to keep your computer virus and malware protection up-to-date, and be aware of the signs that your computer is being attacked by a Trojan.
What can be done to protect oneself from the trojan horse?
There are a few things that can be done to protect oneself from the trojan horse. One way to do this is to make sure that all of your software is up-to-date and has been scanned for viruses. Another thing that can be done is to be careful about what type of file you are downloading from the internet. If it looks suspicious, then it probably is. Finally, never open a file sent to you in an email if you don’t know who it came from.
How it works?
The Trojan horse worked by disguising a threat as something harmless. The Greeks hid soldiers inside a large wooden horse and presented it as a gift; because the Trojans saw nothing suspicious, they brought it inside their own fortifications. Once inside, the hidden Greeks could open the way for their army undetected. Modern trojan malware works on the same principle: it poses as a legitimate program or file so that the victim willingly lets it in, and once it is running inside the system it can open the door to further attacks.
Effects
The Trojan horse is an ancient story about a threat hidden inside something that looked harmless, which the victims themselves were persuaded to bring inside their defenses. It is an example of how powerful deception can be, and how that power can be used for good or bad purposes.
How to prevent it?
There is no one answer to this question as the trojan horse has been used in a variety of ways over the years. However, some tips on how to prevent this type of attack are:
- Keep all software up-to-date and monitor suspicious e-mails closely.
- Educate employees on the dangers of clicking on unsolicited links, especially if they are not familiar with the website or sender.
- Ensure that all computers are kept up-to-date with anti-virus protection and basic firewalls.
- Regularly back up data and create safe passwords for online accounts.
Conclusion
The Trojan horse began as a legendary act of wartime deception: a threat concealed inside something the victims willingly brought within their own walls. Trojan malware follows the same pattern today, which is why the best defense is not any single tool but a layered strategy of caution, up-to-date software, and sound security habits. | https://cybersguards.com/what-was-the-trojan-horse/
In this first session, participants will compare the current communications landscape of wire-based and wireless communications with the foreseeable future of content delivery such as streaming services, apps, the Internet of Things and other influences on the broadband environment of 2020. Are there new constructs for how one thinks about the competitive communications delivery system?
10:45 a.m. – 12:15 p.m. Session II. Competition: The Big Picture
Given the realities and foresights of the first session, what would the competitive communications marketplace of the future look like? That is, just as Congress envisioned intermodal competition in the 1996 Telecommunications Act, what is the vision for a 2016 Communications Competition Act?
2:00 p.m. – 4:00 p.m. Session III. Working Groups
Each Working Group will explore a traditional area of communications regulation, below, with the purpose of describing what kind of a competitive landscape would achieve the desired results in that area. That is, what does such a competitive environment look like? How do we know when it is achieved? The Working Group will then set forth what type of regulation is necessary until the market transitions to that desired space.
Working Group A: Equitable Access Considerations and Measures
Includes digital divide issues, and ubiquitous access to services, apps, content
Working Group B: Competitive Considerations
Particular attention to the types of regulation when competition is lacking
Working Group C: Protection of Consumers/Users
Includes privacy, security, fraud
Friday, August 14, 2015
8:45 a.m. – 9:30 a.m. Session IV. Working Group Initial Reports
9:30 a.m. – 4:00 p.m. Session V. Working Groups Continued
6:00 p.m. Write up of Working Group Reports Due
Saturday, August 15, 2015
8:45 a.m. – 10:10 a.m. Session VI. Working Group Reports and Refinements
Participants will consider the reports of the Working Groups and refine their conclusions.
10:30 a.m. – 12:00 p.m. Session VII. Overview of Future of Broadband Regulation
In this final session, participants will draw conclusions from the scenarios and working groups to suggest broader conclusions about the future of broadband competition and how to think about the role of government in that future. | http://csreports.aspeninstitute.org/Conference-on-Communications-Policy/2015/agenda |
The problem of alcoholism is the most acute social problem of modern Russian society. But for a child whose family has a drinking parent, or even worse - both parents drink, alcoholism in 99 cases out of 100 is his personal tragedy.
Instructions
Step 1
Whatever the parents, for the child, these are the only close people, and he loves them, despite their shortcomings and bad habits. But sometimes, when parents go over all the limits of reason in following their inclinations, the child may develop a persistent feeling of dislike and even hostility. This is especially pronounced in adolescence, when hormonal changes in the body take place, and the teenager has more than enough of his own problems. The need to solve their teenage problems against the background of drinking parents creates additional stress on the child's psyche.
Step 2
What advice can you give to a young, immature mind in such a situation? It all depends on the general mood in the family. If the parents belong to the category of so-called quiet alcoholics, then you can conduct a constructive dialogue with them. All parents love their children, and alcoholics are no exception, unless, of course, they are completely degraded individuals. It makes sense for a teenager to start the conversation at a moment when the parents are sober and clear-headed, and to explain that their drunkenness is the cause of the teenager's problems. Those problems may include the inability to find one's place among peers, the inability to prepare well for lessons, or, finally, material problems. One conversation will not necessarily change the situation, but, as they say, water wears away a stone.
Step 3
A teenager should understand that alcohol for a person who does not yet have physical dependence is a kind of veil that disguises more serious problems. A teenager is not yet an adult, but no longer a child. He can, as far as possible, make his own attempts to eliminate the root cause. Perhaps the relationship between the parents cooled down and this burdens them - you can try to unite the family by proposing a joint event that requires thorough preparation. Perhaps one of the parents has lost value orientations, and the other's drunkenness is a consequence of empathy. It is appropriate to remind here that the child's future is the main value, and the teenager still needs parental care, moral and material.
Step 4
If parents, in principle, agree with the arguments, but do not have the strength to resist the habit, you can try to persuade them to seek qualified psychological or even medical help. In the event that the measures taken do not bring the desired result, then it is easier to abstract from their problems and lead an independent life. It should only be remembered that in adulthood there will hardly be an opportunity to wait for help from such parents, and in most cases you will have to rely solely on your own strength. To do this, one should not only study well, but already look out for a promising field of activity for the future. Some teenagers in such a situation begin to earn money on their own already at school, fortunately, there are many opportunities to receive money through honest work, at least by working on the Internet. | https://householdfranchise.com/10562352-what-a-teenager-should-do-if-parents-drink |
This cinnamon has quite a different flavor profile than the more prevalent cassia cinnamon. Ceylon, also known as "true" cinnamon, has a delicate and more complex flavor, with essence of orange, floral notes and warmth. It's significantly less potent and spicy than cassia. This is the cinnamon you'll find in Mexican Hot Chocolate or mole poblano. It works well with Indian cuisines too. In baking, it pairs well with citrus and vanilla.
Also known as Ceylon Cinnamon.
Ingredients: Sri Lankan cinnamon
Net weight: 1.6 oz
Need a shaker top? We suggest one with small holes. Don't forget to add one to your cart!
Recipes: | https://oaktownspiceshop.com/products/ceylon-cinnamon?_pos=2&_sid=2937bc2fc&_ss=r |
Prince Harry is truly dedicated to the conservation of animals, and has reached out his hand to now help elephants in Africa!
Earlier in the month, Prince Harry visited Malawi to assist with the 500 Elephants initiative, which aims to help reduce habitat pressures, ease human-wildlife conflict and boost elephant populations where a lot of poaching happens. During the mission, the royal helped move 262 elephants to safety!
“He is amazing and down to earth. He is very social but a respectable gentleman. We ate together at the camp and we camped in the same grounds – this is unique for someone of his status,” Patricio Ndadzela, country director for the nonprofit conservation body African Parks in Malawi said to PEOPLE.
Conservation is something that Prince Harry is very passionate about! He’s assisted with helping to move over 1,500 antelope and buffalo, as well as putting tagging collars on rhinos and lions in the hopes to keep them safe. It’s really cool that he actively gets involved versus just sending a check or tweeting something. Glad that he was able to be a part of this great initiative to help the elephants in Africa! | http://celebritiesdogood.com/2016/08/prince-harry-helps-save-200-elephants-africa/ |
"You're Just Gonna Be Nice": How Players Engage with Moral Choice Systems
by Amanda Lange
Abstract
Some data available from games with moral decision systems show that gamers are generally unwilling to play as evil characters. In a study, over 1000 gamers were surveyed to see how the average player interacts with a game system that allows the player to choose a "good" or "evil" path through a game story. The finding was that the average gamer prefers to be good or heroic in such games. Gamers are most interested in exploring a character whose moral choices closely match to their own. However, those players that experience a game for the second time are then more likely to choose evil. The article includes an exploration of which actions gamers felt particularly evil, and what kind of choices turn out to be more difficult for them.
My prediction is: all you guys, you’re just gonna be nice. Sickeningly, sycophantically nice to each other. And it makes me sick, because you know, in a game like Fable, we spent hours; we spent months, months and years crafting the evil side of Fable, and only ten percent of people actually did the evil side. Come on. You’re supposed to be gamers. (Peter Molyneux, 2013)
I am the ten percent.
And I find myself frustrated in conversations with gamers with similar tastes to mine in their absence of moral imagination. I know I am not my avatar in the game, so I like to experiment. Sometimes it's entertaining to me to see the results of a choice I would never make in reality. Sometimes it's just plain fun to be the bad guy. But it seemed to me that many other gamers I have spoken with have no interest in transgressing moral boundaries in story-based gaming. Their aversion to this, though I find it boring, poses some interesting questions for game designers.
A binary moral choice system has achieved a great deal of popularity in game design over the past decade. In the game Fable (Big Blue Box, 2004), as described above, there are certain segments based on player choice. The player may choose to do an explicitly labeled evil act, decreasing the in-game karma score of the player’s avatar, or a good act, which causes the character to gain in-game karma and become more heroic. This kind of system has appeared in different forms in game franchises, such as Mass Effect, inFamous, BioShock, Star Wars: Knights of the Old Republic, and Fallout. Other games, such as The Elder Scrolls V: Skyrim (Bethesda Game Studios, 2011) or Dragon Age, contained moral-decision elements that are labeled by character traits or present branching narrative without an overt karma score. Still other games, such as Spec Ops: the Line (Yager Development, 2012), contained hidden moral decisions that ask the player to commit an act in the heat of a moment. The avatar may then be considered more good, or more evil, based on the game’s judgment.
Though all games seem to define good and evil slightly differently, most games are explicit about indicating which choice was made after it was made or during that choice’s execution. A few judgments are common. Games generally consider non-violent solutions to be good solutions when they are available, and consider overtly violent solutions to be evil regardless of the context or amount of violence elsewhere in the game. Creating wanton property damage or subjugating people is evil, advocating freedom is seen as generally good, and the promotion of social equality and justice are considered good. Betrayal of former friends is considered an evil act, as is ignoring the pleas of an innocent when something can be done to help them. Games seem to generally consider actions done with a pure profit motive to be evil, but they may reward actions done altruistically in such a way that, while the roleplayed character has no profit motive, the player still makes an in-game profit on the good act. In these situations, the player is rewarded for an avatar's supposedly selfless behavior. The Project Horseshoe Think Tank referred to these choice moments as "ethical dilemmas" and did a broader set of case studies about how they are typically presented in games, including examining games not frequently considered as moral choice games and tracking how those games presented and "scored" such dilemmas (Schreiber et al., 2009).
The Status Quo
We already have some statistics for how players engage with these elements, through data-mining of the games directly. Molyneux (2013) claimed ten percent played evil in Fable. The Mass Effect 3 (BioWare, 2012) team reported a third of players chose Renegade (evil-flavored) versus two-thirds Paragon (good-flavored) (Totilo, 2013). The recent zombie adventure series, The Walking Dead (Telltale Games, 2012), not only tracked how many people made which decision in which branch, but also displayed this to players in a series of video trailers (Fogel, 2012).
However, there are a few things that I felt current data mining couldn’t adequately express. For example, a common sentiment among gamers in a game that can be played as good or evil is to play good in the first playthrough. Evil is held for a second, lower-priority playthrough after they’ve played the game “correctly.” Having total statistics does not take this propensity into account.
I was also interested in seeing how moral choices correlated with the player’s and avatar’s gender. Based on earlier research from Heidi McDonald (2013) about identity tourism in games, I already suspected that male gamers would be more likely than female gamers to choose an avatar different from their own gender. Her research also showed that even though women were more likely to create an avatar that looked like them, they were not as likely to play out that avatar’s romantic choices the same way they felt they would in real life. I wondered if this could be applied to moral choices as well. My hypothesis based on available data was that only a minority of male gamers would choose the evil path in a moral-choice game. I also expected that there would be a correlation with a female avatar and the evil path, since many gamers also switch genders on a second playthrough. I wondered if there might also be a correlation between female players and evil: If women are more likely to choose a dangerous romance in a game, are they also more likely to make darker moral choices?
The Study
Over the past few months I’ve been conducting a survey of video gamers, asking them how they approach this type of choice in video games. Participants were recruited via Twitter, a Reddit community dedicated to video games, and a news post to a games and game-culture blog. A total of 1067 gamers responded to this survey. In the survey, I asked different questions depending on if the gamers reported they only played a game once, or if they liked to play the games more than once. The questionnaire also asked if players ever played avatars that had different gender identities than their own, and whether they played a different gender in their main or secondary playthroughs.
The questionnaire also had open-ended questions about how players interpreted moral choices in games. I was particularly curious about which acts players felt were too evil or made them feel guilty. Did players feel more rewarded playing heroically, or villainously?
These results should be interpreted cautiously. This was an exploratory study with a self-selected sample that may not be representative of the broader gaming environment. Probably the biggest flaw in this first study is that some people didn’t know what kind of games this survey was asking about, so they were unable to fully complete the survey. I purposely did not lead with the titles of any particular games, hoping those who responded would draw their own conclusions about which games included a moral choice system. In the future, I may focus on particular games or types of choices. Respondents were also 88 percent male, 10 percent female, with the remainder choosing “other” or electing not to respond. Based on this demographic it’s hard to draw too many conclusions about how women engage with these games. Sampling of data from spaces populated by a higher percent of female gamers may be necessary in a future study.
Factors such as game skill and familiarity may also influence the choices that players make in games. This study did not screen for players' level of familiarity with games or ask about their favorite types of games. In a future study, gamers might be sorted based on their familiarity with games to see how this alters the data set.
Only 1 percent of responders claimed not to complete the games they played. This is obviously skewed, since statistically, most gamers do not complete games they play. Industry averages may be as low as ten percent for a very large game, such as Red Dead Redemption (Rockstar San Diego, 2010; Snow, 2011), or, for example, 42 percent of players of the aforementioned Mass Effect 3 (Phillips, 2012). Seeing this result on the survey may mean that a higher amount of people likely to respond to the survey are the sort of players that always finish games. Of these gamers surveyed, 60 percent claimed they play such a game more than once. That leaves 39 percent that claimed to only play the game once. Industry data-mining does not usually track or report replays, so it's difficult to say how close this matches the average gamer.
The One-Timers
39 percent of survey participants claimed to typically play a game only once. Within that subset of players, 59 percent of participants played the game as a good character. 39 percent of those who played the game only once did not expressly play good or evil, but claimed to make decisions “on a choice by choice basis.” Five percent of participants played only evil. A majority of these participants (55 percent) said that they “usually” tried to really do in the game what they would actually do in real life. An additional 10 percent said they “always” did in the game what they would in real life, and 23 percent answered “sometimes.”
Figure 1: The “Play only once” condition for moral choices.
The book, To Kill a Mockingbird, written by Harper Lee takes place in Maycomb, Alabama in the 1930’s during the time of segregation between blacks and whites. Two of the main characters, Miss Maudie and Atticus, say it is a sin to kill a mockingbird. Miss Maudie is an old lady that lives down the road from Atticus and his two children: Jem and Scout. Several times in the novel they say this is a sin because of a mockingbird’s innocence. The title, To Kill a Mockingbird, is appropriate for this novel because it follows the meaning of the book and two of the main characters, Tom Robinson and Boo Radley, are innocent people.
First, the meaning of this novel comes from the title, which is To Kill a Mockingbird. In the book a mockingbird represents a creature that is harmless, like a mockingbird itself. All mockingbirds do is eat, build nests, and live in trees. They do not do anything that is destructive. Atticus says to Jem, "I'd rather you shoot at tin cans in the back yard, but I know you'll go after birds. Shoot all the blue jays you want, if you can hit 'em, but remember it's a sin to kill a mockingbird." (Lee 90) When Atticus says this to Jem, he means to not kill anything because it is not right to kill an innocent creature. Then Miss Maudie says "Mockingbirds don't do one thing but make music for us to enjoy. They don't eat up people's gardens, don't nest in corncribs, they don't do one thing but sing their hearts out for us. That's why it's a sin to kill a mockingbird." (Lee 90) This is the explanation Miss Maudie gave to Jem about how mockingbirds are innocent.
Second, Tom Robinson was an innocent person who did nothing wrong, but the jury in his court case believed that he was guilty. In the court case, Tom Robinson, who is black, was accused of raping a young girl, Mayella Ewell, who was white. Segregation dominated the time of the court case, which led to Tom Robinson being found guilty. There was clear evidence that he was innocent. The evidence was that there were hand marks on the back of Mayella’s neck, but Tom Robinson had a crippled left arm, so he could not have put both hands around her neck. Also she got punched in her right eye, but Tom Robinson was unable to... | https://brightkite.com/essay-on/to-kill-a-mockingbird-analysis
BACKGROUND OF THE INVENTION
Field of the Invention
Description of Related Art
SUMMARY OF THE INVENTION
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION OF THE INVENTION
The present invention relates to applications utilizing mono-energetic gamma-rays (MEGa-rays), and more specifically, it relates to techniques that utilize MEGa-rays for characterizing isotopes.
MEGa-ray sources are created by the scattering of energetic (Joule-class), short-duration (few picosecond) laser pulses off of relativistic electron beams (several hundred MeV). The resulting scattered photons are forwardly directed in a narrow beam (typically milli-radians in divergence), are mono-energetic, tunable, polarized and have peak photon brilliance (photons/second/unit solid angle, per unit area, per unit bandwidth) that exceeds that of the best synchrotrons by over 15 orders of magnitude at gamma-ray energies in excess of 1 MeV. Such beams can efficiently excite the protons in the nucleus of a specific isotope, so-called nuclear resonance fluorescence (NRF). NRF resonant energies are a function of the number of protons and neutrons in the nucleus and are thus a unique signature of each isotope. It has been suggested that NRF can be used to identify specific isotopes. It has been further suggested (T-REX/FINDER) that MEGa-ray sources are ideal for this application and not only enable identification of isotopes but can also be used to determine the quantity and spatial distribution of isotopes in a given object. In order to accomplish these tasks one analyzes the MEGa-ray beam transmitted through a particular object. NRF resonances are narrow, typically 10E-6 wide compared to the resonant energy, e.g., 1 eV wide for a 1 MeV resonant energy. MEGa-ray sources on the other hand are typically 10E-3 wide relative to their carrier energy, e.g., 1 keV wide for a carrier energy of 1 MeV. A given amount (e.g., grams) of a resonant isotope removes a corresponding amount of resonant photons from a MEGa-ray beam, according to Beer's law. Detection or measurement of the absence of resonant photons in a MEGa-ray beam transmitted through an object can thus be used to determine not only the presence of the material but also its location and its quantity. To do so requires a detector capable of resolving the number of resonant photons removed from the MEGa-ray beam by the desired object. Known gamma-ray spectroscopy technologies are not capable of resolutions better than 10E-3 in the MeV spectral region and are thus not able to accomplish the task. One method suggested by Bertozzi et al. (Bertozzi patent) envisions using a piece of the material under observation after the object in question to evaluate the removal of NRF resonant photons from the beam.
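As a rough numerical illustration of the Beer's-law removal of resonant photons mentioned above, the following Python sketch computes the transmitted resonant-photon count for an assumed areal density and effective NRF cross section; the numbers are placeholders chosen for demonstration, not values from this disclosure.

```python
import math

def transmitted_resonant_photons(incident_photons, areal_density_atoms_cm2,
                                 effective_cross_section_cm2):
    """Beer's law: I = I0 * exp(-N * sigma), with N in atoms/cm^2 and sigma in cm^2."""
    return incident_photons * math.exp(
        -areal_density_atoms_cm2 * effective_cross_section_cm2)

# Assumed values for illustration: 1e6 resonant photons incident on a layer
# presenting 5e22 atoms/cm^2 with an effective resonant cross section of 2e-23 cm^2.
print(f"{transmitted_resonant_photons(1e6, 5e22, 2e-23):,.0f} resonant photons transmitted")
```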
Let us consider in some detail the Bertozzi suggestion using a specific example, namely the location of U235 hidden within a large container such as those used for trans-oceanic commerce. The Bertozzi suggestion applies specifically to interrogation with a polychromatic gamma-ray beam such as that produced by a Bremsstrahlung source. Referring to Figure 1A, in his suggestion the beam transmitted through the cargo container impinges upon two "detectors". The transmission detector is an energy collector that measures the total gamma-rays passing through the object; the first detector consists of a piece (typically a foil) of the material/isotope that is being sought in the container, i.e., a foil of U235 in this example. The foil of U235 in this detector is surrounded by a large-area gamma-ray spectrometer that measures the spectrum of the photons scattered by the U235 foil. If U235 is present in the cargo container in quantities greater than a few grams, then the resonant photons will be removed from the interrogating gamma-ray beam and the gamma-ray spectrometer surrounding the foil of U235 will not see any resonant photons. As depicted in Figure 1B, light scattered by the interrogating foil will consist of non-NRF photons and particles such as Compton scattered photons, Delbruck photons and miscellaneous energetic particles. When the beam does not propagate onto any U235 within the container, then in addition to the non-NRF photons and particles, spectroscopy of the scattered light will reveal NRF photons. Figure 2A shows a cargo container that includes U235 material that is interrogated by a polychromatic beam that includes light resonant at the U235 line. As shown in Figure 2B, spectroscopy of the scattered light shows only non-NRF photons and particles and thereby reveals the absence of NRF photons and thus the presence of U235 material in the container. While this method in principle works, it has some significant limitations; in particular, it requires gamma-ray spectroscopy of the scattered photons to be effective. Gamma-ray spectroscopy is difficult and is accomplished in nearly all cases by collecting one gamma-ray at a time and analyzing the total energy of that photon. This can work for beams that have photons distributed evenly in time, e.g., those coming from a Bremsstrahlung source. However, Bremsstrahlung sources have been shown to be ill-suited for transmission-based NRF detection schemes due to their wide bandwidth and beam divergence, which are both ill-matched to NRF detection requirements (Pruet et al. paper). MEGa-ray beams are well suited to transmission detection due to their narrow bandwidth and low divergence (100x smaller than Bremsstrahlung); however, these sources by their nature produce large bursts of photons, up to 10E10 per pulse at rates of 10's to 100's of times per second. MEGa-ray sources are ill-matched to single photon counting based gamma-ray spectroscopy.
Alternative methods that eliminate the limitations of the Bertozzi method are desirable.
WO 2007/038527 discloses utilizing novel laser-based, high-brightness, high-spatial-resolution, pencil-beam sources of spectrally pure hard x-ray and gamma-ray radiation to induce resonant scattering in specific nuclei, i.e., nuclear resonance fluorescence. By monitoring such fluorescence as a function of beam position, it is possible to image, in either two dimensions or three dimensions, the position and concentration of individual isotopes in a specific material configuration. Such methods of the present invention provide material identification, spatial resolution of material location and the ability to locate and identify materials shielded by other materials, such as, for example, behind a lead wall. The foundation of the present invention is the generation of quasi-monochromatic high-energy x-ray (100's of keV) and gamma-ray (greater than about 1 MeV) radiation via the collision of intense laser pulses with relativistic electrons. Such a process as utilized herein, i.e., Thomson scattering or inverse-Compton scattering, produces beams having diameters from about 1 micron to about 100 microns of high-energy photons with a bandwidth of ΔE/E of approximately 10E-3.
It is an object of the present invention to enable the efficient detection, assay and imaging of isotopes via nuclear resonance fluorescence (NRF) excited by laser-based, inverse-Compton scattering sources of mono-energetic gamma-rays (MEGa-rays).
This and other objects will be apparent based on the disclosure herein.
Embodiments of the present invention alleviate the need for single-photon-counting spectroscopy with MEGa-ray based detection arrangements. Figure 3A shows an embodiment of the present invention including a detector arrangement that consists not of two detectors downstream from the object under observation but instead three. The last detector, which operates as a beam monitor, is an integrating detector that monitors the total beam power arriving at its surface. This transmission detector can, e.g., be identical to the corresponding detector in the Bertozzi scheme, which is described in U.S. 2006/0188060 A1. The first detector and the middle detector each include an integrating detector surrounding a foil. The foils of these two detectors are made of the same atomic material, but each foil is a different isotope, e.g., the first foil may comprise U235 and the second foil may comprise U238. The integrating detectors surrounding these pieces of foil measure the total power scattered from the foil and can be similar in composition to the final beam monitor. Non-resonant photons (Compton, Delbruck, etc.) will, after calibration, scatter equally from both foils, since such scattering is not, to first order, dependent upon the number of nucleons in the isotope but is a function of the atomic element. As shown in Figure 3B, if the object under interrogation has no U235 present and the interrogating MEGa-ray beam is tuned to the U235 resonant transition, then the first foil will produce resonant photons as well as the non-resonant photons and scatter, and thus more energy will emanate from the first foil than from the second foil. A variety of methods for diagnosing the content of an interrogated object are provided herein and are within the scope of the present invention. For example, the ratio of the energy scattered by each foil or the difference in energy scattered by each foil can be used to determine not only the presence of the material in the object under interrogation; because the exact ratio or difference is a function of the amount of material present, this detector arrangement can also provide quantitative assay information. By arranging the foils in small pixels it is also possible to use MEGa-ray beams and NRF to determine with high spatial resolution (microns) the location of specific isotopes.
This generic arrangement is referred to as a Dual Isotope Notch Observer (DINO) since it effectively identifies the depth of the 10E-6 wide notch in the 10E-3 wide transmitted MEGa-ray beam. Since MEGa-ray sources are new (only a couple of years old), no one has previously needed to consider how to make a DINO-like detector. A number of DINO configurations are within the scope of the present invention, including but not limited to:
a) sequential foils in which the attenuation of the first foil is calibrated and taken as part of the measurement;
b) a rotating foil arrangement in which the two (or more) foils are alternately placed in the beam on sequential MEGa-ray pulses;
c) multiple arrangements of isotopes used with dual or multicolor MEGa-ray beams to detect and assay more than one isotope simultaneously;
d) detection and assay determined by the ratio of signals from both foils and the beam monitor;
e) detection and assay determined by the difference of signals from both foils and the beam monitor; and
f) no beam monitor used, with only the scattering from the isotopes used to determine the presence and amount of material.
Figure 1A shows a prior art configuration in which a beam transmitted through a cargo container impinges upon two detectors.
Figure 1B illustrates light scattered by an interrogating foil.
Figure 2A shows a cargo container that includes U235 material that is interrogated by a polychromatic beam that includes light resonant at the U235 line.
Figure 2B illustrates a spectrum of scattered light showing only non-NRF photons and particles.
Figure 3A shows an embodiment of the present invention including a detector arrangement that consists of three detectors downstream from a container having no U235 in the beam path.
Figure 3B shows that, for an object under interrogation that has no U235 present and an interrogating MEGa-ray beam tuned to the U235 resonant transition, the first foil will produce resonant photons as well as the non-resonant photons and scatter, and thus more energy will emanate from the first foil than from the second foil.
Figure 4A shows an embodiment of the present invention including a detector arrangement that consists of three detectors downstream from a container having U235 in the beam path.
Figure 5 illustrates an example of the invention where, after exiting an object under test, a probe beam passes through a U238 foil and then through a U235 foil before propagating onto the beam monitoring detector.
Figure 6A illustrates an embodiment that uses a single rotating foil.
Figure 6B shows the two halves of the rotating foil of Figure 6A.
Figure 7 is an example where a portion of a finite area MEGa-ray beam simultaneously passes through a U235 piece and a U238 piece.
Figure 8 illustrates a finite area MEGa-ray beam that propagates through pixels of U235 material in line with pixels of U238 material.
The accompanying drawings, which are incorporated into and form a part of the disclosure, illustrate embodiments of the invention and, together with the description, serve to explain the principles of the invention.
Figure 3A shows an embodiment of the present invention where there is no U235 or U238 in the path of the beam. Specifically, MEGa-ray probe beam 50 is tuned to a U235 NRF line. The path of beam 50 as it traverses container 52 does not intersect any U235 or U238 material. After passing through container 52, beam 50 propagates to and through a first foil 54, which is surrounded by an integrating detector 56. After passing through foil 54, beam 50 propagates to and through a second foil 58, which is surrounded by an integrating detector 60. After passing through foil 58, beam 50 propagates onto an integrating detector 62. Since beam 50, which is tuned to the U235 NRF line, does not encounter any U235 as it passes through container 52, there is no reduction of U235 resonant photons within beam 50. Therefore, U235 foil 54 produces a larger amount of NRF than it would have if beam 50 had encountered U235 in its path through container 52. If a sufficient quantity of U235 had been present in the path of beam 50 within container 52 such that all of the resonant photons in beam 50 had been removed, then, after normalization of the signals at each detector to account for attenuation losses, the amount of non-resonant photons and scatter from foils 54 and 58 would have, to first order, been the same. In the example of Figure 3A, there is no U235 (or U238) within the beam path through the container, and therefore, in addition to the non-resonant photons and scattered particles, the integrating detector 56 collects resonance produced by the interaction of probe beam 50 with the U235 in foil 54. Figure 3B depicts the signals produced by the integrating detectors in the example of Figure 3A.
The elements of Figure 4A are identical in all respects to those of Figure 3A, except that a quantity of U235 material 64 is in the path of beam 50 as it passes through container 52. In this example, the amount of U235 is sufficient to remove all of the resonant photons from beam 50 such that there is no production of U235 NRF from foil 54, as depicted in Figure 4B. When absolutely no U235 NRF is produced by foil 54, the quantity of U235 within the beam path cannot be surmised. Due to the magnitude of gamma-ray energies (in excess of 1 MeV) produced by the MEGa-ray sources used in the present invention, as disclosed, e.g., in application no. 11/528,182 (U.S. Patent No. 7,564,241), the present invention is capable of producing U235 NRF in foil 54 even in the presence of U235 within the path of beam 50 through container 52. Thus, if the amount of U235 NRF produced by foil 54 is less than that produced when beam 50 encounters no U235 in its path through container 52, the amount of U235 NRF that is produced is indicative of the quantity of that material in the path of beam 50. Further, by moving the path of beam 50 relative to container 52, an image, both 2D and 3D, can be obtained of the U235 material within container 52. Other techniques for obtaining a 2D and 3D image are discussed below, and still others will be apparent to those skilled in the art based on the descriptions herein. Although the present invention uses examples for determining the presence, assay and image of U235, the present invention can be used for the same purposes in applications with other materials.
Figure 5 illustrates another example where, after exiting an object under test (not shown), a probe beam 70 passes through U238 foil 72 and then through U235 foil 82 before propagating onto the beam monitoring detector 90. The integrating detector 74, shown in cross-section, positioned near foil 72, is substantially similar, in this example, to the detector 84 positioned near foil 82. Integrating detector 74 is formed of a scintillator 76 and two photomultipliers 77 and 78. A Compton shield 79 is positioned between foil 72 and scintillator 76.
Figure 6A illustrates an embodiment that uses a single rotating foil rather than the dual foils described supra. In this example, after exiting a test object, MEGa-ray beam 100 passes through rotating foil 102 and impinges on the integrating detector 106. An integrating detector 104, similar to the detectors 74 and 84 of Figure 5, is located near rotating foil 102. Figure 6B shows a front view of the rotating foil 102. As shown in Figure 6B, one half 102' of the rotating foil comprises U235 and the other half 102" comprises U238. The beam 100 is pulsed at a fixed rate, and the rotating foil 102 rotates at a fixed rate that is one half of the pulse rate of beam 100. At such a rotational rate, the beam 100 will pass through the U235 portion in one pulse and, in the next pulse, beam 100 will pass through the U238 portion.
Figure 7 is an example where a portion of a finite area MEGa-ray beam 120 simultaneously passes through U235 piece 122 and U238 piece 124. The portion of beam 120 that passes through U235 piece 122 propagates onto integrating detector 126, and the portion of beam 120 that passes through U238 piece 124 propagates onto integrating detector 128.
Figure 8 illustrates a finite area MEGa-ray beam 140 that propagates through pixels 141-146 of U235 material in line with pixels 151-156 of U238 material. This beam 140' will completely cover any U235 material that has a diameter of less than the beam diameter. For example, if the beam diameter is 1 cm and a U235 piece 148 has a diameter of .5 cm, then U235 piece 148 will be completely covered by beam 140. A separate integrating detector (not shown) is positioned to measure U235 NRF and non-resonant photons and particles for each of pixels 141-146 of U235 material and pixels 151-156 of U238 material. In this example, a separate one of integrating detectors 161-166 is positioned to measure the beam portion that passes through each pixel pair. Thus, the portion of beam 140 that passes through pixel 141 will then pass through pixel 151 and will propagate onto integrating detector 161. Note that although pixels 141-146 and pixels 151-156 are depicted as a two dimensional array, each pixel can be a part of an array of pixels extending perpendicular to the plane of the page to create a three dimensional pixel array. This exemplary configuration instantly provides a complete 2 dimensional image of any piece of U235 that is smaller than the diameter of beam 140. Such a beam can be moved relative to a larger piece of U235 to obtain an image of such a piece. The beam and the piece of U235 can be moved relative to one another to obtain a 3 dimensional image of the U235. The beam and its alignment to the pixel arrays can also be held constant as a unit, and the whole unit can be moved relative to the piece of U235 to obtain an image of it.
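One way to assemble the per-pixel detector readings of such an arrangement into a two-dimensional isotope map is sketched below. This is only a schematic reading of the description above; the array shape, the use of an off-resonance reference exposure and all names are illustrative assumptions, not something specified in the text.

```python
def nrf_map(on_resonance, off_resonance):
    """Build a 2D map of missing resonant signal from two exposures.

    on_resonance / off_resonance: 2D lists of integrated detector readings,
    one value per U235/U238 pixel pair, taken with the MEGa-ray beam tuned
    on and off the U235 resonance (or calibrated in an equivalent way).
    """
    rows = len(on_resonance)
    cols = len(on_resonance[0]) if rows else 0
    image = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            # A depressed on-resonance reading indicates that resonant
            # photons were removed by U235 in the beam path to this pixel.
            image[i][j] = off_resonance[i][j] - on_resonance[i][j]
    return image

# Tiny placeholder example (2 x 3 pixel pairs, arbitrary units).
print(nrf_map([[9.0, 7.5, 9.1], [9.0, 9.2, 8.9]],
              [[9.1, 9.0, 9.2], [9.1, 9.3, 9.0]]))
```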
To understand some exemplary methods for analyzing the data collected in embodiments of the invention, consider Figures 3A through 4B. In Figure 3A, the MEGa-ray beam, which is tuned to a U235 NRF line, passes through the container without encountering any U235. If the beam were not tuned to either the U235 or the U238 NRF line, the amount of signal collected by each integrating detector would be about the same. There would be some reduction in power by absorption and scattering as the beam propagates through the first foil; therefore, the two signals are normalized. If the beam is then tuned to a NRF line of U235, the difference between the signal levels in each detector is produced by the NRF from the U235 content of the first foil. This difference will not change if the entire container is removed. This is significant in one respect because the detection system can be set up and aligned and then targets, such as shipping containers, can be moved into the beam path. If the beam path does intersect material having U235 content, then the signal difference will be determined by the amount of U235 through which the beam passes. Placing this difference on a logarithmic scale will reveal small changes in the amount of U235-NRF-produced signal collected by the integrating detector proximate to the U235 foil. This enables a variety of data analysis methods of varying degrees of precision. For example, simply measuring the amount of signal collected by the U235 detector on resonance before and during interaction with a U235 target will show a signal difference that is dependent on the amount of U235 encountered. In another example, as discussed supra, a measurement is made of the amount of signal collected by the U235 detector and the U238 detector, on resonance, before and during interaction with a U235 target. The signals can first be normalized in a variety of ways, including the substitution of a U238 foil for the U235 foil, or by tuning the MEGa-ray beam to be off resonance. The difference between the two signals on the U235 resonance is dependent on the amount of U235 encountered. In another method, a difference is determined between (i) the ratio of the U235 detector signal to the third detector signal and (ii) the ratio of the U238 detector signal to the third detector signal. In each method, depiction of the results on a log scale reveals much smaller changes than on a linear scale. Other signal analysis methods will now be apparent to those skilled in the art based on these examples.
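The following sketch shows one way the normalization, difference and ratio methods described above could be combined for integrated (non-spectroscopic) detector readings. The function and variable names (dino_signals, u235_signal and so on), the calibration scheme and the numbers are assumptions for illustration only; the text does not prescribe an implementation.

```python
import math

def normalize(signal, reference):
    # Scale a detector reading by a reference (e.g., the beam monitor)
    # to remove pulse-to-pulse fluctuations of the MEGa-ray source.
    return signal / reference

def dino_signals(u235_signal, u238_signal, monitor_signal,
                 u235_off_res, u238_off_res, monitor_off_res):
    """Return simple DINO observables from integrated detector readings.

    All inputs are assumed to be integrated energies (arbitrary units);
    the *_off_res values are an off-resonance (or substituted-foil)
    calibration used to remove the non-resonant scattering baseline.
    """
    # Normalize each foil detector to the beam monitor.
    r235 = normalize(u235_signal, monitor_signal)
    r238 = normalize(u238_signal, monitor_signal)
    c235 = normalize(u235_off_res, monitor_off_res)
    c238 = normalize(u238_off_res, monitor_off_res)

    # Difference method: excess signal on the U235 foil after the
    # non-resonant baseline has been removed by calibration.
    difference = (r235 - c235) - (r238 - c238)

    # Ratio method: difference of calibrated foil-to-monitor ratios.
    ratio_difference = r235 / c235 - r238 / c238

    # A log scale makes small changes in the NRF signal visible.
    log_difference = math.log10(difference) if difference > 0 else float("-inf")
    return difference, ratio_difference, log_difference

# Example with placeholder numbers (arbitrary units).
print(dino_signals(105.0, 100.0, 1000.0, 95.0, 95.0, 1000.0))
```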
Variations and uses of the invention will now be apparent to those skilled in the art. For example, the techniques disclosed here can be used for materials other than the ones disclosed. Other configurations of integrating detectors can be employed within the scope of this invention. Embodiments of the invention can be used to rapidly determine the isotope content of moving targets such as those to be used in the Laser Inertial Fusion-Fission Energy (LIFE) Project at the Lawrence Livermore National Laboratory or in a pebble bed reactor. In cases where the object is moving, configurations of the invention can be used to measure the Doppler shift for determination of velocities.
The foregoing description of the invention has been presented for purposes of illustration and description and is not intended to be exhaustive or to limit the invention to the precise form disclosed. Many modifications and variations are possible in light of the above teaching. The embodiments disclosed were meant only to explain the principles of the invention and its practical application to thereby enable others skilled in the art to best use the invention in various embodiments and with various modifications suited to the particular use contemplated. The scope of the invention is to be defined by the following claims.
Dr. Kirin is an Australian-trained Psychologist, with 9 years’ experience in the field. She has extensive expertise in the management and treatment of mental health and psychological issues. She is also Assistant Professor of Psychology at Heriot-Watt University Dubai Campus.
Prior to joining Openminds Psychiatry, Counseling and Neuroscience Center, Dr. Kirin worked in government entities, public healthcare institutions and private clinics in Australia and the UAE. Her experience has traversed across forensic, clinical, and organizational psychology. Passionate about empowering and enabling clients to better understand and take control of their own lives, she delivers individual, couples and group treatment programs, as well as training courses, community outreach, and psycho-educational seminars and workshops.
Dr. Kirin earned her Psychology specialization from the University of New South Wales (UNSW) in Sydney, Australia. She is licensed locally by the Community Development Authority (CDA) of Dubai, as well as in Australia by the Australian Health Practitioner Regulation Agency (AHPRA). She is a member of the Australian Psychological Society and British Psychological Society.
Dr. Kirin works with adults and older adolescents (aged 16 years and above) to help treat and manage a wide range of mental health issues such as Mood Disorders, Personality Disorders, Stress Reactions, Adjustment disorders, Obsessive Compulsive Disorder, Trauma-related Disorders, and more. She also works with couples to address couples’ difficulties and challenges.
Driven by the aim to empower clients with the knowledge, skills and self-awareness to make their lives what they want them to be, Dr. Kirin works with individuals and couples in areas such as mood, stress, trauma, personality difficulties, adjustment and more.
I can't say enough good things about her. She is a top-notch psychologist and helped me a TON. She totally changed how I see things in ways that have helped me for years going forward. I would recommend her to anyone. Do yourself a favor and go see her! Thank you Dr. Kirin Fiona Hilliar! | https://www.openmindscenter.com/team/dr-kirin/
Hospitals are bound to endure the consequences of the increasing demand for better health and emergency care services. Administrators and management teams are dealing with the increased pressure to meet the needs and expectations of patients. Having a clinical performance improvement program will help assess how the medical staff and the organization as a whole is doing. It will also allow for the creation of plans that aim to improve the quality of services provided to clients.
A good program needs careful structuring to ensure great results. The team should consider investing in a well-designed program that focuses on improving different areas of the practice. All of your clinical performance improvement initiatives should revolve around the following important aspects:
1. Data Analysis
As a medical institution, your hospital gathers a lot of patient information every day. You need a solid IT infrastructure to accommodate this large amount of data. Collecting, processing, and analyzing data is easy when you have the most reliable and secure tools. Industry veteran North American Partners in Anesthesia suggests partnering with a certified specialty anesthesia management organization to gain access to the technical support and resources you need in your practice.
2. Benchmarking
Assessing your organization’s current performance is important to identify areas that need improvement. Benchmarking allows you to compare your existing programs and standards with those of other organizations. It will also help you learn how industry leaders achieve high clinical performance levels and use the information to boost your team’s performance.
3. Creating the Best Practice Standards
By aligning your goals and standards with other practitioners, you’ll learn how to devise your own strategies. You can even come up with better ideas on how to improve the quality of your services. Transitioning your anesthesia department to a trusted specialty group, for example, can be helpful. This will help you focus on other aspects of your practice while letting the experts handle your technical requirements.
Improving your clinical performance will help you achieve your goal of providing the best quality patient care. Focus on these things and work with the right organization to ensure the highest level of customer satisfaction. | http://www.unitythroughdiversity.com/3-pillars-good-clinical-performance-improvement-program/ |
The operation of a nuclear power plant is facilitated by a control panel, which allows technical system parameters to be monitored. These indicators often use colors to show the state of the several power plant systems.
Based on this kind of panel, an Indicators Panel for monitoring the production, productivity, costs, safety, investments, economical sustainability and market conditions and trends was developed. The Indicators Panel shows the general situation of the company in many ways and is intended to assist the executive management decision making as well as promote a deeper analysis of specific sectors of the company.
Control Panel, Indicators, Eletronuclear, Nuclear Energy, Global Performance.
The benchmarking technique has been used successfully in various economic activities. A comparison of the performance over time requires, on the one hand, the availability of up-to-date indicators of the company studied and, on the other hand, the availability of these indicators for other companies or group of companies that constitute the “mirror” for comparison.
The indicators are a set of technical and economic data allowing the company, in this specific case – Eletrobras Eletronuclear (ETN) – to monitor its performance in the various areas of its activities and to compare it with that of other companies from the same industry sector.
Eletronuclear holds the monopoly on the operation of nuclear power plants in Brazilian territory. The indicators system discussed here deals fundamentally with the energy generation area. It covers the performance of the Angra 1 and Angra 2 nuclear power plants. The construction of new plants such as Angra 3 is not included in the costs and incomes considered.
Eletronuclear has participated in an international effort to develop indicators such as the Nuclear Economic Performance International System (NEPIS), developed by the IAEA, and has been tracking the Company's performance for years using the proposed methodology. ETN also participates in other international comparisons in the areas of technical performance (WANO and IAEA) and operational safety (WANO and IAEA), and has recently joined the Electric Utility Cost Group – EUCG, which compares the economic and operational performance of utilities from the US and some other countries.
ETN was the Brazilian representative in the International Atomic Energy Agency (IAEA) group that proposed a comprehensive system providing a general idea of the performance of a generating company or of a specific plant. The IAEA paper TRS437-web, titled Economic Performance Indicators for Nuclear Power Plants, was discussed by representatives of several countries and presented a set of indicators focused on economic performance but rather comprehensive in scope. The idea of the IAEA group was to provide benchmarking between organizations, but there is no previous experience of implementing the system in a company or nuclear power plant. Eletronuclear, being the pioneer in implementing the monitoring of the set of indicators suggested by the IAEA, does not yet have the "mirror" that would allow a comparison in all aspects involved.
Some of these indicators, however, are already part of specific international programs from which the Eletronuclear is part; other indicators suggested by the IAEA are published by specialized national or international organizations. There is also information from other companies, released on a regular basis, that allow comparisons for some of the indicators.
The developed system also allows monitoring the behavior of the indicators over time, which permits the evaluation of the current performance of the company based on the historical behavior of these indicators. In this case, the past becomes the “mirror” that makes possible to evaluate the current situation, establish follow-up goals and whenever necessary, adopt corrective measures.
The performance of each indicator is evaluated based on the achievement of the goals established for it. The evaluation of the indicators according to the achievement of the goals is also presented by colors, informing the last evaluation available.
The Panel assembles the indicators recommended by the IAEA using the results presentation according to the methodology developed by the operating area of the plants. The color of each indicator on the dashboard indicates the last available evaluation for that indicator.
In Figure № 1, it is possible to see the set of indicators with colors that represent the situation in the last evaluated period. The colors vary between green and red, in some cases going through white and yellow. Some indicators in blue are only for follow-up, so they have no limits or goals.
The indicators were grouped into eight sets with a composition similar to that suggested by the IAEA. They correspond to Specific Panels (SP) that compose the main Panel. The IAEA indicator list shows a great concern with the capacity for capital remuneration and new investments.
In addition, three other Panels have been created. The International Comparison Panel (PE9), with the purpose of facilitating international comparisons; the Panel of CMDE Indicators (Enterprise Performance Targets – Eletrobras System Indicators) and the Full Panel of Technical Indicators. The Figure №1 shows the Indicators Panel with the 68 indicators compiled, two of which are duplicate, because they are part of more than one Specific Panel.
SP 11. Technical Indicators Panel.
Figure № 2 presents SP 9, which allows direct comparison with international companies ("mirrors" are available for thirteen indicators), and Figure № 3 presents SP 10, with the Eletrobras System Indicators – CMDE.
The Panel compiles all the indicators chosen for follow-up, their specificities, limitations and historical data.
The data gathered in the Panel cover 2008 to 2015, at the shortest periodicity available: monthly, quarterly or annually. These data allow historical monitoring, which assists in analyzing the behavior of the indicators.
The comparison with data from the past or from other countries implies, for monetary data, its updating and conversion. The general rule adopted by the Panel is the correction of inflation with local general indexes and the conversion into another currency (in general dollar) at the exchange rate of one year (average of the previous year, average of the current year or index accumulated until June of the current year).
The “Equilibrium Exchange” which considers the historical behavior of the inflation-adjusted exchange rate in the two countries is also used.
In the Panel’s case, monetary data can currently be expressed in current Real, Real of 2015, Dollar of 2015, Dollar of 2016 or Balancing Dollar. The correction for Real of 2015 is made using the IGP-DI. The correction for the Dollar of 2015 is made by multiplying the value in Real of 2015 by the average rate of the Commercial Exchange for Sale for that year. It is also possible to use Real for the current year (in this case, 2016). For that, the conversion is made from the average value of the currency of the previous year (in this case, 2015), and the correction is made for the current year using the IGP-DI for Brazil.
The correction using the Equilibrium Exchange is similarly made by multiplying the average dollar value of 2015 by the appropriate factor, based on the IPC indexes for the US and IGP-DI for Brazil.
It should be noted that the targets for each year that involve monetary values are established in nominal Real which means that the goal in Real value (corrected for inflation) is decreasing in constant currency (Real or Dollar).
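To make the monetary corrections described above concrete, a small sketch follows. The index factors and the exchange rate are placeholders (the real values come from the IGP-DI, the US IPC and the average Commercial Exchange rate for sale), and the treatment of the Equilibrium Exchange is one plausible reading of the rule, not the Panel's actual code.

```python
def to_real_2015(value_nominal, igp_di_factor_to_2015):
    # Correct a nominal Real amount for Brazilian inflation (IGP-DI)
    # so that it is expressed in constant 2015 Reais.
    return value_nominal * igp_di_factor_to_2015

def to_dollar_2015(value_real_2015, avg_exchange_2015):
    # Convert constant 2015 Reais to 2015 Dollars using the average
    # Commercial Exchange (sale) rate for 2015 (R$ per US$).
    return value_real_2015 / avg_exchange_2015

def to_equilibrium_dollar(value_dollar_2015, us_ipc_factor, br_igp_di_factor):
    # One reading of the "Equilibrium Exchange": adjust the 2015 Dollar value
    # by the ratio of accumulated inflation in the two countries
    # (US IPC versus Brazilian IGP-DI).
    return value_dollar_2015 * (us_ipc_factor / br_igp_di_factor)

# Placeholder example: R$ 120 million nominal, illustrative factors only.
nominal = 120.0e6
real_2015 = to_real_2015(nominal, igp_di_factor_to_2015=1.10)
dollar_2015 = to_dollar_2015(real_2015, avg_exchange_2015=3.33)
print(real_2015, dollar_2015)
```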
The process of data feeding is still being systematized, which results in different dates and reference documents. The input data are organized by periodicity (month, trimester and year) and collected directly from data basis and reference documents, mostly provided by Eletronuclear Company. Some input data come from external sources or result from data composition by specific explicit formula.
Monthly and quarterly data are usually revised when presented on a consolidated annual basis.
The Company already has a monitoring system with indicators in the areas of Productivity and Safety; these were incorporated into the Panel using the same goals.
Data from the Enterprise Performance Targets – Eletrobras System Indicators (CMDE), a system used by the holding company Eletrobras for follow-up, were also incorporated into the Panel. Data from Eletronuclear's Budget and Costs Monitoring System are also used, as well as its Annual and Quarterly Balance.
The evaluation of Operation and Maintenance costs used the Eletronuclear system from the NEPIS database, adapted to make the data compatible with the Annual Balance. Indeed, focusing on the international comparison, the NEPIS base does not incorporate some local costs such as taxes, financial costs and so on.
Some data are compiled separately for Angra 1 and Angra 2, but the majority can be consolidated for the Almirante Álvaro Alberto Nuclear Power Plant (CNAAA) using criteria specified for each consolidation case. In some cases the values can be summed (for example, the energy produced); in others a weighted average using the power of each plant is used. There are, however, cases where such consolidation does not make sense.
The data provided separately for each plant are mainly technical. The economic and financial data are, in most cases, already consolidated for Angra 1 + 2. Data for Angra 3 are generally excluded. There are, however, cases in which only the consolidated value is available that is, including Angra 3.
Some available consolidated economic data are computed in the NEPIS system, and the values among the plants are apportioned in proportion to their power. Others have their own costing system, which allows them to specify the cost associated with each plant. In this paper, the cost share is consolidated.
The consolidated annual data from Control Panel resulted in a report entitled Indicator Copybook, which compiles all indicators chosen for follow-up, their specificities, limitations and analysis.
This Report complements the Panel's work by presenting a printed picture of the main data that are essential and available on the Panel, that is, those recommended by the IAEA, in addition to the other complementary indicators considered important by ETN.
The Indicators Copybook also aims at detailing and clarifying the methodology used for the economic and technical indicators calculation, compiling all indicators used as a subsidy for comparisons with Mirror Companies. Additionally, the analysis of these same indicators seeks to show to the managers and decision makers of the nuclear sector the situation of nuclear generation in Brazil comparing to the world’s nuclear power plants and the competitive conditions of nuclear energy in the coming decades.
Specific Panel 1: Cost Indicators (Revenue Requirements). This Specific Panel tracks Costs and confronts them with Revenues. It has been adapted to provide the updated data that were the subject of ECEN Report № 3 and which serve as a basis for comparison with other countries. This group of indicators corresponds to the IAEA indicator group entitled T8: Measures of the Cost of Service (Revenue Requirements).
Specific Panel 2: Indicators of Profitability (profit remaining after payment of commercial expenses). Traditionally, in a typical regulated power plant, profit and loss statements and financial measures are prepared and evaluated at the operational level of the company or holding company and do not include detailed information at the level of the nuclear power plant. These indicators are of interest to shareholders. This group of indicators corresponds to the IAEA indicator group titled T2: Measures of Profitability.
Specific Panel 3: Safety Indicators. It deals with the evaluation of the safety and reliability of the plant, such as ionizing radiation doses, performance of the safety systems, performance of chemical indicators, etc. Some indicators in this group are not from the IAEA but from WANO; most of them were already in use by ETN. This group of indicators corresponds to the IAEA group titled T3: Measures of Safety.
Specific Panel 4: Capitalization Indicators. This Panel gives an overview of how the company uses external and own resources to capitalize. It deals with the total value of the material and inventory of the nuclear plant, the investments made in equipment and facilities, etc. Nuclear Fuel is not included, being a separate item from the Balance Sheet. This group of indicators corresponds to the IAEA group entitled T6: Measures of Capitalization.
Specific Panel 5: Market Condition and Orientation. Economic performance indicators can be separated into two broad areas – plant indicators and market indicators. The plant indicators are those that are typically under the control of nuclear power plant management. This Specific Panel gathers the market indicators, which can have a significant impact on the financial success of a nuclear unit but are typically beyond the control of plant managers and operators.
The distinction between economic measures at plant and market levels is very important to emphasize to nuclear plant managers the deep relationship between market conditions and the economic performance of nuclear power plants, a relationship that in the past was rarely considered. This group of indicators corresponds to the IAEA group entitled T7: Measures of Market Condition and Orientation.
Specific Panel 6: Operating Costs per MWh. This Specific Panel demonstrates operating costs per MWh, facilitating comparison with external companies and internal energy cost analysis. Some of these indicators are already in other Specific Panels, but in this one they are in the cost/MWh unit to facilitate comparison with values of other countries and organizations and cost of electricity from different energy sources. This group of indicators corresponds to the IAEA group entitled T5.
Specific Panel 7: Productivity Indicators. This Panel monitors the company's productivity and whether it could be producing or profiting more through more efficient resource management. It contains indicators linked to the generation of energy, availability, capacity factor, availability losses, fuel reloading outages and their duration, backlog, etc. These indicators were already used by Eletronuclear in its reports. Eletronuclear's calculation methodology was largely defined from the WANO description, according to the WANO Performance Indicators – 2013 document. This group of indicators corresponds to the IAEA group entitled T1: Measures of Productivity.
An example of a Specific Panel is presented in Figure № 4. For each indicator, the results of the last four measurements are presented. The colors change from green to red, passing through yellow and white if a tolerance value is established.
Specific Panel 8: Valuation. Indicators that involve Valuation provide information on the adequacy of the tariff for the remuneration of the assets and for the expected expenses and revenues. This group of indicators corresponds to the IAEA group entitled T4: Measures of Valuation.
Specific Panel 9: International Comparison. This Panel has four specific panels containing indicators already organized to make international comparisons. The Specific Panels are: Productivity Indicators, Operating Indicators per MWh, Safety Indicators and Cost Indicators. The goal of this Panel is to facilitate international comparisons. The indicators that make it up are indicators in the Scoreboard that have data available for comparison in the IAEA, WANO or EUCG.
As stated earlier, each indicator has its own calculation methodology. Therefore, the method of calculating indicator ID7.8, Capacity Factor, of the Productivity Specific Panel will be detailed in order to exemplify one calculation using the methodology mentioned.
This indicator represents the percentage of the net production capacity that was actually produced. It was chosen as an example because it summarizes the operation of the plant as a whole or of an individual unit. The indicator is defined as the ratio of the energy that a power reactor unit produced in a given period to the energy it would have produced at its reference power capacity in the same period. Its coverage is global (whole plant) and per unit, and its calculation formula is given by the ratio [(Generated Energy) / (Reference Energy)] x 100, its unit thus being a percentage (%) with two decimal places. The established goal is 88.7% per month and the margin of tolerance is 5%. It is shown in Figure № 5.
A graph showing the historical annual results. This tool is very important for users to gain a broad knowledge of the historical values of the indicator under analysis, and it permits an evaluation of its performance over a long period of time.
A graph showing the values of the indicator according to its specific periodicity, which can be monthly, quarterly or annually. It is important to note that the stipulated target for the indicator is shown in the values chart, making it easier for the operator to identify the periods in which the results were above or below the goals set by the company.
The indicator warning light is a tool inspired by the control panels of the reactors, which seeks to give the operator a visual signal that is easy to interpret. This tool is a direct presentation of indicator performance, since it reflects the fulfillment of the goal established for each one. The indicator color shown in the Panel reflects the last evaluation available; the Specific Panels, however, allow visual monitoring of the last four values available for each indicator. The colors range between green and red and in some cases can be white or yellow. Some indicators are shown in blue; this means that they are only for monitoring and have no limits or goals.
The polarity of the indicator. This visual element, shown in the results presentation, is represented by an arrow indicating the direction of optimization of the result. For example, an upward-pointing arrow means that the higher the indicator values, the better the performance, while a downward arrow means that the smaller the values, the better the situation.
Indicator calculation formula. The formula used to calculate the result values is displayed on the specific results screen. As an example, for indicator ID9.1 (Energy Availability Factor) the formula [(Installed Production Capacity - Unplanned Losses - Gross Planned Losses - Losses by Extended Stops) / Installed Production Capacity] x 100 is shown on the screen above the warning light.
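A minimal sketch of how an indicator value and its warning light could be computed is shown below, using the Capacity Factor goal (88.7%) and tolerance (5%) quoted above. The exact color rule used by the Panel is not specified in the text, so the thresholds, names and example figures here are illustrative assumptions.

```python
def capacity_factor(generated_energy_mwh, reference_energy_mwh):
    # ID7.8: percentage of the net production capacity actually produced.
    return round(100.0 * generated_energy_mwh / reference_energy_mwh, 2)

def energy_availability_factor(installed_capacity, unplanned_losses,
                               gross_planned_losses, extended_stop_losses):
    # ID9.1: share of installed capacity remaining after all losses.
    available = (installed_capacity - unplanned_losses
                 - gross_planned_losses - extended_stop_losses)
    return round(100.0 * available / installed_capacity, 2)

def warning_light(value, goal=88.7, tolerance_pct=5.0, higher_is_better=True):
    # Illustrative color rule: green when the goal is met, yellow when the
    # result lies within the tolerance band, red otherwise.
    band = goal * tolerance_pct / 100.0
    if not higher_is_better:
        value, goal = -value, -goal
    if value >= goal:
        return "green"
    if value >= goal - band:
        return "yellow"
    return "red"

# Placeholder monthly figures (MWh), for illustration only.
cf = capacity_factor(generated_energy_mwh=880_000, reference_energy_mwh=1_000_000)
print(cf, warning_light(cf))
```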
As an example, Specific Panel 9 – the International Panel – is presented. This group of indicators has already been covered in other Panels and was created with the aim of facilitating international comparisons. A small part of the table for the International Comparison Indicators can be seen in Figure № 6.
As an example, consulting the Page of the panel corresponding to the item ID9.1 – Energy Availability Factors (Figure № 7a) it is possible to see the historical values from ETN (consolidated for Angra1 and Angra2) and values from PRIS from 2011 to 2015.
The performance from ETN is very good when compared with PRIS data for these years. It is possible also to see the data since the Angra 1 plant started to produce energy. Of course, as at the beginning of the operation a lot of adjustments had to be made, the data from PRIS are better than the ones from CNAAA (Central Nuclear Almirante Alvaro Alberto), but not so much (72.7 for ETN to 76.9 for PRIS). The next figure shows the world average data from PRIS from 1995 to 2015.
In the tables from Figure № 7b below it is possible to see the data from ETN and PRIS used to perform the graphs.
Using the Panel, all this information (data and graphs) is on the same page, and there are clickable points that allow one to return to the Control Panel, to go to the Specific Panel in use to see other indicators, and to see the data for each of the units rather than the consolidated values.
In the observed period from 2011 to 2015, ETN's performance was above the world average. From the data shown, the performance of CNAAA was higher, presenting on average a value of 90% when the world average was 75%. The performance of ETN in the period was 15 percentage points above the average of the reactors included in the PRIS/IAEA system (practically all of them).
The world results, however, have reflected since 2011 the shutdown of energy production from Japanese reactors after the Fukushima accident. That is why the international reference was taken from the years before the accident (80.8%). Even so, the ETN result is superior in all the years shown in the historical graph.
In the graph PRIS Trend – World Weighted Average in %, a drop in recent years can be seen, which is partially due to the containment of demand; one should also consider the Fukushima effect, which practically paralyzed the Japanese nuclear power plants.
This paper deals with the use of indicators to analyze the current situation of a company. In the particular case, the work was done for Eletronuclear, since the indicators, goals and limits used were supplied by ETN direct or indirectly.
The management of a company is essential not only for its survival, but mainly for advancing its ability to solve internal problems, increasing its capacity to respond to market demands and changes, and optimizing its efficiency and agility. It is also important for facilitating the fitness and implementation of its strategies, and for improving its capacity to offer new products and services. The use of tools that assist the manager in the coordination and management of a company, whether in specialized technical areas or in the scope of the market, thus becomes a very important aid.
The Control Panel comes precisely with the objective of compiling the technical and strategic data of Eletronuclear, presenting them in an easy-to-use platform and using a series of graphical and financial tools in order to assist the manager in decision-making. It is important to highlight that, in addition to compiling and presenting the data, the Panel has the ability to compare several parameters of the nuclear area with those of "mirror" companies, thus permitting a clear and easy comparison of the efficiency of the Company with others in the same industrial sector. In this way, the effectiveness of the Control Panel as a managerial tool is visible, and it can be used by any enterprise, public or private, and even by entrepreneurs who want to improve and optimize their management through the use of indicators and goals.
The annual indicators can be used to make an analysis of Eletronuclear’s performance and to guide possible corrections in the policy of the company and of the sector.
A preliminary analysis of the ETN performance based on the number of red indicators was performed. The economic situation was stressed by the indicators. For Productivity and Safety, the performance of CNAAA was still good compared with international results but not so good in 2015. It must be remarked that as compared with international nuclear utilities the cost of CNAAA electricity was competitive but the annual tariff fixed by ANEEL was not enough to pay the production costs.
INTERNATIONAL ATOMIC ENERGY AGENCY (2006). Technical Reports Series № 437 – Economic Performance Indicators for Nuclear Power Plants. | http://ecen.com.br/?page_id=334 |
Sweden has been named the world's second most innovative country in a new survey.
The Global Innovation Index 2016 (GII), released on Monday, is now in its ninth year and is published by Cornell University and UN agency the World Intellectual Property Organization (WIPO).
It ranks 128 countries according to their innovation capabilities and results and aims to be a "benchmarking tool for business executives, policy makers and others seeking insight into the state of innovation around the world".
Sweden climbed to second place from third in 2015 in a top-five table that included Switzerland at the top, followed by the UK in third place and the US and Finland in the fourth and fifth spots.
It said that Sweden regained the second highest position, a rank it held from 2011 to 2013, largely thanks to gaining in the sub-categories of investment and creative goods and services.
Sweden came in eighth place in a sub-category focusing on the quality rather than quantity of innovation, but the report said the country's performance was improving: "Like Japan, the Republic of Korea and Sweden are high-income economies that have improved their ranking on this combined innovation quality indicator. (…) Although Sweden shows marginally lower scores in the quality of universities than last year, a stronger score in patent families drives its upward movement."
"Europe benefits from comparatively strong institutions and well-developed infrastructure, while room for improvement is found in business sophistication and knowledge and technology outputs," read the report.
However, China's entry into the top 25 this year marks the first time a middle-income country has joined the leading group. But despite its rise, an "innovation divide" persists between developed and developing countries, said researchers.
"Investing in innovation is critical to raising long-term economic growth," said WIPO Director General Francis Gurry.
"In this current economic climate, uncovering new sources of growth and leveraging the opportunities raised by global innovation are priorities for all stakeholders."
Sweden, which is the birthplace of startups including Spotify, Skype and gaming leader Mojang, has a long-standing reputation for innovation. The Global Innovation Index comes a month after Sweden topped the 2016 edition of the European Innovation Scoreboard, which named the country as the EU's innovation leader, followed by neighbours Denmark and Finland, then Germany and the Netherlands. | https://www.thelocal.se/20160816/sweden-worlds-second-most-innovative-country |
Burberry hosted an event recently to celebrate the opening of its first Brit store in New Delhi, India. Burberry is an iconic global luxury brand synonymous with innovation and craftsmanship. Based in London, under the direction of Chief Creative Officer, Christopher Bailey, the brand has a global reputation for pioneering design and fabrics. By exploring unique brand innovations such as Burberry Acoustic, Art of the Trench and fully immersive runway shows, Burberry continues to connect heritage with cutting-edge technology and digital media.
The new Burberry Brit store, which opened in Select Citywalk in Delhi, reflects the global store design concept developed by Burberry Chief Creative Officer Christopher Bailey.
Features of the store include: | https://stylecity.in/2013/12/17/luxe-and-stylish-burberry-brit-saket-new-delhi/ |
Money is an interesting subject. On the one hand it’s just a made up concept, a worthless piece of paper that has no intrinsic value whatsoever. Neanderthals knew nothing about money and they probably were the better for it. But like it or not, in modern society, money is a tool that each of us uses every single day of our lives. If we have a surplus of it then we can enjoy life a bit more because we aren’t constantly trying to figure out how to pay the next bill. But if we have a shortage of money, or are deeply in debt then we’re either depressed or running around with our hair on fire in a constant state of stress trying to figure out how to keep the lights on or the car running.
So although money is literally worthless, having it makes our lives better and not having it makes them worse. That’s the bottom line.
Since money is such an important tool, why do only a small percentage of people understand how it works, how to earn more, how to save more, and how to multiply it? They certainly don’t teach that in the school systems, and I don’t know about you, but my parents weren’t talking about it at the dinner table either. It’s left up to us as individuals to sort it out, and quite frankly, most of us never quite figure it out—although a lot of people think they have.
Here’s a good self-test: if you are reliant on a paycheck to pay your bills every month then you haven’t figured out how money works.
A lot of people also have the mistaken idea that getting rich is somehow a function of luck, or inheritance, or a random event that happens to someone else. But that's not what the research shows, not at all. Thomas Stanley, a PhD in business administration, spent his entire career studying millionaires and wrote a great book called The Millionaire Next Door. One of the key things he discovered was that 80% of millionaires didn't inherit their wealth; they built it on their own, painstakingly and over a long period of time.
If becoming wealthy isn’t about inheritance or luck, then how the heck do we do it? Two words: financial education. Money is all math and philosophy, and once you understand how it works and you take sustained action month after month, year after year, wealth is almost inevitable. Once you figure out how to create a money machine for yourself it just cranks out more, and more, and more. But if you never make the investment in yourself to learn how to get your money right, then you’re probably just going to bounce around like a pinball from week to week, in a never-ending cycle of work, spend, broke. Work, spend, broke. Work, spend, broke. I’ve been there, so I know from experience that cycle is not a lot of fun.
So you need to make a decision, are you going to invest in yourself and start learning about how money works? Or are you going to spend the rest of your life living paycheck to paycheck in a constant state of money stress? You can continue to be someone who consumes all they earn, or you can be an investor that builds wealth, but you can’t be both.
One of these days you’re going to need to make this choice for yourself. But if you choose to remain ignorant about how money works, and spend all of your life-force trading time for consumer goods, then don’t complain about the 1% or the 10%. Most of them didn’t get lucky, they simply put the time in to figure out how to get their money right, and then they took action to make it happen.
So what are you going to do?
This topic is covered in detail in my Lecture #101: Get Your Money Right. | http://crypticmoneyguy.com/why-build-wealth/ |
At its meeting on 17 January 2008, the Committee for the Environment considered a public consultation paper on the draft all-Ireland Species Action Plan, which was sent to the Committee on 5th December 2007.
The Members have agreed the following comments:
The framework for the development of this action plan appears to be sound, with clearly defined objectives, and targets which are SMART (Specific, Measurable, Achievable, Realistic and Time-Bound). This draft plan is the fifth all-Ireland Species Action Plan and the fact that the plan has been developed at the biogeographic scale of Ireland will maximise the probability of Action Plan objectives being met. Given the continued range expansion and increase in the non-native Grey Squirrel, and the contraction of the range of the native Red Squirrel to the 26 counties, conservation of this species in Northern Ireland and maintenance of the genetic diversity of the population will depend on a robust all-island approach.
Attainment of Action Plan targets is likely to be best achieved through early establishment of the Red Squirrel Action Plan Steering Implementation Group (action 5.6.1) and frequent review of progress towards implementation of measures within the plan.
Whilst a combination of approaches as specified in the plan is most likely to meet Action Plan objectives, the development of careful management of forestry practice and selective planting of different tree species may have more beneficial effects on Red Squirrel conservation than any attempt to control Grey Squirrel populations. Such management should be underpinned by the necessary research in Ireland and elsewhere and may include the development of studies looking at, inter alia, feeding ecology, red/grey interactions, and habitat manipulation (in addition to that specified in Section 5.6).
An additional area of research which may be worthy of consideration in the context of the all-Ireland population is the development of a landscape scale spatially-explicit population model (integrating population dynamics and GIS) which would underpin implementation of aspects of the plan. | http://www.niassembly.gov.uk/assembly-business/committees/2011-2016/environment/environment/responses/all-ireland-species-action-plan-for-the-red-squirrel-sciurus-vulgaris/ |
A few days ago I saw a debate involving Patrick Artus, an economist.
http://bfmbusiness.bfmtv.com/m…perts-22-1802-448459.html
I found an article presenting his vision in French (translated into English).
Far from ideology, he explains that there is a growing consensus that potential growth is decreasing, with no apparent hope of growing again.
The specific problems of countries like France (obstacles to growth, unemployment) are not his subject; he states that these have to be solved in locked economies like France so as not to waste the tiny potential that remains... This is not our question.
He explains that the problem of decreasing growth stems from the decreasing efficiency of research and development, far from the growth and innovation of the 1960s.
In all domains but IT, the cost of creating an innovation that produces some growth grows exponentially. The cost of creating a new drug is growing exponentially; the same is true for a new microprocessor.
Some say we have taken all the "low-hanging fruit", the technological progress that was easy to get.
He explains that the Internet is not as important as we imagine: between the USA and France the share of Internet jobs only goes from 4% to 3%, even though the Internet economy in France is very late and we have no Internet giant...
Today creative destruction is no longer creating new, better jobs, but transforming good jobs into low-qualification service jobs.
He explains that current progress is much less important than the steam engine, the electric engine or fire were...
As one of the guests asked: would you abandon your iPhone, or tap water?
Tap water, like most technological improvements of the past industrial revolutions, gave us huge improvements in life and longevity, and allowed huge productivity gains.
To that sad observation, the techno-optimists reply that Internet technology has an impact which is growing exponentially, self-catalytically... The problem is that today we rather observe that the cost of research per unit of innovation is growing exponentially.
I can confirm that with the fact that, for microelectronics, Moore's law is endangered by a similar exponential law on the cost of semiconductor factories.
They then discuss the nature, the strange nature, of the modern Internet/mobile economy. It seems that the Internet is really improving our lives, but it is not creating much taxable revenue, nor many jobs... It allows us to exchange services without paying.
This matches what some economists state: that the Internet economy is deflationary, making us spend less money while remaining as satisfied as before.
Some propose that the Internet, contrary to appearances, is not yet mature enough to revolutionize our lifestyle.
The Internet is not so useful, at least much less so than our cars, our tap water, the electric plug, our boilers or fridges, and all that those technologies allow, like food you can eat now, appliances you can use, clothes you can wear.
They propose that the Internet, like the electric engine, will take 30 years to have a real impact on useful productivity.
IT, for example, allows administrative complexity to increase, or administrative costs to decrease, but it does not yet create growth of the same size as the car or electricity did.
Now, is it time to add LENR to that equation?
On paper LENR only reduces the cost of energy, which is 10% of GDP... the growth implied is the kind we had every year in the 1960s.
More interestingly, LENR may create new possibilities, like the fridge, which allowed us to eat new foods in summer.
One key to innovation is competing against non-consumption.
So maybe LENR will create more jobs where today no energy is used?
Another remark comes when I link LENR and computers with the ideas of Jeremy Rifkin, proposed in "The Third Industrial Revolution".
I disagree with his detailed conclusions (because of LENR and other data I have), but his arguments are good.
An industrial revolution is the mix of an information technology revolution with an energy revolution.
The steam engine and printing made the 19th-century revolution. The 20th-century revolution was oil and electronics... Nuclear energy did not ally with computers, so it was not a real revolution...
However, LENR with the Internet may be an explosive cocktail for economic growth.
This idea is proposed in an interesting way by Jed Rothwell in Cold Fusion and the Future.
Jed Rothwell proposes that LENR will create a boom in AI and robotics, because of the autonomy that LENR allows.
It seems very rational, but it is not the only synergistic revolution to expect.
LENR is naturally a local energy source, and IT technology and big data will allow smart grids to be efficient.
In a way, smart cars based on LENR, AI and robotics, but also on big data and smart-city concepts, may revolutionize the transportation system, with increased comfort, reduced cost, and a wider range of services.
Is this too optimistic?
The first reason to cool our optimism is the time factor. Every time there is a revolution, like the Internet, people imagine that it will be faster, that it will be like never before, and every time it takes 30 to 60 years to be integrated into society.
Anyway, there are genuinely new opportunities to accelerate the engine of growth: innovation!
The idea of LENRG by LENR-Cities, for example, is to accelerate innovation by exploiting the huge desire of all the actors: investors, scientists, engineers, industrialists. This is really new, as the old revolutions were based on hierarchy and salary, not on entrepreneurship and shared goals. Can it accelerate the revolution?
Can crowdfunding help too? What about crowdsourced science? Open peer review?
It looks promising, but the obstacles to those acceleration opportunities are huge too. Finance is regulated to protect naive citizens and economic rents, science is structured by old strata of power, and scientific publication is structured by academic practices and regulations.
How innovation "works" is really the key question.
This article proposes a vision of how work and entrepreneurship are evolving, and how the industrial economy is being replaced by an economy where the infrastructure to capitalize on is "the multitude" (users, providers, supporters, peers), and not machine capital...
http://www.paristechreview.com…2/19/new-value-proposals/
The Innovator's Dilemma by Clayton Christensen also gives an interesting vision of innovation:
http://www.claytonchristensen.…s/the-innovators-dilemma/
as the French author of "Effectuation", Philippe Silberzahn, explains on his blog:
http://philippesilberzahn.com/
Finally, there are more questions and subjects to study than definitive conclusions.
Once you share the knowledge about LENR, there is as much fear as there is hope about future economic growth or stagnation.
Anyway, it seems that with or without growth our comfort will increase on average, and that if measured economic growth stalls, we will have to find a way to distribute the deflationary growth to the population.
It seems that the old mechanisms we use in Europe will no longer work, but maybe LENR CHP, Uber cars, shares of Botcars, AirBnB pocket money, crowdbuilding, 3D printers and crowd-design may redistribute wealth in a better way than social security...
Maybe I am too optimistic? | https://www.lenr-forum.com/forum/thread/1134-patrick-artus-see-a-decreasing-growth-by-decreasing-efficiency-of-technology-pro/
Life in the deep ocean could be headed for deep trouble. The food supply could shrink dramatically, starving many of the organisms that live on the ocean floor.
Living in the deep ocean is tough already. It’s cold and dark there, with far less food than at shallower depths. But our changing climate could make things even more difficult.
A recent study looked at various models of how the atmosphere and oceans could change by the end of the century. The study team then projected what those changes could mean for life in the deep ocean -- anything below about 650 feet.
Depending on the exact location and depth, temperatures in the deep ocean are projected to rise by about two degrees to eight degrees Fahrenheit. The level of acidity is expected to go up as well, while oxygen levels are expected to go down. Those changes mean that organisms at those depths will need to expend more energy to survive.
Unfortunately, though, they’ll have less food to fuel them. Most of the food in the deep ocean comes from above -- dead animals and other organic matter that falls to the bottom.
But the level of nutrients reaching the surface is expected to fall, which means there will be fewer of the tiny organisms that are the first link in the ocean food chain. With less organic material at shallower depths, there’s less to fall to the bottom. For some regions, the amount of material reaching the bottom is expected to go down by as much as half -- creating deep trouble for anything that lives in the deep ocean. | http://www.scienceandthesea.org/program/201707/deep-trouble |
Forests of three distinct areas exist in the state. These are the forests of the north which include the mountain temperate forests and the tropical forests of the Duars, the deciduous forests of the plateau fringe and the mangrove forests of Sunderbans. Of these the northern forests are the most important.
These forests are related to altitude and aspect. Below 1000 metres there are tropical evergreen forests. Above 1000 metres the effect of altitude is definitely felt. Subtropical forests are found between 1000 and 1500 metres. Terminalia, Cedrela, Michelia, various laurels and bamboos are found in this belt.
Temperate forests are found from 1500 to 3000 metres. They contain some varieties of oaks and conifers. Magnolia campbellii and large rhododendron trees are also found in this belt. Much of this forest area has been cleared for tea gardens around Darjeeling and Kurseong. Beech and birch are found in many areas. Conifers are found in slightly higher situations. There are dense forests of deodars nearly all along the Dow Hill ridge which continue up to Senchal, and clothe the entire Tiger Hill. Birches are found all round Darjeeling. There are few deodars on the Ghoom ridge, where oaks are more common. Due to the occurrence of mists on the southern slopes, the trees are covered with mosses and orchids. Many kinds of sweet temperate berries are also found in the undergrowth. Magnolias and oaks occur around Kalimpong while conifers cover higher slopes and peaks. Above 3000 metres, silver fir is very common. It is common in the Singalila Range. Dwarf rhododendrons also occur here. Higher up are alpine meadows, small bushes and flowering plants.
Some of the most dense forests of West Bengal occur in the foothills of the Himalayas. Many of them are protected. They are generally well managed and properly exploited. Much of this forest is moist deciduous and here sal (Shorea robusta) is the most common and valuable tree. Other common trees associated with sal forests are Champa (Michelia champaca) and Chilauni (Schima wallichii), Khair, Gamar and toon. Bamboo is also found here. Vistas of tall grasses grow along the rivers. Evergreen laurels and other moisture-loving plants are found mixed up with the deciduous forests.
A broad belt of these forests stretches along the entire length of the northern districts. It is broader towards the east in the Duars. Here low-level tea gardens have taken a heavy toll of the forests. Corridors of these forests penetrate the hills along the river gorges of Mechi, Balason, Mahanadi, Tista, Jaldhaka and many other smaller streams.
This forest is very dense. There is much undergrowth of shrubs and bushes. Orchids cling to the trees and giant creepers form a tangled mass of impenetrable vegetation. Wild animals abound in the jungles which include the rare one-horned Indian rhinoceros, the elephant and the Bengal tiger. Sanctuaries have been provided for them at Mahananda, Gorumara (National Park), Chapramari, Neora Valley (National Park), Jaldapara and Buxa (Tiger Reserve).
Soils of these forests are naturally rich in humus. Along the river beds the soils are found in broad belts of sterile sands and pebbles. At some places high banks of these gravels are found. | https://www.webindia123.com/westbengal/land/forest.htm |
The latest instalment of the Health and Social Care Information Centre’s (HSCIC) ongoing survey of young people sheds light on several issues that continue to whirl around media and public opinion. Since the 1980s, the Smoking, Drinking and Drug Use Among Young People in England series has been a valuable indicator of current and emerging trends in young people’s attitudes towards drug use. This year’s report confirms the continuation of a number of positive trends, highlights areas for improvement, and, for the first time, provides useful insight into the scale of the NPS problem among young people.
The broad trends are overwhelmingly positive. The number of 11-to-15-year-olds who have tried alcohol is at its lowest level (38%) since the survey began, and only 8% drank in the last week. There are a number of potential reasons for this ongoing decline – DEMOS recently reported that social media is cited as a distraction and/or a deterrent to heavy drinking for as many as 4 in 10 young people – but it appears that the trend is due to a mix of changing attitudes towards health and drunkenness, as well as the impact of migrants from non-drinking cultures.
But the numbers should still be treated with caution: HSCIC estimates suggest that 240,000 11-to-15-year olds drank in the last week, representing a significant amount of underage drinking; and almost one in ten young people drank 15 units or more. Further, these cases of heavy underage drinking are linked to other risky behaviours, including smoking, drug use and truancy, suggesting that there is a need to target prevention initiatives at a significant minority of vulnerable young people.
The survey also highlighted the profound influence of parents on young people’s drinking behaviour. Only 2% of pupils who said their parents did not like them to drink had drunk alcohol in the last week, compared to 44% of those whose parents did not mind. Along with the fact that families are one of the main sources of procuring alcohol, this strengthens the evidence that parents can be one of the most important protective factors in young peoples’ lives.
HSCIC findings with regard to other drugs were similarly positive: the number of 11-to-15-year-olds who have ever smoked (19%) is as low as it has ever been; and, although the decline has slowed, fewer school-aged children have ever taken illegal drugs. Given the tenor of media reporting – headlines such as, ‘Will your child die from a legal high?’ and ‘Primary school kids “taking legal highs”’ – data on NPS is particularly intriguing. 2.5% of young people had tried an NPS, compared to 15% who had taken illicit drugs, most commonly cannabis; and despite being the ‘legal high capital of Europe’, only half of respondents had heard of ‘legal highs’.
Finally, the survey elicited insight into the status of drug education in schools. Echoing Mentor’s findings in 2013, HSCIC report that the vast majority of schools provide one lesson per year on smoking, drinking and drug use, with fewer than one in ten schools offering lessons more than once a term. Consequently, satisfaction with drug education has decreased in recent years: today, 60% of young people think schools gave enough information about smoking, 56% about drinking and 54% about drug use; and almost half of young people could not recall learning about any of these.
Therefore, despite a continual downward trend in drug use and some improvements in drug education, there are still some areas of concern. In particular, there is a need to target the most vulnerable young people, who are often susceptible to a range of interlinked risky behaviours. The report also highlights certain widely reported problems that are perhaps not as serious as popular opinion suggests. Although NPS remain a concern, their use is not prevalent among 11-to-15-year-olds, which suggests that a holistic approach to drug education and prevention at an early age remains the best way to protect young people from a range of interconnected risks. | http://mentor-adepis.org/smoking-drinking-and-drug-use-new-trends-and-what-they-mean/ |
How to live your best life when it’s cold outside: Throw on a pair of sweatpants, make this spin on chicken pot pie, and spoon it into your mouth as you cuddle up on the couch. I’m speaking from experience here.
When it comes to making dinner for myself (that is to say, without the intention of whipping out the cameras and writing up a post about it), a lot of people assume I’m whipping up creative and labor intensive meals from scratch, donning my apron, flittering about like some sort of dinnertime Disney princess (which I can assure you is never, ever the case).
In reality, I’m gravitating towards the raid-the-pantry-for-quick-fix-ingredients, use-up-the-fridge-stragglers, and throw-it-all-together-and-see-what-you-get side of things. And that’s exactly how this Skillet Cheddar-Cornbread BBQ Chicken Pot Pie began.
What it turned into, though, is a recipe that I intentionally buy ingredients for week in and week out, and one that I realized I absolutely had to share with you guys.
This recipe is everything you love about comfort food staples, but made new and easier and better together.
Using shortcut ingredients like corn muffin mix, frozen vegetables, and premade BBQ sauce keeps this dish weeknight-dinner friendly and inexpensive, while flavorful add-ins like buttery chicken breasts, onion, garlic, cheddar cheese, and jalapeño slices make for a filling, delicious meal. Skillet Cheddar-Cornbread BBQ Chicken Pot Pie is exactly the kind of meal you crave after a long day of work– simple and satisfying– but also the kind you’re going to want to make for friends when they come over on football Sundays.
You know what I mean? That kind of comfort food. AKA the best kind.
I had a hard time deciding what to name this recipe because no one part is better than the others. So I just put all of the info in there.
I love that this recipe is made in a skillet because it makes it so easy to go from the stovetop to the oven and whip it all together in one pan, which is essential to any easy meal. I don’t want to be fussing with a dozen different pots and pans and baking dishes, I just want it all to cook and bake together with as little messy cleanup necessary afterward. Who’s with me there? I know you are.
And then there’s the cheddar-spiked cornbread crust, which, in my humble opinion, is so much better than any traditional, roll-out pre-made crusts, and far faster and less frustrating than homemade ones. Plus, it’s cheesy cornbread instead of pie crust. I mean, I like pie crust and all, but cornbread isn’t just a topping, it’s a side-dish re-imagined. And did I mention all of the cheese? There’s a clear better choice here.
Oh, and you can press some jalapeno slices into it too, if that’s your sort of thing. It’s definitely my sort of thing.
Lastly but not least-ly, we need to talk about that BBQ chicken filling, which is so saucy and chock full of vegetables that you wonder why you ever considered making pot pie any other way. You definitely want to make sure that your frozen vegetable mix has corn in it, by the way, because the little kernels of corn sprinkled throughout the barbecue sauce just tie that cornbread topping in even more and really make this dish feel like a happy reminder of warmer days while you try to survive the chillier months.
When it comes to BBQ sauces, use your favorite. As long as it’s relatively thick (because no one wants a runny pot pie), you can pick any one your heart desires. That is, unless you happen to live near Oneonta, NY. Then you absolutely have to use Brook’s BBQ sauce so I can live vicariously through you.
Okay, you don’t have to do anything. Just as long as you hurry up and get started so you can have this skillet for dinner tonight. 😊
Skillet Cheddar-Cornbread BBQ Chicken Pot Pie
- Total Time: 45 minutes
- Yield: 6 servings 1x
Ingredients
- 3 tablespoons butter
- 1 pound boneless, skinless chicken thighs or breast, cut into chunks
- Kosher salt and black pepper, to taste
- 1 small onion, chopped
- 3 cloves garlic, minced
- 10 ounces mixed frozen vegetables
- 1/4 cup all-purpose flour
- 1 cup chicken broth
- 1 cup barbecue sauce
- 1 (8.5-ounce) package cornbread and muffin mix
- 1 large egg
- 2/3 cup whole milk
- 6 ounces (1.5 cups) cheddar cheese, divided
- 1 jalapeno, sliced, optional
- Scallions, chopped, to top
- Cilantro, chopped, to top
Instructions
- Preheat the oven to 400°F.
- Melt the butter in a 9-inch cast-iron skillet over medium-high heat. Add the chicken, season generously with salt and pepper, and sauté until cooked through, about 5 minutes. Transfer the chicken to a plate and set aside.
- Add the onion to the skillet and sauté until tender and golden, about 4 minutes. Mix in the garlic and frozen vegetables and cook 1 additional minute. Sprinkle the flour over the vegetables and stir until pasty. Stir in the chicken broth and bring to a simmer. Continue to stir until thickened, about 10 minutes. Mix in the barbecue sauce, add the chicken back to the skillet, and stir to combine. Turn off the heat and set aside.
- In a medium-sized bowl, combine the cornbread mix with the egg and milk. Once smooth, mix in 1 cup of the cheddar cheese. Spoon the cornbread mixture over the skillet, and spread with the back of a spoon. Press the jalapeno slices into the top of the cornbread batter, if using, and sprinkle with the remaining cheddar cheese. Bake until the cornbread has browned and cooked through, about 25 minutes.
- Top with scallions and cilantro and serve warm. | https://hostthetoast.com/skillet-cheddar-cornbread-bbq-chicken-pot-pie/ |
In collaboration with stakeholders from the Maasai, we have initiated plans to establish a mutually beneficial and sustainable partnership with the goal of showcasing the Maasai culture. In order to build a cultural hub/museum to preserve and promote Maasai society, culturally immersive tours will be created.
Tours will include both cross cultural connections as well as wildlife exploration through Maasai-led safari tours, with in-home stays that encourage authentic human connections. We also hope to incorporate academic style programs into these trips, allowing U.S. students to connect with professors and local institutions to foster a sharing of knowledge. The final realm of tours that we will offer are volunteer and service trips, where individuals of all ages can travel to Kenya and assist the community with infrastructure or various project needs.
Within the cultural museum, Maasai women will display and sell authentic crafts to both visitors of the museum as well as on the Stone & Compass Global Artisan Market. | https://www.stoneandcompass.com/project-kenya |
This link is for anyone that wants to help out with initiatives to support with the COVID-19 crisis. Warwickshire CAVA are recruiting volunteers who can help at Targeted Testing Sites (also known as Mass testing or community testing sites). We particularly need people to help in the north of the County in Nuneaton, Bedworth and Water Orton.
Community Testing Sites are now established across Warwickshire. If you’re great with people with a friendly and approachable manner and want to play your part in helping to keep communities safer, then this is the role for you.
The sites use the fast turnaround lateral flow test kits, which can deliver results in around an hour. Volunteers are needed for front facing roles to support with the following:
- Help people to register on arrival
- Provide reassurance and a friendly face
- Give out information and answer any questions
- Help to manage the flow of people and guide them around sites, keeping everyone safe and ensuring the sites are well run.
Volunteers will not be asked to get involved with the testing procedure itself, this will be undertaken by the staff on site.
The health and safety of all volunteers, staff and visitors will be of paramount importance. COVID-19 safety measures and guidance will be applied at all times. Appropriate PPE will be issued to volunteers.
If you would like to play your part, please fill out the Application Form. | https://www.wcava.org.uk/news/2021/03/22/volunteer-targeted-testing-north-county-urgently-needed
George Pepper Middle School in Eastwick is not a building on most people’s radar. The Brutalist pile of rigid, concrete geometry is set back a ways from the road and almost sinks into the periphery as one travels down 84th Street towards John Heinz National Wildlife Refuge. It is a disorienting feat for such a big, commanding structure. To truly appreciate the school’s powerful presence you have to walk onto the campus and see the place up close. What is striking today is how muscular and even dignified the building still looks after a steady barrage of vandalism and five years of vacancy.
The school was designed by architecture firms Caudill Rowlett Scott and Bower & Fradley in 1969. After the original plan was panned by the Art Commission for lacking “the humanity element” and being “entirely too concentrated,” construction was stalled for two years. Another row with the Art Commission in 1973 delayed the project even further and the school didn’t officially open until 1976.
The School District of Philadelphia closed Pepper in 2013 following the vote to shutter 23 other public schools across the city. The building and surrounding property has since become a persistent public nuisance. Windows have been smashed, frayed wiring dangles like exposed arteries, crude graffiti mocks the breezeways and facade. And then there’s the trash. Old mattresses, piles of clothes, boxes upon boxes of adult diapers, and pornographic DVD cases cover the school’s grounds. This past January the Streets Department removed over 26 tons of garbage and 65.32 tons of used tires (approximately 5,000 tires in all) from the property. To make matters worse, the entirety of the Pepper parcel is listed as a FEMA Special Flood Hazard area. The building is also contaminated with asbestos and mold. Without a plan for reuse in the near future the abandoned school is doomed.
Eastwick Public Lands Strategy. The Philadelphia Redevelopment Authority, the Eastwick Friends & Neighbors Coalition, and Councilman Kenyatta Johnson are putting the final touches on a comprehensive plan for the neighborhood’s vacant land that updates the 50-year-old Master Urban Renewal Agreement that was terminated in 2015.
“Obviously, Pepper Middle School does not serve the best interest of the community as a vacant building,” said Ramona Rousseau-Reid, Interim President of EFNC. “As an eyesore, it serves no meaningful purpose, its deteriorated exterior diminishes the aesthetic beauty of Eastwick, its vacancy is a symbol of municipal disinvestment, and it is a constant reminder of how vacant buildings impact the property values in the neighborhood. This is why residents and stakeholders want to participate in the decision making process for what occurs in Eastwick.” Rather than considering future proposals for Pepper and other vacant properties on a case-by-case basis, Rousseau-Reid said EFNC decided to include significant vacant properties within the context of the entire neighborhood and its vacant land issues.
Until the Eastwick plan is complete, the sale of Pepper remains on hold. The Streets Department continues to patrol the surrounding property for illegal dumping and Philadelphia Parks and Recreation keeps an eye on the athletic field. As for the school itself, its only defense against further degradation is its indomitable character and colossal concrete bones. | https://hiddencityphila.org/2018/04/battered-brutalist-school-awaits-neighborhood-plan/?replytocom=204497 |
Contact dermatitis is the most common form of skin condition reported amongst healthcare workers, and nurses are most at risk. Nurses are particularly prone to developing contact dermatitis due to important infection prevention and control measures such as frequent hand hygiene measures and wearing gloves as personal protective equipment.
The term dermatitis simply means inflammation of the skin, but the root cause of dermatitis can come from different sources, including infection, allergies and exposure to an irritating substance. Contact dermatitis occurs when the skin comes in contact with a substance that causes a delayed allergic reaction (allergic contact dermatitis), or when there is an injury to the skin's surface (irritant contact dermatitis) (Cleveland Clinic, 2019). Symptoms of contact dermatitis include red, inflamed skin that is often hot or itchy, swelling, dry skin, tender skin, tight skin, blisters with or without oozing, and it can be a very painful condition especially when it occurs on the hands where the skin needs to move a lot.
Severe cases of contact dermatitis may result in a nurse needing to take sick leave to allow skin to heal. This can affect the nurses’ personal occupational health record, as well as organisational budgets, unit staffing levels, and ultimately patient safety.
The reduction and prevention of the incidence of healthcare acquired infections (HCAIs) is a global healthcare goal, with a big emphasis on the importance and efficacy of hand hygiene measures. Some worldwide campaigns to increase the understanding of the importance of hand hygiene practices and to increase these practices in the clinical area have included “Clean Your Hands” (UK, 2004), “Clean Hands Count” (USA, 2016), and “The National Hand Hygiene Initiative” (Australia, 2009). There has also been a Global Handwashing Day held annually on 15th October since 2008, and the World Health Organisation created World Hand Hygiene Day, which is held in May. It is thought that this increase in hand hygiene and PPE requirements is contributing to the problem of contact dermatitis experienced by nurses and other healthcare workers. However, it is still vital that nurses perform these measures in order to prevent the transmission of infection.
A National UK study that spanned over 17 years aimed to evaluate whether interventions used to decrease the incidence of HCAIs have coincided with an increase of work-related contact dermatitis attributed to hand hygiene measures and other hygiene measures in healthcare workers. The whole study can be found on the following link: https://onlinelibrary.wiley.com/doi/full/10.1111/bjd.13719
It analysed voluntary data provided by dermatologists within the UK to a voluntary organisation called The Health and Occupation Research Network (THOR), which opened in 1996. During 2005-2007, 60% of eligible UK dermatologists participated. Dermatologists were asked to report cases of Irritant Contact Dermatitis (ICD) that were likely to have been caused or aggravated by work, as well as the patient’s occupation and the suspected causal agent. A total of 7,138 cases of ICD were reported between 1996 and 2012, of which 1,796 were healthcare workers and 5,342 were in other occupations. The incidence amongst healthcare workers had grown steadily over time, in contrast to the decline in incidence amongst other occupations.
It states that when the study finished in 2012, there were around 4.5 times as many reports of ICD attributed specifically to hand hygiene in healthcare workers as there were in 1996 when it began. The rate of increase in ICD amongst healthcare workers attributed to hygiene procedures was at its steepest between 1996 and 2003. The national “Clean Your Hands” campaign ran between 2004 and 2008, promoting the importance of hand hygiene in NHS trusts through media and poster campaigns aimed at healthcare workers and patients, as well as increasing the availability of hand rubs at the bedside and in other key clinical areas. The incidence did start to decline in 2011, and this is likely due to healthcare organisations’ increased focus on the importance of skin care whilst still practising effective hand hygiene in the clinical area.
However, this study only included cases with ICD as the sole diagnosis, and excludes data where allergic contact dermatitis may exist also. It also only requests data from dermatologists, however most people who experience ICD may manage their condition themselves or with the assistance of their GP, suggesting that the incidence is likely to be much higher.
In the Netherlands, a single-site randomised control trial is aiming to examine whether an intervention program based on the provision of hand creams and regular feedback on consumption can improve the skin condition of nurses engaged in wet work when compared to a “care as usual” control group. A link to the details for the statistical analysis of the “Healthy Hands Project” can be found here: https://trialsjournal.biomedcentral.com/articles/10.1186/s13063-018-2703-7
This article is aiming to provide statistical data on recommendations made by both The Dutch Society of Occupational Medicine (in 2006) and the Netherlands Society of Dermatology and Venereology (in 2013). They both recognise the importance of the skin barrier, and promote the regular use of emollients and ointments in the prevention of irritant contact dermatitis.
Obviously it is important that you follow organisational policy and comply with all of their infection prevention and control measures. However, if you experience any of the symptoms of dermatitis named above, please visit your healthcare provider. They may suggest an allergy test for you if they suspect you have allergic dermatitis, or they may prescribe you some emollient creams or steroid creams or ointments to ease your symptoms and prevent further breakouts. Remember, open wounds are an infection control risk in the healthcare area, and it is important that you help to remain as healthy as possible in your job. Treatment for dermatitis is often more effective and quicker when started early. Often for people prone to or at risk of contact dermatitis, preventative measures (emollients and avoidance of contact with known allergies or irritants) are necessary. | https://www.conexusmedstaff.com/blog/2019/08/contact-dermatitis-among-healthcare-workers |
The Specialized Analytics Grp Mgr is accountable for management of complex/critical/large professional disciplinary areas. Leads and directs a team of professionals. Requires a comprehensive understanding of multiple areas within a function and how they interact in order to achieve the objectives of the function. Applies in-depth understanding of the business impact of technical contributions. Excellent commercial awareness is a necessity. Generally accountable for delivery of a full range of services to one or more businesses/ geographic regions. Excellent communication skills required in order to negotiate internally, often at a senior level. Some external communication may be necessary. Accountable for the end results of an area. Exercises control over resources, policy formulation and planning. Primarily affects a sub-function. Involved in short- to medium-term planning of actions and resources for own area. Full management responsibility of a team or multiple teams, including management of people, budget and planning, to include performance evaluation, compensation, hiring, disciplinary actions and terminations and budget approval.
Responsibilities:
Incumbents work with large and complex data sets (both internal and external data) to evaluate, recommend, and support the implementation of business strategies
Identifies and compiles data sets using a variety of tools (e.g. SQL, Access) to help predict, improve, and measure the success of key business to business outcomes
Responsible for documenting data requirements, data collection / processing / cleaning, and exploratory data analysis; which may include utilizing statistical models / algorithms and data visualization techniques
Incumbents in this role may often be referred to as Data Scientists
Specialization in marketing, risk, digital and AML fields possible
Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency, as well as effectively supervise the activity of others and create accountability with those who fail to maintain these standards.
Qualifications:
10+ years of experience
Financial/Business Analysis and/or credit/risk analysis with ability to impact key business drivers via a disciplined analytic process
Provide analytic thought leadership
Manage project planning effectively
In-depth understanding of the various financial service business models, expert knowledge of advanced statistical techniques and how to apply the techniques to drive substantial business results
Creative problem solving skills
Education:
- Bachelors/University degree, Master’s degree preferred
This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required.
Job Family Group:
Decision Management
Job Family:
Specialized Analytics (Data Science/Computational Statistics)
Time Type:
Citi is an equal opportunity and affirmative action employer.
Qualified applicants will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.
Citigroup Inc. and its subsidiaries ("Citi”) invite all qualified interested applicants to apply for career opportunities. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi (https://www.citigroup.com/citi/accessibility/application-accessibility.htm) .
View the "EEO is the Law (https://www.dol.gov/sites/dolgov/files/ofccp/regs/compliance/posters/pdf/eeopost.pdf) " poster. View the EEO is the Law Supplement (https://www.dol.gov/sites/dolgov/files/ofccp/regs/compliance/posters/pdf/OFCCP_EEO_Supplement_Final_JRF_QA_508c.pdf) .
View the EEO Policy Statement (http://citi.com/citi/diversity/assets/pdf/eeo_aa_policy.pdf) .
View the Pay Transparency Posting (https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf)
Citi is an equal opportunity and affirmative action employer. Minority/Female/Veteran/Individuals with Disabilities/Sexual Orientation/Gender Identity. | https://campuspride.jobs/new-york-ny/specialized-analytics-grp-mgr/102CBA30D71A4FAE80F870B660E01536/job/?vs=28 |
Our mission is to educate students for professional counseling practice and leadership in local, national, and international domains. Mindful that education extends beyond coursework, faculty and students collaborate with schools, communities, agencies and other professionals to conduct research and provide services, in accord with the highest ethical and professional standards and values, in response to the personal, educational, and vocational needs of individuals and families, including persons with disabilities, living in diverse and multicultural environments. Faculty aspire to produce new knowledge and relevant research, create dynamic atmospheres for learning, and inspire students to actualize their potential, all with the goal of achieving just solutions to human concerns.
Last week I had the opportunity to be a member of a panel of athletic trainers in the physician practice setting, discussing our practices with students from Moravian College. I am always excited to share my experiences with students, especially experiences within our specialized area of practice; however, this discussion was particularly exciting to me. Between the students and panelists we were able to discuss many topics that got me thinking about the future of our practice setting... which is bright! All of our roles in the physician practice are diverse and continue to become more diverse with time as we are able to expand our scope of practice to new areas, which is refreshing and particularly exciting to many students.
However, in order for the future to continue to be bright for us, we need to continue offering our support and mentorship to students (and even our colleagues in the profession), as I feel there are many questions and insecurities from individuals wanting to work in physician practice settings. There are several outlets for us to provide our support – offering to be a clinical placement site/preceptor, presenting at conferences and in classrooms, actively being involved in practice groups promoting and improving our practice setting, and the list goes on. As we are going about sharing our experiences, I do believe there are certain areas that we should highlight. I’ve provided three of my top suggestions below, based on what I gathered in my recent panel discussion. These suggestions are only the beginning and may vary from place to place and discussion to discussion, however I think they are important to keep in mind.
- Reiterate the importance of building strong relationships with coworkers and interprofessionally – We come out of school with a whole breadth of information and we want to use it all immediately, however before we can truly use all of this and apply everything into practice we need to start with a strong foundation. This all centers around building relationships. We need to gain the trust and respect of our coworkers and interprofessionals that we work with within physician practice settings, and this starts by being human and making a connection.
- Provide tangible examples of your value – Advocating for yourself by demonstrating value is highly important, but you do need examples to back this up. Don’t just say that you increase efficiency and increase revenue opportunity, find research to back that up, or even better begin by marketing yourself as being able to do these things then gather your own data or information and keep track in a spreadsheet everything you are adding to a physician’s clinic.
- Approach situations with confidence – There will be times when you may not know the answer to something, or when other healthcare providers or patients make comments because they do not understand the role athletic trainers play as healthcare providers; both can be defeating if you let them be. If you hold your head high and turn the situation into an opportunity to educate others and yourself, the whole situation will become more positive and the response from all parties will be much better.
As I said, this is only the beginning…what are some of your suggestions for a brighter future? Please share, as we can all benefit from the shared experiences and mentorship! | https://atpps.org/bright-future/ |
This course will introduce the core data structures of the Python programming language. We will move past the basics of procedural programming and explore how we can use the Python built-in data structures such as lists, dictionaries, and tuples to perform increasingly complex data analysis. This course will cover Chapters 6-10 of the textbook “Python for Everybody”. This course covers Python 3.
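To give a concrete flavour of what that looks like in practice, here is a short sketch that combines a list, tuples, and a dictionary for a simple tally. It is illustrative only and is not taken from the course materials; the record names and numbers are made up.

```python
# Hypothetical example: summarizing records with a list of tuples and a dictionary.

# Each record is a (name, score) tuple stored in a list.
records = [("alice", 85), ("bob", 72), ("alice", 91), ("carol", 64)]

# Group the scores by name in a dictionary of lists.
scores_by_name = {}
for name, score in records:
    scores_by_name.setdefault(name, []).append(score)

# Report the average score for each name, in sorted order.
for name, scores in sorted(scores_by_name.items()):
    print(f"{name}: {sum(scores) / len(scores):.1f}")
```

The same pattern, reading records into a list and grouping them with a dictionary, scales naturally to the larger data-analysis exercises the course describes.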
From the lesson
Unit: Installing and Using Python
In this module you will set things up so you can write Python programs. We do not require installation of Python for this class. You can write and test Python programs in the browser using the "Python Code Playground" in this lesson. Please read the "Using Python in this Class" material for details.
The vacation care program offers a wide variety of exciting play-based educational activities that keep the children interested and involved. Our qualified educators at Splash provide both structured and relaxed programs that consider the skills, interests and needs of the children, and offer a variety of arts, crafts, cooking, indoor and outdoor play, as well as many special excursions outside the Centre.
It is a program where primary-aged children (5-12 years) enjoy spending their days with us, treated to a variety of activities, including excursions. Many activities present valuable lifelong learning opportunities, particularly enhancing your child’s social and communication skills as they learn through play.
The program incorporates fun play, leisure and learning activities with children being able to participate in a combination of supervised and structured learning activities. Children are always given the opportunity to rotate through activity stations in accordance with their individual preferences, social interactions and their support needs.
Educators take every opportunity to discuss program content with all children before, during and after each vacation care session. Much like an office debrief, this reflects educators’ awareness that all children need to be active in decision making so that the program remains inclusive of all children and their interests.
National Science Foundation.
“The stratosphere is an active player in providing memory to the climate system,” said Dr. Mark P. Baldwin, Senior Research Scientist at NorthWest Research Associates, Bellevue, Wash. He is lead author of a paper in the August 1 issue of Science.
Baldwin and his co-authors suggest, although the stratosphere is mostly clear and weather free, it appears changes to the stratospheric circulation can affect weather patterns for a month or more. Wind patterns in the lower stratosphere tend to change much more slowly than those near the surface.
Once the winds in the lower stratosphere become unusually strong or weak, they tend to stay that way for at least a month. “This is the key,” Baldwin said, “to understanding how the stratosphere can affect our weather.” Large-scale waves that originate in the troposphere, the level of the atmosphere closest to the Earth’s surface, appear to be sensitive to the slowly shifting winds in the stratosphere. The waves allow stratospheric changes to feed back, affecting weather and climate on the Earth’s surface.
Knowing the stratosphere plays this role could be helpful in predicting weather patterns well beyond the seven-to-10-day limit of current weather prediction models. The stratospheric effect could be compared to the effects of El Nino in that they both provide predictability of average weather patterns. However, the stratospheric effects last only two months at most, and the effects only occur from late fall to early spring.
A better understanding of the stratosphere’s effect on the troposphere could also be useful in gaining additional insight into the climatic effects of stratospheric ozone depletion, solar changes and variations in aerosol amounts associated with major volcanic eruptions.
The stratospheric wind shifts can be thought of as changes to the strength of the belt of westerly winds that circulate around the globe at high latitudes. Scientists call these winds the “stratospheric polar vortex.” The waves from the troposphere first create fluctuations in the strength of the polar vortex, and then the changes in the vortex strength feed back to affect a hemispheric-scale weather pattern known as the Arctic Oscillation.
When the Arctic Oscillation, also known as the North Atlantic Oscillation, is in its positive phase, there are stronger westerly winds at mid-latitudes, especially across the Atlantic. Northern Europe and much of the United States are warmer and wetter than average, while Southern Europe is drier than average, according to Baldwin. “In effect, the stratosphere can act as a predictor of the state of the Arctic Oscillation,” he said.
NASA funds this research through its Earth Science Enterprise, a program dedicated to understanding the Earth as an integrated system and applying Earth System Science to improve prediction of weather and natural hazards using the unique vantage point of space.
For information about NASA on the Internet, visit:
For information about NASA’s Earth Science Enterprise on the Internet, visit:
For information about NOAA on the Internet, visit: | https://spaceref.com/press-release/whither-comes-weather-scientists-suggest-stratospheres-role/
Delaware’s 2 largest animal shelters refusing strays
WILMINGTON, Del. — After years of political infighting, the state's two largest animal shelters — First State Animal Center and SPCA and Delaware SPCA — are no longer housing stray animals picked up by animal control.
Instead, both nonprofit organizations have kept dozens of cages empty as they focus on revenue-generating operations, such as building dog day care facilities.
Unless the situation changes, the state could see more animals euthanized, warned Adam Lamb, executive director for the Brandywine Valley SPCA. The organization, based in West Chester, Pa., handles sheltering and caring for Delaware's stray and abused animals under a $6.5 million three-year contract approved by the state last year.
Brandywine Valley has had to place some of Delaware's stray animals with rescue partners in Pennsylvania and other states, Lamb said.
"We can't maintain this long term," he added. "We're not always going to be able to count on other states to save the day."
40 abandoned dogs rescued from N.J. home
First State director Kevin Usilton, whose organization previously handled animal control and enforcement for Delaware's individual counties, said his board placed a one-year moratorium on sheltering animals picked up by animal control. First State still accepts cats and dogs through owner surrenders, along with horses, goats and chickens, he said.
"Since the state essentially fired us from doing the work, why would we volunteer to do it?" he wrote in an email.
On Wednesday, First State sent out a press announcement for its new dog day care and boarding facility, located in a building behind the Camden shelter that used to hold stray animals. The newly renovated area can accommodate 15 dogs in day care and 20 dogs or cats for boarding.
First State officials would not disclose the renovation cost. The organization now has 54 animals up for adoption, less than half the number advertised two years ago.
The nonprofit is moving away from "exclusively serving the underserved" to catering to a larger population, Usilton said. He added that First State still works with rescue partners in Delaware and in other states, but he declined to name them.
Delaware SPCA director Andrea Perlak did not respond to a request for comment. The Delaware SPCA's website lists fewer than 50 animals available for adoption between its two shelters, less than one-third its total capacity, according to state records.
Plagued by budget deficits and staff layoffs, the organization facilitated more than 1,800 adoptions last year, according to its statistics. Delaware SPCA bid on the statewide animal sheltering contract last year, but lost out to a lower bid from Brandywine Valley SPCA.
To raise revenue, Delaware SPCA had planned to open a $1.4 million dog day care this summer, along with a future dog park and retail development on SPCA-owned land. The future of those plans is unclear.
Meanwhile, Brandywine Valley SPCA is near-capacity, housing 80 dogs and 70 cats at its shelter.
Lamb meets regularly with the state's other two no-kill shelters, Faithful Friends Animal Society and the Delaware Humane Association. Both organizations have pulled a total of 76 animals from Brandywine Valley this year. | https://www.usatoday.com/story/news/nation-now/2016/03/30/delawares-largest-animal-shelters-refuse-strays/82453570/ |
Goal Setting Facts Need Faith
When we think of goal setting facts we tend to think of systematic approaches to planning. Accepted goal setting techniques may include the need to analyse, choose, justify, implement, monitor, refine, etc.
However, important as these are, we should never overlook the importance of faith. Not blind faith, but that which inspires commitment and enthusiasm.
Goal Setting Facts is the last in our series on Business Goal Setting: Using the “F-Plan”. The series consists of a structured process designed to help you improve your business planning and goal setting.
Think about goal setting in terms of:
1 – Future: Company Goal Setting: Two Kinds of Future.
2 – Filter: Goal Setting in the Workplace: Filter to Make the Right Choices.
3 – Frame: Frame Your Goal Setting Plans.
4 – Focus: Goal Setting Strategies are Underpinned by Focus.
5 – Fast: Goal Setting Exercise – Are You Fast Enough?
6 – Faith: Goal Setting Facts Need Faith.
Goal setting facts and faith?
Even goal setting facts are often closely linked to acts of faith. Whenever goals are set, they are usually based on an expectation that worthwhile outcomes will be achieved.
The decision to set the goal may have been based on facts drawn from the organisation, its environment, its markets. However, whilst we may have detail about the past, indicating how things have performed, indications of the future always hold an element of faith.
We may believe our forecasts will be accurate, but we can’t be certain.
As Peter Drucker famously once said:
“To make the future demands courage. It demands work. But it also demands faith.”
Drucker wasn’t referring to blind faith – no idea is foolproof. However, there must be faith in your decisions, and in the people you manage, to achieve your goals. Without such faith, commitment to your ideas, and enthusiasm, it’s unlikely that the necessary efforts will be sustained.
The trick for effective managers and leaders though, is to balance their faith with practicalities. Believing in an idea does not mean you become fanatical or blinkered. Creating the future is risky but you should guard against ideas of the future which may become purely an investment in dogma or ego.
Goal setting requires faith, but even faith must be grounded in some elements of reality. As Stanford University professor Bob Sutton suggests:
“the best leaders have the courage to act on what they know right now, and the humility to change their actions when they encounter new evidence. They advocate an ‘attitude of wisdom’. Arguing as if they are right, and listening as if they are wrong.”
Although he wasn’t setting business goals, Dr Martin Luther King is a great example of how goal setting “facts” need to be blended with faith and realism. His faith in the future he saw, and the practical ways he went about realising that future, are perfectly illustrated in the way he “managed” Robert Kennedy. He didn’t change his faith because of new evidence. He achieved his goals because he insisted his supporters went out and found it!
Goal setting facts: only change is certain!
So, one particular area where it’s helpful to think about goals and faith is with respect to change. Goals are usually set with the aim of improving something, however we may choose to identify and measure such improvement. The wisdom of goal setting facts indicates that unless we have faith in our actions, we may be the victims of change rather than the beneficiaries. Sitting back and letting things happen to us means change is unmanaged.
This is a negative approach and one at odds with another famous thinker on the subject, Alvin Toffler. In his seminal book: “Future Shock”, Toffler advocated a positive approach to change:
“It is true that if we do not learn from history we may have to relive it, but if we do not change the future we may have to endure it – and that could be worse”
Perhaps it’s appropriate to end these thoughts on business goal setting with another Peter Drucker suggestion. He reflected on the importance of continually looking to the future.
“Every product and every activity of business begins to obsolesce as soon as it is started. Every product, every operation, and every activity of a business should, therefore, be put on trial for its life every two or three years.”
You can find more of Druckers thoughts on business goal setting in a excellent Harvard article, Peter Drucker on management courage.
We hope you’ve enjoyed our Happy Manager F-plan – to help get both businesses and individuals into shape! Six steps to business goal setting: anticipate the future that’s already happened, make the future you’d like to create.
Now put your goal setting facts to work!
Find our how in our e-guide: SMART Goals, SHARP Goals. The guide contains 30 pages and 5 tools to help you to set SMART goals, then take SHARP action to achieve them. It includes:
- How do you define goal setting?
- What features of goal setting are important, if we want to ensure they are more likely to be successfully achieved?
- What kinds of goals are more likely to make us motivated to achieve them?
- How do you set SMART goals?
- Why do goals matter?
- What kind of goals should you pursue to be happier in what you do?
- How do you set team goals?
- What strategies can you apply to overcome barriers to setting goals?
- How do you develop SHARP plans of action that help you to achieve your goals?
- What techniques can you use to get things done?
- How do you set personal goals?
Tools:
- Tool 1: Conventional goal setting
- Tool 2: Setting SMART goals that motivate
- Tool 3: The kind of goals that will make you happier
- Tool 4: Taking SHARP action
- Tool 5: Team goals flowchart
- Tool 6: Eight personal goal setting questions
Goal Setting Resources
You can find more of our goal setting resources by reading our featured pages (below).
You’ll find our new e-guide: SMART Goals, SHARP Goals is a fantastic, goal setting resource. It’s packed with advice and tools – use it to help you set SMART goals then take SHARP actions to achieve them!
One of our affiliate partners also has an excellent, on-line, goal setting resource. GoalsOnTrack is a “personal success system that will help you really accomplish goals by getting the right things done”. | https://the-happy-manager.com/articles/goal-setting-facts/ |
With this background in mind, consider what happens when the chairman of COGR approaches House Counsel and asks for a written opinion regarding whether Lerner had waived her Fifth Amendment right. This would not be an unusual request, as House Counsel routinely “assists committees in issuing subpoenas and carrying out oversight and investigatory activities.” Id. at 198. Because of the political sensitivity of the matter in question, it is likely that House Counsel would have at least notified the Speaker that the request was made, and it is possible that it required the Speaker’s authorization before undertaking the representation. House Counsel would not ordinarily communicate with individual members of the committee, either majority or minority, about the representation. Unless otherwise directed, House Counsel would provide its written opinion to the chairman alone.
Does this course of proceeding reflect the fact that House Counsel has a privileged attorney-client relationship with the chairman, the committee majority or even the committee itself? Probably not. In the course of providing a wide variety of legal advice and services related to the official functions of the House, the House Counsel will often refer to, and in some sense regard, each member or office as a separate “client” for purposes of a particular representation. Id. Yet this designation is misleading to the extent it implies that the “client” enjoys a legally protected relationship that could be asserted against the House as a whole. Ultimately questions about confidentiality or other aspects of the House Counsel’s functions are determined by the Speaker, in consultation with BLAG, or by the House itself.
In this case House Counsel likely regards the “client” as either Issa, in his capacity as chairman of COGR, or the committee itself. I doubt that it would regard the committee majority as the client because this would arguably conflict with the directive that representation “be provided without regard to political affiliation.” But not much turns on who is identified as the nominal client. House Counsel would not treat the question of whether the opinion may be shared with other COGR members as a legal ethics question. Nor would it be normal or appropriate for House Counsel to provide separate opinions to individual members of COGR on a matter relating to the committee’s functions. Instead, House Counsel would provide an opinion to Chairman Issa, who must then determine, as a matter of House and committee rules, whether he is obligated to share the document with individual committee members. While House Counsel (or more likely, the House Parliamentarian) might advise Issa on this issue, the decision is one for the chairman to make. The Speaker, on the other hand, could always direct House Counsel to make its opinion available to other members or to the general public.
Two final observations. First, while there is an argument that the House Counsel opinion should be viewed as a committee record available to all members, see House Rule XI(2)(e)(2)(A), this does not mean that every member gets to have his or her own copy. Issa could have treated the opinion as executive session material and required members to review it at a central location. The fact that he chose to give Cummings a copy but asked him to limit further distribution, therefore, hardly seems worth making a federal case over.
Second, the more serious concern here, IMHO, is not the internal treatment of the House Counsel opinion, but whether the opinion should be released to the general public. True, there may be no legal obligation to do so. But just as it is inappropriate (again, IMHO) for the president to rely on secret OLC memos to justify actions that he takes, it would be equally inappropriate for COGR to move forward with contempt against Lerner in reliance on a memo which it declines to make public. Perhaps there are legitimate reasons to hold off on releasing the House Counsel opinion while negotiations with Lerner’s counsel are still ongoing, but it should be released as soon as possible if COGR intends to hold Lerner in contempt.
| https://www.pointoforder.com/2013/06/30/house-counsel-and-the-congressional-client/ |
In this part (54), Mr. Sophan Seng elaborated on the meeting with NEC officials on February 19, 2016. Mr. Sophan Seng, who is the leader of the CEROC, was honored to meet H.E. Kuoy Bunroeun, Deputy of the NEC, at the head office to discuss the right of Cambodians overseas to vote.
The meeting was also attended by two permanent members and two deputy secretaries of the current NEC. The discussion is summarized as follows:
Mr. Sophan Seng spoke highly of the new NEC, which is better and more independent than before, and of the high expectations for its performance in the upcoming commune election of 2017 and national election of 2018. Specifically, he addressed the need to allow and facilitate access for Cambodians overseas to vote in Cambodian elections (inclusiveness). The CEROC’s objectives are: – to organize suggestions, petitions, and participation of all Cambodians overseas, and – to produce papers on mechanisms, technical matters, and comparative studies through research and academic gatherings.
Solution: H.E. Kuoy Bunroeun welcomed the research work and recommended submitting the petition through the proper channel; the NEC implements solely in accordance with the existing laws.
- In the future, the NEC shall prepare high-ranking officer(s) to visit the Khmer diaspora and brief them on the progress of the NEC.
H.E. Kuoy Bunroeun outlined the advantages and disadvantages of the new NEC as follows:
Advantage Points:
- Institutionalized in the Cambodian Constitution
- Procedures, mechanism, and implementation of the NEC
- Election laws: new voter registration using fingerprints, photos, a computer database, etc.
- Able to make all of its own decisions
Disadvantage Points:
- The population database and identity matching are limited
- Ability of NEC’s staffs especially in each voting booth (PSO) is under limit
- Infrastructure such as electricity is very limited
- Etc.
Cambodia Election 2013: Lessons to Be Learnt from the IFES Survey Report
Opinions on the Electoral Process
- Cambodians express a strong sense of civic responsibility when it comes to voting, but, concurrently, are split on whether their individual vote makes a difference. The vast majority of Cambodians strongly (83%) or somewhat agree (16%) voting gives them a chance to influence decision-making in the country, yet there are as many Cambodians who agree (47%) as those who disagree (44%) their vote may not make a difference. Undertaking campaigns explaining the importance and value of each citizen’s vote, as well as building knowledge of and confidence in all aspects of the electoral process may encourage more citizens to vote.
- Focus group findings highlight a lack of awareness regarding other forms of civic influence besides voting. When discussing how citizens can be a part of the democratic process, very few focus group participants are able to cite other examples. While enthusiasm and belief in the importance of voting is very positive, the low awareness of how to be civically active highlights the need to inform citizens about other avenues of civic participation that can help them express their views on social and political issues.
- Just as Cambodians believe in the importance of voting, most citizens believe elections are crucial and participation is the obligation of people living in Cambodia (84%). Still, four in 10 (43%) acknowledge there is room for improvement in the electoral process. Many respondents suggest obtaining more information about, and easing access to, the electoral process in order to improve the process. Respondents also mention addressing procedural issues, such as improving accessibility of polling stations for persons with disabilities, taking steps to improve the voter registry, adding more polling stations and providing better oversight and organization of elections overall. Focus group participants reveal similar opinions. Providing more voter education information; taking steps to improve the transparency and fairness of the process; and making the voter registration process easier to understand are all mentioned as ways to improve the electoral process. These findings highlight citizens’ recommendations on how to improve the process, and consequently their opinions of the process, and could be taken into consideration in future strategic planning initiatives.
- Cambodians also express strong support for public disclosure of campaign contributions in higher percentages than in the 2012 IFES survey. Eighty-five percent of Cambodians believe it is very (45%) or somewhat (40%) important for candidates and parties to publicly disclose the money received for their campaigns. This compares to 2012 data, in which 73% of Cambodians said it is very (34%) or somewhat (39%) important for candidates and parties to publicly disclose the money they receive for their campaigns. This increase illuminates heightened awareness of the importance of disclosure in campaign finance over the past year. | https://www.sophanseng.info/2016/03/01/ |
Children’s Day, May 5, was formerly known as Boy’s Festival (Tango no Sekku); however, the name was changed to create a more inclusive holiday. Children’s Day focuses both on the importance of happy, healthy children and on the gratitude children feel toward their parents.
Despite the name change, the holiday remains largely geared toward celebrating the health and future success of Japanese boys, and Boy’s Festival remains a key focus of the day. Koi-Nobori, or carp streamers, are hung, a symbol of strength against odds, and samurai dolls are displayed as part of the Children’s Day festivities. One koi-nobori, interestingly, is hung for each male child in the family, with the largest streamer being allotted to the oldest child and stepping down in size for subsequent boys.
The origin of the Boy’s Festival is uncertain. Some scholars trace it to an ancient Chinese custom called sechie, popular during the reign of Empress Regnant Suiko (593-629 A.D.), in which palace guards wore ceremonial helmets. Others believe the Festival originated from the May custom of using banners to scare insects during the growing months. Over time, these banners became more grotesque before taking a turn to become heroic figures that were displayed indoors as symbols of manliness.
Still others trace the holiday to Tokimune Hojo’s defeat of the Mongols on May 2, 1282. Finally, others believe the Festival is linked to the unification of Japan by the Ashikaga Shogun. What all of these possible origins of the holiday have in common is the emphasis on strength and manliness, and the decorations used in Japanese homes reflect elements of each potential start of the holiday.
Items displayed during Boy’s Festival include a miniature helmet, armor, a sword, a bow and arrow, silk banners with the family crest, and warrior dolls (musha-ningyo) representing Kintaro, a famous general, Shoki, an ancient Chinese general who protected people from demons, and Momotaro, who can best be described as the Japanese version of David.
In addition to traditional treats like Chimaki (sweet rice dumplings wrapped in iris or bamboo leaves) and Kashiwa-Mochi (rice cakes containing sweet bean paste wrapped in oak leaves), leaves of the Japanese iris (shobu), prized for their resemblance to a sword, are steeped in hot water for bathing (the iris is believed to protect against illness) and mixed with sake to create shobu-sake, an ancient samurai beverage.
The Girl’s Festival (Hina Matsuri or Doll Festival) is celebrated on March 3. | http://www.planettokyo.com/japan/holidays/childrens-day-kodomo-no-hi/ |
[EN] How to render the Thai string correctly?
From the article on how to use u8g2, which can render a Thai string through the drawUTF8() function of the u8g2 library, the rendering is not correct, as shown in Figure 1; therefore, the library code needs additional adjustment to render correctly, as in Figure 2. Read More
[EN] Simple MineSweeper
This article is an experiment in creating a Simple MineSweeper, as shown in Figure 1, using an ESP32 microcontroller board with a 1.8″ REDTAB st7735 display. The display resolution is 128×160, the same hardware as in Simple Tetris [Part 1, Part 2 and Part 3] mentioned earlier, still with MicroPython as the main language. The explanation proceeds step by step through screen generation, randomization, neighbour counting, motion control, toggling the visibility of the options frame, marking where a bomb is likely to be, opening cells, and counting points at the end of the game.
Simple MineSweeper is one of the first games we imitated to study ideas and develop programming techniques, back in the era of DOS and the GUI-based Windows operating system. Our version was written for and ran on DOS at the time: you switched the display into graphics mode, talked to the mouse, and drew the pixels yourself (much the same as writing for the ESP32 microcontroller board, except that the ESP32 has no operating system to rely on). So let's get started. Read More
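As a taste of the board-generation step, here is a minimal, hardware-independent sketch (not the article's exact code; the board size and bomb count are assumptions for a 128×160 screen): it places bombs at random in a 2D list and pre-computes the neighbour counts that would later be drawn on the st7735.

```python
# Minimal sketch (assumed sizes, not the article's code): build a MineSweeper
# board as a 2D list, place bombs at random, and pre-compute neighbour counts.
import random

ROWS, COLS, BOMBS = 10, 8, 12   # assumed board size for a 128x160 screen
BOMB = 9                        # any value above 8 marks a bomb cell

def new_board():
    board = [[0] * COLS for _ in range(ROWS)]
    placed = 0
    while placed < BOMBS:                       # random bomb placement
        r = random.randrange(ROWS)
        c = random.randrange(COLS)
        if board[r][c] != BOMB:
            board[r][c] = BOMB
            placed += 1
    for r in range(ROWS):                       # count bombs around each cell
        for c in range(COLS):
            if board[r][c] == BOMB:
                continue
            count = 0
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    rr, cc = r + dr, c + dc
                    if (dr or dc) and 0 <= rr < ROWS and 0 <= cc < COLS \
                            and board[rr][cc] == BOMB:
                        count += 1
            board[r][c] = count
    return board
```

The same code runs under Python 3 and MicroPython, so the game logic can be tested on a PC before wiring up the display.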
[EN] Simple Tetris Ep.3
This is the final article on making a Simple Tetris game using MicroPython and an ESP32 microcontroller, following Parts 1 and 2. In the first article, readers learned to design the data structures, draw the seven types of falling objects, and control them to move left, move right, and rotate. The second article made the object fall from above and kept track of its position. In this article, the falling objects can be stacked: moving left, moving right, and rotating now check for collisions with the objects that have fallen before. We also check, whenever an object reaches the bottom, whether any rows are left without spaces; any such full rows are deleted. Finally, a section is added to check for the end of the game, for the case where there is no room left for objects to fall and move, as in Figure 1, ending our simple game-making process. Read More
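The two checks added in this part can be sketched in a few lines of Python (a hedged illustration with assumed names, not the article's code; the grid is assumed to hold 0 for empty and 1 for occupied cells):

```python
# Minimal sketch of row clearing and the game-over test described above.
def clear_full_rows(grid, cols=10):
    """Drop every row that has no empty cell and pad new empty rows on top."""
    kept = [row for row in grid if 0 in row]     # rows that still have a gap
    cleared = len(grid) - len(kept)              # number of full rows removed
    grid[:] = [[0] * cols for _ in range(cleared)] + kept
    return cleared                               # rows scored this turn

def is_game_over(grid, spawn_cells):
    """The game ends when a freshly spawned piece already overlaps the stack."""
    return any(grid[r][c] for r, c in spawn_cells)
```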
[EN] Simple Tetris Ep.2
In the previous chapter, we drew the background, randomized the objects, drew them, and moved them left and right and rotated them. Part 2 of the article, the next-to-last chapter of the Tetris series, is about creating the backdrop as a grid data structure: when an object falls to the bottom, it is converted into that table of data, as shown in Figure 1. It also improves how the object falls and how the new object is controlled and rendered by using a timer, without yet checking for collisions when moving left/right. Checking whether the falling object overlaps previous objects, rotation against the stack, and row cutting will be discussed in the last article, Simple Tetris Ep.3. Read More
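A minimal sketch of the backdrop-as-grid idea (assumed names and sizes, not the article's code): the field is a ROWS × COLS table of 0/1 values, and a falling piece is a list of absolute (row, col) cells that gets "locked" into the table when it can no longer fall.

```python
# Minimal sketch: grid storage, a collision test, and locking a fallen piece.
ROWS, COLS = 16, 10

def make_grid():
    return [[0] * COLS for _ in range(ROWS)]

def collides(grid, cells):
    """True when a cell leaves the field or lands on an occupied square."""
    for r, c in cells:
        if c < 0 or c >= COLS or r >= ROWS or (r >= 0 and grid[r][c]):
            return True
    return False

def lock_piece(grid, cells):
    """Convert a piece that can no longer fall into fixed table data."""
    for r, c in cells:
        if r >= 0:
            grid[r][c] = 1
```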
[EN] Simple Tetris Ep.1
This article introduces how to write a simple Tetris game displayed on a grid 10 cells wide and 16 cells high, as shown in Figure 1, using an ESP32 microcontroller board connected to an ST7735 display and 8 switches for control. Importantly, it is written in Python via MicroPython, compiled with the st7735_mpy library. In this article we talk about storing the 7 types of falling objects, to support displaying and rotating them and moving them left and right. The controls and logic of the Tetris game will be discussed in the next article. Read More
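One possible way to store the seven falling objects is sketched below (an assumed representation for illustration, not the article's code): each piece is a list of (row, col) cells, so moving and rotating become small list operations.

```python
# Minimal sketch: the seven tetrominoes plus move and rotate helpers.
SHAPES = {
    'I': [(0, 0), (0, 1), (0, 2), (0, 3)],
    'O': [(0, 0), (0, 1), (1, 0), (1, 1)],
    'T': [(0, 0), (0, 1), (0, 2), (1, 1)],
    'S': [(0, 1), (0, 2), (1, 0), (1, 1)],
    'Z': [(0, 0), (0, 1), (1, 1), (1, 2)],
    'J': [(0, 0), (1, 0), (1, 1), (1, 2)],
    'L': [(0, 2), (1, 0), (1, 1), (1, 2)],
}

def shift(cells, dr, dc):
    """Move a piece left/right/down by (dr, dc)."""
    return [(r + dr, c + dc) for r, c in cells]

def rotate_cw(cells):
    """Rotate 90 degrees clockwise inside the piece's bounding box."""
    max_r = max(r for r, _ in cells)
    return [(c, max_r - r) for r, c in cells]
```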
[EN] How to make the stopwatch?
Following the article Create a clock that displays an analog face on a color display, this time the project is modified to work as a timer or stopwatch, using the ESP32-CAM board connected to the TFT display and a switch on pin GPIO0 (the same pin used as the mode/programming switch when booting or powering the ESP32-CAM board), as shown in Figure 1. The programming still uses Python, with MicroPython as always. Read More
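The timing part can be sketched as follows (display code omitted and the button behaviour assumed; this is an illustration, not the article's code): the loop starts and stops on the GPIO0 switch and measures elapsed time with MicroPython's time.ticks_ms()/time.ticks_diff().

```python
# Minimal MicroPython stopwatch loop; rendering on the TFT is left out.
import time
from machine import Pin

button = Pin(0, Pin.IN, Pin.PULL_UP)       # GPIO0 is also the boot/mode switch
running = False
start = 0
elapsed = 0                                # milliseconds

while True:
    if button.value() == 0:                # pressed (active low)
        time.sleep_ms(20)                  # crude debounce
        while button.value() == 0:
            time.sleep_ms(10)              # wait for release
        if running:                        # stop and freeze the value
            elapsed = time.ticks_diff(time.ticks_ms(), start)
            running = False
        else:                              # (re)start the count
            start = time.ticks_ms()
            running = True
    if running:
        elapsed = time.ticks_diff(time.ticks_ms(), start)
    # ...draw elapsed // 60000 minutes and (elapsed // 1000) % 60 seconds
    #    on the TFT here...
    time.sleep_ms(50)
```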
[EN] List the serial ports connected to the RPi with pySerial and PyQt5.
In the previous article, we read the list of devices connected to the serial ports of the Raspberry Pi (RPi) board with the pySerial library in text mode, as shown in Figure 1. This article combines those working principles with a graphical user interface built with the PyQt5 library: the ports are listed in a combobox for the user to choose from, and if no serial port is found on the board, the combobox is disabled. The article therefore discusses using pySerial together with PyQt5's QLabel and QComboBox widgets. Read More
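A minimal sketch of the idea (the widget layout is an assumption for illustration, not the article's code): enumerate the ports with pySerial's list_ports and fill a QComboBox, disabling it when nothing is connected.

```python
# Minimal sketch: list serial ports with pySerial and show them in a QComboBox.
import sys
from serial.tools import list_ports
from PyQt5.QtWidgets import (QApplication, QWidget, QVBoxLayout,
                             QLabel, QComboBox)

app = QApplication(sys.argv)
win = QWidget()
layout = QVBoxLayout(win)

layout.addWidget(QLabel("Serial ports found on this Raspberry Pi:"))
combo = QComboBox()
ports = list_ports.comports()               # pySerial: enumerate serial devices
if ports:
    for p in ports:
        combo.addItem(p.device)             # e.g. /dev/ttyUSB0, /dev/ttyAMA0
else:
    combo.addItem("no serial port found")
    combo.setEnabled(False)                 # grey out when nothing can be chosen
layout.addWidget(combo)

win.show()
sys.exit(app.exec_())
```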
[EN] Binary Search Tree data structure programming with Python.
In the previous article, programming queue-based data structures was introduced. In this article, we introduce programming for another type of data structure, with different storage and management methods, called a BST or Binary Search Tree, as shown in Figure 1. It is a structure for collections in which the data in the left branch of a node is less than the node itself and the data in the right branch is greater (or the opposite: left greater, right less). When the tree is balanced on the left and right, searching roughly halves the remaining candidates at every step: with 100 items, for example, if the root is not the item you are looking for, you choose either the left or the right branch, and the data on the other side is no longer considered. However, if the Binary Search Tree is out of balance, the search speed is not much different from a sequential search.
In this article, we use Python that runs on either a Python 3 or MicroPython interpreter to store the data, add items, and search for items, as an example for further development. Read More
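A minimal sketch of the structure just described (an illustration with assumed names, not the article's code), written so that it runs on both Python 3 and MicroPython:

```python
# Minimal Binary Search Tree with insert and search.
class Node:
    def __init__(self, value):
        self.value = value
        self.left = None      # smaller values go here
        self.right = None     # larger values go here

class BST:
    def __init__(self):
        self.root = None

    def insert(self, value):
        if self.root is None:
            self.root = Node(value)
            return
        cur = self.root
        while True:
            if value < cur.value:
                if cur.left is None:
                    cur.left = Node(value)
                    return
                cur = cur.left
            elif value > cur.value:
                if cur.right is None:
                    cur.right = Node(value)
                    return
                cur = cur.right
            else:
                return            # duplicates are ignored

    def search(self, value):
        cur = self.root
        while cur:
            if value == cur.value:
                return True
            cur = cur.left if value < cur.value else cur.right
        return False

tree = BST()
for v in (50, 30, 70, 20, 40):
    tree.insert(v)
print(tree.search(40), tree.search(99))   # True False
```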
[EN] Control movement from a joystick via WiFi with MicroPython. | https://www.jarutex.com/index.php/category/display/ |
MILWAUKEE -- The Boston Celtics came out of the All-Star break scrappy and competitive on the road against the best team in the NBA, and they still had a chance to win as the final seconds wound down.
“We played hard and we had our chances,” Al Horford said, shortly after the Milwaukee Bucks held off a late charge by the Celtics for a 98-97 victory.
The Bucks had chances to pull away too. Their best one came with 5:26 remaining, when Terry Rozier came up with a loose ball and lofted a pass ahead to Jayson Tatum in transition. Tatum sprinted ahead of the play and tried to finish off a dunk, but Giannis Antetokounmpo bore down on him and swatted the shot away. On the other end, Nikola Mirotic had a chance to make a three which would have put the Bucks up eight. Instead, his shot bounced off the rim and Tatum buried a triple on the other end.
Both teams traded punches down the stretch until the Bucks led by a point with 27 seconds remaining. That set up two wild sequences that remain a little confusing hours after the game finished.
A layup by Kyrie Irving trimmed the Celtics' deficit to one, with a 3.5 second differential between the shot and game clock. Celtics coach Brad Stevens was left with an uncomfortable decision whether or not to foul, and he opted to let his defense play it out.
“That’s a tough one,” Stevens said. “But I felt like even if we had two seconds left we were going to get a reasonable look, or at least a look at the basket. And so we rolled with that.”
The gamble worked. The Bucks milked the clock down until Antetokounmpo got the ball at the top of the key, guarded by Marcus Smart. Smart forced a jump ball with 0.2 seconds remaining on the shot clock, which appeared to be a certain shot-clock violation.
That was when things got confusing. Antetokounmpo won the jump ball and tipped it to Brook Lopez, who tipped the ball off the rim with both hands. Officials blew the play dead and called a shot-clock violation, sparking an explosion of anger from the Bucks’ bench, which believed the shot clock shouldn’t have started until Lopez touched the ball.
In an explanation to a pool reporter, crew chief Mike Callahan seemed to agree that the shot clock shouldn't have started when Antetokounmpo touched the ball, but he said Lopez possessed the ball with 0.2 seconds remaining.
“You cannot have a legal shot attempt with .2 on the shot clock,” Callahan said.
That, of course, still wouldn't explain how 3.5 seconds remained -- the game clock would have started when Antetokounmpo tipped the ball, and the shot clock would have started when Lopez touched it.
In any case, the Celtics took over possession. After a pair of timeouts, Brad Stevens ran a set meant to get Marcus Morris a look under the basket.
The play never materialized.
“I tried to set a good screen on Mook,” Irving said. “I don’t know if he got fouled or not, I don’t know what happened.”
Irving set a screen for Morris on the opposite side of the floor, since the Celtics knew the Bucks were focused on Irving ("They hadn't left Kyrie all night," Smart said). Middleton appeared to hold Morris, breaking up the play and forcing Irving to improvise -- scampering through a hole created by Al Horford and to take the inbound pass at the top of the key. Irving turned the corner and drove to the hoop with plenty of daylight, but Bledsoe pressured his right hip and forced a wild layup attempt -- turning away from the basket and lofting a tough shot at the rim.
Did Irving have comment on the contact after the game? He shook his head.
“I love my money,” Irving said.
Both Stevens and Horford said the final set was what they wanted.
“I felt good about our last action,” Stevens said. “It looked like we might have a layup opportunity there, and then obviously you’ve got Kyrie coming up to the top, and you feel good about that all the time.”
“We got Kyrie with the ball,” Horford said. “That’s what we wanted. We want him to make plays at the end of the game. It just didn’t go our way.” | |
Abstract. Resonant scattering of plane waves by a periodic slab under conditions close to those that support a guided mode is accompanied by sharp transmission anomalies. For two-dimensional structures, we establish sufficient conditions, involving structural symmetry, under which these anomalies attain total transmission and total reflection at frequencies separated by an arbitrarily small amount. The loci of total reflection and total transmission are real-analytic curves in frequency-wavenumber space that intersect quadratically at a single point corresponding to the guided mode. A single anomaly or multiple anomalies can be excited by the interaction with a single guided mode.
Key words: periodic slab, scattering, guided mode, transmission resonance, total reflection.
1 Introduction
A dielectric slab with periodically varying structure can act both as a guide of electromagnetic waves as well as an obstacle that diffracts plane waves striking it transversely. Ideal guided modes exponentially confined to a slab are inherently nonrobust objects, tending to become leaky, or to radiate energy, under perturbations of the structure or wavevector. The leaky modes are manifest as high-amplitude resonant fields in the structure that are excited during the scattering of a plane wave. One may think of this phenomenon loosely as the resonant interaction between guided modes of the slab and plane waves originating from exterior sources. This interaction generates sharp anomalies in the transmittance, that is, in the fraction of energy of a plane wave transmitted across the slab as a function of frequency and wavevector.
In this work, we analyze a specific feature of transmission resonances for two-dimensional lossless periodic structures (Fig. 1) that results from perturbation of the wavenumber from that of a true (exponentially confined to the structure) guided mode. Graphs of transmittance vs. frequency and wavenumber parallel to the slab typically exhibit a sharp peak and dip near the parameters of the guided mode. Often in computations these extreme points appear to reach 100% and 0% transmission, which means that, between two closely spaced frequencies, the slab transitions from being completely transparent to being completely opaque (Fig. 2). The main result, presented in Theorem 12, is a proof that, if the slab is symmetric with respect to a line parallel to it (the -axis in Fig. 1), then these extreme values are in fact attained. Subject to technical conditions discussed later on, it can be paraphrased like this:
Theorem. Consider a two-dimensional lossless periodic slab that is symmetric about a line parallel to it. If the slab admits a guided mode at an isolated wavenumber-frequency pair , then total transmission and total reflection are achieved at nearby frequencies whose difference tends to zero as tends to . The loci in real -space of total transmission and total reflection are real analytic curves that intersect tangentially at .
In this Theorem, it is important that the slab admits a guided mode at an isolated pair . The frequency is above cutoff, meaning that it lies above the light cone in the first Brillouin zone of wavenumbers and therefore in the regime of scattering states (Fig. 3). Perturbing from destroys the guided mode and causes resonant scattering of plane waves at frequencies near . In the literature, one encounters these nonrobust guided modes at , that is, they are standing waves. Although we are not aware of truly traveling guided modes () above cutoff in periodic photonic slabs, we believe that they should exist in anisotropic structures.
Resonant transmission anomalies are well known in a wide variety of applications in electromagnetics and other instances of wave propagation, and a veritable plenitude of models and techniques has been developed for describing and predicting them [7, 2, 6, 8]. The causes of anomalies are manifold and include Fabry-Perot resonance and Wood anomalies near cutoff frequencies of Rayleigh-Bloch diffracted waves. The present study addresses the specific resonant phenomenon associated with the interaction of plane waves with a guided mode of a slab structure.
In our point of view, one begins with the equations of electromagnetism (or acoustics, etc.), admitting no phenomenological assumptions that cannot be proved from them, and seeks to provide rigorous theorems on the phenomenon of resonant scattering. A rigorous asymptotic formula for transmission anomalies in the case of perturbation of the angle of incidence (or Bloch wavevector) has been obtained through singular complex analytic perturbation of the scattering problem about the frequency and wavenumber of the guided mode for two-dimensional structures [16, 12, 13], and the analysis in this paper is based on that work. The essential new result is Theorem 12 on total reflection and transmission. Previous analyticity results were based on boundary-integral formulations of the scattering problem, which are suitable for piecewise constant scatterers. Here, we deal with general positive, coercive, bounded coefficients and thus give self-contained proofs of analyticity of a scattering operator based on a variational formulation of the scattering problem.
There are interesting open questions concerning the detailed nature of transmission resonances. In passing from two-dimensional slabs (with one direction of periodicity) to three-dimensional slabs (with two directions of periodicity), both the additional dimension of the wavevector parallel to the slab as well as various modes of polarization of the incident field impart considerable complexity to the guided-mode structure of the slab and its interaction with plane waves. The role of structural perturbations is a mechanism for initiating coupling between guided modes and radiation [4, §4.4] that also deserves a rigorous mathematical treatment. A practical understanding of the correspondence between structural parameters and salient features of transmission anomalies, such as central frequency and width, would be valuable in applications.
The main theorem is proved in Section 3 in the simplest case in which the transmittance graph exhibits a single sharp peak and dip. The proof rests on the complex analyticity with respect to frequency and wavenumber inherent in the problem of scattering of harmonic fields by a periodic slab. The framework for our analysis is the variational (or weak-PDE) formulation of the scattering problem, which is reviewed in Section 2. Section 4 deals briefly with non-generic cases in which degenerate or multiple anomalies emanate from a single guided-mode frequency. A number of graphs of transmittance in the generic and non-generic cases are shown in Section 5.
2 Background: Scattering and guided modes
Readers familiar with variational formulations of scattering problems can easily skim this section and proceed to Section 3, which contains the main result.
A two-dimensional periodic dielectric slab (Fig. 1) is characterized by two coefficients and , , that are periodic in the -direction and constant outside of a strip parallel to the axis. We take these coefficients to be bounded from below and above by positive numbers:
|(1)|
Physically, the structure is three-dimensional but invariant in the -direction. The Maxwell system for -independent electromagnetic fields in such a structure decouples into two polarizations and simplifies to the scalar wave equation for the out-of-plane components of the field and the field independently. We will consider harmonic fields, whose circular frequency will always be taken to be positive. Given a frequency , plane waves and guided modes are characterized by their propagation constant in the direction parallel to the slab. The component of a harmonic -polarized field with propagation constant is of the pseudoperiodic form
in which the periodic factor satisfies the equation
|(2)|
with . The number can be restricted to lie in the Brillouin zone . As long as the quantities
|(3)|
are nonzero for all integers , the general solution of this equation admits a Fourier expansion on each side of the slab,
|(4)|
For real and , the square root is chosen with a branch cut on the negative imaginary axis, and the sign is taken such that if and if .
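For orientation only, a generic Rayleigh expansion of a κ-pseudoperiodic field outside a slab of period 2π has the following shape; the symbols and normalization here are assumptions for illustration and need not coincide with those of equations (2)–(4).

```latex
% Generic illustration (assumed notation/normalization, not equations (2)-(4)):
% a kappa-pseudoperiodic solution of the Helmholtz equation outside the slab.
u(x,z) \;=\; \sum_{m\in\mathbb{Z}}
   \left( a_m^{\pm}\, e^{-i\nu_m z} + b_m^{\pm}\, e^{\,i\nu_m z} \right)
   e^{\,i(m+\kappa)x},
\qquad
\nu_m \;=\; \sqrt{\varepsilon\mu\,\omega^{2}-(m+\kappa)^{2}},
\qquad \pm z > z_0 .
```

With the branch of the square root chosen as described above, real values of ν_m correspond to propagating orders and values with positive imaginary part to evanescent ones.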
2.1 Scattering and guided modes in periodic slabs
The theory of guided modes underlies the analysis of resonance in Section 3. We present the pertinent elements of this theory here and refer the reader to more in-depth discussions in [1, 13].
In the problem of scattering of plane waves for real , one takes in (4) for the infinite set of such that to exclude fields that grow exponentially as . The exponentially decaying Fourier harmonics are known as the evanescent diffraction orders. The finitely many propagating diffractive orders () express the sum of the incident plane waves and the scattered field far from the slab. In view of the factor and the convention , we see that and are the coefficients of inward traveling plane waves and and are the coefficients of outward traveling waves. The linear orders correspond to “grazing incidence”, and will not play a role in the present study.
If for a given pair , for all , the numbers can be continued as analytic functions in a complex neighborhood of . The following outgoing condition is central to the mathematical formulation of the scattering problem and the definition of generalized guided modes.
Condition 1 (Outgoing Condition).
A pseudo-periodic function is said to satisfy the outgoing condition for the complex pair , with if there exist a real number and complex coefficients such that
We will be concerned with the case of exactly one propagating harmonic . This regime corresponds to real pairs that lie in the diamond
shown in Fig. 3. The numbers are analytic functions of in a complex neighborhood of ; thus, .
The problem of scattering of plane waves by a periodic slab is the following.
Problem 2 (Scattering problem).
Find a function such that
|(5)|
in which .
The weak formulation of the scattering problem is posed in the truncated period
and makes use of the Dirichlet-to-Neumann map on the right and left boundaries that characterizes outgoing fields. This is a bounded linear operator from to defined as follows. For any , let be the Fourier coefficients of , that is, . Then
|(6)|
This operator has the property that
where “on ” refers to the traces of and its normal derivative on . In the periodic Sobolev space
in which evaluation on the boundaries of is in the sense of the trace, define the forms
|(7)|
Remark on notation. The forms and depend on and through . In the sequel, the dependence on and of certain objects such as , , , , as well as the operators and defined below, will be often suppressed to simplify notation.
Problem 3 (Scattering problem, variational form).
Given a pair , find a function such that
|(8)|
By definition, a generalized guided mode is a nonvanishing solution of Problem 3 with set to zero. Such a solution possesses no incident field and therefore satisfies the outgoing Condition 1. If is a real pair, then all propagating harmonics in the Fourier expansion (4) of the mode must vanish and the field is therefore a true guided mode, which falls off exponentially with distance from the slab. This can be proved by integration by parts, which yields a balance of incoming and outgoing energy flux. In the case of one propagating harmonic, , this means
Guided modes with are fundamental in the theory of leaky modes [9, 17, 10, 5] and always are exponentially growing as and decaying in time. The following theorem is proved in [15, Thm. 5.2] and [13, Thm. 15].
Theorem 4 (Generalized modes).
Let be a generalized guided mode with (and ). Then ; and as if and only if is real.
It is convenient to write the form as
in which and . Both and are bounded forms in . If we take to be a sufficiently small complex neighborhood of the diamond , is coercive for all in . These forms are represented by bounded operators and in :
If we denote by the unique element of such that , the scattering problem becomes for all , or
|(9)|
Because of the coercivity of and the compact embedding of into , we have
Lemma 5.
The operator has a bounded inverse and is compact.
By means of the Fredholm alternative one can demonstrate that, even if a slab admits a guided mode for a given real pair , the problem of scattering of a plane wave always has a solution. Proofs are given in [1, Thm. 3.1] and [13, Thm. 9]; the idea is essentially that plane waves contain only propagating harmonics whereas guided modes contain only evanescent harmonics and are therefore orthogonal to any plane wave source field.
Theorem 6.
For each , the scattering Problem 3 with a plane-wave source field has at least one solution and the set of solutions is an affine space of finite dimension equal to the dimension of the space of generalized guided modes. The far-field behavior of all solutions is identical.
With the notation
equation (9) can be written as
|(10)|
in which is the identity plus a compact operator. A generalized guided mode is a nontrivial solution of the homogeneous problem, in which is set to zero:
|(11)|
It can be proved that, if and are large enough in the structure and symmetric in the variable (i.e., about the -axis normal to the slab), there exists a guided mode at some point in the diamond [1, §5.1]. Such a guided mode is antisymmetric about the -axis and its existence is proven through the decomposition of the operator into its action on the subspaces of functions that are symmetric or antisymmetric with respect to . The symmetry of is broken when is perturbed from zero with the consequent vanishing of the guided mode.
The frequency should be thought of as an embedded eigenvalue of the pseudo-periodic Helmholtz operator in the strip at which dissolves into the continuous spectrum as is perturbed. It is the nonrobust nature of this eigenvalue that is responsible for the resonant scattering and transmission anomalies that we study in this paper.
2.2 Analyticity
Analysis of scattering resonance near the parameters of a guided mode rests upon the analyticity of the operator . The proof of analyticity is in the Appendix.
Lemma 7.
The operators , , and are analytic with respect to and , as long as for all , which holds in particular for .
Assume that has a unique and simple eigenvalue contained in a fixed disk centered at 0 in the complex -plane for all in a complex neighborhood of . (In fact, that is not necessary for the present discussion.) It will be convenient to work with for a nonzero complex constant to be specified later.
Given a source field that is analytic at , consider the scattering problem
|(12)|
The analyticity of the field is proved in [13, §5.2]. It analytically connects scattering states for to generalized guided modes on the dispersion relation near .
Theorem 8.
The simple eigenvalue is analytic at , and, for any source field that is analytic at , the solution is analytic at .
The analytic connection between scattering states and guided modes, introduced in , is achieved as follows. One analytically resolves the identity operator on through the Riesz projections,
|(13)|
|(14)|
where is a sufficiently small positively oriented circle centered at 0 in the complex -plane. The image of is the one-dimensional eigenspace of corresponding to the eigenvalue if this eigenvalue lies within the circle . This eigenspace is spanned by the analytic eigenvector
in which is an eigenvector of corresponding to . The resolution (13,14) provides an analytic decomposition of the source field near as , with
|(15)|
Now, letting denote the restriction of to the range of , one observes that the field
|(16)|
solves :
The Riesz projection naturally decomposes the source and solution fields into “resonant” and “nonresonant” parts via (15,16).
3 Resonant transmission for a symmetric slab
From now on, we will assume that the structure is symmetric about the -axis. Thus, in addition to the conditions (1), we assume also that and for all . We also assume that is a simple (necessarily analytic) eigenvalue in a neighborhood of and that .
3.1 The reduced scattering matrix
For , consider the problem of scattering of the field incident upon the slab on the left. By Theorem 6, a solution exists and the difference of any two solutions is evanescent; in fact, near the solution is unique if and only if . Thus the propagating components of the periodic part of the solution are unique, resulting in well-defined complex reflection and transmission coefficients and ,
Because of the symmetry of the structure with respect to , an incident field from the right produces identical reflection and transmission coefficients. Thus the reduced scattering matrix for the structure for is
which gives the outward propagating components in terms of the inward propagating components in the expression (4) via .
In terms of the transmission coefficient , we define the transmittance to be the fraction of energy flux that is transmitted across the slab. The transmittance is the quantity , which lies in the interval .
Let us now take the incident field from the left to be , which results in a reflected field for and a transmitted field for , with coefficients given by
|(17)|
By the structural symmetry, an incident field from the right results in a reflected field for and for , with coefficients also given by (17). The utility of working with and is that they are analytic, whereas and are not analytic at points where .
Lemma 9.
The coefficients and are analytic in and .
Proof.
The analyticity of the incident field implies the analyticity of the source field in the equation and hence, by Theorem 8, also the analyticity of the solution field in . The coefficients and of are given by
and since is analytic and are bounded linear functionals on , both and are analytic. ∎
This lemma provides a representation of the scattering matrix as the ratio of analytic functions in a complex neighborhood of , except at points of the dispersion relation ,
|(18)|
Assuming that at an isolated point of , we see that, in a real punctured neighborhood of , is a complex-valued real-analytic function. In fact, for real , is unitary, a standard fact that is shown by integration by parts:
|(19)|
This implies, in particular, that three analytic functions vanish at :
which is the feature that leads to the sensitive behavior of the transmission and reflection coefficients and near .
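For orientation, written in generic symbols that are assumed here for illustration (r and t for the reflection and transmission coefficients of the single propagating order) rather than taken from equations (17)–(19): for a lossless, mirror-symmetric slab, unitarity of the 2×2 reduced scattering matrix amounts to

```latex
% Illustrative identities in assumed notation (r, t), not the paper's symbols:
% unitarity of S = [[r, t], [t, r]] for real (kappa, omega).
|r(\kappa,\omega)|^{2} + |t(\kappa,\omega)|^{2} = 1,
\qquad
r(\kappa,\omega)\,\overline{t(\kappa,\omega)}
 + t(\kappa,\omega)\,\overline{r(\kappa,\omega)} = 0 .
```

In particular, total transmission (|t| = 1) forces r = 0 and total reflection (|r| = 1) forces t = 0, which is why the anomalies can be tracked through the zero sets of analytic functions.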
Now, each of and is a complex function of two complex variables, and a
In this section, we analyze the generic case
|(20)|
The Weierstraß Preparation Theorem tells us that the zero-sets of , , and are graphs of analytic functions of near . Let and . With the appropriate choice of in , the Theorem yields the following factorizations:
|(21)|
in which and either or ; the symbols , , etc., refer to constants. One shows that the same unitary number appears in the second factors of both and by using the second expression in (19). The zero-set of the first factor of each in each of these expressions coincides with the zero set of the corresponding function , , or near .
Under these conditions, one can deduce several properties of the coefficients; see and [13, Theorem 10] for proofs.
Lemma 10.
The following relations hold among the coefficients in the form (21):
(i) ,
(ii) ,
(iii) ,
(iv) .
When , the coefficient necessarily vanishes because of symmetry of the dispersion relation in . Whether can be realized for dielectric slabs remains an open problem. Nevertheless, we do not assume . We also assume that , which is sufficient to guarantee that is an isolated point of in .
The proof of the following technical lemma is in the Appendix.
Lemma 11.
Under the assumptions (20) and , one of the following alternatives is satisfied:
(i) and are distinct real numbers;
(ii) and either or .
3.2 Resonant transmission
We now present and prove the main theorem on total reflection and transmission by symmetric periodic slabs. The content of the three parts of Theorem 12 can be paraphrased as follows.
(i) If the coefficients and of the quadratic part of the expansions (22,23) are distinct numbers (case (i) of Lemma 11), then all of the coefficients and of both expansions turn out to be real numbers. The consequence of this is that and vanish along real-analytic curves in given by
|(22)|
|(23)|
Because , these curves are distinct and intersect each other tangentially at with one of them remaining above the other. They give the loci of total transmission ( when ) and total reflection (). Thus we are presenting a proof of total transmission and reflection at two nearby frequencies near , whose difference tends to zero as tends to . This establishes rigorously the numerically observed transmission spikes in [16, 13]. The zero sets of and are depicted in Fig. 3. The scattering matrix is not continuous at because takes on all values between and in each neighborhood of . However, as a function of at , is in fact continuous.
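As a hedged illustration of the quadratic tangency just described (the coefficient names below are assumptions, not the symbols appearing in (22)–(23)), the two loci behave near the guided-mode pair like

```latex
% Illustration with assumed coefficient names l1, r2, t2 (r2 != t2):
\omega_{\text{total refl.}}(\kappa)
   = \omega_0 + \ell_1(\kappa-\kappa_0) + r_2(\kappa-\kappa_0)^{2}
     + O\!\left((\kappa-\kappa_0)^{3}\right),
\qquad
\omega_{\text{total transm.}}(\kappa)
   = \omega_0 + \ell_1(\kappa-\kappa_0) + t_2(\kappa-\kappa_0)^{2}
     + O\!\left((\kappa-\kappa_0)^{3}\right).
```

The two curves share the same linear part, touch only at the guided-mode pair, and separate quadratically, so the gap between the frequencies of total reflection and total transmission shrinks like a constant times the square of the wavenumber perturbation.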
(ii) The result of part (i) of Theorem 12 provides a local representation of the zero sets of and about as the graphs of the real analytic functions (22,23) in . These locally defined functions can be extended to real-analytic functions of in an interval around up until their graphs either intersect the boundary of the diamond or attain an infinite slope. It is not known which of these possibilities actually occur in theory.
(iii) In case (ii) of Lemma 11, we prove that the transmittance is continuous at . We are aware of no examples of this case but are not able to rule it out theoretically. This result tells us that, if there is in fact a resonant transmission spike, then case (i) of the Lemma must hold and thus this spike attains the extreme values and by part (i) of Theorem 12. In short, either is continuous at or it attains and in every neighborhood of .
Theorem 12 (Total transmission and reflection).
Given a two-dimensional lossless periodic slab satisfying (1) that is symmetric about a line parallel to it, let be a wavenumber-frequency pair in the region of exactly one propagating harmonic at which the slab admits a guided mode, that is . Assume in addition the generic condition (20) and that in the expansion of in (21). Then either the transmittance is continuous at or it attains the magnitudes of and on two distinct real-analytic curves that intersect quadratically at . Specifically,
(i) If , then and are real for all and both and can be extended to continuous functions of in a real neighborhood of with values and at .
(ii) If , let denote either or and let denote the corresponding function from the pair (22,23). Then can be extended to a real analytic function on an interval containing such that the graph of is in and for each , the limit exists and either is on the boundary of or .
(iii) If , then and can be extended to continuous functions in a real neighborhood of with values and at .
Proof.
(i) Assume . Recall from Lemma 10 that . Assuming for , we will show that . For subject to the relation ,
Because (see (19)), it follows that , and thus
Letting yields
which implies that . We conclude by induction that for all . The proof that for all is analogous.
To prove the second statement, one sets and observes that the ratios and have limiting values of and , respectively, as , or .
(ii) Define the set
in which is the graph of , and define the numbers
By virtue of the function (22), which belongs to , . Standard arguments show that any two functions from coincide on the intersection of their domains, and one obtains thereby a maximal extension of (22) with domain . We now show that exists. Set and . Because of the continuity of , the segment in is in the closure of the graph of , on which vanishes. Thus . Moreover, for each , there is a sequence of points with and from which we infer that and hence that and . If is nonempty, then must vanish in , which is untenable in view of the assumption that . This proves that so that exists. If and , the implicit function theorem provides an element of with , which is not compatible with the definition of . Analogous arguments apply to the endpoint and to the function .
(iii) If , then by Lemma 11, or . Keeping in mind that and and restricting to , | https://www.arxiv-vanity.com/papers/1105.2906/ |