Several semantic similarity measures have been applied to gene products annotated with Gene Ontology terms, providing a basis for their functional comparison. However, it is still unclear which is the best approach to semantic similarity in this context, since there is no conclusive evaluation of the various measures. A further open issue is whether electronic annotations should be used in semantic similarity calculations. We conducted a systematic evaluation of GO-based semantic similarity measures, using the relationship with sequence similarity as a means to quantify their performance, and assessed the influence of electronic annotations by testing the measures in the presence and absence of these annotations. We verified that the relationship between semantic and sequence similarity is not linear, but can be well approximated by a rescaled Normal cumulative distribution function. Given that the majority of the semantic similarity measures capture the same behaviour but differ in resolution, we used the latter as the main criterion of evaluation. This work has provided a basis for the comparison of several semantic similarity measures, and can aid researchers in choosing the most adequate measure for their work. We found that the hybrid simGIC was the measure with the best overall performance, followed by Resnik's measure using a best-match average combination approach. We also found that the average and maximum combination approaches are problematic, since both are inherently influenced by the number of terms being combined. We suspect that there may be a direct influence of data circularity in the behaviour of the results including electronic annotations, as a result of functional inference from sequence similarity.

One of the main contributions of bioinformatics to molecular biology has been the introduction of ontologies for genome annotation. These circumvent the shortcomings of natural language descriptions (namely ambiguity, subjectivity and lack of structure) and consequently enable automated annotation and automated reasoning over annotations. Prominent among these is the Gene Ontology (GO), which is dedicated to the functional annotation of gene products in a cellular context and in a species-independent manner. It comprises three orthogonal ontologies (GO types) organised as directed acyclic graphs (DAGs), which account for distinct aspects of gene products: molecular function, biological process and cellular component. The relationships between GO terms can be either is-a (parent-child) or part-of (part-whole) relationships. Among other applications, the use of ontologies such as GO enables the comparison of gene products based on their annotations, so that functional relationships and common characteristics can be inferred beyond the traditional sequence-based approaches. This requires a semantic similarity measure to compare the terms to which gene products are annotated. There are two main approaches to measuring semantic similarity: edge-based measures, which assume a term's specificity can be directly inferred from its depth in the graph; and information content (IC)-based measures, which estimate a term's specificity from its usage frequency within a given corpus.
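To make the IC-based approach concrete, here is a minimal sketch of the standard corpus-based IC estimate; this is our own illustration, not code from the paper, and the function and variable names are ours. An annotation to a term also counts towards all of its ancestors, following the GO "true path rule":

```python
from collections import Counter
import math

def information_content(annotations, ancestors):
    """Corpus-based IC estimate for GO terms.

    annotations: list of GO term ids, one entry per annotation in the corpus.
    ancestors:   dict mapping each term to the set of all its ancestor terms,
                 so that an annotation to a term also counts its ancestors.
    """
    counts = Counter()
    for term in annotations:
        counts[term] += 1
        for anc in ancestors.get(term, ()):
            counts[anc] += 1
    n = len(annotations)  # N: total annotations within this GO type
    # Rarer (more specific) terms receive a higher information content.
    return {t: -math.log(c / n) for t, c in counts.items()}
```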
In the case of GO (as in many other biological ontologies), IC-based measures are the more adequate, because specificity is poorly related to depth in the graph. For instance, the terms binding and translation regulator activity are at the same depth, but the latter is both semantically more complex and biologically more specific.

Lord et al. [3, 4] were the first to apply GO-based semantic similarity to compare gene products, testing three IC-based measures: Resnik's, Lin's, and Jiang and Conrath's. These three measures, originally developed for WordNet, compare terms by finding their lowest common ancestor (LCA). However, the definition of LCA is not straightforward in GO, since GO terms can have several disjoint common ancestors. Lord et al. [3, 4] addressed this issue by using only the most informative common ancestor (MICA), while later, Couto et al. considered that all disjoint common ancestors should be taken into account. A more critical issue when applying these measures to gene products is that they compare single terms, whereas gene products usually have several terms (within each GO type). Therefore, obtaining a single similarity score requires combining the semantic similarities of the gene products' terms (of the same GO type). Three distinct approaches have been proposed for this combination: Lord et al. [3, 4] used an arithmetic average of the term similarities, pairing all terms of the first gene product with all terms of the second one; Sevilla et al. used only the maximum similarity between all term pairs; and Couto et al., Schlicker et al. and Azuaje et al. developed composite (best-match) averages, where each term of the first gene product is paired only with the most similar term of the second one, and vice-versa.

From a biological point of view, there are limitations to both the average and maximum approaches. The average approach is inaccurate for gene products with several shared or similar terms. For instance, two functionally identical gene products both having the terms antioxidant activity and binding have a similarity of 50% rather than the expected 100%, because similarities are calculated between all possible term pairs of the two gene products. By contrast, the maximum approach is indifferent to the number of unrelated terms between gene products. For instance, a gene product with the terms antioxidant activity and binding and a second gene product with only one of those terms would have a similarity of 100%, when functionally they are clearly not equal. The best-match average approach does not suffer from the above limitations, and accounts for both similar and dissimilar terms, as would be expected biologically.

A different approach to the issue of gene products having more than one term (within each GO type) is to use a semantic similarity measure that compares sets of terms rather than single terms, thus avoiding the need to combine similarities. Since the set of GO terms of a given type to which a gene product is annotated can be seen as a sub-graph of that GO type, a graph comparison measure can be used for this purpose. Gentleman was the first to explore this possibility by developing the simUI measure, which, given the annotation graphs for two gene products, defines semantic similarity as the ratio between the number of GO terms in the intersection of those graphs and the number of GO terms in their union.
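In their usual formulations, Resnik's measure scores a term pair by the IC of its MICA, Lin's by 2·IC(MICA)/(IC(t1)+IC(t2)), and Jiang and Conrath's by a transformation of IC(t1)+IC(t2)−2·IC(MICA). The three combination strategies discussed above can then be sketched as follows; this is our own illustrative code, not the authors' implementation, and `term_sim` stands for any such term similarity:

```python
def combine(term_sim, terms_a, terms_b, how="bma"):
    """Combine pairwise term similarities into one protein-level score."""
    sims = {(a, b): term_sim(a, b) for a in terms_a for b in terms_b}
    if how == "average":   # all-against-all average
        return sum(sims.values()) / len(sims)
    if how == "maximum":   # single best-matching pair
        return max(sims.values())
    if how == "bma":       # best-match average, taken in both directions
        best_a = [max(sims[a, b] for b in terms_b) for a in terms_a]
        best_b = [max(sims[a, b] for a in terms_a) for b in terms_b]
        return (sum(best_a) / len(best_a) + sum(best_b) / len(best_b)) / 2
    raise ValueError(how)

# Toy check of the artifact described above: two identical proteins with
# two unrelated terms each score 0.5 under "average" but 1.0 under "bma".
toy_sim = lambda a, b: 1.0 if a == b else 0.0
terms = ["antioxidant activity", "binding"]
assert combine(toy_sim, terms, terms, "average") == 0.5
assert combine(toy_sim, terms, terms, "bma") == 1.0
```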
Despite accounting for both similar and dissimilar terms in a simpler way than finding matching term pairs, simUI weights all terms equally, and therefore does not account for term specificity. To overcome this limitation, we developed the simGIC measure, which is similar to simUI, but where each term is weighted by its information content.

Applications of GO-based semantic similarity have been numerous, and include such diverse subjects as: protein interaction prediction, validation of function prediction, network prediction, prediction of cellular localisation, automatic annotation validation, integration of semantic search, pathway modelling, and improving microarray data quality. However, two crucial questions still stand: which type(s) of annotations should be trusted for semantic similarity calculations, and which semantic similarity measure performs better with GO?

The first question is central to current molecular biology. On one hand, it has become clear with the advent of automated sequencing that experimental work cannot be the sole source for gene product annotation, if the gap between sequence data and functional information is to be bridged. On the other hand, the increasingly important role of bioinformatics in annotation has led to a growing number of annotations extrapolated from sequence similarity, which are prone to errors [24, 25]. Indeed, it has been suggested that as much as 30% of the annotations corresponding to detailed characteristics can be erroneous as a result of inferring annotations from sequence similarity, particularly from gene products whose annotations had themselves already been extrapolated [26, 27]. Despite this, the precision of automated annotation methods has been increasing steadily (up to 91-100% reported), and as they account for a growing portion of the annotation space (currently over 97% of all Uniprot GO annotations), the cost of ignoring them becomes heavier.

As for which semantic similarity measure is more suitable for GO, it raises another question: how does one evaluate the performance of semantic similarity measures? Authors have used correlations with sequence similarity [3, 4], with Pfam similarity, with gene co-expression, and with protein interactions to evaluate their measures; some discarding electronic (and other) annotations [3, 4, 11] and others using all annotations. This profusion of evaluation strategies, with new results not being directly comparable to previous ones, hinders the extraction of any global conclusion about the measures' performances.

In this work, we perform a systematic evaluation of several semantic similarity measures. The aim of this evaluation was to assess, given a set of gene products and a corpus of GO annotations, how well each semantic similarity measure captures the similarity in annotations between gene products. As there is no internal means of making this assessment, an external source of data, correlated with the annotations, must be used. We opted for sequence similarity, since it is well established to be related to function and there is some insight into that relation, namely: general functional characteristics are conserved at relatively low levels of sequence similarity (30%), while specific functional characteristics are poorly conserved even at high levels (70%).
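The two graph-based measures introduced above admit a compact sketch; again this is our own illustration, assuming each protein is represented by its annotation sub-graph (the set of its GO terms extended with all their ancestors) and that `ic` is a dict such as the one computed earlier:

```python
def sim_ui(graph_a, graph_b):
    """simUI: ratio of shared to total terms in the two annotation sub-graphs."""
    return len(graph_a & graph_b) / len(graph_a | graph_b)

def sim_gic(graph_a, graph_b, ic):
    """simGIC: like simUI, but each term is weighted by its information content."""
    shared = sum(ic[t] for t in graph_a & graph_b)
    combined = sum(ic[t] for t in graph_a | graph_b)
    return shared / combined
```

Both are computed in a single step over whole term sets, which is why no term-matching or combination approach is needed.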
Because such insight into the relation with sequence similarity exists only for molecular function, and because the other GO types have been shown to have a looser correlation with sequence similarity [3, 4], only the molecular function GO type was used. To summarise our strategy: we evaluate the measures by assessing how well they capture the expected relationship between functional and sequence similarity.

To evaluate the semantic similarity measures, we used two distinct sequence similarity measures: log reciprocal BLAST score (LRBS) and relative reciprocal BLAST score (RRBS). The former is similar to the sequence similarity measure used previously by Lord [3, 4], but compensates for the fact that BLAST scores are not symmetric, while the latter is analogous to the sequence identity percentage (which has recently been suggested as a good indicator of functional similarity) but takes amino acid substitutions into account. As RRBS is not directly affected by sequence length (unlike LRBS), we can assess whether the dependency on sequence length affects the outcome of the evaluation. A total of fourteen semantic similarity measures were tested: Resnik's, Lin's, and Jiang and Conrath's term similarity measures, each with the average, maximum, best-match average (BMA), and BMA plus GraSM approaches; plus the graph-based simUI and simGIC measures. We evaluated the influence of using electronic annotations by testing the measures on two distinct datasets: one with all annotations (full dataset) and one without electronic annotations (non-electronic dataset).

The raw semantic similarity vs. sequence similarity results were averaged over intervals with a fixed number of points (as detailed in the Methods section), so that the global behaviour of the results could be perceived. Upon observing the averaged results, it was clear that their behaviour was not linear (as is visible in Figure 1), regardless of the dataset, sequence similarity measure or semantic similarity measure used. What is more, within each dataset and sequence similarity measure, the majority of the semantic similarity measures were similar in behaviour, showing the same patterns with regard to sequence similarity. Therefore, it was necessary to find a type of function that followed the overall behaviour of the results closely, and that could be fitted to all semantic similarity measures (for a given dataset and sequence similarity measure), so that we could quantify the differences between them. We chose rescaled Normal cumulative distribution functions (NCDFs), which correspond to error functions, because in addition to fulfilling this requirement, the influence of their parameters on the shape of the resulting curve is intuitive, and they lend themselves to a simple probabilistic interpretation of our results. The results from the full dataset, where a bimodal-like behaviour was evident (visible as a second increase in semantic similarity after a first plateau had been reached), were modelled by two additive NCDFs, whereas those from the non-electronic dataset required only a single NCDF (Figure 1); in both cases, scale (multiplicative) and translation (additive) parameters were applied to fit the range of the results (as detailed in the Methods section).
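The fitting procedure can be reproduced with standard tools. The sketch below is ours, using synthetic data in place of the paper's averaged results, and fits the two-NCDF model with scipy; the parameter values are illustrative starting guesses, not the paper's fitted parameters:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def unimodal(x, a, b, mu, sigma):
    # one rescaled Normal CDF: translation a, scale b
    return a + b * norm.cdf(x, loc=mu, scale=sigma)

def bimodal(x, a, b, mu1, s1, c, mu2, s2):
    # two additive rescaled Normal CDFs (full-dataset behaviour)
    return a + b * norm.cdf(x, mu1, s1) + c * norm.cdf(x, mu2, s2)

# Synthetic stand-in for the averaged results (x: sequence similarity,
# y: averaged semantic similarity).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 120)
y = bimodal(x, 0.1, 0.4, 2.0, 1.0, 0.3, 6.0, 1.0) + rng.normal(0, 0.01, x.size)

params, _ = curve_fit(bimodal, x, y, p0=[0.1, 0.4, 2.0, 1.0, 0.3, 6.0, 1.0])
a, b, mu1, s1, c, mu2, s2 = params
bias = a            # translation parameter
resolution = b + c  # sum of the two scale factors, as used in the paper
```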
Figure 1: Typical behaviour of semantic similarity measures. Semantic similarity vs. sequence similarity results using Resnik's measure with the BMA approach: A - using the LRBS sequence similarity metric; B - using the RRBS metric; in red - full dataset results (points) and modelling curve (line) composed of two additive Normal cumulative distribution functions; in green - non-electronic dataset results (points) and modelling curve (line) composed of a single Normal cumulative distribution function. The results for the full dataset show a bimodal-like behaviour: there is a second increase in semantic similarity after a first plateau has been reached, which is more pronounced in A but also visible in B. This behaviour is absent in the non-electronic dataset.

Confirming their visible similarity, the majority of the semantic similarity measures (the exceptions will be addressed individually in the case-by-case discussion) have modelling functions with identical shape parameters (mean and standard deviation of the NCDF), and differ mostly in range (Tables 1 and 2). This means that the majority of the measures capture the same pattern (or variations thereof) along the sequence similarity scale, but tend to translate that pattern into different ranges of the semantic similarity scale. It should be noted that this difference in range occurs only in the averaged semantic similarity results, and not between the actual semantic similarity measures, which all range over a 0-1 scale (and the whole range of that scale is covered by the raw results). Therefore, the range of the results should be interpreted as a tendency of the measure, rather than a scale limit. This tendency is composed of two distinct properties: bias, i.e. the tendency to yield higher semantic similarity values, which is measured by the translation parameter of the modelling function; and resolution, i.e. the relative intensity with which (on average) variations in the sequence similarity scale are translated into the semantic similarity scale, which is measured by the scale parameter of the modelling function (or the sum of the two scale parameters in bimodal functions). A measure with a higher bias than another will likely yield a higher value of semantic similarity for a given value of sequence similarity, whereas a measure with a higher resolution will likely yield a greater variation in semantic similarity for a given interval of sequence similarity.
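In symbols, and using our own notation for the fitted function (Φ is the standard Normal CDF), bias and resolution read directly off the model parameters:

```latex
\[
\hat{y}(x) = \underbrace{a}_{\text{bias}}
  + b\,\Phi\!\left(\frac{x-\mu_1}{\sigma_1}\right)
  + c\,\Phi\!\left(\frac{x-\mu_2}{\sigma_2}\right),
\qquad \text{resolution} = b + c,
\]
```

with the unimodal case obtained by dropping the second NCDF term (resolution = b).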
Table 1: For each semantic similarity measure in the full dataset, and with each of the sequence similarity metrics (LRBS and RRBS), the mean and standard deviation parameters of the two additive Normal cumulative distribution functions (NCDFs) used to model it are shown. Also shown is the global resolution of the measure, which corresponds to the sum of the scale factors applied to each of the Normal functions. Although there is some variability in the Normal parameters (particularly in the results with the RRBS sequence similarity metric), most of that variability is due to the sensitivity of the modelling method, as the similarity in behaviour between the measures is evident (Figures 3, 4 and 5), with the exception of the average approach. As the main criterion for distinguishing between the measures is their resolution, the highest resolutions (for simGIC and Resnik's measure with the GraSM approach) are highlighted in bold. (Mean1: mean of the first NCDF; Stdev1: standard deviation of the first NCDF; Mean2: mean of the second NCDF; Stdev2: standard deviation of the second NCDF; Res: resolution of each measure; LRBS: log reciprocal BLAST score; RRBS: relative reciprocal BLAST score.)

Table 2: For each semantic similarity measure in the non-electronic dataset, and with each of the sequence similarity metrics (LRBS and RRBS), the mean and standard deviation parameters of the Normal cumulative distribution function used to model it are shown, as well as the global resolution of the measure. The variability in the Normal parameters with RRBS arises because the fit is somewhat artificial, and does not reflect the fact that the behaviour of the measures is visibly isomorphic. The highest resolutions, corresponding to simGIC and Resnik's measure with the maximum and BMA approaches, are highlighted in bold. (Mean: mean of the NCDF; Stdev: standard deviation of the NCDF; Res: resolution of each measure; LRBS: log reciprocal BLAST score; RRBS: relative reciprocal BLAST score.)

The goal of our evaluation was to assess how well each semantic similarity measure captures the similarity in annotation between protein pairs. While previous studies have made this type of assessment by measuring linear correlation [3, 9, 10] or analysing a ROC (receiver operating characteristic) curve, neither approach is suitable for our results, because they are neither linear in behaviour nor binary in nature. Since the majority of the measures show identical behaviours, we focused on the differences between them with implications for their performance, and chose resolution as the evaluation criterion. A measure with a higher resolution performs better because it is more likely to distinguish between protein pairs with different levels of sequence similarity than a measure with a lower resolution, which suggests it is more sensitive to differences in the annotations.

Overall, there are two main differences between the results from the full dataset and those from the non-electronic dataset (as can be seen in Figure 1): semantic similarity values are globally lower in the latter than in the former, and the bimodal-like behaviour evident in the former is absent in the latter. The lower semantic similarity values can be explained by the fact that the number of annotations per protein is smaller in the non-electronic than in the full dataset (Figure 2). Because the proteins have fewer terms, they are less likely to have shared or similar terms, and therefore have lower semantic similarity. As for the bimodal-like behaviour in the full dataset, we hypothesise that it is a direct result of data circularity, due to the presence of functional annotations inferred from sequence similarity within the electronic annotations. Because functional inference is predominantly made at high levels of sequence similarity, if there were a visible influence of data circularity in our results, we would expect it to take the form of an abnormal increase in semantic similarity from a given point of the sequence similarity scale onwards. Therefore, the hypothesis of data circularity is consistent with the observed second increase in semantic similarity at high sequence similarity values in the full dataset, and with the absence of that behaviour in the non-electronic dataset (Figure 1).
We also considered the possibility that this behaviour was tied to the distribution of the number of annotations per protein, as there is a peak of annotations per protein consistent with the range of the transition between "modes" in the LRBS results (Figure 2). However, the absence of a corresponding pattern in the RRBS results is an argument against this possibility. It should be noted that whether or not the bimodal-like behaviour is a consequence of data circularity does not affect the validity of our evaluation. While the issue of data circularity can have dire consequences when applying semantic similarity for specific purposes, our evaluation of the semantic similarity measures is in no way based on the assumption that all annotations are correct. We are only assuming that there is a relationship between the annotations and sequence similarity (be it artificially reinforced by data circularity or not), and testing how well each semantic similarity measure captures that relationship.

The differences in the results using the two sequence similarity measures (LRBS and RRBS) can be divided into two categories: shape differences, as reflected by the mean and standard deviation parameters of the modelling NCDFs; and range differences, as reflected by the translation and scale parameters (Tables 1 and 2). The shape differences correspond to differences in the sequence similarity scale, and are likely due to the fact that LRBS is a logarithmic measure whereas RRBS is a linear measure. Indeed, we verified that upon rescaling either sequence similarity measure to the scale of the other, the results from both measures are described by NCDFs with identical shape parameters (data not shown). As for the range differences, they are likely tied to the other key difference between the measures: the fact that LRBS is biased by sequence length whereas RRBS is not. Because of this difference, an increase in RRBS corresponds only to an increase in "actual" sequence similarity, whereas an increase in LRBS can be partially due to an increase in sequence length. Therefore, we would expect semantic similarity to be more strongly related to RRBS than to LRBS, assuming there is no direct correlation between semantic similarity and sequence length. Consistent with this hypothesis, we find that for all measures tested, the resolution is higher with RRBS than with LRBS.

The influence of the bias for sequence length is also visible in the distribution of the average number of annotations per protein (Figure 2): there is a clear increase in annotations per protein at high LRBS values, whereas there is a sharp decrease in annotations per protein at low RRBS values. These differences may be due to the presence of large bifunctional proteins, which are expected to have a greater number of terms (note that each functional aspect of a protein is typically described by several terms). The fraction of these large proteins in each averaged data point is expected to increase along the LRBS scale, which would account for the increasing number of annotations per protein at high LRBS values. Furthermore, alignments between large proteins of low sequence identity will yield relatively high LRBS values but low RRBS values. Such alignments are likely more predominant at lower RRBS values, which accounts for the higher number of annotations per protein at those values.
It should be noted that, in the case of the RRBS results with the non-electronic dataset, the parameters of the modelling functions are not identical between semantic similarity measures (Table 2). This happens because these results do not match the typical NCDF shape (e.g. they have no apparent inflexion point in the majority of the cases), and therefore the mean and standard deviation are not constrained to the range of the results. Because of this, the resolution of the measures could not be obtained from the modelling function, and was instead calculated directly from the results (as detailed in the Methods section). However, after re-scaling them to the LRBS scale, all results followed an NCDF curve with identical mean and standard deviation (data not shown), leading to the conclusion that the differences in shape on the RRBS scale were only apparent.

In the full dataset, the average combination approach differs from all other measures and approaches tested in that it shows a decreasing behaviour at high sequence similarity values (Figure 3). To describe this behaviour, the modelling function for these results required the addition of a negative linear component (as detailed in the Methods section). As we clearly do not expect functional similarity and sequence similarity to be negatively correlated, and this behaviour is exclusive to the average approach, we can only infer that this approach is unable to capture the actual similarity in annotations for proteins with high sequence similarity. The reason behind this behaviour is likely tied to the limitations of the average approach, namely to the fact that it treats proteins as random collections of features. For instance, if two proteins (A and B) have the exact same two terms (t1 and t2), the average approach compares not only the matching term pairs (t1 of A with t1 of B, and t2 of A with t2 of B) but also all the unrelated ones (t1 of A with t2 of B, and t2 of A with t1 of B). The consequence is that the more terms two functionally identical (or similar) proteins have, the less similar the average approach will consider them: for two identical proteins with n terms each, only n of the n² term pairs are matching, so if the unrelated pairs have negligible similarity the average score tends towards 1/n. Consistent with this notion, we find that in the range of values where the average approach shows a decreasing behaviour, there is an inversely proportional increase in the average number of annotations per protein as a function of sequence similarity (Figure 4). Indeed, at high sequence similarity values, the behaviour of the average results is deeply tied to the inverse of the number of annotations per protein. Curiously, we found that if the results with the average approach are compensated for the number of annotations per protein, their behaviour becomes identical to that of the other measures (results not shown). While in the non-electronic dataset the average approach is similar in behaviour to the other approaches (Figure 3), this is likely because the number of annotations per protein is overall smaller in this dataset, and also because it is more uniform over the sequence similarity scale (Figure 2). Despite this, the average approach is also the worst combination approach in this dataset, as it shows the lowest resolution (Table 2).

As for the maximum approach in the full dataset, its low resolution (Table 1) is a consequence of its simplicity.
Because this approach only looks for the most similar terms between two proteins, it is impervious to the number and similarity of any other terms those proteins might have; therefore it is naturally limited in its ability to distinguish protein pairs. In addition, the maximum approach also shows singular behaviours at low sequence similarity values: with the LRBS measure it shows high dispersion, whereas with the RRBS measure it shows a decreasing behaviour (Figure 3). Interestingly, both behaviours are directly related to the distribution of the average number of annotations per protein (Figure 2). This is not unexpected, since the more terms two unrelated (or distantly related) proteins have, the more probable it is that they have a common (or similar) term, and therefore the higher their semantic similarity will be with the maximum approach. In the non-electronic dataset, the limitations of the maximum approach are not visible because the number of annotations per protein is lower in this dataset, with the majority of the proteins having only one annotation (data not shown). Therefore the loss of information from using only one term to compare proteins is negligible, which is why this approach is similar in resolution to the BMA approach.

Figure 2: Distribution of the average number of GO term annotations per protein. Average number of GO term annotations per protein as a function of sequence similarity: A - using the LRBS sequence similarity metric; B - using the RRBS metric; in red - full dataset; in green - non-electronic dataset. Globally, the number of annotations per protein is higher and less uniform in the full dataset than in the non-electronic dataset. There is a visible increase in annotations per protein at high LRBS values in the full dataset, and also a visible decrease at low RRBS values in both datasets.

The BMA approach is clearly the best combination approach in the full dataset, since it not only yields the highest resolutions but also does not show the undesired behaviours of the other two approaches. This is because it considers all terms of the proteins (so there is no loss of information), but compares each term only with its most similar counterpart (so it is not biased by the number of annotations per protein). Its performance is similar to that of the maximum approach in the non-electronic dataset, because the number of annotations per protein is small, and therefore little term similarity combination is involved. In conclusion: the average approach is at odds with the purpose of combining term similarities, due to its dependency on the number of annotations per protein; the maximum approach is limited in its ability to compare proteins, as it looks for only one shared functional aspect; whereas the BMA approach is able to account for all functional aspects independently of the number of annotations per protein.

Compared to the most informative common ancestor (MICA) approach [3, 4], the GraSM approach produced systematically lower semantic similarity values (i.e. a decrease in the bias of the measures), regardless of dataset or sequence similarity measure (Figure 3). This is a natural consequence of the approach: since it considers the average information content (IC) of all disjoint common ancestors instead of only the IC of the MICA, it will necessarily yield smaller or equal semantic similarity values (equal only if all disjoint common ancestors have the same IC, or there is only one disjoint common ancestor).
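In symbols (our rendering, with DCA(t1, t2) denoting the set of disjoint common ancestors of two terms), GraSM replaces the IC of the MICA by the average IC over the DCA set; for Resnik's measure:

```latex
\[
\mathrm{sim}_{\mathrm{Resnik}}(t_1,t_2) = \mathrm{IC}\big(\mathrm{MICA}(t_1,t_2)\big)
\;\;\longrightarrow\;\;
\mathrm{sim}_{\mathrm{Resnik}}^{\mathrm{GraSM}}(t_1,t_2)
  = \frac{1}{\lvert \mathrm{DCA}(t_1,t_2)\rvert}
    \sum_{a \in \mathrm{DCA}(t_1,t_2)} \mathrm{IC}(a),
\]
```

which is never larger than the MICA-based value, as stated above.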
However, the main question is whether considering more of the GO graph's information (as GraSM does) increases the performance of the semantic similarity measures. In the full dataset, the answer is positive, as GraSM leads to an increase in resolution (20-36%) for all measures tested (Table 1); but in the non-electronic dataset the results are not conclusive, as GraSM increases the resolution of Jiang and Conrath's measure, but decreases that of Lin's and Resnik's measures (Table 2).

Figure 3: Comparison of four approaches to term similarity measures. Semantic similarity vs. sequence similarity results using four distinct approaches to Resnik's measure: maximum (in red), average (in green), BMA (in blue) and BMA + GraSM (in violet). A - in the full dataset with the LRBS sequence similarity metric; B - in the non-electronic dataset with the LRBS metric; C - in the full dataset with the RRBS metric; D - in the non-electronic dataset with the RRBS metric. Modelling curves in A and C were composed of two additive Normal cumulative distribution functions, and the curve for the average approach also included a negative linear component; in B and D, all curves were composed of a single Normal function. It is noticeable that while all four approaches exhibit similar behaviour in the non-electronic dataset (B and D), the maximum and particularly the average approach perform poorly in the full dataset (A and C), with the former having a very low resolution and the latter showing a decreasing behaviour at high sequence similarity values. The same behaviours and the same relationships between the approaches were obtained for Lin's and for Jiang and Conrath's measures.

Figure 4: Relation between the average approach and the inverse of the number of annotations per protein. Resnik's term similarity measure with the average combination approach (in red) and the inverse of the number of annotations per protein (in grey) as a function of sequence similarity: A - using the LRBS sequence similarity metric; B - using the RRBS metric. There is an evident parallel between the behaviour of the semantic similarity results and the distribution of the inverse of the number of annotations per protein, which becomes more evident at high sequence similarity values. This parallel reflects the inverse proportionality between the average combination approach and the number of annotations per protein.

Independently of approach, dataset or sequence similarity metric, the relationship between Resnik's, Lin's, and Jiang and Conrath's measures is always the same (Figure 5): Resnik's measure has the lowest bias and the highest resolution; Jiang and Conrath's measure has the highest bias and the smallest resolution; and Lin's measure falls in between the two (Tables 1 and 2). This relationship is obviously tied to the measures' definitions of semantic similarity: Resnik's measure is directly given by the IC of the MICA of two terms; Lin's measure is given by a ratio of ICs; and Jiang and Conrath's measure is given by a subtraction of ICs. Therefore, while all three measures produce results in the same range (0-1), they behave differently within that range, which leads to their different resolutions. We can only conclude that Resnik's measure is the term similarity measure most adequate for GO, since it consistently shows the highest resolution.

Figure 5: Comparison of the three term similarity measures. Semantic similarity vs.
sequence similarity results using Resnik's (in red), Lin's (in green), and Jiang and Conrath's (in blue) measures with the BMA approach: A - in the full dataset with the LRBS sequence similarity metric; B - in the non-electronic dataset with the LRBS metric; C - in the full dataset with the RRBS metric; D - in the non-electronic dataset with the RRBS metric. These results show that the absolute semantic similarity values increase, but the resolution decreases, from Resnik's to Lin's to Jiang and Conrath's measures. The same relationship was observed using the maximum, average, and GraSM approaches.

The graph-based simUI and simGIC measures showed a behaviour identical to that of the term similarity measures combined with the BMA approach, suggesting that qualitatively both graph-based and term-based approaches are suitable for protein semantic similarity (Figure 6). However, the fact that simGIC showed the overall highest resolutions suggests that quantitatively there is an advantage in considering the information conveyed by the structure of the GO graph, rather than just individual annotations. Furthermore, simUI and simGIC have the clear advantage of being computed in a single step, without the need to find matching terms, and independently of the number of annotations per protein.

Figure 6: simGIC and simUI measures. Semantic similarity vs. sequence similarity results using the simGIC and simUI measures: in red - simGIC in the full dataset; in green - simUI in the full dataset; in blue - simGIC in the non-electronic dataset; in violet - simUI in the non-electronic dataset; A - with the LRBS sequence similarity metric; B - with the RRBS metric. Both measures show behaviours similar to those of the term measures with the BMA approach, with simGIC having a higher resolution than simUI, and indeed the highest overall resolution of all measures tested.

From the relationship between simUI and simGIC, we can conclude that, while GO-based semantic similarity can be accurately measured without IC, using it considerably improves the resolution of the measure (since simGIC is a hybrid measure that uses IC in addition to graph structure, while simUI does not). From all measures and approaches tested, we conclude that simGIC is the best suited to measuring protein semantic similarity, as it yields the highest overall resolutions, reflecting a greater sensitivity to differences in annotation.

Given the number of GO-based semantic similarity measures proposed over recent years, and the diversity of strategies used to evaluate them, the questions of which measure performs best, and what the advantages and limitations of each are, remained open. To tackle these questions, we compared the majority of the existing GO-based semantic similarity measures, and evaluated their performance by assessing how well they capture the expected relationship between functional similarity (as described by molecular function GO terms) and sequence similarity. The influence of electronic annotations was assessed by using two separate datasets, while the effect of protein sequence length was investigated by using two distinct sequence similarity metrics. For all measures tested, we found that the relationship captured between functional and sequence similarity is not linear. The majority of the measures were similar in behaviour, and could be suitably modelled by rescaled Normal cumulative distribution functions with the same shape parameters (mean and standard deviation).
One of the key differences between the measures was their resolution, i.e. the relative intensity with which variations in the sequence similarity scale are translated into the semantic similarity scale. This was the main criterion used to evaluate the measures, since it reflects their sensitivity in capturing the relationship between semantic similarity and sequence similarity. Of the three term similarity measures tested, Resnik's measure was the best, consistently having a higher resolution than Lin's and Jiang and Conrath's measures. As for the approaches to combining term similarities, the best-match average approach was clearly the best, not only because it had the overall highest resolutions, but also because it is independent of the number of terms being combined, unlike the average and maximum approaches. The GraSM approach significantly increased the resolution of the measures in the full dataset (20-33%), but produced inconclusive results in the non-electronic dataset. The simGIC measure was overall the best performing measure, consistently showing a high resolution. By comparing the simGIC and simUI measures, we conclude that while the use of information content in a measure is not essential to accurately convey semantic similarity, it significantly increases its resolution (19-44%).

We suspect that there may be an influence of data circularity in the results for the full dataset, as the bimodal-like behaviour in this dataset is consistent with the inference of functional annotations between proteins of relatively high sequence similarity. The absence of bimodality in the non-electronic dataset suggests that the effect of data circularity is mainly due to the presence of electronic annotations. The other major differences between the two datasets are the number of proteins and the number of annotations per protein, which are considerably smaller in the non-electronic dataset as a result of discarding electronic annotations. This loss of information is perhaps the best argument for using all annotations in large-scale studies, whereas in specific applications where annotation quality is crucial, the use of electronic annotations should be carefully considered. However, as electronic annotations grow in quantity and quality, the cost of ignoring them will eventually outweigh the gain. Recently, a number of novel GO-based semantic similarity measures for proteins have been proposed [25, 30-33], employing various strategies. Future work will include the evaluation of these novel measures, as well as investigating the relationship between gene product semantic similarity and other protein aspects, such as Pfam and Enzyme Commission classification.

The information content of each GO term was calculated from its annotation frequency as IC(t) = -log(freq(t)/N), with N being the total number of annotations within the corresponding GO type. A subset of 22,067 Swiss-Prot proteins was selected from the database, consisting of proteins annotated to at least one molecular function GO term of IC 65% or higher. This criterion ensures that poorly annotated proteins (i.e. those with only very generic terms), which would otherwise bias the semantic similarity results, are discarded. The exact value of 65% was chosen as a compromise between computational time and representativeness of the dataset (most of the poorly annotated proteins would probably be excluded at a lower cut-off). An all-against-all BLAST search was performed with a threshold e-value of 10^-4, resulting in a final (full) dataset of 618,146 distinct protein pairs.
The e-value threshold ensures that the alignments considered are statistically significant. To evaluate the influence of electronic annotations, a second subset of 4,608 proteins was selected using the above criteria, but with annotations bearing the evidence codes IEA, NAS, NA and NR discarded; this led to a final (non-electronic) dataset of 49,480 protein pairs.

The relative reciprocal BLAST score, rather than counting the number of identical amino acids and dividing by the total length of the alignment (sequence identity), quantifies the whole alignment and divides that by the quantification of the perfect self-alignment (in both directions, to ensure symmetry).

A total of fourteen approaches to semantic similarity were tested, corresponding to four distinct approaches (GraSM, average, maximum and BMA) applied to each of the three 'classic' term semantic similarity measures (Resnik's, Lin's, and Jiang and Conrath's), plus two graph-based measures: simUI and simGIC. Since uniform IC values were used, all three similarity measures also produced uniform results (on a 0-1 scale). When the GraSM approach was used, the average IC of all disjoint common ancestors was considered instead of only that of the most informative one. As the influence of GraSM is independent of the method used to combine term similarities, and since it is a computationally intensive approach, it was applied to all three term measures but only with the BMA approach.

The raw semantic similarity vs. sequence similarity results consist of a high number of scattered data points, making it impossible to discern a pattern. This is expected, since cases of functionally similar proteins with unrelated sequences, and vice-versa, are well known to occur, if not frequently. However, as we are interested in studying the global pattern, semantic similarity values were averaged over sequence similarity intervals (separately for each of the sequence similarity metrics). Intervals were taken with a constant number of data points, to ensure all intervals are equally representative, and for each interval the average values of sequence and semantic similarity were computed. In the full dataset each interval contains 5,000 points, and in the non-electronic dataset each interval contains 1,000 points.

The averaged results were modelled by rescaled Normal cumulative distribution functions: a single NCDF, y(x) = a + b·Φ((x − μ)/σ) (13), and, for the full dataset, two additive NCDFs, y(x) = a + b·Φ((x − μ1)/σ1) + c·Φ((x − μ2)/σ2) (14), where Φ denotes the standard Normal CDF and a is the translation parameter. Here b, μ1 and σ1 are the scale, mean and standard deviation parameters of the first NCDF, while c, μ2 and σ2 are the corresponding parameters of the second NCDF. The addition of the second NCDF visibly improves the quality of the model (Figure 7A), reducing the sum of the squared residuals by 14-27%. Moreover, the dispersion of the residuals becomes centred, whereas with a single NCDF it was noticeably skewed (Figure 7B).

Figure 7: Bimodal vs. unimodal fit to the semantic similarity results. A - semantic similarity (Resnik's measure with the BMA approach) vs. LRBS sequence similarity: black points - averaged results; red line - unimodal modelling function; green line - bimodal modelling function. B - fit residuals of the bimodal and unimodal modelling functions vs. LRBS sequence similarity: red points - fit residuals of the unimodal modelling function; red line - corresponding linear trendline; green points - fit residuals of the bimodal modelling function; green line - corresponding linear trendline.
It is clear that the unimodal modelling function does not describe the behaviour of the results accurately, since its fit residuals are unevenly distributed (as reflected by the negative slope of the trendline), whereas the bimodal modelling function shows evenly distributed residuals (with a nearly horizontal trendline at 0).

For the measures using the average combination approach, there was an evident decreasing behaviour at high sequence similarity values, which is impossible to model with a monotonically increasing function like the NCDF. To account for this behaviour, we added a linear component (d × Sequence_sim) to the modelling function (14), since the decrease was approximately linear. This component was added over the whole sequence similarity range, and not only over the decreasing portion, since the other parameters of the modelling function are able to compensate for its presence and model the behaviour of the data outside that portion. The results using the non-electronic dataset were modelled using only a single NCDF (13), as there was no visible sign of bimodality.

The main parameter we used to evaluate the measures was resolution, i.e. the range of the averaged semantic similarity results. Resolution was calculated as the sum of the two scale parameters for the results in the full dataset (since they are modelled by two NCDFs), and is simply given by the scale parameter for the results in the non-electronic dataset with the LRBS sequence similarity measure. However, for the results in the non-electronic dataset with the RRBS sequence similarity measure, resolution cannot be calculated from the scale parameter, because for the majority of the measures the fitted NCDF is not contained within the range of the results, and consequently the scale parameter is greater than 1. Therefore, in the case of these results, resolution was calculated as the difference between the maximum and the minimum of the averaged semantic similarity values.

We thank our reviewers and editors for their invaluable contribution to the final manuscript through their insightful comments. This work was partially supported by the Portuguese Fundação para a Ciência e Tecnologia through grant ref. SFRH/BD/29797/2006. This article has been published as part of BMC Bioinformatics Volume 9 Supplement 5, 2008: Proceedings of the 10th Bio-Ontologies Special Interest Group Workshop 2007. Ten years past and looking to the future. The full contents of the supplement are available online at http://www.biomedcentral.com/1471-2105/9?issue=S5. This work was a collaboration with equal contributions from CP and DF, under the supervision of AOF, AEF and FC. HB provided the BLAST results. All authors reviewed the final manuscript.
https://bmcbioinformatics.biomedcentral.com/articles/10.1186/1471-2105-9-S5-S4
This paper reconsiders the fertility of historical social groups by accounting for singleness and childlessness. We find that the middle class had the highest reproductive success during England's early industrial development. In light of the greater propensity of the middle class to invest in human capital, the rise in the prevalence of these traits in the population could have been instrumental to England's economic success. Unlike earlier results about the survival of the richest, the paper shows that the reproductive success of the rich (and also of the poor) was lower than that of the middle class, once singleness and childlessness are accounted for. Hence, the prosperity of England over this period can be attributed to the increase in the prevalence of middle-class traits rather than those of the upper (or lower) class.
http://eprints.lse.ac.uk/100923/
One way to significantly improve the quality of your mind maps and the effectiveness with which they communicate your intended meaning to others is to include a legend in them. If you frequently share your mind maps with colleagues and coworkers, they may not completely understand the meaning and context of what you're trying to communicate. That's because mind mapping lacks a commonly accepted visual vocabulary – a set of de facto standards that governs how mind maps should be constructed, what common icons and symbols mean, and so forth. A legend in a mind map serves the same function as one does in a geographical map: it defines what each symbol and icon means, so that the person reading the map can interpret these marks with the proper meaning. In a mind map, legends typically define the meaning of icons or symbols attached to topics. Unfortunately, because most developers of mind mapping software don't automate the production of legends, most users of these productivity tools don't realize it's possible to add them to their maps, or how to create them. Where should legends be used? Legends should be used on any map where you are utilizing symbols or icons. This is very important, because your map needs to be immediately understandable by anyone with whom you share it. The meaning of each symbol or icon may have been abundantly clear to you at the time you created your map, but remember: you have the advantage of knowing the entire context of your map, because you're the one who created it. Others viewing your map may be confused by the meaning of these small graphics and, as a result, may not take away the full meaning that you intended when you added them. Typically, a legend is formatted as a floating topic at the top or bottom of the map, with a series of subtopics radiating from it, each one containing an icon or symbol and topic text providing its meaning. For best results, I recommend placing the legend above the map, because people tend to read top to bottom, left to right. If the first thing they see when looking at your mind map from the top down is the legend, they will read it first, which will help them understand the mind map itself. When in doubt, include a legend in your mind maps. The people with whom you share them will appreciate it!
A solid cylinder has a surface area of 462 sq cm. Its curved surface area is one third of its total surface area. Find the volume of the cylinder. (A worked solution to this question is given after the list below.)
A river 2 m deep and 45 m wide is flowing at the rate of 3 km/h. Find the amount of water that runs into the sea per minute.
Q- The parallel sides of a trapezium are 25 cm and 11 cm, while its non-parallel sides are 15 cm and 13 cm. Find the area of the trapezium.
A road roller takes 750 complete revolutions to move once over to level a road. Find the area of the road if the diameter of the road roller is 84 cm and its length is 1 m.
Ex-14 (F), question 6, 9th part: the surface areas of the six faces of a rectangular solid are 4, 4, 8, 8, 18 and 18 square centimetres. The volume of the solid, in cubic centimetres, is ...
The diameter of a roller is 84 cm and its length is 120 cm. It takes 500 complete revolutions to move once over to level a playground. Find the area of the playground in m².
The length, breadth and height of a room are 5 m, 4 m and 3 m respectively. Find the cost of white-washing the inner walls of the room and the ceiling at the rate of Rs 50 per square metre.
Find the number of coins, 1.5 cm in diameter and 0.2 cm thick, to be melted to form a right circular cylinder with a height of 10 cm and a diameter of 4.5 cm.
If each edge of a cube is doubled: 1. how many times will its surface area increase? 2. how many times will its volume increase? Pls do not refer any link.
If the length of each edge of a cube is doubled, by how many times do its volume and surface area increase?
The altitude of a triangle is five thirds of its corresponding base. If the altitude were increased by 4 cm and the base decreased by 2 cm, the area would remain the same. Find the base and the altitude.
Water is pouring into a cuboidal reservoir at the rate of 60 litres per minute. If the volume of the tank is 108 cubic metres, find the number of hours it will take to fill the reservoir.
Find the area of a rhombus whose side is 6 cm and whose altitude is 4 cm. If one of its diagonals is 8 cm long, find the length of the other diagonal.
A square of 5 m length and 75 cm width is to be bordered with lace 6 cm wide. Find the cost of putting the border at the rate of Rs. 0.50 per sq. cm.
The radius and height of a cylinder are in the ratio 5:7 and its volume is 550 cu cm. Find its radius.
The lengths of the parallel sides of a trapezium are 12 cm and 15 cm, and the area of the trapezium is 189 sq m. What is the distance between the parallel sides?
The rainfall recorded on a certain day was 5 cm. Find the volume of water that fell on a 2-hectare field.
The lateral surface area of a hollow cylinder is 4224 cm². It is cut along its height to form a rectangular sheet of width 33 cm. Find the perimeter of the rectangular sheet.
A saree is 5 metres long and 1.3 metres wide. A border of width 25 cm is printed along all sides. Find the cost of printing the border at Rs 1 per 100 cm².
The outer dimensions of a closed wooden box are 10 cm by 8 cm by 7 cm. The thickness of the wood is 1 cm. Find the total cost of the wood required to make the box if 1 cm³ of wood costs Rs. 2.
An iron pipe is 21 m long and its exterior diameter is 8 cm. If the thickness of the pipe is 1 cm and iron weighs 8 g/cm³, find the weight of the pipe.
The diameter of a wheel of a bus is 90 cm, and it makes 315 revolutions per minute. Determine its speed in km/h.
What is the answer to the NCERT Maths book Class 8, ch. Mensuration, pg 176, Try These Q. 3? For the future, I would like to know where I can get the answers to the 'Try These' questions of the NCERT book.
Q.8. Find the curved surface area of a garden roller whose length and diameter are 1.5 m and 1.4 m respectively. How much area can it level in 200 revolutions?
Q.9. The diameter of a 120 cm long roller is 84 cm. It takes 1000 complete revolutions, moving once over, to level a playground. What is the area of the playground?
The base and corresponding altitude of a parallelogram are 10 cm and 12 cm. If the other altitude is 8 cm, find the length of the other pair of parallel sides. Please help me with this question. Thnx
A) cube with side 5 cm; B) sphere with radius 3 cm; C) cuboid with sides 2 cm, 3 cm, 10 cm; D) cylinder with radius 2 cm and height 10 cm.
The floor of a building consists of 3000 tiles which are rhombus-shaped, with diagonals of 45 cm and 30 cm. Find the total cost of polishing the floor if the cost per m² is Rs 4.
Q.10. The volume of the mud taken out on digging a cylindrical tank is 1540. Find the depth of the tank if the diameter of the base is 7 m.
The shape of a garden is rectangular in the middle and semi-circular at the ends, as shown in the diagram. Find the area and the perimeter of the garden [length of the rectangle is 20 − (3.5 + 3.5) metres].
Sir, please give me the proper solution with explanation for the following questions:
Q. A box with a lid is made of wood which is 3 cm thick. Its external length, breadth and height are 56 cm, 39 cm and 30 cm respectively. Find the capacity of the box. Also find the volume of wood used to make this box.
Q. If V is the volume of a cuboid of dimensions a, b and c, and S is its surface area, then prove that 1/V = (2/S)(1/a + 1/b + 1/c).
Hema bought 2 pairs of jeans for Rs 725 each. She sold one of them at a gain of 8% and the other at a loss of 4%. Find her gain or loss percent on the whole transaction. (Ans: gain of 2%.)
A suitcase measuring 80 cm × 48 cm × 24 cm is to be covered with tarpaulin cloth. How many metres of tarpaulin of width 96 cm are required to cover 100 such suitcases?
The dimensions of an open box are 50 cm, 40 cm and 23 cm. Its thickness is 3 cm. If 1 cubic cm of the metal used in the box weighs 0.5 g, find the weight of the box.
Give the curved surface area, total surface area and volume (formulae) for: cube, cuboid, cylinder. Give the perimeter and area (formulae) for: square, rectangle, circle, triangle, parallelogram, rhombus, trapezium. EXPERTS PLZ ANSWER MY QUESTION
The area of a courtyard is 3750 m². Find the cost of covering it with gravel to a height of 1 cm if the gravel costs Rs 6.40 per cubic metre.
The areas of 3 adjacent faces of a cuboid are 180 sq cm, 96 sq cm and 120 sq cm. Find the volume of the cuboid.
A room 6 m long and 4 m wide needs 1000 tiles for its floor. How many tiles will be needed for a hall 12 m long and 8 m wide?
A sheet of paper measures 30 cm × 20 cm. A strip 4 cm wide is cut from it all around. Find the area of the remaining sheet and also the area of the strip cut out.
If the two legs of a right triangle containing the right angle are 4 cm and 10 cm, find the area of the triangle. Plzz help me, it's urgent!
How much aluminium will be required to cover Dewar A compared to that required for Dewar B? (Note: the base and the lid are NOT made of aluminium.)
- A closed wooden box 80 cm long, 65 cm wide and 45 cm high is made of wood 2.5 cm thick. Find the capacity of the box and its weight, if 100 cm³ of wood weighs 8 g.
- The inner circumference of a circular track is 220 m. The track is 7 m wide everywhere. Calculate the cost of putting a fence along the outer circle at Rs 2 per metre.
- Construct: 1) quadrilateral ABCD where AB = 5 cm, BD = 6 cm, CD = 5.8 cm, DA = 4.3 cm, BC = 5.5 cm; 2) quadrilateral PQRS where PQ = QR = RS = SP = 8 cm and angle R = 85°.
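The circular track question above works the same way; a worked solution with π = 22/7:

\[
2\pi r_{\text{in}} = 220~\text{m} \;\Rightarrow\; r_{\text{in}} = \frac{220 \times 7}{2 \times 22} = 35~\text{m}, \qquad r_{\text{out}} = 35 + 7 = 42~\text{m},
\]
\[
C_{\text{out}} = 2 \times \frac{22}{7} \times 42 = 264~\text{m}, \qquad \text{cost} = 264 \times \text{Rs}~2 = \text{Rs}~528 .
\]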
https://aakashdigitalsrv1.meritnation.com/cbse-class-8/english/english-grammar/mensuration/popular-questions/10_1_5_74_8306
1 edition of "On the Lymphatics of Cartilage or of the Perichondrium" found in the catalog. Published 1880 by s.n. in London. Written in English.

Edition Notes:
- Statement: by George Hoggan and Frances Elizabeth Hoggan
- Contributions: Hoggan, Frances Elizabeth; Royal College of Surgeons of England
- Pagination: p. 122-136, leaf of plate
- Number of Pages: 136
- Open Library ID: OL26302886M

Post-piercing perichondritis: cartilage nutrition is carried out by the contiguous perichondrium, and it should be preserved adhered to the cartilage in order to avoid necrosis. The lymphatic vessels of the superior portion of the pinna's lateral face drain to the superficial peri-parotid lymphatics. (Cited by: 9.)

Cartilage is avascular tissue: it lacks blood vessels, lymphatics, and nerves. The cartilage matrix is highly permeable, and cartilage is supplied by diffusion, which starts from the perichondrium. Substances freely pass through the cartilage matrix, except high ...

perichondrium [per″ĭ-kon´dre-um]: the layer of fibrous connective tissue investing all cartilage except the articular cartilage of synovial joints (adj. perichondral). perichondrium (per'i-kon'drē-ŭm) [TA]: the dense irregular connective tissue membrane around cartilage. [peri- + G. chondros, cartilage]

Components of cartilage:
- Perichondrium: surrounds most of the hyaline and elastic cartilage.
- Made up of peripheral, vascularized, dense irregular connective tissue.
- Made up of an outer fibrous and an inner chondrogenic layer.
- The chondrogenic layer gives rise to chondroblasts that secrete cartilage.

Hyaline cartilage from the larynx: the perichondrium is composed of thick bundles of collagen and a deeper, more cellular layer that contains chondroblasts. Below the cellular layer of the perichondrium are individual chondrocytes separated by matrix, which quickly adopts the dark staining characteristics of mature matrix.

Some repair can occur only in cartilage with perichondrium, where limited new cartilage cells are produced. Damaged articular cartilage is replaced by dense connective tissue or fibrocartilage, whose mechanical properties are not optimal for providing low-friction joint motion under high mechanical loads. The effects of aging and wear and tear ...

Full text is available as a scanned copy of the original print version. Hoggan G, Hoggan FE. The Lymphatics of Cartilage or of the Perichondrium. J Anat Physiol. Oct; 15 (Pt 1). [PMC free article] Birch de B. The Constitution and Relations of Bone Lamellae, Lacunae, and Canaliculi, and some Effects of Trypsin Digestion on Bone. Colnot C, Lu C, Hu D, Helms JA. Distinguishing the contributions of the perichondrium, cartilage, and vascular endothelium to skeletal development. Dev Biol, (1), 01 May.
Other articles where perichondrium is discussed: connective tissue: cartilage: "...inner chondrogenic layer of the perichondrium. In addition, the young chondrocytes retain the capacity to divide even after they become isolated in lacunae within the matrix. The daughter cells of these divisions secrete new matrix between them and move apart in separate lacunae."

The perichondrium (from Greek περί (peri, 'around') and χόνδρος (chondros, 'cartilage')) is a layer of dense irregular connective tissue that surrounds the cartilage of developing bone. It consists of two separate layers: an outer fibrous layer and an inner chondrogenic layer. The fibrous layer contains fibroblasts. Location: developing bone.

Perichondrium and periosteum are two types of connective tissue that exist as membranes. By definition, perichondrium is a dense layer of fibrous connective tissue that covers cartilage in the body, while periosteum is a thin layer of connective tissue that covers bone and promotes bone growth and development.

- Avascular; lacks nerves and lymphatics.
- Surrounded by a perichondrium, except articular cartilage and fibrocartilage. The perichondrium supplies blood to cartilage and feeds its cells, which is important because cartilage is avascular.
- Occurs in articular cartilage and fibrocartilage (due to a lack of perichondrium), and at epiphyseal plates.
- Isogenous groups: groups of two or more chondrocytes.

Hyaline cartilage is found in the nose and at the ends of long bones and the ribs, and forms rings in the walls of respiratory passages. The fetal skeleton is made up of this type of cartilage. It regenerates very poorly, and often the perichondrium forms scar tissue. It does not calcify, but in old age it can.

The matrix of cartilage consists of fibrous tissue and various combinations of proteoglycans and glycosaminoglycans. Cartilage, once synthesized, lacks lymphatic or blood supply, and the movement of waste and nutrition is chiefly via diffusion to and from adjacent tissues. Cartilage, like bone, is surrounded by a perichondrium-like fibrous membrane.

Periosteum and perichondrium grafts are biomembranes with two layers, an outer fibrous layer and an inner cambium, or osteogenic, layer. Perichondrium lines developing bone and, when vascularized, becomes periosteum, the non-joint lining of bone.

Fibrous cartilage is a form of connective tissue transitional between dense connective tissue and hyaline cartilage. Chondrocytes may lie singly or in pairs, but most often they form short rows between dense bundles of collagen.

Center of the hyaline cartilage shaft: the region where formation of a long bone begins. The periosteal bud contains a nutrient artery and vein, lymphatics, nerve fibers, red marrow elements, osteoblasts, and osteoclasts; it invades the forming cavities in endochondral ossification.

The perichondrium (Figure 7-2) is a sheath of dense connective tissue that surrounds cartilage in most places, forming an interface between the cartilage and the tissues supported by the cartilage. The perichondrium harbors the blood supply serving the cartilage and a ...

Two types of growth can occur in cartilage: appositional and interstitial. Appositional growth results in an increase of the diameter or thickness of the cartilage. The new cells derive from the perichondrium and appear on the surface of the cartilage model. (The perichondrium is a layer of dense connective tissue which surrounds ...)

3) Perichondrium: dense fibrous tissue covering of cartilage.
In young cartilage there are two layers in the perichondrium: 1. the chondrogenic layer, an inner cellular layer; 2. the outer fibrous layer, which is vascular, with blood vessels, nerves and lymphatics. Tragal cartilage perichondrium. (Cited by: 3.)

Within articular cartilage: 1. oval isogenous groups; 2. columns of isogenous groups during ossification. Regeneration occurs by appositional growth from the perichondrium: chondroblasts from the perichondrium invade the damaged area and generate new cartilage. In extensively damaged areas the cartilage is replaced by dense connective tissue.

Cartilage is not innervated and therefore relies on diffusion to obtain nutrients. This causes it to heal very slowly. The main cell types in cartilage are chondrocytes, the ground substance is chondroitin sulfate, and the fibrous sheath is called the perichondrium. There are three types of cartilage: hyaline, fibrous, and elastic cartilage.

Calcified cartilage zone: loss of chondrocytes by apoptosis, accompanied by calcification of the cartilage matrix. Ossification zone: bone tissue first appears; capillaries and osteoprogenitor cells from the periosteum invade the cavities left by chondrocytes, and the osteoprogenitor cells form osteoblasts, which form woven bone.

Perichondrium:
- Dense connective tissue that covers cartilage.
- Contains the blood and nerve supply and lymphatics.
- A source of new cartilage cells.
- Divided into two layers: inner cellular and outer fibrous.

Articular cartilage: the hyaline cartilage of articular surfaces does not possess a perichondrium. Zones of articular cartilage ...

HISTOLOGY BIOL LECTURE NOTES #5B: CARTILAGE, BONE and BLOOD. Cartilage is a resilient connective tissue composed of cells embedded in an extracellular matrix that is gel-like and has a rigid consistency. Important for: ...

Cartilage is a type of connective tissue that adapts to the pushing and pulling required for mechanical movement. It is composed of chondrocytes (cartilage cells) and a specialized extracellular matrix. There are three types of cartilage: hyaline cartilage (the most predominant type, e.g., in the nasal septum), fibrocartilage (e.g., in intervertebral discs), and elastic cartilage.
https://guqabypiroho.displacementdomesticity.com/on-the-lymphatics-of-cartilage-or-of-the-perichondrium-book-34254ib.php
Different four-sided shapes differ from each other in terms of their sides or angles. Many shapes have 4 sides, but the differences in the angles at their sides make them unique. We call these 4-sided shapes quadrilaterals. In this article, you will learn: what a quadrilateral is; what the different types of quadrilaterals look like; and the properties of quadrilaterals.

What is a Quadrilateral?

As the word suggests, 'quad' means four and 'lateral' means side. Therefore a quadrilateral is a closed two-dimensional polygon made up of 4 line segments. In simple words, a quadrilateral is a shape with four sides. Quadrilaterals are everywhere: in books, chart papers, computer keys, television and mobile screens. The list of real-world examples of quadrilaterals is endless.

Types of Quadrilaterals

There are 6 quadrilaterals in geometry. Some of them are surely familiar to you, while others might not be so familiar. Let's take a look: rectangle, square, trapezium, parallelogram, rhombus, kite.

A rectangle is a quadrilateral with 4 right angles (90°). In a rectangle, both pairs of opposite sides are parallel and equal in length.

Properties of a rhombus:
- All sides are congruent by definition.
- The diagonals bisect the angles.
- The diagonals in a kite bisect each other at right angles.

Properties of Quadrilaterals

The properties of quadrilaterals include:
- Every quadrilateral has 4 sides, 4 vertices, and 4 angles.
- The total measure of all four interior angles of a quadrilateral is always equal to 360 degrees.
- The sum of the interior angles of a quadrilateral fits the polygon formula, i.e. sum of interior angles = 180° × (n − 2), where n is the number of sides of the polygon.
- Rectangles, rhombuses, and squares are all types of parallelograms.
- A square is both a rhombus and a rectangle.
- A rectangle or a rhombus is not necessarily a square.
- A parallelogram is a trapezium.
- A trapezium is not a parallelogram.
- A kite is not a parallelogram.

Classification of quadrilaterals

Quadrilaterals are classified into two basic types:
- Convex quadrilaterals: quadrilaterals whose interior angles are all less than 180 degrees and whose two diagonals both lie inside the quadrilateral. They include the trapezium, parallelogram, rhombus, rectangle, square, kite, and so on.
- Concave quadrilaterals: quadrilaterals with at least one interior angle greater than 180 degrees and at least one of the two diagonals outside the quadrilateral. A dart is a concave quadrilateral.

There is another, less common type of quadrilateral, called the complex quadrilateral. These are crossed figures, for example the crossed trapezoid, crossed rectangle and crossed square.

Let's work through a few example problems about quadrilaterals.

Example 1: The interior angles of an irregular quadrilateral are x°, 80°, 2x° and 70°. Calculate the value of x.

Solution: By a property of quadrilaterals (sum of interior angles = 360°), we have x° + 80° + 2x° + 70° = 360°. Simplify: 3x + 150° = 360°. Subtract 150° from both sides: 3x = 210°. Divide both sides by 3 to get x = 70°. Thus, the value of x is 70°, and the angles of the quadrilateral are 70°, 80°, 140° and 70°.

Example 2: The interior angles of a quadrilateral are 82°, (25x − 2)°, (20x − 1)° and (25x + 1)°. Find the angles of the quadrilateral.
Solution: The sum of the interior angles of a quadrilateral is 360°, so 82° + (25x − 2)° + (20x − 1)° + (25x + 1)° = 360°, i.e. 82 + 25x − 2 + 20x − 1 + 25x + 1 = 360. Simplify: 70x + 80 = 360. Subtract 80 from both sides: 70x = 280. Divide both sides by 70: x = 4. By substitution, (25x − 2)° = 98°, (20x − 1)° = 79° and (25x + 1)° = 101°. Therefore, the angles of the quadrilateral are 82°, 98°, 79° and 101°.

Practice Questions
1. Consider a parallelogram PQRS, where ...
2. Find the four interior angles of the rhombus whose sides and one of the diagonals are of equal length.
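Both worked examples reduce to solving one linear equation from the 360° angle-sum property. A minimal sketch of checking them programmatically (using SymPy is my choice of convenience, and the variable names are mine):

```python
from sympy import Eq, solve, symbols

x = symbols("x")

# Example 1: interior angles x, 80, 2x and 70 must sum to 360 degrees.
x1 = solve(Eq(x + 80 + 2 * x + 70, 360), x)[0]
print(x1, [x1, 80, 2 * x1, 70])  # 70 -> angles 70, 80, 140, 70

# Example 2: interior angles 82, 25x - 2, 20x - 1 and 25x + 1.
x2 = solve(Eq(82 + (25 * x - 2) + (20 * x - 1) + (25 * x + 1), 360), x)[0]
print(x2, [82, 25 * x2 - 2, 20 * x2 - 1, 25 * x2 + 1])  # 4 -> angles 82, 98, 79, 101
```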
https://ubraintv-jp.com/1-pair-of-opposite-sides-that-are-parallel/
The Northern Lights are one of the most well-known sights in and around the Arctic Circle. Attracting thousands of tourists every year, the Northern Lights, also known as the aurora borealis, causes the night sky to light up in a panoply of spectacular colors, a phenomenon that can occur anywhere in the world but is most common at and near the North and South Poles. But some people may wonder why the Northern Lights light up in specific colors and not others. A natural phenomenon caused by the Earth's interaction with the sun, the colors of the Northern Lights depend on a number of factors, including altitude and which types of air molecules are involved. Read on to discover how the colors of the Northern Lights work.

The Colors of the Northern Lights and How They Work

The Northern Lights are caused when solar wind, the term used for particles regularly emitted by the sun, collides with the magnetosphere, the protective field surrounding Earth that is generated by the magnetic North and South Poles. When solar wind impacts the magnetosphere, the interaction between the two creates auroras, with the colors of the Northern Lights determined primarily by the altitude at which solar wind collides with the magnetosphere and which types of air molecules are present in the surrounding atmosphere. The effect is not dissimilar to that of a neon sign. Neon lights work by using electricity to excite molecules, with the resulting chemical interaction giving off light. Similarly, solar wind is primarily composed of charged particles that give off light upon impact with the Earth's atmosphere. This process is slightly different depending on existing solar activity as well as which band of the atmosphere the solar wind is able to penetrate. The Earth's atmosphere is primarily composed of nitrogen and oxygen, with varying levels of both depending on altitude. Because of this, the most common types of auroras are green auroras, which are created when solar wind interacts with oxygen and nitrogen molecules in the lower atmosphere. Blue auroras are less common and result from solar wind penetrating into lower bands of the atmosphere where oxygen is rare or nonexistent. Solar activity is also a factor in the color of the Northern Lights, with exceptionally high levels of sunspot activity generating red auroras from time to time. These occur when solar wind interacts with nitrogen in the outermost bands of the atmosphere. There are also ultraviolet and infrared auroras; the former can sometimes be seen with the naked eye but are rare and difficult to spot, while the latter cannot be seen without the requisite monitoring equipment. Another, rarer occurrence is yellow, pink, and purple auroras. These occur when solar wind is able to penetrate multiple layers of the atmosphere, resulting in primary-color auroras mixing at the margins. Because red, yellow, pink, purple, and other auroras require exceptional levels of solar activity to manifest, it is unlikely that you will see one on your Northern Lights adventure, though you should never rule them out, either. As a rule, green auroras occur when solar wind penetrates to a maximum height of 150 miles (241 kilometers) above the Earth's surface. Red auroras occur at elevations beyond 150 miles, while blue auroras appear at up to 60 miles (96.5 kilometers). Purple, pink, and yellow auroras occur at heights above 60 miles. Additionally, even visible auroras can only be partially seen due to the limitations of human eyesight.
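The altitude rules of thumb in the last paragraph can be summarized in a short sketch. The thresholds below are only the approximate figures quoted above, not a physical model, and the function name is mine:

```python
def dominant_aurora_color(altitude_miles: float) -> str:
    """Approximate dominant aurora color by altitude, per the rules of thumb above.

    Yellow, pink and purple arise where these bands mix at the margins, so a
    single-threshold function is necessarily a simplification.
    """
    if altitude_miles > 150:
        return "red"    # outermost band; needs exceptionally strong solar activity
    if altitude_miles > 60:
        return "green"  # the most common band, up to about 150 miles
    return "blue"       # lowest band, where oxygen is rare

for alt in (40, 100, 200):
    print(alt, "miles ->", dominant_aurora_color(alt))
```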
While the Northern Lights remains a spectacular sight in person, many bands of light and particles are invisible to the naked eye and thus can only be picked up by cameras or specialized equipment. Don’t let that dissuade you from taking a trip to the Arctic Circle and witnessing an aurora for yourself. Conclusion As a phenomenon dependent on many external factors, the exact colors of the Northern Lights are hard to predict. Green auroras are the most common by far, but it is also possible to see blue, red, and other colors of aurora depending on solar activity. While scientists have tracked solar activity and its peaks and valleys for decades, it remains somewhat unpredictable, so there’s always the chance you could see a rare aurora while on your Northern Lights sojourn. Having said that, the Northern Lights remains a spectacular sight no matter which colors are visible on any given night. As one of the premier natural sights of the Arctic Circle, the Northern Lights is an experience you will remember for the rest of your life. Watching the night sky come alive in an array of colors is an experience you can only have in the far northern parts of the world. If you’re curious, why not book a tour up north and see for yourself?
https://airlinkalaska.com/the-colors-of-the-northern-lights-and-how-they-work/
0:02 Welcome to the session on genetics and pharmacokinetics. In this session, we will look at how genetic variation affects the way the body handles drugs. To start with, we will discuss how drugs are administered into the body, and we will think about how drug concentrations can vary amongst different individuals. We will also focus on how genetic variation can affect three key areas, namely the absorption of the drug, the metabolism of the drug, and the excretion of the drug. The importance of pharmacokinetics stems from the fact that drugs have to be administered in an acceptable and feasible way, and this may include oral tablets, injections, active preparations, and topical products such as skin creams and eye drops.

1:01 Once the drug has been administered, the substance has to traverse the body and the active compound has to reach the target site within the body, and hopefully the active compound will remain at the target for a sufficiently long duration that a beneficial effect results for the patient. Ideally, the drug will be at an appropriate concentration in the body and will subsequently be eliminated from the body in a consistent and predictable way. Obviously, we would like to target a drug concentration that delivers good beneficial effects but equally sits at a level where adverse reactions are unlikely to take place. Now, genetic variation can affect pharmacokinetics in a number of ways.

2:06 Firstly, the drug concentration can be affected because of genetic variation in bioavailability, namely, how well it might be absorbed from the stomach, for example. Some drugs require transporters to bring the drug into the body through the intestinal cells, and there may be variation in how well these transporters bring the drug in, depending on the genetic makeup of the individual. Now, as the drug enters the body, something known as first-pass metabolism can take place. This may involve activation or breakdown of the drug. Genetic variation can affect the activity of the enzymes that process these drugs as the drug travels into the body. Now, once the drug is in the body, it will eventually have to be broken down and removed.

3:02 This may be a metabolic process, or the drug may be excreted through the renal system. Again, genetic variation can affect the activity of the transporters and the enzymes that are involved in eliminating the drug. Let's look at an example of variation in drug absorption. We take as an example here methotrexate. There is considerable variation in methotrexate dosing, and this stems partly from differences in absorption from the gastrointestinal tract.

3:42 There are membrane transport proteins that have different affinities for methotrexate. Equally, the oral bioavailability is also affected by the ABC transporters that push the methotrexate back out into the intestinal lumen, where it is lost.

4:06 Now, variation in metabolism is a very important part of pharmacokinetics. Our example here is a drug known as codeine, which is a very widely used painkiller. Codeine is metabolised in the liver by cytochrome P450 2D6 to a more potent opioid analgesic, morphine. As we all know, morphine provides good pain relief, but it is also associated with serious adverse effects such as drowsiness and respiratory depression.
Now, an individual who is a rapid metaboliser would end up converting codeine to morphine at a very fast rate. This leads to a buildup of morphine and a greater risk of adverse effects.

4:54 In contrast, individuals whose genetic makeup leads them to be slow metabolisers end up producing morphine at a slower rate, and therefore they have poorer pain relief.

5:20 Now, there is also variation in excretion of the drug. Metformin is a drug that does not undergo liver metabolism but is actually eliminated in urine through a process known as active tubular secretion. And there are organic cation transporters here which help to clear metformin from the human body. Genetic variation in these transporters means that there are differences amongst individuals in how well they clear metformin from the body through the urinary tract. So what are the clinical implications of variations in pharmacokinetics stemming from these genetic differences?

6:04 Well, genetic variation can lead to measurable differences in half-life, clearance of the drug, buildup of toxic metabolites, and even non-activation of a prodrug, which is an inert precursor that needs conversion to the active molecule in the body. Nevertheless, despite these potential sites of variation, genetic testing for pharmacogenetic adjustments has not been clinically validated. This may be because there are many other factors that affect dosing regimens. Clinicians have many other ways of adjusting individual doses without resorting to looking at genetic information. For instance, a clinician could take a blood sample to measure the actual drug concentration in the patient. There may well also be other markers for drug response that they could use.

7:16 Equally, the dosing regimen could be adjusted for a patient: for instance, step titration, starting from a low dose and gradually moving upwards until the desired response is obtained, while carefully monitoring for any adverse effects. Now, there are some drugs that have good benefit and little harm across a wide range of doses, and in these cases, detailed measurement of pharmacokinetics is not a major issue. These drugs have a good benefit/harm balance, and it is not essential to adjust their doses based on genetic variation. And finally, if there are drugs which have difficult pharmacokinetics, clinicians may well prefer alternative treatment options that have far more predictable actions in the human body.

8:21 So, in summary: although we have demonstrated that genetic variation can influence pharmacokinetics, particularly in areas such as absorption, metabolism and elimination, and that cytochrome P450 enzyme systems are a major contributor to the genetic diversity and variation in pharmacokinetics, there are currently no genetically guided strategies tailored to individual patients that are widely used in clinical practice. This tutorial by Professor Yoon Loke explains how genetic variation can affect how the body handles drugs.
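To make the rapid-versus-slow metaboliser idea above concrete, here is a minimal one-compartment pharmacokinetic sketch in which clearance simply stands in for metaboliser status. All parameter values are invented for illustration and are not clinical figures:

```python
import math

def concentration(dose_mg: float, vd_l: float, cl_l_per_h: float, t_h: float) -> float:
    """One-compartment IV bolus model: C(t) = (dose / Vd) * exp(-(CL / Vd) * t)."""
    return (dose_mg / vd_l) * math.exp(-(cl_l_per_h / vd_l) * t_h)

DOSE_MG, VD_L = 100.0, 50.0  # hypothetical dose and volume of distribution

# A "rapid metaboliser" is modelled here simply as a higher clearance.
for label, cl in (("slow metaboliser", 2.0), ("rapid metaboliser", 8.0)):
    half_life = math.log(2) * VD_L / cl  # t1/2 = ln(2) * Vd / CL
    print(f"{label}: t1/2 = {half_life:.1f} h, "
          f"C at 6 h = {concentration(DOSE_MG, VD_L, cl, 6.0):.2f} mg/L")
```

Note that this only models elimination of the parent drug; the codeine example is really about metabolite formation, so treat the sketch as an analogy for how one genetic parameter shifts the whole concentration-time curve.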
https://www.futurelearn.com/courses/personalized-medicine/0/steps/34819
Is losing weight, losing fat, or losing inches really as simple as "calories in vs. calories out"? If you burn more calories than you eat, will you always lose fat or inches? It depends on who you talk to. The reason for that is that it SHOULD be that simple, but the reality is, we live in a world with more and more not-totally-healthy people, so there are often many other factors that need to be considered. Because these other factors do exist, I personally say it's too broad a statement to be accurate. Therefore, I don't think it's correct to say "well, if you have fewer calories in, or if you burn more calories than you eat, you should never gain weight." In theory, it should be that simple. And it could be, if you had a room full of people who had matching DNA; they were all relatively young; no one had encountered any disease; they had no food sensitivities; they ate perfectly nutrient-dense whole foods for every meal; they were all moderately physically active; they had no stress and their appetites never wavered; and they didn't have any genetic predispositions to things like MTHFR, PCOS, candida, or different types of insulin sensitivities or metabolic syndromes. But the chances of that are pretty non-existent.

Highlights in this episode:
• What does "calories in vs. calories out" mean?
• What if you're predisposed to genetic, insulin, or metabolic factors?
• Thyroid hormones and metabolism
• How different types of hormones affect energy balance
• How nutrient absorption affects overall energy balance
• Are you affected by sub-optimal nutrient absorption?
• How medications can play a role
• Not making excuses, even if you carry extra weight due to metabolic issues or hypothyroidism
• Cautions about caloric readouts on fitness equipment, in apps, and on nutrition facts labels
• How problems with energy balance are figure-out-able

Questions, thoughts or comments? You can contact me about this episode in two ways. Messages might be played or read on the show but will be kept anonymous. Subscribe on your favorite podcast app! Please leave a rating or review and share the podcast with a friend. The information shared is for educational and informational purposes only. It should not be interpreted as an intent to diagnose, treat, cure, heal or prescribe.
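The "calories in vs. calories out" arithmetic that the episode pushes back on is easy to write down, which is part of why it is so tempting. Here is a sketch of that naive model; the 7,700 kcal-per-kg figure is a common rule of thumb, and everything the episode lists (hormones, absorption, medications) is deliberately absent:

```python
def naive_weight_change_kg(intake_kcal_per_day: float,
                           burn_kcal_per_day: float,
                           days: int) -> float:
    """Textbook energy-balance arithmetic, ignoring every confounder the
    episode discusses (hormones, absorption, medications, adaptation)."""
    surplus_kcal = (intake_kcal_per_day - burn_kcal_per_day) * days
    return surplus_kcal / 7700.0  # rough rule of thumb: ~7,700 kcal per kg of fat

# In theory, a steady 300 kcal/day deficit loses about 1.2 kg in a month...
print(f"{naive_weight_change_kg(2000, 2300, 30):.2f} kg")  # -1.17
# ...in practice, the factors above mean real bodies rarely track this line.
```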
https://fitfizzstudio.com/calories-in-vs-calories-out-and-hormones
By Gregory A. Dale, Ph.D., Duke University

"If you want to build an atmosphere in which everybody pulls together to win, then you as a leader have to recognize that it all starts with you. It starts with your attitude, your commitment, your caring, your passion for excellence, your dedication to winning. It starts with the example you set. It starts with the way you treat and relate to your athletes." - Pat Williams, Senior Executive Vice President, Orlando Magic

Have you ever wondered why some coaches achieve so much success with their athletes and teams - winning and gaining everyone's respect along the way - while others continually fall short or struggle to get their teams or athletes to perform at a consistently high level? If you are like most coaches, you have probably asked yourself questions such as the following: How do some coaches consistently get the most out of their athletes while others have athletes who chronically underachieve? How do some coaches gain their athletes' confidence, trust and respect while others have athletes who never buy into them and what they are trying to accomplish? How do some coaches inspire their athletes to compete with confidence, aggressiveness and mental toughness while others have athletes who routinely crumble and choke under pressure? How do some coaches get athletes to willingly "run through walls" for them while others have athletes with little commitment, no work ethic and bad attitudes? How do some coaches inspire a sense of loyalty and pride in their athletes while others have athletes who want to quit, or worse yet, instigate a revolt and try to get their coaches fired?

In my work as a sport psychology consultant, I have come to the realization that the most successful coaches are those who not only win most of the time but are also able to develop meaningful relationships with the athletes they coach. In other words, their athletes respect them and willingly "put it on the line" for them when asked. Following are seven characteristics that successful coaches and their athletes have identified as being essential for a coach to have credibility with their athletes and ultimate success. As you read these characteristics, I hope you will honestly examine the way you coach. Ask yourself if there are any areas that need attention. Remember, you continually ask your athletes to work on aspects of their games that are lacking. It only makes sense that you would do the same for yourself if you want to improve.

1. Character - These coaches:
- Do what they say they are going to do. They don't tell athletes one thing and then do another.
- Are honest with athletes regarding their role on the team. They don't promise things they can't deliver.
- Follow the rules as they are written and don't look for ways around those rules to have a better chance to win.

2. Consistent - These coaches:
- Are consistent in the way they administer punishment. They don't show favoritism toward better athletes.
- Don't have a "doghouse." Disagreements are dealt with and everyone moves on in a productive manner.
- Are consistent in their mood and the way they approach their athletes on a daily basis. They don't take things out on their athletes.
- Create an environment where their athletes know what to expect from them. There are no petty mind games.

3. Communicator - These coaches:
- Make sure their positive/instructive comments outweigh the negative comments.
- Are proactive.
They seek out athletes and check in with them instead of waiting for problems to arise.
- Truly have an active, open door.
- Clearly communicate with athletes and staff about roles, expectations and standards. They make no assumptions.
- Focus on really listening to players.
- Seek input from team leaders on key decisions. Athletes feel like they can come and talk to them.

4. Caring - These coaches:
- Act as servants. Athletes feel like the coach would do anything for them, regardless of their talent.
- Take a genuine interest in the athletes' lives away from the sport.
- Treat athletes as more than just a group of individuals who can help the coach move up the career ladder.
- Forge long-term relationships with their athletes. There is a sense of loyalty for life.

5. Competent - These coaches:
- Know their sport inside and out, but are also human enough to admit when they are wrong.
- Keep up to date with the latest advances.
- Are always learning and willing to look for new ideas.
- Their athletes improve from the time they entered the program to when they finished, no matter how good they were when they started.

6. Committed - These coaches:
- Have a clear vision for the program and are able to communicate that vision to athletes.
- Are passionate and invested. They are committed to putting in the time to be good. They come early and stay late.
- Aren't afraid to list their secrets of success, because they know no one will outwork them.
- Have a competitive fire. They are highly competitive individuals.

7. Confidence Builder - These coaches:
- Are inspiring. They sell athletes on themselves. They create and maintain hope and optimism. They also plant seeds of greatness.
- Know that athletes want to feel appreciated, valued, competent and important. Great coaches make athletes feel good about themselves.
- Realize that confidence is fragile, and they are willing to praise athletes in public and criticize in private (never publicly embarrassing them). They catch people doing things right.
- Are appreciative. They share credit with staff, especially acknowledging the "little" people.
- Have the mindset that the athletes are the ones who really win games, not the coach.

Gaining and maintaining respect and credibility with your athletes is vital to ultimate success. Great coaches are great because they see the importance of credibility and respect. They know how fragile these are and work hard to maintain them. Where are you in your journey to becoming one of the great coaches? In conclusion, I would like you to consider how you want to be remembered by the athletes you coach. Every athlete who competes for you will remember his or her experience with you and your coaching for something. When you think about it, your coaching career is relatively short in the whole scheme of life. Whether you are involved for a few years or dedicate much of your life to coaching, the time you have available to impact people is relatively short. Essentially, your career is the "dash" between your first and last day of coaching (e.g., 1995-2035). It's an inch. It is very short. Therefore, it is imperative that you invest your time wisely and determine what you will do with the "dash" you have been given. How are you going to coach during those years? What legacy would you like to leave behind after you are gone? What would you want the important people in your life to say about you when celebrating your career at your retirement banquet?
https://coachdeck.com/blogs/news/the-seven-cs-of-coaching-credibility
Little as the orthodox Christian may care to admit it, the entire Gospel story in its four forms or presentations, contains little else except symbolic details about the Mysteries which are (as far as humanity is concerned) five in all. These Mysteries indicate, in reality, five important points in the spiritual history of an aspirant; they indicate also five important stages in the progress of human consciousness. This advance will become definite and clear in a manner not understood today, at some point during the Aquarian Age. Humanity, the world disciple (through its various groups all at various stages of unfoldment) will "enter into" new states of awareness and into new realms or spheres of mental and spiritual consciousness, during the next two thousand years. Each age has left a reflection of a modern fivefold development upon it. Four ages have just passed away, astronomically speaking: Gemini, Taurus, Aries, and Pisces. Today Aquarius, the fifth age, is coming into power. In Gemini, its symbolical sign of the two pillars set its seal upon the Masonic Fraternity of the time and the two pillars of Jachin and Boaz - to give them their Jewish names which are, of course, not their real names - came into being approximately eight thousand years ago. Then came Taurus, the Bull, wherein Mithra came as the world Teacher and instituted the Mysteries of Mithras with an (apparent) worship of the Bull. Next followed Aries the Ram, which saw the start of the Jewish Dispensation which is of importance to the Jews and unfortunately of importance to the Christian religion, but of no importance to the untold millions in the other parts of the world; during this cycle came the Buddha, Shri Krishna and Sankaracharya; finally we have the age of Pisces the Fishes, which brought to us the Christ. The sequence of the Mysteries which each of the signs of the Zodiac embodies will be clarified for us by the Christ, because the public consciousness today demands something more definite and spiritually real than modern astrology, or all the pseudo-occultism so widely extant. In the era which lies ahead, after the reappearance of the Christ, hundreds of thousands of men and women everywhere will pass through some one or other of the great expansions of consciousness, but the mass reflection will be that of the renunciation (though this does not mean that the masses will by any means take the fourth initiation); they will renounce the materialistic standards which today control in every layer of the human family. One of the lessons to be learnt by humanity at the present time (a time which is the antechamber to the new age) is how few material things are really necessary to life and happiness. The lesson is not yet learnt. It is, however, essentially one of the values to be extracted out of this period of appalling deprivations through which men are every day passing. The real tragedy is that the Western Hemisphere, particularly the United States, will not share in this definite spiritual and vitalizing process; they are at present too selfish to permit it to happen.
You can see, therefore, that initiation is not a ceremonial procedure, or an accolade, conferred upon a successful aspirant; neither is it a penetration into the Mysteries - of which the mysteries of Masonry are, as yet, only the pictorial presentation - but is simply the result of experiencing "livingness" on all three levels of awareness (physical, emotional and mental) and - through that livingness - bringing into activity those registering and those recording cells within the brain substance which have hitherto not been susceptible to the higher impression. Through this expanding area of registration or, if you prefer it, through the development of a finer recording instrument or responsive apparatus, the mind is enabled to become the transmitter of higher values and of spiritual understanding. Thus the individual becomes aware of areas of divine existence and of states of consciousness which are always eternally present but which the individual man was constitutionally unable to contact or to register; neither the mind, nor its recording agent, the brain, were able to from the angle of their evolutionary development. When the searchlight of the mind is penetrating slowly into hitherto unrecognized aspects of the divine mind, when the magnetic qualities of the heart are awakening and becoming sensitively responsive to both the other aspects, then the man becomes able to function in the new unfolding realms of light, love and service. He is initiate. These are the mysteries with which the Christ will deal; His acknowledged Presence with us and the presence of His disciples will make possible a far more rapid development than would otherwise be the case. The stimulation of the objective Hierarchy will be increasingly potent and the Aquarian Age will see so many of the sons of men accepting the great Renunciation that world effort will be on the same scale as the mass education of mankind in the Piscean Age. Materialism as a mass principle will be rejected and the major spiritual values will assume greater control. The culmination of a civilization, with its special note, quality and gifts to posterity, is significant of the reflection of the spiritual intent, and (through its massed populations) of one of the initiations. History will some day be based and written upon the record of the initiatory growth of humanity; prior to that, we must have a history which is constructed around the development of humanity under the influences of great and fundamental ideas. That is the next historical presentation. The production of the culture of any given period is simply the reflection of the creative ability and the precise consciousness of the initiates of the time - those who knew they were initiate and were also conscious of admittance into direct relation with the Hierarchy. At present, we use neither of these two words, civilization and culture, in their rightful sense or with their true meaning. Civilization is the reflection in the mass of men of some particular cyclic influence, leading to an initiation. Culture is esoterically related to those within any era of civilization who specifically, precisely and in full waking consciousness, through self-initiated effort, penetrate into those inner realms of thought activity which we call the creative world. These are the realms which are responsible for the outer civilization. The reappearance of the Christ is indicative of a closer relation between the outer and the inner worlds of thought. 
The world of meaning and the world of experience will be obviously blended through the stimulation of the advent of the Hierarchy and of its Head, the Christ. A tremendous growth of understanding and of relationships will be the major result.
http://energyenhancement.org/Alice-Bailey/bk/reappearance/reap1037.html
Critics of homeschooling often choose a single disadvantage when citing this type of program's inferiority to public or private schooling: socialization. How can children who do not interact daily with others outside of the family learn the social skills necessary for introduction into an adult world? Socialization is a huge obstacle for parents to overcome when they choose to provide a homeschool education for their children, and this facet of learning cannot be overlooked as parent-teachers are preparing lessons in math, reading, and the sciences.

In response to the article above, I have received the following reply:

I appreciate your site. I arrived at it by doing some browsing on chicken raising, but as we homeschool, I couldn't resist checking your thoughts on the subject. In your article, I came across your statement: "Socialization is a huge obstacle for parents to overcome when they choose to provide a homeschool education for their children, and this facet of learning cannot be overlooked as parent-teachers are preparing lessons in math, reading, and the sciences." Without being contentious, I have to disagree that socialization for home-educated children is any greater an obstacle than the "proper" socialization of children under other educational models. There are many valid replies to the nay-saying opponents of home education, but in an attempt to address both issues with one response, I offer the following: Someone contending that homeschooling puts children at a disadvantage socially is making two assertions, neither of which is legitimate. The first is that public and private education (where children are grouped in greater student-to-teacher ratios with others of like age) will develop ONLY persons who will be contributing, patriotic, law-abiding citizens, void of ANY psychological, educational, and/or "abnormal" (read: outside the current "worldview") traits. The second assertion is that home education ONLY produces persons who will not possess the aforementioned traits. Both of these claims fail when observing the results of public/private education as well as those of homeschooled children. Sure, I will grant that there are many fine young men and women who are the product of public school. Additionally, there are some homeschooled children who do not fit the characteristics mentioned above. But that isn't the claim of those who attack socialization of homeschooled youth. The public/private institution does not guarantee good citizens, nor does homeschooling prevent them. Where do all the deviants, criminals, self-indulgent, perverse and wayward come from? They must be the homeschooled group. No, actually they are not. In fact, as we move toward relativism, postmodernism and individual rights, while removing accountability, morality and absolute truths, any social behavior becomes acceptable and it is considered "audacious" to call anything wrong. So really, the claim is self-refuting, but to a greater degree, we see that the normalization of deviance is what the "institutional" school advocates really want. Unacceptable behavior becomes normal, and anyone holding fast to "old-fashioned" beliefs and teaching children to challenge the current worldviews, develop convictions about life issues and believe in absolute and objective moral truth and law given by an absolute moral Law Giver is labeled antisocial or lacking socialization. Just Google "school problems" and you will see the challenges facing our public education system.
Ironically, teachers and administrators are all pleading from the classrooms for more parent involvement. Take away all the gun, drug, overcrowding and underfunding problems, and the schools will still be lacking parental involvement. No child stands a chance against the pressures of today's world without a loving parent devoted to instilling a sense of purpose, responsibility, accountability and values. "Train up a child in the way he should go; and when he is old, he will not depart from it." My goal in this is to encourage you to see that you don't have to worry about socialization. When you teach your children to think critically, see the value in others and themselves, respect the world we are stewards over, and act responsibly as participating members of society, they may stand out from the crowd, but isn't that the better way? May God continue to bless your work. Semper Fidelis (Always Faithful), Jon. P.S. I hope I didn't give you the impression that you were attacking homeschooling. I appreciate that you are giving it a forum for discussion on your website and believe you are addressing valid questions regarding the homeschool option. Having heard many coworkers bring the same socialization charge against homeschooling, I realize that they believe public/private school does give an advantage. In fact, being military, I have seen a real and legitimate bias against recruiting homeschooled children, because recruiters believe they are "disadvantaged" socially. Thanks for the reply, and if you believe it helps shed light on the discussion, feel free to add my comments.
http://www.self-sufficient-life.com/socialization/
The European Union General Data Protection Regulation Will Affect Companies in the United States and Canada. Many Still Aren't Ready to Comply

On May 25, 2018, the European Union member states will begin to enforce the General Data Protection Regulation (GDPR). The GDPR imposes sweeping data privacy, access, consent, transfer, processing and storage requirements on companies that offer goods or services to, or monitor the behavior of, people residing in the EU at the time the data is collected, whether or not the companies are located in the EU. The GDPR governs all aspects of the collection, use and processing of personal data. Personal data is any information related to a person or that can be used to directly or indirectly identify the person. Regulated information includes email addresses, banking information, medical information, IP addresses, photos and posts on social media. The penalties for non-compliance are severe. A tiered system categorizes the severity of violations, with corresponding fines up to a maximum of four percent of a company's global annual turnover or twenty million euros, whichever is greater. Yet despite the global impact of the regulations and the severity of the penalties for non-compliance, some estimates indicate that fewer than half of the companies in the United States and Canada that are subject to the GDPR are prepared to comply. The regulations governing privacy notices will be among the most significant for many companies. The GDPR requires that companies provide privacy notices to the individuals whose data is being collected (referred to as "data subjects"). The notices must contain the following information:
- The identity and the contact details of the company's data protection officer (where applicable);
- The purposes for processing the personal data as well as the legal basis for the processing, including the legitimate interests pursued by the company;
- The recipients or categories of recipients of the personal data, if any;
- The fact that the controller intends to transfer personal data to a third country and how it will ensure adequacy of protection;
- The period for which the personal data will be stored or, if that is not possible, the criteria used to determine that period;
- The existence of the right to request from the company access to and correction or erasure of personal data or restriction of processing concerning the data subject, or to object to processing, as well as the right to data portability;
- Where the processing is based on consent, the existence of the right to withdraw consent at any time, without affecting the lawfulness of processing based on consent before its withdrawal;
- The right to lodge a complaint with a supervisory authority;
- Whether the provision of personal data is a statutory or contractual requirement, or a requirement necessary to enter into a contract, as well as whether the data subject is obliged to provide the personal data and the possible consequences of failure to provide such data;
- The existence of automated decision-making, including profiling, meaningful information about the logic involved, and the significance and the envisaged consequences of such processing for the data subject.

Some uses of data under the GDPR will require consent from data subjects that goes far beyond the requirements of state or federal statutes in the United States or the Personal Information Protection and Electronic Documents Act in Canada.
- If consent is given in the context of disclosures that also concern other matters, like a website's general terms and conditions, the request for consent must be presented in a manner which is clearly distinguishable from the other matters, in an intelligible and easily accessible form, using clear and plain language. Any part of the consent that does not comply will not be binding.
- A data subject has the right to withdraw consent at any time. The withdrawal of consent will not affect the lawfulness of processing based on consent before the withdrawal. It must be as easy to withdraw consent as to give it.
- To determine whether consent was freely given, consideration will be made of whether the goods or services could be obtained without agreeing to the processing of personal data that is not necessary for the performance of the contract.

In addition, companies governed by the regulation must provide mechanisms for enforcement of "Data Subject Rights", which include the rights to:
- Correct inaccurate data;
- Erase data (the "right to be forgotten") under certain circumstances, including that the data is no longer necessary for the purpose for which it was collected or the data subject withdraws consent;
- Restrict processing to verify the accuracy of data;
- Data portability: companies have to give data subjects their data in a format which the individual can take to another company;
- Object where processing is based on public interests or legitimate interests, or for direct marketing.

There are a number of other aspects of the GDPR that will present challenges to the companies that fall within the regulation. For instance, the GDPR: (1) requires that data breaches which may pose a risk to individuals be reported to the governing authority within 72 hours and to affected individuals without undue delay; (2) contains requirements regulating how data is processed and transferred; (3) requires that organizations maintain documentation to demonstrate compliance with the GDPR, including data processing activities, purposes of processing, descriptions of categories of data, security measures and data flow maps; and (4) restricts exports of data to third parties outside of the EU by permitting export only where the recipient of the data is in a country that offers an adequate level of protection. As state and federal governments and agencies in the US and Canada continue to develop laws and regulations governing data privacy and cybersecurity, it is important to remember that nations across the world are dealing with these same issues. Companies that do business outside of the US and Canada need to be cognizant of these developing laws, and the GDPR is one such substantial and far-reaching regulation that international companies must take into account in this digital age.
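As a small illustration of how a compliance team might operationalize item (1) above, the 72-hour breach-reporting window, here is a sketch; the function and constant names are hypothetical, not taken from the regulation itself:

```python
from datetime import datetime, timedelta, timezone

# The GDPR requires notifying the supervisory authority within 72 hours of
# becoming aware of a reportable breach (Article 33).
NOTIFICATION_WINDOW = timedelta(hours=72)

def notification_deadline(breach_detected_at: datetime) -> datetime:
    """Latest time by which the supervisory authority must be notified."""
    return breach_detected_at + NOTIFICATION_WINDOW

detected = datetime(2018, 5, 25, 9, 0, tzinfo=timezone.utc)
print("Notify supervisory authority by:", notification_deadline(detected).isoformat())
```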
http://barclaydamon-testing.com/alerts/The-European-Union-General-Data-Protection-Regulation-Will-Affect-Companies-in-the-United-States-and-Canada--Many-Still-Arent-Ready-to-Comply-01-16-2018
Online Resources and Inspiring Stories in the Time of COVID-19

Religious leaders, communities and organizations all over the world are rapidly responding to the challenges of the COVID-19 pandemic. Leaders have moved much of their pastoral work online, and continue to provide guidance and solace to those suffering from anxiety, fear, isolation and bereavement. Faith-based organizations are stepping up to support communities in need and educate others about avoiding infection. Religious communities are taking action to ensure that there is adequate social provision for vulnerable groups. At KAICIID, we are reaching out to inspire people to take action and respond to the needs of their communities. We are showcasing stories of hope from diverse religious communities around the world and highlighting resources and training on how to use online tools to worship, spread messages of cooperation and engage in practical measures to alleviate suffering. Below you will find stories, testimonials and statements highlighting how religious leaders, friends of KAICIID and our Fellows are helping the vulnerable and calling for renewed solidarity in their communities. You will also find tools for e-worship, webinars, meeting tools and even reading recommendations. On social media, you can find these stories, testimonials and initiatives under our hashtag #ReligionsRespond. Community organizations, religious congregations, medical professionals, religious leaders, social activists, individuals, families, friendship circles and social networks all have a shared, global commitment to supporting one another in these difficult times.

Publications

Drawing on the experience of religious communities and actors in the field, KAICIID's new publication "Interfaith Dialogue in Action" provides practical recommendations for using dialogue to address the challenges of COVID-19. It also offers strategies for fostering open communication, upholding diversity and preventing further community disconnection, isolation and distrust.

Stories and Resources

Amid Violence and Uncertainty, KAICIID Pursues Peace in the Central African Republic: In the Central African Republic, COVID-19 has aggravated the country's humanitarian crisis ahead of an already tense election period. Lockdowns and increased security risks have prevented foreign aid workers from moving around the country to supply much-needed crisis relief. In response, KAICIID has collaborated with the Plateforme des Confessions Religieuses de Centrafrique to fill the gaps of international aid missions, working with Central African religious and civil society leaders to provide pandemic safety training to local communities and reach out to vulnerable groups.

"We are united, even in the midst of a pandemic": KAICIID Fellows Offer Health and Humanitarian Services During COVID-19: As COVID-19 has overburdened the world's health and humanitarian aid services, alumni from the KAICIID Fellows Programme have tapped into their interfaith networks, using interreligious dialogue to offer psychological support, combat discrimination and hate speech and provide much-needed welfare services.

"Solidarity is not a choice or act of charity.
It is a must": Addressing COVID-19 at the G20 Interfaith Forum Hundreds of religious leaders, policymakers and other experts gathered virtually on day two of the G20 Interfaith Forum to take stock of the ongoing COVID-19 health crisis and outline the crucial role that faith communities play in supporting global efforts to alleviate the suffering caused by the pandemic. "Leadership is not about knowing everything, it's about knowing who to connect to": G20 Interfaith Leaders Target COVID-19 Relief Religious leaders and policymakers from across the globe came together at the G20 Interfaith Forum to explore how to support and mobilise faith communities during this protracted period of crisis. At that global platform, they discussed the biggest problems that face humanity —proposing solutions and looking for ways to achieve the SDGs through collective action From Zoom to the Globe: Online Dialogue Series Fosters Dialogue between Saudi Arabia and Beyond After the COVID-19 pandemic hit in early 2020, KAICIID Fellows launched a new dialogue project called “Ladies for Intercultural Dialogue”. The project brought together Arabic speakers, both men and women, throughout Saudi Arabia to interact in dialogue sessions on important regional and global topics such as the importance of interreligious dialogue, implementation of United Nations Agenda 2030 and harnessing social media to promote peace. The G20 Interfaith Forum, Education and COVID-19: Inclusive and Caring Education as a "Vaccine" for the Pandemic of Social Inequality" The COVID-19 crisis has unveiled a hidden pandemic that has been consuming a huge part of humanity for many decades – discrimination and exclusion of those who are disadvantaged due to gender, ethnicity, religion, age, ability, socio-economic status, language, beliefs, and other backgrounds. What ‘vaccine’ is necessary to help bring to an end of this hidden pandemic? The Importance of Being a Good Listener During the COVID-19 pandemic, Rabbi Bater´s pastoral counselling moved from the physical to the virtual world. “I now use the phone 24 hours a day because people need to be in contact. We are isolated with our families. And if you don't have any family, you are completely alone,” he said. The G20 Interfaith Forum and COVID-19: Our Unique Opportunities at this Unprecedetend Moment Prof. Victoria Wyszynski Thoresen is the UNESCO Chair for Education about Sustainable Lifestyles at The Collaborative Learning Centre for Sustainable Development at Inland Norway University. She writes about the opportunities for policymakers and religious leaders during the COVID-19 pandemic. Uniting Faith Leaders and Policymakers in Iraq against COVID-19 As part of KAICIID’s efforts to help actively address the pandemic in 60 countries around the world, 2019 Fellow Barzan Baran Rashid brought together the Ministry of Endowment and Religious Affairs and the Ministry of Health in Kurdistan to raise awareness of the dangers of COVID-19. Partnerships between policymakers and religious communities are crucial in tackling COVID-19 in a region where more than a million people gather in most Kurdistan mosques for Friday prayers. Faith Leaders Step in to Provide Humane Treatment of Incarcerated Individuals During COVID-19 Nearly 11 million prisoners and other detainees are being held worldwide in often overcrowded and unsanitary conditions. The widespread transmission of COVID-19 within jails and detention centres means that some incarcerated individuals are facing a literal death sentence. 
Faith-based organizations and religious institutions are stepping in to meet the need – providing health and wellness services, spaces to quarantine, pastoral care and resocialisation programmes, and donations such as soap and masks. During COVID-19 Religious Communities Leverage the Internet to Foster Greater Dialogue In response to the millions of people at home during COVID-19 lockdowns, KAICIID Fellows are helping religious communities leverage the power of social media and the internet to foster greater digital dialogue. Fellows Target Initiatives at Hate Speech Prevention and COVID-19 Response KAICIID Fellows from over 60 countries across the Arab Region, Asia and Africa have launched targeted initiatives, working with faith leaders to raise awareness of e-worship and fight hate speech. “We Need Each Other to Survive”: How KAICIID Platforms, Fellows, Are Responding to COVID-19 In response to the overwhelming need brought on by the pandemic, KAICIID developed a plan to address the crisis by increasing and re-directing resources to support interreligious leaders like the KAICIID Platforms and Fellows around the globe. A Faith-Based Call to Protect the Vulnerable Due to the fierce economic, social and human toll of the current pandemic, members of KAICIID’s multireligious Board of Directors met virtually to discuss the role of religious leaders in protecting vulnerable communities during COVID-19. "Violence against women is a pandemic without borders" According to the United Nations, more than 243 million women and girls globally have been subjected to physical or sexual violence in the last 12 months. This number is expected to increase throughout the COVID-19 pandemic, as security, health and financial concerns heighten tensions exacerbated by lockdown and isolation measures. Ross Tutin: "Scouts have been role models for interreligious dialogue" Ross Tutin is a Scout from Australia who serves as the Leader of the Spiritual Development Unit and as a Global Consultant for the World Organisation of the Scout Movement (WOSM). KAICIID spoke to Ross about the COVID-19 pandemic, the cooperation between the Scouts and KAICIID and about the importance of spirituality within the Scout Movement. Overcoming the Digital Divide: Dialogue in the Age of COVID-19 KAICIID staff and partners are finding inventive ways to bring dialogue online, building a virtual community based on trust and safety and discovering that online dialogue can foster transformative relationships across differences. Young Leaders in the Arab Region Take on COVID-19 through Social Media Graduates of KAICIID’s Social Media as a Space for Dialogue Programme have launched a series of online campaigns in response to a rise in hate speech during COVID-19, working to combat discrimination and misinformation, raise awareness and source funding and supplies for humanitarian relief. Religious Holidays under Lockdown: Praying and Celebrating in Unusual Ways As governments around the world have imposed lockdowns and implemented social distancing measures to fight the COVID-19 pandemic, important religious holidays like Çarşema Sor, Easter, Passover, Ramadan and Vesak are being celebrated online. Young people are formulating an active global response to COVID-19 through dialogue and cooperation with other religions. Their responses have been as original as they have been effective – countering fake news and hate speech on social media, providing relief services and raising awareness in their local communities. 
Young People Launch Virtual Peacebuilding Project in Nigeria Through the support of KAICIID, youth leaders in Nigeria have launched a virtual peacebuilding project during COVID-19 with two cohorts in Abuja and Lagos. The online course aims to educate young people on the drivers of conflict, how to resolve violence through constructive dialogue, and how to foster religious tolerance. Father Andreas Kaiser: "If We Can Give People Hope and Stability, We have Fulfilled our Mission" As COVID-19 forces people to stay home, Father Andreas Kaiser of the Ober St. Veit Parish in Vienna moves worship services online and finds innovative ways to keep his parishioners connected. From Virtual Worship to Solidarity by Screen: The Best Tools to Keep You Connected Religious institutions and faith communities are turning to conference apps and streaming services to provide virtual worship, online classes, counselling and support. Check out some of our favourite digital tools to help you stay connected during COVID-19. KAICIID Fellows Take Part in Virtual Conference on Efforts to Counter COVID-19’s Global Effects Over 120 KAICIID Fellows from all over the world took part in a virtual conference on COVID-19 this week, providing a vivid picture of how the pandemic is affecting their respective communities and outlining their efforts to mitigate its effects. The conference coincides with efforts by KAICIID to identify and support, with funding where necessary, initiatives by the Fellows related to countering the effects of COVID-19. So far 26 projects have been identified, ranging from social media campaigns to combat disinformation linked to the disease, to setting up an online database of interreligious initiatives, to care programmes for the vulnerable in isolated areas. KAICIID Expands Learn at Home Offerings Stuck at home during COVID-19? KAICIID has launched a series of e-learning tools including webinars and online courses on interreligious dialogue to help you learn from home. These digital offerings are designed to support educators, policymakers, religious communities, IRD practitioners and researchers. IPDC Calls for Multireligious Cooperation The KAICIID-supported Interreligious Platform for Dialogue and Cooperation in the Arab Region urges religious institutions and communities to stand in solidarity and raise awareness about COVID-19. KAICIID Launches Virtual Chain of Hope KAICIID has launched a new digital campaign called Virtual Chain of Hope which invites participants to express the value of human interconnectedness during COVID-19 on social media. KAICIID and WOSM bring Dialogue Tools to 4,000 Scouts Online KAICIID and WOSM have taken dialogue online, hosting a special edition of Jamboree on the Internet (JOTI) which connects young people during COVID-19. The digital sessions encouraged youth to avoid loneliness and isolation by actively reaching out in friendship to one another. KAICIID Joins Arigatou International in Online Interfaith Prayer for Children KAICIID joined Arigatou International, senior religious leaders and children from diverse religions for an interfaith prayer and a message of unity for the world’s youngest citizens. The event is part of a larger campaign which focuses on the well-being of children during COVID-19. 
Standing Shoulder to Shoulder: Religious Leaders from the Arab Region Call for Solidarity As part of KAICIID's #ReligionsRespond campaign, senior Muslim and Christian leaders from the Arab Region, as well as KAICIID Fellows, have drafted 11 articles tackling topics such as the social impact of COVID-19, reducing the pandemic's spread, and humanitarian responsibilities for vulnerable communities. Interreligious Communities Come Together by Staying Apart: Each in their Own Way #PrayforHumanity As COVID-19 sweeps the globe, people following diverse religions around the world answer the Higher Committee of Human Fraternity's call to join together on May 14 for a day of reflection, each from their own home, within the parameters of their own faith, in their own way. This act of interreligious solidarity comes at a time of social distancing and is intended as a sign of global solidarity and hope when collective worship is actively discouraged. KAICIID Fellow Leads Meditation for Dialogue, Helps Families in Uganda Cope with COVID-19 Lockdown Since worldwide lockdowns have been imposed to combat the spread of COVID-19, there has been an alarming rise in domestic violence everywhere. KAICIID Fellow Nageeba Hassan is trying to support families in her community by offering live online interreligious meditation guidance for parents and their children during the lockdown. COVID-19 and Religion: New Ways to Worship and Serve Those in Need The KAICIID International Fellows are leading the way on the COVID-19 response, presenting practical solutions and guidelines to reach vulnerable communities, provide pastoral care and offer new tools for worship. They also propose recommendations for responsible ways to conduct community outreach while practicing social distancing. International Organizations Partner with Religious Organizations to Reach Vulnerable Communities during COVID-19 Religious leaders and institutions are standing up for vulnerable communities, actively partnering with policymakers and intergovernmental organizations to mitigate the social, economic and political impacts of COVID-19. Call for Proposals Religious communities are on the front lines of responses to the COVID-19 pandemic, preventing the spread of infection and supporting the most vulnerable groups in society. The International Dialogue Centre (KAICIID) is offering small grants for short-term initiatives for organizations and individuals in the Arab Region, Myanmar and Nigeria responding to the pandemic, particularly through an interreligious approach. KAICIID is calling for proposals for projects that enhance the role of interreligious dialogue in responding to COVID-19 and promoting public health. Webinars Protecting the Vulnerable: A Multireligious Call for Solidarity and Action How have religious leaders acted upon their responsibilities to intervene in response to the COVID-19 pandemic? What should religious leaders be saying to policymakers and the public, as the disease progresses through parts of the globe less empowered to deal with its social and economic effects? What are religious communities doing to modify their observance practices as a response to the COVID-19 crisis? Bringing together expert speakers from around the world, this webinar examines moving community gatherings online, praying for relief and deepening engagement with communities through digital means. COVID-19 and Religion Part 2 How are religious communities responding to the challenge of COVID-19? 
This webinar examines diverse initiatives on relief work, providing mediation and counselling, as well as useful tools and strategies that faith leaders can use during uncertain times.
https://www.kaiciid.org/news-events/news/online-resources-and-inspiring-stories-time-covid-19
decade in years? 36 What is thirty-one halves of a microgram in nanograms? 15500 How many centimeters are there in 103.9959 meters? 10399.59 How many minutes are there in 23363.5 hours? 1401810 What is 0.5303431 nanometers in kilometers? 0.0000000000005303431 What is 7/2 of a microgram in nanograms? 3500 How many milliseconds are there in 0.0732991ns? 0.0000000732991 What is seven quarters of a litre in millilitres? 1750 What is 5/8 of a kilometer in centimeters? 62500 What is 3/8 of a week in hours? 63 Convert 0.9215109ug to kilograms. 0.0000000009215109 Convert 9.85159 kilometers to millimeters. 9851590 What is 1/4 of a minute in seconds? 15 What is 1146457.8ms in days? 0.0132691875 Convert 13.391304 months to millennia. 0.001115942 What is one fifth of a milligram in micrograms? 200 What is thirteen quarters of a microgram in nanograms? 3250 What is 1/5 of a century in months? 240 How many months are there in fifty-five sixths of a year? 110 How many litres are there in 397.7837ml? 0.3977837 Convert 45.01643l to millilitres. 45016.43 Convert 3.609223 nanograms to kilograms. 0.000000000003609223 How many micrometers are there in three eighths of a millimeter? 375 What is fifty-seven fifths of a millennium in decades? 1140 What is 49486.15 millennia in decades? 4948615 Convert 74255.47 millimeters to nanometers. 74255470000 How many millilitres are there in twenty-five halves of a litre? 12500 How many millilitres are there in 27/4 of a litre? 6750 How many grams are there in 1/16 of a tonne? 62500 How many months are there in three eighths of a millennium? 4500 What is 45.04896ms in seconds? 0.04504896 Convert 356.4372 years to centuries. 3.564372 Convert 1.531421 nanograms to micrograms. 0.001531421 How many millilitres are there in 3/5 of a litre? 600 Convert 6744.744 centuries to years. 674474.4 How many nanoseconds are there in 47118.63 days? 4071049632000000000 What is forty-eight fifths of a litre in millilitres? 9600 How many millilitres are there in 46/5 of a litre? 9200 Convert 1652.7483 months to decades. 13.7729025 How many micrograms are there in 4352.057 nanograms? 4.352057 Convert 3.245877ml to litres. 0.003245877 How many millilitres are there in 3/25 of a litre? 120 Convert 0.4014015ng to kilograms. 0.0000000000004014015 Convert 92.6601 minutes to milliseconds. 5559606 What is 31/5 of a centimeter in millimeters? 62 How many millilitres are there in twenty-one fifths of a litre? 4200 How many seconds are there in 1/27 of a day? 3200 What is 3/32 of a millimeter in nanometers? 93750 What is five quarters of a meter in millimeters? 1250 How many centimeters are there in 1/20 of a meter? 5 How many decades are there in 33/5 of a century? 66 What is 1/5 of a millennium in months? 2400 What is twenty-seven quarters of a decade in months? 810 How many years are there in six fifths of a century? 120 What is 8.743962ml in litres? 0.008743962 Convert 0.515724 decades to centuries. 0.0515724 What is 9.179085 weeks in days? 64.253595 How many nanometers are there in three fifths of a micrometer? 600 How many milliseconds are there in 1/24 of a minute? 2500 How many years are there in three eighths of a millennium? 375 What is 45.54694 hours in nanoseconds? 163968984000000 How many millilitres are there in thirty-one fifths of a litre? 6200 How many nanoseconds are there in 2.12323us? 2123.23 How many decades are there in 3168.611 years? 316.8611 What is 15074.76 centuries in millennia? 1507.476 What is 507.1118l in millilitres? 
507111.8 How many months are there in thirty-seven halves of a decade? 2220 What is 8/25 of a meter in centimeters? 32 How many millilitres are there in 72054.39 litres? 72054390 What is 6/25 of a millennium in months? 2880 How many microseconds are there in 60.84257 nanoseconds? 0.06084257 How many grams are there in 1/5 of a kilogram? 200 How many millilitres are there in one quarter of a litre? 250 What is 0.6588953ml in litres? 0.0006588953 How many milligrams are there in 53943.64ug? 53.94364 How many milliseconds are there in 3/5 of a minute? 36000 How many nanograms are there in 32.76624 tonnes? 32766240000000000 How many millilitres are there in sixty-nine halves of a litre? 34500 Convert 333311.5um to kilometers. 0.0003333115 How many months are there in 5/6 of a year? 10 How many decades are there in fifty-seven fifths of a century? 114 Convert 283.8883m to kilometers. 0.2838883 What is 5/8 of a millimeter in micrometers? 625 What is eleven quarters of an hour in minutes? 165 How many millilitres are there in 5/8 of a litre? 625 How many micrograms are there in one quarter of a milligram? 250 How many micrograms are there in 5/8 of a milligram? 625 Convert 5.682079g to nanograms. 5682079000 How many millilitres are there in 3/8 of a litre? 375 How many millilitres are there in 5/4 of a litre? 1250 Convert 2365663.86 seconds to weeks. 3.91148125 Convert 27.30997 nanograms to grams. 0.00000002730997 Convert 0.7734918 microseconds to milliseconds. 0.0007734918 How many meters are there in 7/4 of a kilometer? 1750 What is 71/5 of a kilometer in meters? 14200 What is 41337.42l in millilitres? 41337420 How many centimeters are there in fifteen sixteenths of a kilometer? 93750 How many centimeters are there in 0.6839165 kilometers? 68391.65 What is seven halves of a century in years? 350 What is fifty-one halves of a millennium in decades? 2550 How many meters are there in 1/4 of a kilometer? 250 How many micrometers are there in 7038.594km? 7038594000000 Convert 20581.58kg to tonnes. 20.58158 How many millilitres are there in 3/25 of a litre? 120 What is 3/8 of a kilometer in centimeters? 37500 How many millennia are there in 14.91101 decades? 0.1491101 What is 469.6559 millilitres in litres? 0.4696559 How many millilitres are there in one fifth of a litre? 200 What is twenty-five eighths of a milligram in micrograms? 3125 Convert 43469.54 hours to microseconds. 156490344000000 What is one quarter of a decade in months? 30 How many centimeters are there in 5/8 of a kilometer? 62500 What is 0.1828033 milligrams in kilograms? 0.0000001828033 How many centimeters are there in 18/5 of a meter? 360 How many micrometers are there in five eighths of a millimeter? 625 What is one quarter of an hour in seconds? 900 What is 21/5 of a gram in milligrams? 4200 How many months are there in 37/2 of a decade? 2220 What is 1.648206 decades in months? 197.78472 How many millennia are there in 1513145.1 months? 126.095425 What is 10589.25 millennia in years? 10589250 What is 5/6 of a decade in months? 100 How many seconds are there in 45075.4 days? 3894514560 What is 694266.8 weeks in minutes? 6998209344 How many years are there in 40344.18 millennia? 40344180 What is 220855.02 months in millennia? 18.404585 How many seconds are there in 3/25 of a day? 10368 What is five eighths of a microgram in nanograms? 625 How many nanograms are there in 33/5 of a microgram? 6600 How many minutes are there in twenty-nine fifths of an hour? 348 Convert 647.1589 hours to seconds. 
2329772.04 How many years are there in 1179.312 millennia? 1179312 How many minutes are there in 13/5 of a day? 3744 Convert 8233.55 grams to nanograms. 8233550000000 How many months are there in 28/3 of a decade? 1120 How many grams are there in 1/5 of a kilogram? 200 How many millimeters are there in thirteen fifths of a centimeter? 26 How many millilitres are there in fifty-one fifths of a litre? 10200 What is 7/8 of a kilometer in meters? 875 What is 71.10822 millilitres in litres? 0.07110822 What is 1157830.2ms in days? 0.0134008125 What is 722.3627g in kilograms? 0.7223627 What is 8726.572 millilitres in litres? 8.726572 What is 974.3103 centuries in months? 1169172.36 How many millennia are there in 58693.38 centuries? 5869.338 Convert 0.9050128km to centimeters. 90501.28 Convert 7.393283ml to litres. 0.007393283 What is 63688.7 milligrams in tonnes? 0.0000636887 What is 90.55709 millilitres in litres? 0.09055709 What is 6718.942ug in milligrams? 6.718942 How many grams are there in sixty-four fifths of a kilogram? 12800 What is 0.1925823 weeks in microseconds? 116473775040 Convert 428.509g to micrograms. 428509000 What is 13/8 of a centimeter in microme
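Every exercise in the run above reduces to multiplying by a ratio of two unit scale factors. As a quick illustration (not part of the original exercises; the unit table and function names are hypothetical), a few of the listed answers can be reproduced exactly with Python's fractions module:

from fractions import Fraction

# Scale of each length unit in metres, kept exact with Fraction.
SCALE = {
    "nm": Fraction(1, 10**9),
    "um": Fraction(1, 10**6),
    "mm": Fraction(1, 1000),
    "cm": Fraction(1, 100),
    "m":  Fraction(1),
    "km": Fraction(1000),
}

def convert(value, src, dst):
    # Convert `value` from unit `src` to unit `dst` of the same dimension.
    return value * SCALE[src] / SCALE[dst]

# Reproducing three of the listed answers exactly:
assert convert(Fraction(5, 8), "km", "cm") == 62500   # 5/8 of a kilometer in centimeters
assert convert(Fraction(3, 32), "mm", "nm") == 93750  # 3/32 of a millimeter in nanometers
assert convert(Fraction(31, 5), "cm", "mm") == 62     # 31/5 of a centimeter in millimeters

The same pattern extends to mass, time and volume by adding a scale table per dimension.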
The idea of performing medical examinations and evaluations using telecommunication networks is not new. Shortly after the invention of the telephone, attempts were made to transmit heart and lung sounds to a trained expert who could assess the state of the organs. However, poor transmission systems caused these attempts to fail. We list below some of the historic landmarks in the evolution of telemedicine: 1906 - ECG transmission: ECG transmission over telephone lines was achieved in 1906 by Einthoven, the “father of electrocardiography”. 1920s - Help for ships: during this time, radios were used to link physicians at shore stations with ships at sea that had medical emergencies. 1955 - Telepsychiatry: the Nebraska Psychiatric Institute was one of the first facilities in the USA to have closed-circuit television, in 1955. In 1971 the Nebraska Medical Center was linked with the Omaha Veterans Administration Hospital and VA facilities in two other towns. 1967 - Massachusetts General Hospital: this station was established in 1967 to provide occupational health services to airport employees and to deliver emergency care and medical attention to travelers. 1970s - Satellite telemedicine via the ATS-6 satellite: paramedics in remote Alaskan and Canadian villages were linked with hospitals in distant towns or cities. An important role in the early development of telemedicine was played by the National Aeronautics and Space Administration (NASA). NASA's efforts in telemedicine began in the early 1960s when humans began flying in space, and the agency has been a pioneer in telemedicine research and applications ever since. Since the first days of suborbital flight, telemedicine has been transformed by the increasing complexity of space operations. NASA's first telemedicine applications involved monitoring the physiological parameters of astronauts sent into space (telemetry), along with parameters of the cabin and external environments. These early efforts, together with the development of satellite communication, led to the development of telemedicine and of various equipment used in health care today. Another universally recognized promoter of telemedicine is the U.S. Defense Department, which is interested in new remote-medicine developments because its combat missions take place mainly far from the national territory. Telecardiology: Telecardiology is the practice of cardiology using telecommunications, and as such is a new, alternative and cost-effective means of providing cardiac care. Classification: Automated classification tools such as decision trees have been shown to be very effective for distinguishing and characterizing very large volumes of data. They assign items to one of a set of predefined classes of objects based on a set of observed features. Classifiers can be learned automatically from a set of examples through supervised learning. Classification rules are rules that discriminate between different partitions of a database based on various attributes within the database. The partitions of the database are based on an attribute called the classification label. ECG Signal: The electrocardiogram (ECG) is the recording on the body surface of the electrical activity of the heart. Databases: A structured repository for data, consisting of a collection of data and their associated data model, and usually stored on a computer system. 
The existence of a regular and formal indexing structure permits rapid retrieval of individual elements of the database. Artificial Intelligence: Artificial intelligence (AI) is the mimicking of human thought and cognitive processes to solve complex problems automatically. AI uses techniques for writing computer code to represent and manipulate knowledge. Different techniques mimic the different ways that people think and reason. AI applications can be either stand-alone software, such as decision support software, or embedded within larger software or hardware systems. Automated ECG Analysis: Consists of a series of procedures used to produce useful clinical information that helps the physician reach a diagnosis faster and more safely concerning the pathophysiological condition of the patient's heart. Compression: Use of a mathematical algorithm to reduce the size of data, audio, or video transmissions for greater speed or use of lower bandwidths. Telemedicine: Telemedicine is thought of as long-distance clinical health care, including practitioner-to-patient meetings, practitioner-to-practitioner discussions and exchange of clinical information via technology. Telehealth: Telehealth relates to the use of telecommunication equipment and computing technology to support long-distance clinical health care, patient and professional health-related education, public health concerns and health care administration.
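To make the classification definition above concrete, here is a minimal supervised-learning sketch in the spirit the chapter describes: a decision tree learned automatically from labelled examples and then used to assign a new item to a predefined class. The feature choice, data and labels are illustrative assumptions, not taken from the chapter.

from sklearn.tree import DecisionTreeClassifier

# Hypothetical training examples: [heart rate in bpm, QRS duration in ms]
# extracted from ECG recordings, each labelled with a predefined class.
X = [[62, 95], [70, 100], [115, 140], [48, 90], [130, 150], [75, 98]]
y = ["normal", "normal", "abnormal", "normal", "abnormal", "normal"]

# Learn the classifier automatically from the labelled examples.
clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Assign a new, unseen item to one of the predefined classes.
print(clf.predict([[110, 145]]))  # -> ['abnormal']

In a real automated ECG analysis pipeline the observed features would come from signal-processing steps (QRS detection, interval measurement) rather than being typed in by hand.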
https://www.igi-global.com/chapter/electrocardiographic-signal-processing-applications-telemedicine/40649
"But there is also a fourth scenario, in which China’s leaders propel the country forward, establishing the rule of law and regulatory structures that better reflect the country’s diverse interests. Beijing would also have to expand its sources of legitimacy beyond growth, materialism, and global status, by building institutions anchored in genuine popular support. This would not necessarily mean transitioning to a full democracy, but it would mean adopting its features: local political participation, official transparency, more independent judicial and anti-corruption bodies, an engaged civil society, institutional checks on executive power, and legislative and civil institutions to channel the country’s diverse interests." I am certain China will evolve as described above in Foreign Affairs magazine, however China will not be a wall flower and let the western world push it around. The rest of this essay gives you my rational for reaching this conclusion. China has a lot of pride and history. China has a long history and long memories. The Chinese are a patient people including their leaders. They remember well the "century of humiliation" dating from the mid-19th century and ending after World War II in the mid 1940's. Much of this humiliation came at the hands of the western powers. They will not forget. The question is can they forgive and become a true trading partner. The treaties forced upon the Chinese demanded that Britain and other sovereign nations be looked upon by China as equals , thus shattering the centuries old thought that China was the Middle Kingdom and the rest of the world was lower in culture and tradition. This was very humiliating and distasteful to the Chinese people. Other events that were part of the century of humiliation included unequal treaties where Britan by force imposed one-sided agreements on China, the Taiping Rebellion, which was a civil war from 1850 to 1964 where over 20 million Chinese died, the sacking by the British of the sacred old Summer Palace, the Sino-French War, the First Sino-Japanese War, the Twenty-One Demands by Japan in the early 20th century, and the Second Sino-Japanese War from 1937 to 1945. In this period, China lost all the wars it fought and often forced to give major concessions to the great powers in later treaties. Mao Zedong declared the end of the "century of humiliation" in the aftermath of World War II, with the establishment of the People's Republic of China in 1949. It is worthwhile to note that well conducting negotiations with the United States in the 1970's one of the major stumbling blocks was Taiwan, an island that was part of China before the Korean War and China still claims it as its territory. This issue was overcome by Mao saying that Taiwan was probably best left under the care of the Americans for now, but sometime in the future, perhaps 50 or 100 years from then, China will demand its return. The United States will need to adjust to living in a world of equals.
http://www.freeourfreemarkets.org/2014/09/lost-century-for-china.html
Today's post is written by Elissa Gathman, Corporate Services Fellow at Points of Light. During the 2013 Conference on Volunteering and Service, I had the pleasure of attending “Learnings From The Civic 50: America’s Most Community-Minded Companies,” a workshop that featured three leaders from the top 50 companies of 2012. This workshop drove home the value of incorporating social responsibility into corporate strategy, and of companywide involvement in addressing community needs. It was inspiring to hear these leaders discuss the key trends they are most excited about in the world of corporate social responsibility (CSR). Susan Portugal, senior vice president and CSR philanthropy director of Bank of America, highlighted the change in partnerships between companies and nonprofits, which now go deeper than just dollars. Today, companies and nonprofits are establishing longer-term, evolving relationships that are mutually beneficial. Janine Rouson, director of global volunteerism and corporate citizenship of GE, focused on the direct tie-in between volunteerism, leadership development and positive company image. “No longer should volunteerism be looked at as something that’s separate and to the side,” Rouson said, as she pointed to the value of employee volunteerism for professional development, character building and community connectivity. Tracy Moore, group manager of community relations for Target, said she was excited about the shift of community-minded companies toward embracing a more impact-oriented mindset. She explained that leaders are now asking deeper questions about the depth and social impact of their company’s CSR programs, and noted that companies that embrace this trend attract millennials, whom she called "a generation of people who want to contribute. They want to work for a company that does no harm.” To round out the workshop, the opening of the 2013 Civic 50 survey was announced. This year, The Civic 50 – an initiative that identifies the country's 50 most community-minded companies – is partnering with True Impact, and will be focusing on key elements of community engagement: civic commitment, strategic resource allocation, business integration, company policies and measurement. All S&P 500 companies are eligible to take the survey, which will remain open through August 16. The 50 most community-minded companies will then be featured by Bloomberg in early December 2013. To take the survey, visit The Civic 50 website: civic50.org.
http://pointsoflight.org/blog/2013/08/01/learnings-civic-50-america%E2%80%99s-most-community-minded-companies
Harris and Krueger point out that the current legal standard for distinguishing between "employees" and "independent contractors" involves nine different distinctions – and these distinctions are made in different ways in the Fair Labor Standards Act, the Employee Retirement Income Security Act (ERISA), in tax law, and in various court decisions about all of the above. The nine distinctions are: "Role of work: Is the work performed integral to the employer’s business? Skills involved: Is the work not necessarily dependent on special skills? Investment: Does the employer provide the necessary tools and/or equipment and bear the risk of loss from those investments? Independent Business Judgment: Has the worker withdrawn from the competitive market to work for the employer? Duration: Does the worker have a permanent or indefinite relationship with the employer? Control: Does the employer set pay amount, work hours, and manner in which work is performed? Benefits: Does the worker receive insurance, pension plan, sick days, or other benefits that suggest an employment relationship? Method of Payment: Does the worker receive a guaranteed wage or salary as opposed to a fee per task? Intent: Do the parties believe they have created an employer–employee relationship?" Harris and Krueger define their proposed legal category of "independent workers" in this way: "Independent workers operate in a triangular relationship: they provide services to customers identified with the help of intermediaries. The intermediaries create a communications channel, typically an “app,” that customers use to identify themselves as needing a service—for example, a car ride, landscaping services, or food delivery. (An intermediary need not utilize the Internet to match independent workers and customers …) … The intermediary does not assign the customer to the independent worker; rather, the independent worker chooses or declines to serve the customer (sometimes within broadly defined limits). However, the intermediary may set certain threshold requirements for independent workers who are eligible to use its app, such as criminal background checks. The intermediary may also set the price (or at least an upper bound on the price) for the service provided by independent workers through its app. But the intermediary exercises no further control over how and whether a particular independent worker will serve a particular customer. The intermediary is typically rewarded for its services with a predetermined percentage of the fee paid by the customer to the independent worker. … The independent worker chooses when and whether to work at all. The relationship can be fleeting, occasional, or constant, at the discretion of the independent worker." They estimate there are about 600,000 "independent workers" – about 0.4% of US employment – working with online intermediaries in the gig economy. This number seems to be growing rapidly. 
They also mention a number of existing jobs that don't operate through online apps but seem to share many of the traits of "independent workers," discussing how traditional taxi drivers (as opposed to Uber and Lyft drivers), temporary staffing agency employees, labor contractors, members who secure jobs through union hiring halls, outside sales employees, and (perhaps) direct sales employees occupy the points of triangles with other economic actors. Here's a quick summary (with more discussion in the paper) of the Harris-Krueger proposal for how "independent workers" would be treated under law: In our proposal, independent workers — regardless of whether they work through an online or offline intermediary — would qualify for many, although not all, of the benefits and protections that employees receive, including the freedom to organize and collectively bargain, civil rights protections, tax withholding, and employer contributions for payroll taxes. Because it is conceptually impossible to attribute their work hours to any single intermediary, however, independent workers would not qualify for hours-based benefits, including overtime or minimum wage requirements. Further, because independent workers would rarely, if ever, qualify for unemployment insurance benefits given the discretion they have to choose whether to work through an intermediary, they would not be covered by the program or be required to contribute taxes to fund that program. However, intermediaries would be permitted to pool independent workers for purposes of purchasing and providing insurance and other benefits at lower cost and higher quality without the risk that their relationship will be transformed into an employment relationship. Like any compromise choice, a new legal category like the Harris-Krueger proposal for "independent workers" is going to be somewhat unpopular with many parties. Many companies would prefer to treat their gig workers as independent contractors, to whom they have no additional legal responsibility. Some gig workers would prefer to have both their existing freedom of action and the legal protections of employees. To resolve these issues, we can either go with the full-employment-for-lawyers approach and litigate the issues over and over in every new context in which they arise – an approach that is already underway – or we can settle on a compromise position. I don't have a strong opinion on whether the Harris-Krueger proposal for the legal status of "independent workers" is the right compromise. But it almost certainly beats smothering the gig economy in red tape and legal briefs.
https://conversableeconomist.com/2015/12/09/new-rules-for-workers-in-the-gig-economy/
BIOLOGICAL ASPECTS OF JAK/STAT SIGNALING IN BCR-ABL-NEGATIVE MYELOPROLIFERATIVE NEOPLASMS: Myeloproliferative disorders, more recently named myeloproliferative neoplasms (MPN), comprise several clinical entities: chronic myeloid leukemia (CML), the classical MPN including polycythemia vera (PV), essential thrombocythemia (ET) and primary myelofibrosis (PMF), and atypical and unclassifiable MPN. The term MPN is mostly used for the classical BCR-ABL-negative myeloproliferative disorders (ET, PV, PMF). These are clonal diseases resulting from the transformation of a hematopoietic stem cell and leading to an abnormal production of myeloid cells. The genetic defects responsible for the myeloproliferative abnormalities are called "driver" mutations, and all result in deregulation of the cytokine receptor/JAK2/STAT axis. Among them, mutations in JAK2, the thrombopoietin receptor (MPL) and calreticulin (CALR) are found in around 90% of cases. These driver MPN mutations can be associated with other driver mutations also found in other hematological malignancies, especially in PMF. These are chronic diseases whose major risks are thrombosis, hemorrhage and cytopenias (in PMF), long-term progression to myelofibrosis, and transformation to leukemia. Most recent therapeutic approaches have focused on targeting the JAK2 signaling pathway, directly with JAK2 inhibitors or indirectly. Interferon-α achieves hematologic and molecular remission in some patients.
Throughout this application various publications are referred to in parentheses. Full citations for these references may be found at the end of the specification. The disclosures of these publications are hereby incorporated by reference in their entirety into the subject application to more fully describe the art to which the subject invention pertains. The nervous system developed over evolutionary time to optimize survival in response to signals from the internal and external environment. In mammals, chemical, mechanical, and electromagnetic signals are sensed by neurons, which propagate action potentials to the central nervous system (CNS). These comprise the afferent arcs of reflex circuits that maintain the body's homeostasis. This fundamental principle of sensing environmental changes in order to mount appropriate reflex responses is central to the physiological mechanisms that allow for not only homeostasis but adaptability and species survival. Thirty years ago, it was discovered that products of the immune system, including cytokines and other mediators, could be sensed by the nervous system, prompting the suggestion that the immune system could serve as a functional sensory modality (1). In this context, foreign invaders, microbial products, and other exogenous immune stimulators culminate in the release of cytokines. These immune products can in turn interact with the peripheral nervous system and the CNS to elicit neurophysiological responses; however, the question remains whether the sensory neural signals are encoded in cytokine-specific patterns. There has been an expanding body of knowledge delineating the extensive interface between the nervous and immune systems. Similar to the neural control of the body's general physiological and metabolic states, systemic inflammatory pathways can be modulated by the CNS, with the archetypal pathway being the inflammatory reflex of the vagus nerve (VN) (2). In its efferent arc, electrical signals move down the vagus nerve to the celiac ganglion from which the splenic nerve further propagates the signal towards the spleen. Within the spleen, a specialized subset of T lymphocytes completes the link between the nervous and immune systems (3, 4). Acetylcholine, which is released by these T cells, down-regulates cytokine production by resident macrophage populations thereby producing a systemic anti-inflammatory effect (3). In contrast to the well-mapped motor arc, the afferent arc remains incompletely understood. Notably, the vagus nerve is primarily sensory, such that numerous afferent signals regarding physiological status travel the vagus nerve from the periphery into the CNS. Oftentimes neglected is the notion that these signals might include the inflammatory status of the animal. The pioneering work by Niijima and collaborators (5-7) led them to postulate that IL-1β might activate peripheral afferents of the vagus nerve that would signal to the CNS about the presence of this cytokine. Physiological studies have shown that an intact vagus nerve is required for a pyrexia response to intra-abdominal IL-1β administration, further corroborating the notion that the vagus nerve might be a primary peripheral inflammation sensor for the CNS (8, 9). Parallel studies in isolated sensory neurons show that neurons express a variety of cytokine receptors, such as the TNF and IL-1β receptors, and are able to change their activation thresholds when exposed to the corresponding exogenous cytokines (10-12). 
In combination, these studies suggest that the vagus nerve is an important substrate for a peripheral neural network capable of capturing real-time signals pertaining to changes in peripheral inflammatory and immune states. The present invention addresses the need for improved methods for treating diseases and disorders, in particular methods that do not require administration of drugs to a subject. The methods disclosed herein derive a stimulus pattern from a disease-specific, condition-specific, endogenous mediator-specific or pharmacologic agent-specific neurogram, and apply that pattern to a nerve such as the vagus nerve to treat the disease or disorder.
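The patent text describes the method only at this high level. As a rough sketch of one way a recorded neurogram could be reduced to a replayable pulse pattern (every function name, parameter and threshold here is an assumption for illustration, not the patent's actual method), the trace can be rectified, smoothed and thresholded into candidate stimulus pulse times:

import numpy as np

def neurogram_to_pulses(trace, fs, smooth_ms=5.0, k=3.0):
    # Rectify around the baseline, smooth with a moving average, and mark
    # upward threshold crossings as candidate stimulus pulse times (seconds).
    x = np.abs(trace - np.mean(trace))
    win = max(1, int(fs * smooth_ms / 1000))
    envelope = np.convolve(x, np.ones(win) / win, mode="same")
    threshold = envelope.mean() + k * envelope.std()
    above = envelope > threshold
    onsets = np.flatnonzero(above[1:] & ~above[:-1]) + 1
    return onsets / fs

# Hypothetical usage: one second of noise sampled at 20 kHz with an
# injected activity burst near 0.25 s.
rng = np.random.default_rng(0)
trace = rng.normal(0.0, 1.0, 20000)
trace[5000:5100] += 8.0
print(neurogram_to_pulses(trace, fs=20000))  # approximately [0.25]

The resulting pulse times could then parameterize a stimulator; how the claimed method actually encodes disease-specific patterns is not specified in this excerpt.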
UNCTAD is leading the global initiative for the post-COVID-19 resurgence of the micro, small and medium enterprise (MSME) sector, developed under the UN framework for the immediate socioeconomic response to the pandemic. Dubbed the MSME Surge project, it seeks to strengthen the capacity and resilience of MSMEs in developing countries and economies in transition to mitigate the economic and social impact of the pandemic. It is implemented in partnership with several UN entities and regional economic commissions, including UN DESA, UN ESCWA, UNECE, UNECA, UNESCAP and ECLAC. Targeted services The project provides targeted advisory and capacity-building services to governments and entrepreneurs to facilitate resurgence and strengthen the capacity and resilience of MSMEs to mitigate the economic and social impact of the global COVID-19 crisis. The project follows a coherent approach based on UNCTAD's Entrepreneurship Policy Framework (EPF). In 2021, it promoted entrepreneurship skills and disseminated business management knowledge, focusing on areas critical for MSMEs' green and sustainable recovery. MSMEs received immediate and short-term support through the first iteration of a virtual knowledge hub, which offers all the policy tools, training material and capacity-building toolkits developed in the framework of the project to help them address COVID-19-related challenges. As a result, over 75% of policymakers and other stakeholders benefiting from the knowledge hub and network confirmed that the project had improved their capacity to formulate and implement enabling policies in the context of COVID-19 resurgence. Empowering entrepreneurs At the same time, the project delivered capacity-building activities for entrepreneurs. In partnership with Empretec national centres and international master trainers, over 200 Empretec-certified trainers from 25 countries were equipped with updated or new tools and methods to further develop and strengthen the entrepreneurial skills needed to facilitate MSMEs' post-COVID-19 resurgence. The capacity-building activities also targeted entrepreneurs within vulnerable groups, such as people with low literacy skills and rural entrepreneurs, and yielded immediate positive results. The project also promoted the exchange of experiences, success stories and best entrepreneurial practices through live sessions and webinars organized in partnership with a network of Empretec centres in 10 countries. Through the sessions, 370 entrepreneurs and former Empretec beneficiaries shared how they adapted their business practices using the Empretec competencies framework and survived, or even made profits, despite the pandemic. Other undertakings in support of entrepreneurship and business skills included e-learning courses and exchanges of good practices on digital, green, agri, blue, inclusive and resilient entrepreneurship, reaching over 2,500 participants. Formalizing small businesses Further leveraging the EPF, the project facilitated MSMEs' registration and formalization, supported them in their business activities and enabled them to benefit from relief measures. In El Salvador, through Cuentamype.org, a platform where MSMEs can register and access digital services, the project allowed them to access a $600 million COVID-19 rescue package aimed at assisting small businesses affected by lockdowns. 
Evidence showed that between June and July 2020, the number of users of the platform increased two and a half times, with 56% of those who registered through the new portal being women, reflecting how online portals have made government services more accessible to vulnerable groups. In the same workstream, the project supported the Gambia and Kenya in improving their policy environments for MSME formalization. Improving access to finance The project facilitated access to finance by promoting adequate reporting, not only on economic performance but also on sustainability issues. To improve MSMEs' access to finance, the project developed an accounting training manual focusing on micro and small companies. The training focused on improving financial literacy for entrepreneurs, including their understanding of accounting, financial analysis and common requirements for accessing financing. To expand outreach, in 2021 the project conducted training-of-trainers workshops for 382 participants from 39 countries in English, French and Spanish. In addition, based on UNCTAD's guidance on core indicators for entity reporting on contributions towards implementation of the Sustainable Development Goals, and a training manual on its implementation, the project conducted six online workshops, which benefited 429 participants from 26 countries. The development of an e-learning course on core SDG indicators also began under the project.
http://seventismanagement.com/msme-surge-project.html
United States Department of Agriculture We examined patterns of non-native plant diversity in protected and managed ponderosa pine/Douglas-fir forests of the Colorado Front Range. Cheesman Lake, a protected landscape, and Turkey Creek, a managed landscape, appear to have had similar natural disturbance histories prior to European settlement and fire protection during the last century. However, Turkey Creek... Questions: Mountain systems have high abiotic heterogeneity over local spatial scales, offering natural experiments for examining plant species invasions. We ask whether functional groupings explain non-native species spread into native vegetation and up elevation gradients. We examine whether non-native species distribution patterns are related to environmental... The introduction of non-native species has accelerated due to increasing levels of global trade and travel, threatening the composition and function of ecosystems. Upon arrival and successful establishment, biological invaders begin to spread and often do so with considerable assistance from humans. Recreational areas can be especially prone to the problem of... Straw mulch is commonly used for post-fire erosion control in severely burned areas but this practice can introduce non-native species, even when certified weed-free straw is used. Rice straw has recently been promoted as an alternative to wheat under the hypothesis that non-native species that are able to grow in a rice field are unlikely to establish in dry forested... The composition, diversity, and structure of vascular plants are important indicators of forest health. Changes in species diversity, structural diversity, and the abundance of non-native species are common national concerns, and are part of the international criteria for assessing sustainability of forestry practices. The vegetation indicator for the national Forest... Aim: Much is known about the elevational diversity patterns of native species and about the mechanisms that drive these patterns. A similar level of understanding is needed for non-native species. Using published data, we examine elevational diversity patterns of non-native plants and compare the resulting patterns with those observed for native plants. Location:... The urban forest provides a community numerous benefits. The urban forest is composed of a mix of native and non-native species introduced by people managing this forest and by residents. Because they usually contain non-native species, many urban forests often have greater species diversity than forests in the surrounding natural... Many tropical island forest ecosystems are dominated by non-native plant species and lack native species regeneration in the understorey. Comparison of replicated control and removal plots offers an opportunity to examine not only invasive species impacts but also the restoration potential of native species. In lowland Hawaiian wet forests little is known about native... Non-native plant invasion along elevation and canopy closure gradients in a Middle Rocky Mountain ecosystem Mountain environments are currently among the ecosystems least invaded by non-native species; however, mountains are increasingly under threat of non-native plant invasion. The slow pace of exotic plant invasions in mountain ecosystems is likely due to a combination of low anthropogenic disturbances, low propagule supply, and extreme/steep environmental gradients. The... 
Concerns about the long-term sustainability of overstocked dry conifer forests in western North America have provided impetus for treatments designed to enhance their productivity and native biodiversity. Dense forests are increasingly prone to large stand-replacing fires; yet, thinning and burning treatments, especially combined with other disturbances such as drought... In the semi-arid sagebrush steppe of the Northeastern Sierra Nevada, resources are both spatially and temporally variable, arguably making resource availability a primary factor determining invasion success. N-fixing plant species, primarily native legumes, are often relatively abundant in sagebrush steppe and can contribute to ecosystem nitrogen budgets. ... Biological invasions are a global and increasing threat to the function and diversity of ecosystems. Allee effects (positive density dependence) have been shown to play an important role in the establishment and spread of non-native species. Although Allee effects can be considered a bane in conservation efforts, they can be a benefit in attempts to manage non-native... Firewood can serve as a vector in the transport of non-native species, including wood-boring insects that feed within the wood and thus can be transported accidentally. Governments have enacted limitations on the movement of firewood in an effort to limit the anthropogenic movement of non-native species through, for example, recreational camping. Although the movement... This study aims to document shifts in the latitudinal distributions of non-native species relative to their own native distributions and to discuss possible causes and implications of these shifts. We used published and newly compiled data on intercontinentally introduced birds, mammals and plants. We found strong correlations between the latitudinal distributions... Despite widespread acknowledgment that disturbance favors invasion, a hypothesis that has received little attention is whether non-native invaders have greater competitive effects on native plants in undisturbed habitats than in disturbed habitats. This hypothesis derives from the assumption that competitive interactions are more persistent in habitats that have not... Estimating rates of spread and generating projections of future range expansion for invasive alien species is a key process in the development of management guidelines and policy. Critical needs for estimating spread rates include the availability of surveys to characterize the spatial distribution of an invading species and the application of analytical methods to... A changing climate and fire regime shifts in the western United States have led to an increase in revegetation activities, in particular post-wildfire rehabilitation and the need for locally-adapted plant materials. Broadcast seeding is one of the most widely used post-wildfire emergency response treatments to minimize soil erosion, promote plant community recovery,... Geographical variation in numbers of established non-native species provides clues to the underlying processes driving biological invasions. Specifically, this variation reflects landscape characteristics that drive non-native species arrival, establishment and spread. Here, we investigate spatial variation in damaging non-native forest insect and pathogen species to... Non-native species invasions, growing human populations, and climate change are central ecological concerns in tropical island communities. 
The combination of these threats has led to losses of native biota, altered hydrological and ecosystem processes, and reduced ecosystem services. These threats pose complex problems for often underfunded management entities. We...
https://www.fs.usda.gov/treesearch/search?keywords=%22non-native+species%22
HERNANDEZ-ALVARADO, Luis A et al. RESISTANCE OF Capsicum annuum GENOTYPES TO Bemisia tabaci AND INFLUENCE OF PLANT LEAF TRAITS. Rev. fitotec. mex [online]. 2019, vol.42, n.3, pp.251-257. Epub 16-Oct-2020. ISSN 0187-7380. Bemisia tabaci (Gennadius) (Homoptera: Aleyrodidae) is one of the most damaging pests of Capsicum annuum L. (Solanales: Solanaceae) worldwide. The large genetic diversity of landrace genotypes of C. annuum in several regions of America offers an excellent opportunity to study the factors involved in the resistance response to B. tabaci. This study was carried out to evaluate the oviposition preference and nymphal mortality of B. tabaci on landrace genotypes of C. annuum and to determine whether the physical or chemical characteristics of the leaves influence this response. Oviposition preference varied among genotypes. Low oviposition preference and high nymphal mortality were observed in the genotypes Amaxito and Simojovel. Oviposition preference and nymphal mortality showed no significant correlation with leaf size, leaf hardness or trichome density. Chemical analyses of the leaves of four genotypes with differential effects on nymphal mortality showed significant differences in the foliar content of N, phenols, and total flavonoids, but there was no clear trend in the association between nymphal mortality and the chemical composition of the leaves. Keywords: chili germplasm; insect-plant interaction; plant resistance.
http://www.scielo.org.mx/scielo.php?script=sci_abstract&pid=S0187-73802019000300251&lng=es&nrm=iso&tlng=en
- Bring cash to pay for purchases. - Bargaining has fallen out of favor at this local market and most vendors now have fixed prices. If no prices are listed, feel free to try and negotiate, but be aware that you may be rejected. - Get there early to nab the best produce.
https://www.viator.com/Zagreb-attractions/Dolac-Market/d5391-a12318
CUMMINGS, Circuit Judge. This case arises on the petition of National Dairy Products Corporation ("National") to set aside an order of the Federal Trade Commission applicable to its Sealtest Foods Division. National is a Delaware corporation with its principal office and place of business in New York City. It is engaged in the business of purchasing, manufacturing, processing, distributing and selling dairy and other products throughout the United States. It is the nation's largest dairy product distributor. Its Sealtest Division has general supervision over National's food, milk and ice cream divisions and subsidiaries. The Sealtest divisions sell a diversified line of food products, including milk and ice cream. In 1956, National's net sales were approximately $1,352,878,000, increasing to $1,790,834,000 in 1961. At the close of 1957, the Federal Trade Commission issued a complaint charging that National had violated Sections 2(a) and (d) of the Clayton Act, as amended by the Robinson-Patman Act (15 U.S.C. §§ 13(a) and 13(d)), in the course of sales of milk and other dairy products through its Sealtest Foods Division. In July 1963, a hearing examiner held that National had violated both statutory provisions and accordingly recommended a cease and desist order. National appealed to the Commission from the Section 2(a) portion of the examiner's order. The Commission granted the appeal in part in a 2-1 decision culminating in the following order: In addition to arguing that the Commission's findings and conclusions are unsupported by substantial evidence, National asserts that the Commission majority misinterpreted the clauses in Sections 2(a) and (b) of the Clayton Act dealing with cost justification, good-faith meeting of competition and competitive effects. Commissioner Elman's dissent, on which National relies, deals only with the standards to be applied to the defense of meeting competition under Section 2(b) of the Act. The Commission's order was based on National's discriminatory pricing in the following areas: (1) Jackson-Lansing-Battle Creek, Michigan; (2) Toledo, Ohio — Monroe, Michigan; (3) Memphis, and (4) New Orleans. As to the first area, the Commission held that the evidence was insufficient to find that National's sales were in interstate commerce, so that only the other three areas are presently involved. In this opinion, each area and the legal issues pertaining thereto will be covered separately.

Toledo-Monroe Area

Here in 1958 National granted 13 customers a fluid milk discount of 12%, 8 customers 10%, and one customer 7%. These 22 discounts were in excess of those received by National's other retail store customers. One hundred fifty-eight of them received no discounts and 112 received discounts ranging from 2% to 6%. Although rejecting the examiner's finding of primary line injury, the Commission sustained his finding of potential secondary line injury to retail grocers selling Sealtest milk. The competitive effects clause of the statute of course does not require a showing that injury has actually occurred, but merely that the effect of the discrimination "may be substantially to lessen competition" (15 U.S.C. § 13(a)). As here, any substantial, sustained differential between competing resellers is prima facie injurious. "Mini-injury" is the test. Rowe, "Section 2(a) of the Robinson-Patman Act: New Dimensions in the Competitive Injury Concept," 37 ABA Antitrust Law Journal 14, 16 (1968).
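To see the size of the advantage at issue, here is a minimal arithmetic sketch. The opinion quotes no per-unit list price at this point in the record, so the price below is an assumption for illustration only:

```python
# Hypothetical illustration of the Toledo-Monroe discount spread; the
# list price below is assumed, not taken from the record.
list_price = 0.40          # assumed wholesale list price per half gallon ($)

favored_net = list_price * (1 - 0.12)    # a store receiving the 12% discount
unfavored_net = list_price * (1 - 0.00)  # one of the 158 no-discount stores

spread = unfavored_net - favored_net
print(f"favored net:   ${favored_net:.4f}")
print(f"unfavored net: ${unfavored_net:.4f}")
print(f"per-unit cost advantage: ${spread:.4f}")  # ~4.8 cents per half gallon
```

On these assumed figures, the favored store buys each half gallon almost five cents cheaper than its neighbor, a sustained differential of the kind the court treats as prima facie injurious.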
Only six independent grocery stores received more than a 6% discount from National, whereas most of its chain and group store customers were receiving a 12% discount. These high discounts enabled them to sell Sealtest milk at a price lower than the price paid to National for such milk by all but six of its independent customers. In E. Edelmann & Company v. Federal Trade Commission, 239 F.2d 152, 154 (7th Cir. 1956), certiorari denied, 355 U.S. 941, 78 S.Ct. 426, 2 L.Ed.2d 422, we held the competitive effects clause of Section 2(a) of the Clayton Act satisfied in the following circumstances: Since all these factors were present here, the Commission's finding of probable competitive injury must stand. This is true even if there had been direct testimony by non-favored customers that the price discriminations had not injured their businesses. Foremost Dairies, Inc. v. Federal Trade Commission, 348 F.2d 674, 680 (5th Cir. 1965), certiorari denied, 382 U.S. 959, 86 S.Ct. 435, 15 L.Ed.2d 362. As there pointed out, injury may be inferred even if the favored customer did not undersell his rivals, for a substantial price advantage can enlarge the favored buyer's profit margin or enable him to offer attractive services to his customers. As in United Biscuit Company of America v. Federal Trade Commission, 350 F.2d 615, 621 (7th Cir. 1965), certiorari denied, 383 U.S. 926, 86 S.Ct. 930, 15 L.Ed.2d 845, the Commission did not apply a per se test in finding that the competitive effect of National's pricing practices would be substantial. Instead, it relied on the following factors and testimony of independent store owners (Trade Reg.Rep. (65-67 Transfer Binder) ¶ 17,656, pp. 22,917-22,918 (1966)): This evidence satisfies the Morton Salt and United Biscuit tests. Furthermore, unless minuscule, the portion ...

Even though National's discounts violated Section 2(a) of the Act, National would have a good defense by showing that its lower price "was made in good faith to meet an equally low price of a competitor" (15 U.S.C. § 13(b)). As to its 22 customers receiving more than the customary discounts, National asserts that discounts to 19 were granted in good faith to meet offers of competitors. The principal reason given by the Commission for rejecting the good-faith meeting of competition defense was as follows (op. cit. p. 22,922): The Commission apparently considered that National had not made the requisite showing because National did not prove that its competitors' list prices were the same or lower, but that requirement is inconsistent with Federal Trade Commission v. A. E. Staley Manufacturing Co., 324 U.S. 746, 65 S.Ct. 971, 89 L.Ed. 1338; Callaway Mills Co. v. Federal Trade Commission, 362 F.2d 435, 443-444 (5th Cir. 1966), and Forster Manufacturing Co. v. Federal Trade Commission, 335 F.2d 47, 55-56 (1st Cir. 1964), certiorari denied, 380 U.S. 906, 85 S.Ct. 887, 13 L.Ed.2d 794. We agree with Commissioner Elman's dissenting view that such a burden of proof is too strict and unreasonable and is not imposed by Section 2(b); but even if there were such a burden, National has met it. Actually the evidence showed that Page's list price for half gallons was 2 cents below National's; Driggs' half gallon prices were the same; Meadowgold's, Trilby Farm's and Cherry Grove's prices were the same or less; while Wilson Dairy, Independent Dairies and United Dairies had lower Monroe prices than National.
The only relevant list price missing from the record is Babcock's, and the majority opinion concluded that Babcock's list was higher than National's. The majority speculated that Babcock must have had a higher list price than Sealtest because Babcock's discount schedule had lower point requirements. If this were true, Babcock would not be competitive with Page, Driggs, or the lower-priced small dairies. The Commission assumed that Babcock's list prices must be higher than National's because National "could not sell milk in this market at net prices which were consistently one or two cents above its competitors'" (op. cit. p. 22,923 (1966)). This theory is contradicted by the facts, for the record shows that various National competitors had the same list prices and better discounts than National. Being a well-known national brand, Sealtest could sell ...

The Commission rejected the good-faith meeting of competition defense with respect to National's discounts to the Associated Grocers and Saveway group purchasers, on the ground that those concerns were already planning to replace Meadowgold as their milk supplier. National's representatives were seasoned dairy men and were familiar with the competitive situation, so that they should have been aware of AG and Saveway dissatisfaction with Meadowgold. It was, we believe, permissible for the Commission to conclude that National did not in good faith believe it was necessary to compete with Meadowgold's prices in order to recapture AG and Saveway as customers. National has not endeavored to support its discounts to Joseph's, National Food or Wrigley's corporate chains by the defense of meeting competition. Joseph's and National Food were both large customers, and even Wrigley's purchased $4300 worth of milk per month from National. National's failure to support the meeting competition defense as to these three customers and AG and Saveway justified the Commission's view that National's discounts in this area might substantially injure competition. Another reason advanced by the Commission majority for rejecting the defense of meeting competition was that National was meeting unlawful discounts with respect to the AG and Saveway accounts. However, as held in Standard Oil Co. v. Brown, 238 F.2d 54, 58 (5th Cir. 1956), Section 2(b) may be satisfied unless the seller is meeting prices "that he knows to be illegal or that are of such a nature as are inherently illegal." That standard was met here, for Meadowgold's prices to AG and Saveway were not plainly illegal. Meadowgold's discounts may have been cost justified. Since the Commission's opinion does not discuss the meeting competition defense with respect to 17 other Toledo-Monroe customers of National, this opinion need not consider those off-scale discounts. Next, National relies on the cost savings defense with respect to 8 of its 22 customers receiving discriminatory discounts. The milk deliveries were to the individual stores and not to any central warehouse. Time studies showed that it took National's drivers less time per case to deliver to a large volume store, so that National's per case distribution costs would be less in serving such stores.
The cost savings proviso permits differentials making "only due allowance for differences in the cost of manufacture, sale, or delivery resulting from the differing methods or quantities in which such commodities are to such purchasers sold or delivered" (15 U.S.C. § 13(a)).

Under the former method, National totaled the distribution cost to each unit and divided it by the total dollar purchases of rebatable products of all units in the chain or group to obtain the cost per dollar of serving the purchaser. This averaging method would warrant the same discounts to all stores whose volume qualifies for the particular discount in question, assuming the discount brackets are fairly drawn. Therefore, averaging would be permissible for National with respect to six of its corporate chain customers. National's "per customer" cost savings defense fails because there is no "close resemblance of the individual members" of these two voluntary groups, so that the cost of serving them may not be computed by using the averaging method. National argues that "purchaser" as used in the cost proviso means all the stores in a chain or group taken as a unit. This may be true, but it does not help National. The crucial question is not whether a chain is a purchaser, but whether the discount is "due allowance" for cost differences. The Commission observed that when a store, by virtue of being averaged with a larger store, receives a discount it has not earned, it gains an unfair competitive advantage of the sort which the amended Clayton Act was designed to prevent. An averaging system, when it permits some stores to receive a significantly larger discount than they could earn individually, has an anticompetitive effect comparable to that ...

National also argues that abandonment of the averaging system will put it at a competitive disadvantage vis-a-vis intrastate dairies who will continue to grant average discounts to chains. If this happens, the net prices of the competing dairies will either be lower than National's or they will not. If they are lower, the meeting competition defense may be available to protect National; if they are not lower, National will not really be at a competitive disadvantage. Finally, National relies on a number of cases dealing with the adequacy of cost studies. Borden does permit the grouping together of reasonably homogeneous units. For example, if a National cost study showed that a store using 3,000 units ...

The Commission found that prior to the adoption of the averaging schedule in 1960, National had a discount system in which the purchases of all the stores in the chain or group were aggregated. Under such a system a group of stores, none of which was individually entitled to any discount, could obtain the maximum discount by aggregating their purchases. An aggregating system bears less resemblance to the realities of cost savings than does an averaging system. National does not seriously dispute this. Rather, it argues that it did not in fact employ an aggregating system but granted the large discounts to meet competition (except as to Joseph's, National Food and Wrigley). Because the evidence is in conflict on this question, we may not disturb the Commission's finding that the prior discount system to chain and group purchasers was based on impermissible aggregating. National did not attempt to cost justify the aggregating practice. National asserts that it has cost justified its discounts to 6 corporate chains (Big Bear, Joseph's, National Foods, Kroger, Wrigley's, and Seaway Foodtown (except for one store)) even on the store-by-store basis demanded by the Commission.
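The store-by-store, averaging, and aggregating approaches the court contrasts are easiest to tell apart numerically. A minimal sketch follows; the bracket schedule, store volumes, and delivery costs are all invented for illustration, not taken from the record:

```python
# Hypothetical volume-discount brackets: (minimum monthly purchases ($), discount).
# Both the brackets and the store figures below are invented for illustration.
BRACKETS = [(10_000, 0.12), (5_000, 0.10), (2_000, 0.06), (0, 0.0)]

def discount_for(volume: float) -> float:
    """Return the discount a single store earns on its own volume."""
    for floor, rate in BRACKETS:
        if volume >= floor:
            return rate
    return 0.0

group = [1_500, 1_800, 1_200, 1_500]  # four small stores in a voluntary group

# Store-by-store: each store earns only what its own volume justifies.
print([discount_for(v) for v in group])   # [0.0, 0.0, 0.0, 0.0]

# Aggregating: the group's combined volume is treated as one purchase, so
# stores earning nothing individually reach a higher bracket together.
print(discount_for(sum(group)))           # 0.10 on 6,000 combined

# Averaging differs in the cost figure, not the volume: per-store delivery
# costs are totaled and divided by the group's total purchases, giving one
# blended cost-per-dollar that is applied to every member store.
costs = [90, 95, 85, 92]                  # assumed delivery cost per store ($)
print(sum(costs) / sum(group))            # blended cost per dollar of purchases
```

The opinion's point is visible in the output: under aggregation, a group of stores none of which individually earns any discount obtains a substantial one, which is why the court treats aggregating as bearing the least resemblance to real cost savings.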
Memphis Area

In Memphis, 200 independent stores, representing 50% of National's ice cream customers, received no discount. At the same time, National had the following discount schedule in effect for independent stores:

Gallons          Discount/Gallon
0 — 49           0
50 — 79          2¢
80 — 109         3¢
110 — 139        4¢
140 and over     5¢

Under this schedule most independents received no discount. Chain and group stores were accorded a 7¢ per gallon off-schedule discount. The sale of ice cream in this area was highly competitive, and the independent stores receiving no discount were located near many of the chain and group members receiving the 7¢ per gallon discount, thus enabling those members to sell ice cream at lower prices than the independents. Numerous chain and group member stores handled smaller ice cream volume than numerous independents and yet received the 7¢ discount solely due to their affiliation. Concomitantly, many independents received no discount while equally small affiliated stores received the 7¢ per gallon discount. We agree with the Commission that these facts show that National's discounts had the probable effect of lessening competition at the secondary level. See Federal Trade Commission v. Morton Salt Co., 334 U.S. 37, 50-51, 68 S.Ct. 822, 92 L.Ed. 1196; United Biscuit Company of America v. Federal Trade Commission, 350 F.2d 615, 621 (7th Cir. 1965), certiorari denied, 383 U.S. 926, 86 S.Ct. 930, 15 L.Ed.2d 845; Whitaker Cable Corporation v. Federal Trade Commission, 239 F.2d 253, 255 (7th Cir. 1956), certiorari denied, 353 U.S. 938, 77 S.Ct. 813, 1 L.Ed.2d 761. As with the Toledo-Monroe market, such a conclusion is not foreclosed by the availability of these discounts to National's independent customers deciding to join a voluntary or cooperative group. As in the Toledo-Monroe area, the Commission concluded that National's ...

National also contends that its 7¢ per gallon discount granted to Malone & Hyde, Inc., a wholesale grocer sponsoring a voluntary chain of stores, was a legitimate functional discount. On the other hand, the Commission found that National sold to the M & H member stores rather than to M & H. The invoices stated that the ice cream was "sold to" the particular stores, not to M & H. In addition to the 7¢ per gallon discount, National granted M & H a 2% allowance for assuming the credit risk of the member stores. As the Commission noted, this allowance is consistent with the member stores themselves being the purchasers, for if M & H were the purchaser, National would have no risk as to the credit of the stores. The Commission permitted this 2% allowance as a payment for services performed by M & H for National. It was not included in the price discrimination that the Commission found National gave to the M & H stores. The Commission explained its conclusion that the M & H stores were the purchasers from National as follows (op. cit. p. 22,925): In our view, M & H is sufficiently unlike the ordinary wholesaler to support the Commission's decision that the member stores were the actual purchasers. However, under Federal Trade Commission v. Fred Meyer, Inc., 390 U.S. 341, 88 S.Ct. 904, 19 L.Ed.2d 1222, the M & H retail stores could be classified as purchasers from National even if they technically purchase from M & H where, as here, such a classification would further the purposes of the Clayton Act, as amended. Cf. Federal Trade Commission v. Sun Oil Co., 371 U.S. 505, 512-523, 83 S.Ct.
358, 9 L.Ed.2d 466.

New Orleans Area, and Scope of Order

Here National sold a private label milk, "Velva," to H. G. Hill Stores, Inc., a chain (and its successor, Winn-Dixie Stores, Inc.), at a net price about 10¢ per gallon less than that paid by many of National's store customers receiving no discount. Besides the 20% discount to Hill, National sold milk to 5 other wholesale customers at 5-10% off its wholesale list price. The Commission sustained the defense of meeting competition with respect to all customers receiving discriminatory discounts except the Hill Stores. To sustain the defense of meeting competition with respect to Hill in 1951, National relied on competitors' bids to public institutions. However, the bids to such institutions were not subject to the Act and involved different delivery costs, and therefore would not show what the competitors' bids to Hill were. To sustain the defense of meeting competition with respect to Hill in 1954, National asserts that Hill's representative told National of a low offer from the Franklinton, Louisiana, Co-operative. The Commission disregarded the Franklinton offer because National had not verified it. National contends that it ceased selling milk to Hill's successor in 1960 and that a 1958 Louisiana statute bans discounts on milk, so that the requisite competitive effects no longer exist under Section 2(a). But such a longtime discriminatory practice, if followed elsewhere by National, could similarly injure competition with customers receiving no equivalent discounts, so that the Commission was entitled to consider this evidence in framing its order. Federal Trade Commission v. Ruberoid Co., 343 U.S. 470, 72 S.Ct. 800, 96 L.Ed. 1081; Foremost Dairies, Inc. v. Federal Trade Commission, 348 F.2d 674, 681-682 (5th Cir. 1965), certiorari denied, 382 U.S. 959, 86 S.Ct. 435, 15 L.Ed.2d 362. Even though National has stopped granting unjustified discounts to Winn-Dixie in New Orleans, its similar practices in Toledo-Monroe and Memphis, as well as its former practice in New Orleans, justified the nation-wide order entered. In contrast to Dean Milk Co. v. Federal Trade Commission, 395 F.2d 696 (7th Cir. 1968), there are no considerations present here that impel a limitation of the Commission's order to the specific areas where the price discriminations were proved. National operates in 35 states and it is concededly typical for Sealtest to have volume discount schedules in effect at its various operations. The practices condemned by the Commission occurred in widespread areas over a long period of years. Consequently we cannot say that this order was so broad as to constitute an abuse of discretion. See Federal Trade Commission v. National Lead Co., 352 U.S. 419, 428, 431, 77 S.Ct. 502, 1 L.Ed.2d 438; Swift & Company v. United States, 393 F.2d 247, 256 (7th Cir. 1968), and Lloyd A. Fry Roofing Co. v. Federal Trade Commission, 371 F.2d 277, 284, 286 (7th Cir. 1966). We have considered other points raised by the parties but they merit no discussion. The order is affirmed and will be enforced.
https://www.leagle.com/decision/1968912395f2d5171797
The U.S. Navy seeks a partner interested in licensing a patented method of designing and manufacturing microstructures within a structure for distributing the load of an impact over a greater surface area without changing the mass or materials of the design. Material properties of a structure can change with the material's grain: its size, shape, and orientation relative to a force load. Grain boundaries can be described as interfaces where crystals of different orientations meet. These boundary areas contain atoms that have been perturbed from their original lattice sites, dislocations, and impurities that have migrated to the lower-energy grain boundary. Grain boundaries disrupt the motion of dislocations through a material. Dislocation propagation is impeded because of the stress field of the grain boundary defect region and the lack of slip planes, slip directions, and overall alignment across the boundaries. NSWC Crane has patented structures and methods of manufacturing that utilize the direction of force loading or shock-induced deformation of structures, including microstructures. Research in this inventive effort discovered, among other things, that the behavior of materials or structures under investigation can change under certain types of force loading or high strain rates, such as shocks, on structures designed according to embodiments of the invention. Materials that resist motion (failure) under lower strain rates can reverse their normal behavior and promote motion under high strain rates, due to the behavior of dislocations in the material. A dislocation is a crystallographic defect, or irregularity, within a crystal structure. The presence of dislocations can strongly influence many properties of materials. For example, dislocations can stop motion and make materials stronger but more brittle. However, high densities of organized dislocations can become slip paths in a material subjected to a force such as a shock. What was strong becomes ductile. Efforts were made to develop ways of utilizing the direction of shock-induced deformation, including in the design of shaped charges as well as other structures.
https://inventions.prf.org/innovation/6480
Children with breathing problems may be affected by fine particulate pollution at levels well below the federal standard, according to a new study by a Brigham Young University professor. The study by C. Arden Pope also found that individuals being treated for asthma needed increased dosages of medication when pollution levels increased. The study appears in the September issue of the American Review of Respiratory Disease, a journal affiliated with the American Lung Association. Pope's study, conducted during the winter of 1989-90, tracked 34 fourth- and fifth-grade students with diagnosed asthma or a history of wheezing, and 21 asthma patients under a doctor's care. The children attended schools in Lindon or Orem. During the study, each participant measured lung function daily and kept a diary of symptoms and medication use. That information was then correlated with fine particulate readings taken over the winter. Fine particulate pollution, or PM10, is caused by dust, woodburning and other combustion processes. The particles measure less than 10 microns in diameter and are capable of penetrating and damaging lung tissue. The primary source of PM10 in Utah County is Geneva Steel. Under current federal air quality standards, PM10 should not exceed a level of 150 micrograms per cubic meter of air during a 24-hour period. The winter of 1989-90 was mild as far as pollution episodes go; the federal standard was exceeded only twice, and pollutant levels crept above 100 micrograms only four other times. But Pope and three other researchers found that children experienced a 13 percent increase in lower respiratory tract symptoms when PM10 levels ranged from 51 to 100 micrograms. When pollutant levels climbed above 100 micrograms, children were 53 percent more likely to report wheezing and other respiratory problems. "We thought it would have taken larger pollution levels to see the effects we did," Pope said. "We saw effects even in relatively healthy children." The study's results mean the current PM10 standard may be too high to protect the most sensitive people. "The federal standard is supposed to protect everyone - even those people who are more sensitive to pollution," said Douglas W. Dockery, co-author and associate professor at the Harvard School of Public Health. The study was also co-authored by John D. Spengler, also of Harvard, and Mark E. Raizenne, Canadian Department of National Health and Welfare. Pope noted two other findings. Because of the age of the children in the study, he was able to determine that respiratory syncytial virus did not cause the respiratory symptoms. Several years ago, an epidemiologist hired by Geneva said the virus, which affects babies and toddlers, was responsible for increased respiratory illnesses during winter months, rather than PM10 levels. Pope also noted a one- to five-day lag period between increased pollution levels and reports of respiratory symptoms. Pope's study confirms that pollution in Utah Valley is harmful to health, according to Sam Rushforth, co-founder of the Utah County Clean Air Coalition. "Even during this mild winter, the air pollution was so bad that reductions in ability to breathe and increases in cough, wheezing and trouble breathing were observed," Rushforth said. Mitch Haws, director of corporate communication at Geneva, said the company had not seen Pope's study and could not comment on his findings. However, Haws said Geneva is moving as quickly as possible to reduce its emissions.
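The lag effect Pope describes, with symptoms trailing pollution by one to five days, is the kind of relationship a simple lagged-correlation pass makes visible. Here is a minimal sketch on synthetic data; the series, the lag window, and the effect size are assumptions, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic winter of daily PM10 readings (micrograms per cubic meter)
# and daily symptom-report rates; both series are invented.
days = 120
pm10 = rng.gamma(shape=4.0, scale=15.0, size=days)

# Build symptoms that respond to PM10 three days earlier, plus noise,
# mimicking the one- to five-day lag reported in the study. (np.roll
# wraps around at the start, which is acceptable for a sketch.)
symptoms = 0.1 + 0.002 * np.roll(pm10, 3) + rng.normal(0, 0.02, size=days)

for lag in range(1, 6):
    # Correlate today's symptom rate with PM10 `lag` days earlier.
    r = np.corrcoef(pm10[:-lag], symptoms[lag:])[0, 1]
    print(f"lag {lag} day(s): r = {r:+.2f}")
# The peak correlation appears near the built-in 3-day lag.
```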
https://www.deseret.com/1991/10/10/18945508/study-links-pm10-levels-to-problems-in-breathing
An indoor garden is an idea created to provide solutions for urban residents who have limited land. Creating an indoor garden not only cleans the air around the house and adds to the aesthetic value of the indoor space, but can also be an opportunity for you to grow organic plants that complement your family's nutrition. By applying the principles and elements used to design outdoor gardens, including smart placement and a focus on the shape, texture, color, and proportion of the indoor space, you can create a soothing and calming indoor landscape.

1. Determine the planting area. Indoor gardens can take up very little space. You can keep the various plants you need, even vegetables, in small pots or on window frames. If you need a larger area, you can use a desk or other furniture. Look for areas with tile or linoleum floors that can handle water, or place a tarp under your desk. Bored with conventional garden designs? You can try a vertical garden, a layout technique that combines several types of plants on a particular growing medium, for example by the hydroponic method, arranged vertically on a wall.

2. Provide enough light. Plants need light for photosynthesis in order to survive. Without proper lighting, plants will not grow properly. For plants in pots, you can move the pot outdoors for some time in the morning to get enough sunlight, then bring it back in the afternoon. A planting rack will provide more planting space while taking up less floor space. If using a vertical rack, make sure each plant has adequate light; you may need a separate lamp for each shelf.

3. Pay attention to the proportions and scale of the space. Choose plants that match not only the size of the room but also the spot you have prepared. Plants that are too large can take up a lot of space, while smaller plants can appear insignificant if placed in a large room. One tip is to choose plants that also grow in your yard, to create a relationship with the outside of the home. For plants on window frames, try a narrow tub that can hold several small plants, or a row of small pots of various heights. Use a pot or container that catches the water flowing through the drain hole in the bottom, to prevent damage to your window sills.

4. Vary the shapes and play with plant composition. Try to choose variations in the shape and size of plants that will not disturb the atmosphere, then arrange a suitable composition in the room. Use a variety of plants such as vines and hanging pots, and column, rounded and pyramid shapes, with various characters.

5. Pamper your eyes with visual texture. When applied to gardens, visual texture refers to the garden layout and the use of plant forms, fruit, color, leaf and twig texture, and plant placement. Add accents to make it look beautiful, such as rocks, ponds, or homemade garden ornaments; the whole picture is what is called visual texture.

6. Use color theory in the garden. Color is one of the easiest ways to express your personality through the garden. Color can bind the other elements together into a unified whole, or it can detract from the appearance of the garden when the selection is wrong. Try using bold, bright, shady, or cool color variations to give a natural impression to the room.
If you don't like the green color of plants, you can work around it with colorful pots or furniture, or with the right lighting techniques.

7. Create rhythm in the garden. Rhythm, or repetition, is directly related to unity, the main design principle that we must take into account. Take advantage of all landscape components, such as plants, architectural elements, and ornaments, by forming particular patterns or sequences to create a character or theme. All elements must be compatible and complementary. If you are not interested in using several of the same plants, choose plants with the same shape but in different sizes and textures.

8. Objects vs. plants as focal points. Less is more. A focal point in the garden is very useful to direct the view to the area that you want to highlight. You can create a focal point by inserting objects such as benches, statues, large stones, or water elements, or you can use a number of plants arranged in a particular pattern.

9. Harmony in balance. Balance cannot be separated from the stability and harmony of an ideal garden: it unites every element of the garden into a balanced whole of shape, texture, aroma, area, and color that gives a very natural impression. There are several arrangement patterns: symmetrical or formal, asymmetrical or informal, and vertical. Everything depends on the balance of vertical and horizontal elements, or the combination of plantings. A safe solution is to maintain the same number, size and shape of plants, but vary the arrangement with different structuring tricks.
https://homeideas.healthystories.xyz/2019/12/indoor-garden-ideas-you-will-fall-for.html
Guidelines on Security and Privacy in Public Cloud Computing (NIST Special Publication 800-144) provides an overview of the security and privacy challenges facing public cloud computing and offers recommendations that organizations should consider when outsourcing data, applications, and infrastructure to a public cloud environment. The document provides an analysis of threats, technology risks, and safeguards related to public cloud environments to help organizations make informed decisions. The publication recommends that organizations plan the security and privacy aspects of cloud computing before implementing it, understand the public cloud computing environment offered by the cloud provider, ensure that both cloud resources and cloud-based applications satisfy organizational security and privacy requirements, and maintain accountability over the privacy and security of data and applications implemented and deployed in public cloud computing environments. SP 800-144 is aimed at system managers, executives and information officers making decisions about cloud computing initiatives; security professionals responsible for IT security; IT program managers concerned with security and privacy measures for cloud computing; system and network administrators; and users of public cloud computing services. "Public cloud computing and the other deployment models are a viable choice for many applications and services. However, accountability for security and privacy in public cloud deployments cannot be delegated to a cloud provider and remains an obligation for the organization to fulfill," said publication co-author Tim Grance.
https://www.infosecurity-magazine.com/news/nist-issues-guidelines-for-public-cloud-computing/
Published On: 23 September 2019. Artificial Intelligence has finally caught the attention of the All India Council for Technical Education (AICTE). The AICTE, as the polestar of technical education in India, has confirmed approval for a new B.Tech course in AI and Data Science. With the rising demand for expert techies in Artificial Intelligence (AI), Machine Learning (ML), and Data Science, educational institutes are being tuned to generate a workforce with these professional skills. Various leading IITs (Indian Institutes of Technology) in India have now announced full-time courses in Artificial Intelligence (AI). Check out the detailed course curriculum at the end of the article. AICTE approved a Bachelor of Technology (B.Tech) course in Artificial Intelligence (AI) and Data Science at a recent conference organised by the Education Promotion Society for India (EPSI) in Chennai. During the question and answer session, the AICTE Chairman confirmed the need to offer undergraduate degree programs in these technologies. This decision is part of the technical education council's endeavours to enhance the quality of technical education in the country. The course is intended to meet the demand for skilled professionals in these technologies. The AICTE Chairman, Mr. Anil Dattatraya Sahasrabudhe, said, "The committee constituted to assess the need for offering degree programmes in technologies, which are driving the next big transformation, had approved AI and data science alone. Other technologies like Internet of Things, Blockchain, and Cyber Security were considered. However, it was decided that we need not have full-fledged degree programmes on these now. Instead, they can be offered as specializations." Listing a few initiatives that needed to be rolled out by AICTE, he also highlighted that a semester-long training programme for teaching staff members must be made compulsory. Teaching staff will need to undergo certification training to teach these courses at engineering establishments. The council has designed an eight-module teacher certification program. "All new joiners must complete this training to become eligible to teach. For those working, this is mandatory for promotions," he said. On the plan to bestow 'graded autonomy' on educational institutes that meet learning standards in terms of quality, he said: "There is a meeting on September 20, where a decision will be made, and implementation will happen soon after," Mr. Sahasrabudhe added. Governor Banwarilal Purohit, who inaugurated the conference, appealed to educational institutions to impart knowledge to institutions catering to rural and backward areas. G. Viswanathan, president of EPSI and founder-chancellor of VIT University, stressed the need for increased government spending on education. The Central Board of Secondary Education (CBSE) has announced it will introduce AI as an elective subject for students in classes 9-12. The plan to introduce AI into the CBSE curriculum was suggested by NITI Aayog, the government's think tank. The curriculum of the subject will be finalised by IBM India with the help of subject experts. As per reports, IBM will conduct a pilot project in 1,000 schools in various cities of India including Bengaluru, Delhi, Kolkata, Bhubaneswar, Hyderabad, and Chennai. As per analysts, there is a huge need for skilled professionals across all sectors. In other news, public sector banks including State Bank of India and IDBI are hiring for specialized skills.
Due to a lack of skilled manpower, some of these positions remain vacant in these sectors. India is currently predicted to be among the top 5 nations in the world for AI-driven startups by 2025. It is also estimated that businesses across all zones will grow with the use of AI. Every industry is playing its part in the digitalization of India. Therefore, educational institutes in India also need to gear up to prepare future generations to handle these needs. The demand for AI professionals is also growing exponentially in the IT sector. In September, one of India's top IT companies, Infosys, was in the headlines for recruiting resources for AI and Automation technologies. Artificial Intelligence is a significant step forward in how computer systems adapt, evolve and learn. It has widespread application in almost every industry and is considered to be a big technological shift, similar in scale to past events such as the industrial revolution, the computer age, and the smartphone revolution. This course will allow students to gain expertise in one of the most fascinating and fastest-growing areas of Computer Science through a classroom program that covers engaging and compelling topics related to human intelligence and its applications in industry, defence, healthcare, agriculture, and many other areas. This course will give the students a rigorous, advanced and professional graduate-level foundation in Artificial Intelligence. After undergoing this course, the students will be able to apply the techniques covered in the following modules:

1. Introduction (3 Hours): Concept of AI, history, current status, scope, agents, environments, problem formulations, review of tree and graph structures, state space representation, search graph and search tree.
2. Search Algorithms (9 Hours): Random search, search with closed and open list, depth first and breadth first search, heuristic search, best first search, A* algorithm, game search.
3. Probabilistic Reasoning (12 Hours): Probability, conditional probability, Bayes Rule, Bayesian networks (representation, construction and inference), temporal model, hidden Markov model.
4. Markov Decision Process (12 Hours): MDP formulation, utility theory, utility functions, value iteration (see the sketch after this list), policy iteration and partially observable MDPs.
5. Reinforcement Learning (9 Hours): Passive reinforcement learning, direct utility estimation, adaptive dynamic programming, temporal difference learning, active reinforcement learning (Q-learning).
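As a taste of what module 4 covers, here is a minimal value-iteration sketch on a toy two-state MDP; the states, transitions, rewards, and discount factor are invented for illustration and are not part of the AICTE syllabus:

```python
# Minimal value iteration on a toy 2-state MDP (states 0 and 1).
# P[s][a] is a list of (probability, next_state, reward) triples.
P = {
    0: {"stay": [(1.0, 0, 0.0)],
        "go":   [(0.8, 1, 5.0), (0.2, 0, 0.0)]},
    1: {"stay": [(1.0, 1, 1.0)],
        "go":   [(1.0, 0, 0.0)]},
}
GAMMA = 0.9  # discount factor

V = {s: 0.0 for s in P}
for _ in range(100):  # iterate until the values settle
    V = {
        s: max(
            sum(p * (r + GAMMA * V[s2]) for p, s2, r in outcomes)
            for outcomes in P[s].values()
        )
        for s in P
    }

# Extract the greedy policy implied by the converged values.
policy = {
    s: max(P[s], key=lambda a: sum(p * (r + GAMMA * V[s2])
                                   for p, s2, r in P[s][a]))
    for s in P
}
print(V, policy)
```

Policy iteration, the other algorithm the module names, reaches the same answer by alternating full policy evaluation with greedy policy improvement instead of sweeping values directly.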
https://doubtnut.com/blogpage/aicte-to-announce-btech-course-in-ai,-data-science
With all the pomp and ceremony fitting a regal event, officials christened King's new building, and its new council, Dec. 3. Local business owner Rory MacKinnon piped in the King Fire & Emergency Services Honour Guard, which led the procession into the council chambers. The Hon. Mr. Justice Simon Armstrong of the Ontario Court of Justice administered the declarations of office for the mayor and councillors. Kathryn Moyle, King's director of clerks and bylaw, welcomed everyone and provided some well-deserved praise. She observed that the last quarter of 2018 had been quite hectic, with the municipal election and major Township relocation. "I want to highlight to all of you, the residents, that you should be very proud and excited for the council representation before you tonight!" She stressed that Mayor Steve Pellegrini's acclamation is a reflection of the community's support and "we can all agree, witnessed through his continuous dedication and representation, the mayor has displayed his personal commitment to represent his constituents and fellow colleagues both respectfully and fairly." She wanted to recognize the "unsung heroes" of the King team who helped facilitate the relocation. She singled out King's facilities staff, under the leadership of Chris Fasciano - Gavin, Mike, Warren and the rest of the crew. "As King residents, you should feel fortunate and proud in knowing that you have such a wonderful team of individuals that work/represent King Township on a daily basis." The 2018-2022 term of council, Moyle observed, has a blend of "old" and "new" members, eager and committed to work on the current issues facing King. "King council is renowned as being a cohesive, cooperative team, cognizant of their constituents' needs. King residents know they are invited to speak freely, and more importantly, that council listens to their concerns. Through the next four years, I have no doubt that they will continue to focus on creative solutions and compromises, with objectives to achieve resolution based on fulsome and engaging consultations." Moyle called upon the mayor's wife, Barbara, to present him with the chain of office. "I can never express the magnitude of the contributions that my lovely wife Barbara has made. She is the pillar of life at home. Our children, Stephanie, Ashley, Emma, David and Joseph, have all supported me, each in his or her own way. "As I take office as your mayor I am more grateful than words can express at the opportunity you have given me to lead King for another four years," the mayor said. "It's very humbling to be given the responsibility of making decisions, along with the rest of my council colleagues, to make King Township the place to live, work and play. "As I look around this table I have reason to feel optimistic. We have a mix of seasoned councillors who have been here for multiple terms, along with two new faces. I'm looking forward to Jordan and Jakob providing a younger perspective on the issues that we deal with in these chambers. "I am encouraged and gratified that each ward councillor is committed to the greater good of King Township. Combined with our Township staff, we have a fabulous team. "Throughout the last terms of office I have always advocated for strong partnerships between the Township, its institutions, our businesses, our community groups and our residents. I thank you all who have partnered with us. "King has become a leader for other municipalities.
We are the benchmark for great government, great community living and great business." Each councillor addressed the public for the first time. Ward 1 Councillor Jordan Cescolini thanked his volunteers, friends and family for their support. Anything worth doing is difficult, he said, but his election bodes well for the future of democracy and youth engagement in King. He thanked constituents for putting their faith in the unknown, and he said he's here to listen and be their voice in Ward 1. He wants to assure residents he's bringing value for their tax dollars, and he's here to "make real change that has a lasting impact." Ward 2 councillor David Boyd said he's back to provide consistent representation. He pointed out the community is really coming together and he's looking forward to even more improvements for the Nobleton area in 2019. He will build upon the great partnerships already established. Ward 3 councillor Jakob Schneider said his mandate is to maintain King's rural character and he's proud to represent Ward 3 residents. Veteran Ward 4 councillor Bill Cober said they all work together to enhance the community and he'll continue to foster positive relationships. He personally thanked retired councillor Linda Pabst for her years of service and her leadership. Economic development, infrastructure and growth management are key areas in the coming term. He said he's committed to being a dedicated voice and he's very proud to "call King Township home, now and forever." Ward 5 councillor Debbie Schaefer said her foray into politics was a matter of being in the right place at the right time. After two terms, she still has the aspirations and enthusiasm to make important decisions, ones that have ramifications beyond the four-year term. She will continue to put her time and energy into keeping residents informed and gathering their valuable input. The recent campaign, she said, allowed her to up her game. The new term, she said, will come with challenges, but she will work with her constituents, do her research, listen, and be accountable and transparent. Ward 6 councillor Avia Eek said she's learned a lot about effective government in her years on council. She's big on relationship building, and she pointed out that hearing and listening to the public is vital and should never be discounted. She vowed to work for, and with, residents, for the good of the entire community. Her focus will continue to be agriculture, economic development and the environment. "My door is always open to serve," she said. As far as a productive council goes, "we are the envy of the GTA."
http://kingsentinel.com/?p=10881
The annual honor, featuring $150,000 in research funding over two years, is designed to encourage innovative proposals leading to the treatment and cure of Huntington's disease, as well as to honor prior work in the field. It is widely considered to be among the most prestigious awards honoring breakthrough research into the devastating illness. Finkbeiner was recognized for resolving a mystery associated with Huntington's disease, using a robotic microscope that he custom-designed to allow the tracking of changes in cells, including those associated with neurodegeneration, over long periods of time. As reported in a Nature cover story last fall (October 14, 2004), Finkbeiner and his team determined that abnormal deposits of mutant huntingtin protein, which appear in the brains of all Huntington's disease patients, are not the cause of neuronal death, but rather are a beneficial coping response on the part of distressed cells. The finding suggests that mutant huntingtin protein inflicts its damage in some form other than as abnormal deposits. The new funding will enable Finkbeiner and his team to build on these findings. He will use the microscope to elucidate which forms of mutant huntingtin are most poisonous. Identifying these toxic forms could reveal how mutant huntingtin causes degeneration, and could lead to specific therapies that block it.
http://news.bio-medicine.org/biology-news-3/Gladstone-investigator-Steve-Finkbeiner-wins-prestigious-Lieberman-Award-12236-1/
A large fire started at 7:30 p.m. on Sunday (September 2), destroying about 90% of the National Museum collection in the Quinta da Boa Vista district, in the north of Rio de Janeiro. The six hours of uninterrupted fire also damaged the structure of the historic building, São Cristóvão Palace, founded on August 6, 1818 by King João VI, which served as a residence for the royal and imperial Brazilian family. The National Museum held a natural and cultural treasure of over 20 million items - by comparison, the British Museum (London) has 8 million pieces and the Metropolitan Museum of Art (New York) 2 million pieces in their collections. It was the greatest museum of natural history and anthropology in Latin America. The imposing building with colonial-style architecture had its structure damaged but is no longer in danger of collapsing. The building became the Museum headquarters in 1892, and the Federal University of Rio de Janeiro has run it since 1946. Since some pieces were stored in safes and might have survived the fire, nobody knows for sure the extent of the tragedy. We have selected five notable items and collections from historical and cultural perspectives.

Luzia's Fossil

It is the oldest human fossil ever found in Brazil, and it was named Luzia in a reference to Lucy, a partially complete primate fossil (Australopithecus) discovered in Africa. Luzia was found in 1975 in the city of Lagoa Santa, state of Minas Gerais, by the French archaeologist Annette Laming-Emperaire and was named by the Brazilian biologist, anthropologist and archaeologist Walter Neves. Luzia was a 1.5 m tall black woman who died at age 20. It is assumed that Luzia inhabited the region about 11,000 years ago, reinforcing the theory that our species, Homo sapiens, arrived in America via the Bering Strait - which now separates Alaska from Russia - about 14,000 years ago. Since then, the species advanced towards the south. The fossil is expected to be inside the museum's safe and to have been at least partially preserved.

Bendegó Meteorite

The Bendegó stone is the largest meteorite ever found in Brazil and the 16th largest in the world - when it was found, in 1784, it was second in the global ranking. In that year, Domingos da Motta Botelho was grazing cattle on a farm in the city of Monte Santo, Bahia, when he found the 5.36-ton piece. The meteorite is a rock from the region of the solar system between Mars and Jupiter. It is estimated to be over 4 billion years old, and it fell to Earth a few millennia ago, though there is no exact date. The meteorite's excessive weight made transportation to Rio de Janeiro difficult - ox carts did not support its weight, and the meteorite fell into a dry stream bed of the Bendegó River, where it remained for a century. In 1886, D. Pedro II ordered the piece brought to the National Museum, where it has been since 1888. The meteorite surely is, thus far, the only piece certain to have survived the fire, since it can resist temperatures of over 10,000 degrees Celsius.

Natural Paleontology Collection

The National Museum paleontology collection was one of the most important and extensive on the continent. There were 56,000 specimens and 19,000 records of elements such as fossils, reconstitutions and replicas of plants and animals from Brazil and other places around the globe. Among the collection were fauna and flora elements unique to the Brazilian territory.
Sadly, among the losses are a prehistoric crocodile fossil from 70 million years ago, considered one of the most complete such fossils found in the world; the Tyrannosaurus jaw, the only one discovered in Brazil; and the almost complete skeleton of Maxakalisaurus topai, the first large dinosaur to be mounted in the country.

Egyptian and Classical Archeology

The National Museum archeology collection was the greatest and oldest in Latin America. The Ancient Egyptian collection comprised more than 700 pieces, such as mummies and sarcophagi, and the Greco-Roman civilizations collection had 750 pieces. The Sarcophagus of Hori, a piece from the Third Intermediate Period of Egyptian civilization (1049-1026 BC), and the Koré Statuette, an element of the Archaic Period of Ancient Greece (5th century BC) that was later incorporated by Roman culture, were probably also lost in the fire. Another valuable piece was the coffin of Sha-Amun-En-Su, dated 750 BC, acquired by Dom Pedro II on his second visit to Egypt.

Amerindian Civilization Items

The National Museum's ethnology collection was unique. It housed artifacts, objects, and records of indigenous cultures from all over America (from the Atlantic Ocean to the Pacific) and from Afro-Brazilian peoples. In total, there were more than 1,800 pieces from Amerindian civilizations of the pre-Columbian era. In addition, there were audio recordings from 1958 of indigenous languages which are unfortunately now extinct, the original ethnic-historical-linguistic map with the locations of Brazilian ethnicities, and the entire archives of the German ethnologist Curt Nimuendajú (registered name Curt Unckel), who had traveled among Brazilian indigenous ethnic groups for 40 years.
http://bluevisionbraskem.com/en/human-development/fire-brazils-national-museum-5-collection-items-lost-fire
‘Loch Ness Monster’ Mystery Could Finally Be Solved With New Expedition

For thousands of years, the legend of the monster "Nessie," a long-necked, dinosaur-like creature surviving from prehistoric times in the freshwaters of Loch Ness, Scotland, has been doing the rounds. Some people believe in the existence of the animal, while others think it is nothing but a creation of human imagination. There is no evidence, except for a few debatable images, sonar readings and over 1,000 claimed sightings, to confirm that the legendary Nessie exists, but every year thousands of people visit the region hoping to catch a glimpse of the mysterious creature. The mystery, one of the greatest in the world, still remains unsolved, but a new expedition launched by an international team of scientists could finally find the answer we have all been looking for. Led by the University of Otago's Neil Gemmell, the group has taken up the ambitious task of cataloging all the life present in the murky waters of the loch. A few months from now, the researchers will take a dip into the dark, mysterious lake to collect environmental DNA (eDNA) samples from the waters. "Whenever a creature moves through its environment, it leaves behind tiny fragments of DNA from skin, scales, feathers, fur, feces and urine," Gemmell said in a statement. "This DNA can be captured, sequenced and then used to identify that creature by comparing the sequence obtained to large databases of known genetic sequences from 100,000's of different organisms," the researcher added. "If an exact match can't be found we can generally figure out where on the tree of life that sequence fits."

[Photo caption: This photograph, one of two pictures known as the "surgeon's photographs," was allegedly taken by Colonel Robert Kenneth Wilson, though it was later exposed as a hoax by one of the participants, Chris Spurling, who, on his deathbed, revealed that the pictures were staged by himself, Marmaduke and Ian Wetherell, and Wilson. References to a monster in Loch Ness date back to St. Columba's biography in 565 AD. More than 1,000 people claim to have seen "Nessie" and the area is, consequently, a popular tourist attraction. Photo: Keystone/Getty Images]

The effort will help the researchers create a list, which they will compare with the life thriving in other lochs in the region to see what's different at Loch Ness. Though Gemmell said he is open-minded regarding what they might find in the lake, he also noted that he'll be surprised if any DNA evidence sampled from the waters looks related in any way to a large extinct marine reptile. "Large fish like catfish and sturgeons have been suggested as possible explanations for the monster myth, and we can very much test that idea and others," he noted. Theoretically, if any creature like those suggested exists in reality, the team should find some biological remnants of its presence. The mythical monster is the spotlight of the upcoming expedition, but it won't be the only thing the researchers are hoping to find. The group hopes to document new species of bacteria and other forms of life in the waters, while learning more about many native as well as newly discovered invasive species in the loch, such as the Pacific pink salmon. "While the prospect of looking for evidence of the Loch Ness monster is the hook to this project, there is an extraordinary amount of new knowledge that we will gain from the work about organisms that inhabit Loch Ness — the U.K.'s largest freshwater body."
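The identification step Gemmell describes, matching recovered fragments against reference databases, can be sketched in miniature. Real eDNA studies use tools such as BLAST against curated databases of known sequences; the toy sequences and the naive k-mer scoring below are assumptions for illustration only:

```python
# Toy eDNA identification: rank a query fragment against a tiny
# "reference database" by the fraction of shared k-mers. Real pipelines
# compare against databases covering hundreds of thousands of organisms;
# every sequence here is invented.

def kmers(seq: str, k: int = 8) -> set:
    """All overlapping k-length substrings of a DNA sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

REFERENCE = {  # hypothetical reference sequences
    "Salmo salar (Atlantic salmon)": "ATGGCACTTTCAGGTCTTGTGAACGGAGCATTC",
    "Anguilla anguilla (European eel)": "ATGGCTAACCTACGAAAGACACACCCACTACTA",
    "Esox lucius (Northern pike)": "ATGGCAAGCCTACGAAAAACTCACCCCCTACTA",
}

def identify(query: str) -> list:
    """Rank reference organisms by fraction of the query's k-mers they share."""
    q = kmers(query)
    scores = [
        (len(q & kmers(ref)) / max(len(q), 1), name)
        for name, ref in REFERENCE.items()
    ]
    return sorted(scores, reverse=True)

# A fragment "recovered from the water column" (a chunk of the eel entry).
query = "GCTAACCTACGAAAGACACACCCA"
for score, name in identify(query):
    print(f"{score:.2f}  {name}")  # the eel scores 1.00; the others, far less
```

A fragment with no strong match to anything known is exactly the case Gemmell mentions: it can still usually be placed somewhere on the tree of life by its partial similarities.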
As the United States electric grid converts to clean energy sources like solar and wind, individuals can also help mitigate climate change through personal energy conservation behaviors. As a strategy to encourage energy conservation, energy contests that pit groups against each other have become common. Participants attempt to save the most of a particular resource (electricity, water, etc.) within a given timeframe, ideally establishing energy-saving habits in the process. Through connection with such a contest and personal electricity usage data via a dashboard platform, K-12 schools may be an ideal place to teach and practice energy conservation behaviors. This study used the Pearson's correlation coefficient to explore the relationships between the energy actions taken by classrooms and the school's electricity savings. Activities involving the interactive feedback of a dashboard, as well as traditional classroom lessons, were most strongly correlated with electricity savings (r = .485, p = .002 and r = .469, p = .002, respectively). This study also examined how the activities aligned with components of the Theory of Planned Behavior (attitude, subjective norms, and perceived behavioral control) and how meeting the components might correlate with electricity savings via an independent samples t-test. Although such a relationship did not exist (t(38) = -0.768, p = .439), coding the actions highlighted the importance of providing detailed, goal-oriented descriptions for each of them. These results may provide insight into best practices for energy contests and other energy education endeavors, such as adjusting motivation to participate in particular types of activities and/or fostering post-contest participant interaction.

Subject: Dashboard; Energy Conservation; Energy Contest; Energy Education; Feedback; Theory of Planned Behavior

Permanent Link: http://digital.library.wisc.edu/1793/82066
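For readers unfamiliar with the two statistics the abstract reports, here is a minimal sketch of a Pearson correlation and an independent-samples t-test using SciPy; the classroom data are invented, not the study's:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Invented per-classroom data: number of dashboard activities completed
# and the school's percent electricity savings during the contest.
activities = rng.integers(0, 10, size=40)
savings = 0.5 * activities + rng.normal(0, 2.0, size=40)

# Pearson correlation between activity counts and savings.
r, p = stats.pearsonr(activities, savings)
print(f"Pearson r = {r:.3f}, p = {p:.3f}")

# Independent-samples t-test: savings of classrooms whose actions met the
# Theory of Planned Behavior components vs. those whose actions did not
# (group membership here is an arbitrary split of the invented data).
met, not_met = savings[:20], savings[20:]
t, p = stats.ttest_ind(met, not_met)
print(f"t({len(met) + len(not_met) - 2}) = {t:.3f}, p = {p:.3f}")
```

The t(38) in the abstract follows the same convention as the printed degrees of freedom here: two groups totaling 40 classrooms give 40 - 2 = 38.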
https://minds.wisconsin.edu/handle/1793/82066
Executive Summary
=================

1. Background
-------------

*"Natural"* menopause occurs when a woman has her last period. All over the world this usually happens between 45 and 55 years of age, with an average age of 51. What is termed *"induced"* menopause occurs after both ovaries are removed through a bilateral ovariectomy, or when chemotherapy or radiation therapy shuts down ovarian function. Perimenopause refers to the interval from just before the onset of "natural" menopause until twelve months later. The menopausal transition or climacteric period describes the change in a woman's life from the reproductive years to the time of relatively low estrogen.

Many women suffer differing levels of psychological and physical complaints when their periods stop, a sign of the decline in the cyclical peaks of estradiol production in the ovaries. While some women have no symptoms between the ages of 40 and 60, a large number suffer complaints of varying degree. These are mainly vasomotor symptoms such as hot flushes and night sweats, which are usually confined to the upper body. In North America and Europe 45% to 80% of women are affected; in Asia, the figure is 10% to 50%. A number of further symptoms, which can occur singly or together, are often considered part of the menopausal or climacteric syndrome. However, unlike the symptoms already mentioned, these are not specifically menopausal but rather the result of the vasomotor symptoms, or are caused by something else. They often include depression, headaches, sleep disturbance, mood swings and impaired concentration or memory loss.

The period following menopause is called postmenopause. Here estrogen levels are much lower than during the reproductive years. The menopausal transition, with all these associated health problems, usually stretches from premenopause over several years into the beginning of postmenopause. The vasomotor symptoms and associated problems taper off or even disappear in the first two years of postmenopause. Many diseases, symptoms and pathological changes are connected, beyond the menopausal transition, with the lower estrogen levels of postmenopause, and are or were seen as medical indications for long-term hormone therapy (HT) at this time of life, especially to prevent fractures due to osteoporosis or a heart attack. Estrogens, alone or combined with progestin, based on natural or synthetic hormone preparations, feature most frequently in HT. Combination HT is given to prevent endometrial hyperplasia and endometrial cancer.

The objective of this HTA-report is to assess, from a medical and an economic point of view, the use of HT to treat hot flushes and night sweats and to prevent osteoporosis and cardiovascular disease in postmenopausal women. Published studies and systematic reviews will be evaluated and summarised according to standardised criteria. The medical assessment will consider the efficacy and the risk factors within the framework of the particular indications. The economic evaluation will focus on the cost-effectiveness of HT compared with no treatment or between different types of HT.

2. Objectives
-------------

The medical efficacy and cost-effectiveness of HT will be evaluated in medical and socio-economic terms. This HTA-report will first address the question of whether HT is an effective treatment for vasomotor symptoms (hot flushes and night sweats).
Furthermore, it will consider whether HT is an effective medication in postmenopause for the primary prevention of osteoporosis and cardiovascular disease.

3. Medical evaluation
---------------------

### 3.1 Methods

For the group of women without serious preconditions who received HT either as a therapy for hot flushes and night sweats or to prevent osteoporosis and cardiovascular disease, relevant publications were identified using a structured search of the literature through DIMDI on 23 March 2004. For this purpose MEDLINE, EMBASE, Int. Health Technology Assessment, the Cochrane Library and the databank of the NHS Centre for Reviews and Dissemination at the University of York were consulted, as well as 19 other databases. This reference search was also extended by looking up the internet sites of national and international scientific societies for gynaecology and menopause. The search parameters focussed on the indications of HT in the menopausal transition (hot flushes and night sweats) and the use of HT for disease prevention in postmenopause (osteoporosis and cardiovascular disease). There was such an abundance of literature that the present review was limited to publications in English and German between 01 January 1999 and 23 March 2004.

The information was sorted according to predefined criteria. The basic requirement for the publications that were selected through the structured search and the ongoing review was that the title and abstract made it clear that the publication was about the use of hormones to treat women with hot flushes and night sweats and to prevent osteoporosis and cardiovascular disease in postmenopause. Studies where it was clear from the title or abstract that the authors were looking exclusively at HT for women with serious problems, such as carcinoma or post hysterectomy, were not included. The selected studies also had to consider clinically relevant endpoints such as death or disease, and not just biochemical markers as surrogate parameters, e.g. bone density measurements or laboratory results. The publications identified in this way were categorised according to the quality of their methodology as well as relevance, and then evaluated for this HTA-report if they met the basic quality requirements. The checklist of the 'Scientific Working Group: Technology Assessment for Health Care' provided the criteria for medical quality. Studies which did not meet the given criteria were not evaluated. The results are presented below for each individual publication.

### 3.2 Results

For the evaluation of HT in the treatment of hot flushes and night sweats during the menopausal transition, out of the total of 272 identified published papers, 16 publications reporting the results from 18 studies fulfilled the given medical inclusion criteria and the requirements in terms of high-quality methodology and transparency. The evaluated studies all showed that HT can be regarded as an effective method for treating these complaints during the menopausal transition. The hormonal preparations studied in these papers showed a significantly higher efficacy of 75% to 95% compared with placebo for postmenopausal women. For women who were still perimenopausal, the few results available suggest a much smaller or no difference between HT and placebo. There were differences in the efficacy of the same medication depending on the dose, but no significant differences between hormonal preparations administered intranasally, orally or transdermally.
Other forms of medication were not examined in the selected studies.

For the evaluation of HT to prevent osteoporosis and cardiovascular disease in postmenopause, only ten publications out of the total 272 identified sources met the set medical inclusion criteria and the requirements in terms of high-quality methodology and transparency. The studies gave the following results. Postmenopausal women using HT, as compared to placebo, showed:

- a 24% to 27% lower risk of any fracture (47 to 59 fewer fractures per 10,000 women per year);
- a 33% lower risk of hip fracture (five fewer hip fractures per 10,000 women per year);
- a 22% higher risk of cardiovascular disease, including heart attack, stroke and thromboembolic events (27 more cases per 10,000 women per year);
- a 41% higher risk of stroke (eight more strokes per 10,000 women per year);
- a 111% higher risk of thromboembolic events (18 more cases per 10,000 women per year);
- a 24% higher risk of breast cancer (ten more breast cancer cases per 10,000 women per year);
- a higher risk of ovarian cancer and a slightly lower risk of endometrial cancer (neither statistically significant);
- no change in risk of total mortality.

Overall, it was shown that HT containing equine estrogens plus medroxyprogesterone acetate does not prevent osteoporosis and cardiovascular disease in healthy postmenopausal women. For other medications or combinations of HT, there is information from good randomised studies only for the risk of fractures and thromboembolic diseases, but not for the other outcomes.

### 3.3 Discussion

The authors of the studies see HT as an effective method for treating vasomotor symptoms (hot flushes and night sweats). However, since the majority of the studies under scrutiny were sponsored by the producers of the medications in question, the critical discussion of the results and of the methods used is rather superficial. The women in the study populations were mostly postmenopausal and thus only partly representative of the women receiving HT for hot flushes and night sweats in everyday clinical practice, who are often still perimenopausal. The relatively short observation periods of these studies, mainly three and seldom more than six months, also do not reflect usual practice, where the women affected are often treated over a much longer time scale.

In the assessment concerning the prevention of osteoporosis and cardiovascular disease among healthy women in postmenopause, after careful consideration, the positive effects do not seem to outweigh the negative effects. Thus HT cannot be regarded as a suitable primary preventive measure for healthy postmenopausal women. Strictly speaking, definitive claims cannot be made for forms of HT other than the estrogen-progestin combination, which has been investigated the most. The effects on women in other age groups may also differ from those found here.

4. Economic evaluation
----------------------

### 4.1 Methods

The information retrieval and assessment procedure for the economic evaluation corresponds to that of the medical assessment described above. The relevant criteria were screened on the basis of the checklist for the assessment of health economic studies of the 'German Scientific Working Group: Technology Assessment for Health Care'.

### 4.2 Results

Of the 42 identified citations included in the evaluation, only six publications conformed to the inclusion criteria. One more article was identified through hand search.
Therefore, to address the study questions, two publications were evaluated regarding 'Treatment of vasomotor symptoms (hot flushes and night sweats)' and five studies regarding 'Primary prevention of osteoporosis and cardiovascular disease in postmenopausal women'.

For the use of HT as a treatment for hot flushes and night sweats, the results of one study showed HT to be a cost-effective alternative to the 'no therapy' strategy. One further study evaluated the costs and consequences of two alternative supplements for HT, but neither of these alternatives proved to be significantly beneficial. However, these studies varied widely in terms of study groups, study drugs (with regard to type and dose) and choice of study perspective. Thus the question of whether HT for this medical indication (hot flushes and night sweats) could be economically efficient cannot be answered based on the available information.

Two of the economic evaluations for the use of HT in the primary prevention of osteoporosis and cardiovascular disease showed HT not to be a cost-effective alternative to the 'no therapy' strategy under almost all scenarios. One further study, which explicitly included the results of the WHI-study regarding estrogen + progestin therapy, found a net harm associated with HT, so that the consideration of cost-effectiveness became irrelevant. By contrast, a further study did arrive at a positive conclusion regarding the cost-effectiveness of HT for the primary prevention of osteoporosis and cardiovascular disease, but since a pivotal assumption of this study has, from a contemporary point of view, proved false, this result should be treated with reserve. Equally, a fifth study with a positive result regarding the cost-effectiveness of HT could not be included, because it considered only the effects of HT on fracture incidence; other HT-associated events, such as breast cancer or cardiovascular disease, were not taken into consideration. This did not reflect the state of scientific findings, either currently or at the time the study was carried out.

### 4.3 Discussion

The included studies assessing the cost-effectiveness of HT as a treatment for vasomotor symptoms (hot flushes and night sweats) and for the primary prevention of osteoporosis and cardiovascular disease in postmenopausal women were mostly cost-benefit analyses based on Markov modelling. This methodological approach is well suited to the object of research, since quality of life is a crucial outcome parameter in the assessment of both medical indications. Furthermore, Markov models allow for simplified pictures of complex structures and, especially, they enable the consideration of long-term effects in the economic evaluation, as required for showing the opposing benefit and harm effects associated with HT over a longer time horizon.

Since the included economic publications assessing HT as a treatment for hot flushes and night sweats varied widely in terms of study groups, studied drugs and choice of study perspective, and in addition were not carried out with respect to the German health care system, a considerable need for health economic evaluations specific to Germany can be stated. Further economic evaluations should particularly consider the following aspects. The study groups should be differentiated more clearly with respect to peri- and postmenopausal women. Furthermore, the dose and type of HT under evaluation should be specified in more detail.
The validity of model results would be enhanced by using empirical data from the setting under consideration. Further evaluations would be desirable that use consistently defined subgroups, consider specific combinations of agents from the same perspective and, in addition, apply a standardised methodology to derive aggregated measures of benefit. Of particular interest, beyond a mere comparison with the 'no therapy' alternative, are evaluations of the cost-effectiveness of several agents or doses relative to each other, given that compliance with therapy is likely to vary with their differing impact on bleeding patterns.

To address the question of the economic effects of HT in the primary prevention of postmenopausal osteoporosis and cardiovascular disease, only one health economic study considered the latest scientific findings about the positive and negative effects of long-term HT. Because, from the medical point of view, the results of the Women's Health Initiative (WHI) showed that the benefits of combined HT do not outweigh the risks, it can be assumed that further evaluations regarding the cost-effectiveness of HT for primary prevention are not useful.

5. Ethical evaluation and sociological aspects
----------------------------------------------

The current ethical discussion about the menopausal transition and HT has become controversial. Lyerly et al. [1] mention the feminist approach, which on the one hand criticises the attitude of the medical profession, namely that problems experienced by women during the menopausal transition are categorised as 'psychological' and therefore 'not real'. On the other hand, a large part of the literature classified as feminist criticises gynaecologists and society for the predominant medical approach to the menopausal transition, a universal experience of women. By giving the menopausal transition a pathological image, they have turned a natural process into a medical condition. This is emphasised in the choice of the term 'estrogen deficit disease', which led to HT being prescribed for life. Since the relative lack of estrogen is part of a natural process and is not a disease as such, the more neutral term 'hormone therapy' should be used rather than 'hormone replacement therapy', as the former does not imply a pathological condition.

Regarding further research, the question of ethical implications needs to be addressed: it would be worthwhile to have more accurate estimates of the frequency of hot flushes and night sweats, and also of additional symptoms associated with the menopausal transition, in the general female population. There is also a lack of good-quality population-based studies on how these affect the quality of life of the women concerned. There is also a need for research in order to better estimate the benefit and risk of treating hot flushes and night sweats with HT over a long period. There are very few studies of women with premature menopause or of cured cancer patients (urogenital or breast cancer) who suffer from hot flushes after their ovaries have been removed. In particular, perimenopausal women are given HT to treat hot flushes and night sweats despite insufficient knowledge from good-quality studies about its effects for these women. Given the current research findings, peri- and postmenopausal women will probably be less inclined to take part in hormone therapy studies in the future.
This gives rise to another relevant aspect: with the continual development of new pharmaceutical therapies or different doses, there is a danger of perpetuating the "medicalisation" of peri- and postmenopausal women.

Randomised controlled studies like those carried out within the framework of the Women's Health Initiative (WHI) and the Heart and Estrogen/Progestin Replacement Study (HERS) are seen as the 'gold standard' in medicine for investigating the efficacy of therapies and preventive measures. Estimates of the positive and negative effects of therapies are usually expressed as relative risks (risk ratio, hazard ratio), which are of little help in choosing a therapy. They should be translated into absolute risks in order to provide an important prerequisite for informed, responsible and willing participation by the patient in her choice of therapy. This matches the now accepted ideal of the 'responsible' patient, as promoted within the concept of shared decision making. This model of shared decision making, which is encouraged in the United States and increasingly in other health systems, is based on a dialogue of partnership between the medical advisor and the patient, where both take an active part in the decision-making process and try, through discussion, to come to an agreed approach. It is the doctor's responsibility to give the patient comprehensive, helpfully presented medical information and to support the patient so that she can crystallise her value judgments, preferences and wishes. The responsibility for the therapy that is decided and agreed upon by both parties is shared. As has been shown, decisions based on consent and sound information about treatment options, including possible risks and side effects, lead to higher patient compliance/adherence to therapy. It is thus important, within a doctor's consultation, when the woman concerned is trying to decide on a therapy, to make use of the autonomy principle in the sense of free will.

Within the model of shared decision making, it is most important to include the patient's psychological and social context, that is, the patient's subjective outlook concerning the existence and the explanation of problems arising in connection with the menopausal symptoms she is experiencing. The patient's preferences are just as important in evaluating the clinical and economic aspects of HT. In any case, the use of aggregated measures of benefit, as applied to the outcome parameter quality of life, e.g. QALYs (quality-adjusted life years), hides an underlying problem, which is dealt with by Lyerly et al. [1]: aggregated measures of benefit regarding particular health conditions promote generalisation about all women and overlook individual differences. The paradox cannot be resolved that this approach focuses on the preferences of women as a group, which may not equate with the actual preferences of the individuals within the group. But according to Lyerly et al. [1], for an ethically acceptable version of aggregated measures like the QALY, the benefits and limits of its claims should be taken into consideration. Moreover, this report concludes that the decline in cyclical ovarian estradiol production in women is not a pathological condition that necessarily requires hormone therapy.
Even if a woman's quality of life is reduced to the extent that HT would relieve her menopausal symptoms, prescribing HT as a long-term treatment without careful, individually based consideration of the benefits and risks seems, based on the new information in this HTA-report, to have no ethical justification, also in view of the possible pharmacological and non-pharmacological therapy alternatives.

In order to achieve an optimal set of parameters for a differentiated information exchange during the medical consultation, a number of points need to be considered. On the one hand, there are indications that it can take five to 15 years for scientific knowledge to be completely absorbed into medical practice. One possible way to shorten this process markedly would be qualifying measures such as therapy guidelines that include current scientific and medical knowledge about HT as well as alternative pharmacological and non-pharmacological therapy options. On the other hand, many patients seek medical and other information around the whole topic of the menopausal transition from sources other than their doctor, and often arrive well informed at their doctor's appointment. In this situation also, helpful information should be made available so that the patient has a sound basis on which to make a responsible decision.

6. Legal considerations
-----------------------

Within the parameters of this HTA-report, the authors did not identify any specific legal concerns about the use of HT.

7. Summarising discussion of all results
----------------------------------------

The medical efficacy of HT for hot flushes and night sweats was clearly shown in the present HTA-report. Although the medical studies included considered a large number of different medications and combinations of hormones, only the alternative therapy norethindrone (norethisterone) acetate/ethinyl estradiol (NETA/EE) was investigated in the current economic studies of HT. Therefore, future studies should evaluate HT where there is medical evidence for its efficacy in treating hot flushes and night sweats. In doing so, combination preparations, especially those used in Germany, should be considered. With this in mind, the information discussed in the medical section of this HTA-report concerning a balanced benefit/risk ratio needs to be considered carefully in deciding whether it is wise to model the use of HT for the primary prevention of osteoporosis and cardiovascular disease using data specific to Germany.

Further research is needed into the treatment of hot flushes and night sweats, especially for long-term use (>1 year) and for perimenopausal women, as little is known about it at this stage. Prescription of HT to treat hot flushes and night sweats should also be limited to cases where the women concerned are suffering a significantly reduced quality of life. Comprehensive, helpful information for the patient about the benefits and risks she can expect in her actual situation should underpin any decision about therapy. This is not only ethically advisable but also important for subsequent compliance/adherence to therapy, which significantly influences the medical efficacy and the economic efficiency of the treatment.

8. Conclusions
--------------

In the present HTA-report, the use of hormone therapy, first to treat hot flushes and night sweats and second to prevent osteoporosis and cardiovascular disease in postmenopausal women, was reviewed from the standpoint of medical efficacy and cost-effectiveness.
It can be concluded, based on the publications identified and evaluated, that HT is medically effective for the treatment of hot flushes and night sweats. Despite a large variation in the medications and combinations of hormones used, all the HT products examined were effective in reducing the number of hot flushes per day. Up until now there has been very little research into the benefits and risks of HT for menopausal problems over the longer term (several years). General, large-scale use of HT cannot be recommended due to the possible risks of serious long-term complications such as thromboembolic events. For shared decision making (meaning a mutually agreed decision about therapy), the patient should be given comprehensive information about the expected benefits and risks in her particular situation. Temporarily stopping HT is also currently recommended in order to evaluate the need for long-term therapy, but there are as yet no results available from good-quality studies concerning the optimal procedure for stopping HT. Studies of this kind are currently being carried out in the United States.

The currently available health economic evaluations provide no clear statement as to the cost-effectiveness of HT for treating hot flushes and night sweats. This is due to the variability of the medications examined as well as the lack of studies based on the German health system. However, the economic efficiency of this form of HT appears plausible, considering the relatively low cost of the medication and the simultaneously significant medical benefit in reducing vasomotor symptoms like hot flushes and night sweats. Further health economic research is necessary to confirm this hypothesis. The results of the articles analysed show that HT is not suitable for the primary prevention of osteoporosis and cardiovascular disease in postmenopausal women. The available pharmacological and non-pharmacological alternatives need to be looked at more closely for the prevention of these diseases.
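As a back-of-the-envelope check (our own arithmetic, not taken from the report) of how the relative risks and absolute case numbers quoted in section 3.2 fit together, the two are linked through the baseline incidence $I_0$:

```latex
% Illustrative arithmetic only: a relative risk RR and an absolute excess
% per 10,000 women-years are linked through the baseline incidence I_0,
% since excess = I_0 (RR - 1). For breast cancer, RR = 1.24 and an excess
% of 10 cases per 10,000 women-years imply
\[
  I_0 = \frac{\text{excess}}{\mathrm{RR} - 1} = \frac{10}{0.24}
      \approx 42 \text{ cases per 10,000 women-years},
\]
% so the incidence under HT is roughly
\[
  I_{\mathrm{HT}} = I_0 \cdot \mathrm{RR} \approx 42 \times 1.24 \approx 52
  \text{ cases per 10,000 women-years}.
\]
```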
This year’s Singapore International Storytelling Festival (SISF 2014) opens on Friday, with the theme Translations: Storytelling from the Word to the Voice. The Festival is organised by the Singapore Book Development Council (SBDC) and celebrates oral traditions and folk tales in an age of reduced attention spans and declining appreciation for books.

Kamini Ramachandran, veteran storyteller and SISF 2014’s artistic director, said: "Through the art of storytelling, audiences experience a revival of folklore, myths and legends that they might only have read or heard about in passing. The storyteller plays a critical role in re-imagining a well-loved or popular tale for the modern audience through the nuances of dramatic expression."

In line with this ambition of re-imagining well-loved tales, the Festival will open with the Asia premiere of Angerona, The Secret Name of Rome (Angerona). Performed by international storytellers Paola Balbi from Italy and Michael Harvey from the United Kingdom, Angerona is a retelling of the legend of Lucretia - no knowledge of the original is required to enjoy it.

Angerona tells how, in a niche in the Temple of Pleasure, the Romans kept the statue of one of their most mysterious and ancient deities - the eponymous Angerona, goddess of sadness and silence. These qualities have always marked the lives of abused women; in Classical Antiquity, Lucretia was the woman who broke that silence. Her story has been retold many times, including by Shakespeare in his narrative poem The Rape of Lucrece. Balbi and Harvey’s adaptation mixes contemporary words with Shakespeare’s text and promises to be a theatrical feast supported by an original soundtrack composed and performed by Davide Bardi. In the best tradition of modern storytelling, Angerona crosses boundaries between cultures and art forms.

Claire Chiang, SBDC’s chair, said: “The legend of Lucretia is a strident tale. It speaks of passion, sanctity and a woman’s honour. Over the centuries, it has fascinated generations of readers and listeners. It is indeed a treat and a privilege for our audiences to be able to experience the nuances of this tale through watching and listening to Angerona.”

Since its inauguration in 2006, SISF has attracted more than 50,000 participants. Last year it was attended by close to 2,000 storytelling fans and practitioners - this year is sure to be even bigger and better! The full SISF 2014 programme and ticketing details are available on the festival website.
http://www.asianbooksblog.com/2014/09/singapore-international-storytelling.html
Is $E=mc^2$ not just $E=m$? What does the speed of light have to do with this, other than to give it a really big number so it looks cool? What spectrum of light is used? How can we test the speed of light without a stationary point to test it from?

Comments:
- Related: physics.stackexchange.com/q/19816/2451, physics.stackexchange.com/q/60091/2451 and links therein. – Qmechanic ♦, Apr 16, 2014 at 4:05

3 Answers

The speed of light is there for much more than to look cool, and in fact there are a number of derivations of mass-energy equivalence that show why $c$ is present; I will say that one basic reason is that the units of mass and energy are different, so we require at least some sort of constant factor to make the units work. I'll also say that we often use units where $c = 1$, making $E = m$ true; this is, however, separate from the question you're asking.

The spectrum of light is irrelevant, as all light moves at the same speed $c$, as can be shown from Maxwell's equations. Furthermore, $c$ can be calculated from two fundamental constants: the vacuum permittivity $\epsilon_0$ and the vacuum permeability $\mu_0$. These constants are the same in every reference frame, and so the speed of light must be the same in every reference frame, as per the postulates of the theory of relativity. Changing reference frames only changes the apparent frequency of light, that is to say, its location in the spectrum. This is what we call red/blueshifting.

Comments:
- (4 upvotes) c is never one meter per second, that's simply wrong. We often use units where c = 1, a dimensionless one; then E = m is true. It basically means that within these units time and length carry the same dimension (1/energy). – Noldig, Apr 16, 2014 at 11:37
- @Noldig Ah, sorry, you're right. I'll fix that now, thank you! – Apr 16, 2014 at 21:27

The speed of light in a vacuum is invariant: it is the same no matter what point you pick as "stationary". So if I'm on a train, and you're on the ground, and we both measure $c$, we'll get exactly the same number. The speed of light does not depend on the wavelength: gamma rays travel at the same speed $c$ as radio waves. The frequency $f$ and wavelength $\lambda$ change according to $c = \lambda f$. The fact that the speed of light is invariant leads to a long chain of implications - along the way comes $E = mc^2$. The presence of $c$ is not just for making it look cool, but actually a necessary consequence of special relativity.

Comments:
- So to measure the speed of light we take two fixed points a and b and time how long it takes light to go from a to b. Since the speed of light is fixed, if points a and b are moving in the same direction as the light, say at 1/10 the speed of light, would this not give false measurements and also give us red shift and blue shift depending on which position a and b are in? – Neo1979, Apr 16, 2014 at 4:18
- @Neo1979 No. That is the crux of "relativity". The speed of light is constant regardless of what inertial frame of reference you are in when you measure it. This leads to time dilation (clocks change), Lorentz contraction, ... – Floris, Apr 16, 2014 at 4:24
- Red shift will be observed when the source is moving relative to the destination, or vice-versa (which is equivalent). In your scenario, the observer at point b will observe no redshift.
It's important to realize that that statement isn't a derivation of any sort (although it can be tested empirically). It's the founding axiom of relativity: in any inertial (zero-acceleration) reference frame, the laws of physics (and $c$) are the same. From that stem many interesting consequences, like Lorentz contraction (which you've almost re-discovered). – Apr 16, 2014 at 4:24
- So I may be lost here, but let's say that I am on a ship travelling at near the speed of light and I do a speed-of-light test on board said ship: I would get the same speed of light as if I did the same test on Earth, and this is due to time dilation. – Neo1979, Apr 16, 2014 at 4:39
- (1 upvote) @Neo1979 Yes, you've definitely got it, but I would like to nitpick and say that the invariance of the speed of light (that is, the reason the speed of light is always measured as $c$) isn't really caused by time dilation. Time dilation is an effect of the invariance of the speed of light. – Apr 16, 2014 at 5:54

One should think of $c$ as a kind of space-time conversion constant; massless energy travels at this speed. Light and gravity are kinds of massless energy. The idea of $E=mc^2$ is that mass converts to energy at this rate.
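A small numeric sanity check of the relations quoted in these answers (purely illustrative; the constants are hardcoded CODATA 2018 values):

```python
# Numeric check of three relations from the answers above (illustrative only).
from math import sqrt

eps0 = 8.8541878128e-12  # vacuum permittivity, F/m (CODATA 2018)
mu0 = 1.25663706212e-6   # vacuum permeability, N/A^2 (CODATA 2018)

# (1) c follows from electromagnetism: c = 1/sqrt(eps0 * mu0).
c = 1 / sqrt(eps0 * mu0)
print(f"c = {c:.6e} m/s")            # ~2.998e8 m/s

# (2) c = lambda * f: green light at 500 nm has frequency ~6e14 Hz.
print(f"f(500 nm) = {c / 500e-9:.3e} Hz")

# (3) E = m c^2 turns even a tiny mass into enormous energy.
m = 0.001                             # one gram, in kg
print(f"E(1 g) = {m * c**2:.3e} J")   # ~9e13 J, roughly 21 kilotons of TNT
```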
https://physics.stackexchange.com/questions/108581/whats-the-purpose-of-the-speed-of-light-in-e-mc2
How is Animal Farm a utopian society?

A utopia is a beautiful and peaceful place or state which is perfect for everybody. What is a utopia for animals? According to the animals in Animal Farm, it is a place where there are no cruel humans killing and using them for their selfish needs.

What is the main concept of Animalism?

Napoleon, Snowball and Squealer develop Old Major’s idea that animals have a right to freedom and equality into “a complete system of thought” (Chapter 2) which they call Animalism. The central beliefs of Animalism are expressed in the Seven Commandments, painted on the wall of the big barn.

What is Mollie’s perspective about Animal Farm?

Mollie can be seen as a symbol of pride over equality: she is too concerned with her own vanity, and too foolish, to think about the true meaning of Animalism or the animals’ rebellion.

What is the major theme of Animalism in Animal Farm?

Equality and inequality; power, control and corruption.

Who gave the philosophy of Animalism?

The name ‘animalism’ was conferred by Snowdon (1991: 109) and has been widely adopted. The view is also sometimes referred to as “the organism view” (e.g., Liao 2006), the “biological criterion” (e.g., D. Shoemaker 2009), or “the biological approach” (e.g., Olson 1997).

What are the rules of Animalism in Animal Farm?

The commandments are as follows:
1. Whatever goes upon two legs is an enemy.
2. Whatever goes upon four legs, or has wings, is a friend.
3. No animal shall wear clothes.
4. No animal shall sleep in a bed.
5. No animal shall drink alcohol.
6. No animal shall kill any other animal.
7. All animals are equal.

Why is Mollie concerned about Animalism?

Mollie is concerned that she won’t get to wear hair ribbons or enjoy lump sugar after the rebellion.

What prompts the rebellion?

In Chapter 2, the rebellion breaks out when Mr. Jones gets drunk and neglects to feed the animals, who break into the store-shed for food and drive the men off the farm.

How did the animals find out about Mollie’s defection?

Eventually, Clover discovers that Mollie is being bribed off Animal Farm by one of Pilkington’s men, who eventually wins her loyalties. Mollie disappears, and the pigeons report seeing her standing outside a pub, sporting one of the ribbons that she always coveted.

What is the most important message in Animal Farm?

The grand theme of Animal Farm has to do with the capacity of ordinary individuals to continue to believe in a revolution that has been utterly betrayed. Orwell attempts to reveal how those in power - Napoleon and his fellow pigs - pervert the democratic promise of the revolution.
https://www.resurrectionofgavinstonemovie.com/how-is-animal-farm-a-utopian-society/
|Project ID:|00093083|Description:|HR Institutions Support|
|Fund:| |Start Date *:|17 Dec 2014|
|Theme:|Governance and Human Rights|End Date *:|31 Dec 2017|
|Country:|Moldova, Republic of|Project Status:|Financially Closed|
|Participating Organization:|Multiple|

About

The overall goal of the project is to contribute to the effective protection and promotion of human rights, equality and non-discrimination in the Republic of Moldova, with particular attention to women, minorities, and marginalised and vulnerable groups. This will be done by building the capacities, independence and empowerment of the two major National Human Rights Institutions (NHRIs): the Centre for Human Rights (Ombudsperson Institution, CHR) and the newly established Equality Council (EqC), the anti-discrimination enforcement body. The project will maximise their impact in mainstreaming human rights and equality, including gender equality, and in acting on strategic issues and on the resolution of individual cases.

The project was drafted in close consultation with the management of both the CHR and the EqC, to ensure its alignment with the NHRIs' strategic plans and the commitment of both institutions to its implementation. The project also builds on, and will contribute to, the implementation of the following national strategies and plans: the National Human Rights Action Plan for 2011-2014, the United Nations - Republic of Moldova Partnership Framework for 2013-2017, and the Justice Sector Reform Strategy 2011-2016. Effectively combating discrimination is one of the conditions on the path to European integration, which has been set as a national priority. Diversity and equality, including gender equality and minority inclusion, as well as the application of a Human Rights Based Approach, are the core principles that will be applied and promoted throughout project implementation.

The project activities are grouped into three major thematic components: 1) supporting an enabling environment for the National Human Rights Institutions; 2) strengthening the organisational capacities and sustainability of the NHRIs; 3) supporting the Ombudsperson Institution and the Equality Council in maximising their power to act as Moldova's premier national human rights institutions.

The activities under each component are as follows. 1) The new law on the Ombudsperson and the EqC-related legislation will be analysed in light of international standards and recommendations, followed by public debates and advocacy for the improvement of the legislation; the CHR's application for "A" NHRI status will be supported; and two studies on human rights and equality perceptions in Moldovan society and two studies on the application of equality legislation and the implementation of the EqC's decisions in practice will be performed. All this will strengthen the NHRIs' legislative frameworks and provide them with objective data for their further work. 2) Within the second component, work on strengthening managerial and human resources is envisaged.
Both the management and the staff of the NHRIs will be exposed to international experience and trained at the national level on priority equality and human rights issues. Training of trainers (ToT) will be organised for the NHRIs to strengthen their ability to run effective training activities after the end of the project. Staff diversity in the NHRIs will be strengthened through consultancy and the revision of employment rules and processes, and the accessibility of the NHRIs for persons with disabilities will be promoted. Extensive consultancy will be provided to the EqC on case management and handling: building case management software, training in legal analysis, argumentation and decision-drafting skills, and tracking the implementation of the recommendations made. All of the EqC's decisions will be translated into Russian and English, making them accessible to the minorities in Moldova and to the international community. A push strategy will be drafted, tested and implemented to ensure the highest possible rate of implementation of decisions, especially on systemic issues and problems. The libraries of both institutions will be endowed with a set of contemporary specialised academic literature. The EqC, as a newly created institution, will be provided with the necessary equipment, including printers, scanners, copy machines, computers etc., taking into account environmental and health factors.

3) Within the third component, the organisations' power will be maximised to act as the premier NHRIs in mainstreaming human rights and diversity into sectoral policies and legislation (in spheres such as justice, employment, education and medicine). The NHRIs will be supported in monitoring the implementation of the UPR recommendations in Moldova and in reporting within the second UPR cycle. Staff skills will be built to act on strategic litigation and individual cases around key human rights issues through improved research and documentation, the identification of strategic issues and action on them, and submissions to the Constitutional Court. The new legal revisions on the National Preventive Mechanism will be put into practice through training and consultancy. Mediation services will be developed for the first time within both NHRIs through training and follow-up implementation support. The communication strategy will be updated for the CHR and elaborated for the EqC, combined with their further implementation. The communication strategies, together with the public campaigns to be elaborated, will strengthen the messages of the CHR and the EqC on human rights and equality issues and make them heard in society and by the authorities. Key governmental bodies, including the Ministries of Education, Health and Social Protection and the Prosecutor's Office, as well as representatives of local public administrations, judges, lawyers, NGOs and mass media, will be trained on relevant equality, diversity and Human Rights Based Approach issues in order to mainstream these issues and put them into practice. A distance course on equality for professional groups and NGOs will be elaborated in the State language and in Russian. Six grants for NGOs and six grants for mass media will be provided through a public competition to mainstream equality and human rights, raise public awareness and support the submission of complaints to the EqC on at least the following prohibited grounds: disability, ethnicity (including Roma), sex and gender, sexual orientation, language, and HIV/AIDS status.
The direct target groups of the project are 1) the Ombudsperson's office and its staff, and 2) the five members of the Equality Council and its staff. Both institutions will benefit from the strengthening of their capacities, and their staff will improve their knowledge and skills through consultancy services, training and exposure to international experience. The final target groups are: 1) persons facing violations of their human rights, and persons belonging to minorities and vulnerable groups facing discrimination - victims and/or potential victims, who can benefit from the strengthened protection of their rights with the assistance of the Ombudsperson's office and the Equality Council; and 2) authorities, public officials and the management of business companies - those who can protect and respect, or violate, human rights; they will benefit from an increased understanding of equality and human rights, as well as from strengthened capacities to respect, protect and fulfil human rights, through the Ombudsperson's and the Equality Council's work with them.

If you have questions about this programme, you may wish to contact the RC office in Moldova, Republic of, or the lead agency for the programme.
https://mptf.undp.org/factsheet/project/00093083
Distributed Video Coding (DVC) is a very active research field that aims to provide simple encoders, needed by many low-resource applications. Unfortunately, all the proposed implementations of this type of coding claim that there is no way to design such a system as...

This contribution presents the results of a work in progress, attempting to use discrete curvature for triangle meshes in order to automatically identify specific structures in remote sensing data. Specifically, the focus was on determining isolated trees, on the basis of data acquired...

Fashion design expresses modernity and reflects changes in society, economy, politics and culture. As a result, fashion also changes very fast and distinctively, and for that reason improvement and creativity are indispensable. Nowadays, there are numerous fashion design systems/tools. H...

Liveness tests are techniques employed by face recognition authentication systems, aiming at verifying that a live face rather than a photo is standing in front of the system camera. In this paper, we study the resilience of a standard liveness test under imposter photo attack...

Studying links between phenotype/genotype and agricultural practices is one of the main topics in agronomy research. Phenotypes can be characterized by information such as the age and sex of animals/plants, and increasingly often with the help of image analysis of their morphology. From...

The paper presents the Immersive 3D Visualization Lab (at the Faculty of Electronics, Telecommunications and Informatics at Gdańsk University of Technology in Poland) and its applications prepared after its launch in December 2014. The main device of the lab is a virtual reality cubic ...

In this paper, we propose a new method to solve the problem of crystal lattice parameter identification. The developed method is based on applying the gradient steepest descent method. The two algorithms for crystal lattice parameter identification on the basis of the developed method...

This paper discusses intelligent space concepts, goals and developments. The review presents an analysis of seven intelligent meeting rooms, their equipment and the services developed. One of the main goals of intelligent space is the development of proactive services. Realization of such types ...

We present a medical image processing plug-in in this paper. Our plug-in uses Blender's environment and adds tools for medical image processing, 3D model reconstruction and measurement. There are several software solutions that provide these tasks, two of which are used for...

In this paper, approaches to the evaluation of software visualization for parallel computing are considered, using as examples the representation of call graphs and execution traces of parallel programs. The concept of a visualization metaphor is described. The visualization metaphors used to depi...

The work deals with an application of 3D printing to building full-size reachable sets in control problems. As an example, a simple car model is considered with nonlinear dynamics, a three-dimensional phase vector, and scalar control constrained in modulus (the Dubins car). Current st...

The COAST project at CEA/IRFU at Saclay involves astrophysicists and software engineers developing simulation codes in magneto-hydrodynamics and generic tools for data structuring and visualization. Thanks to the new generation of massively parallel mainframes, computing in astrophysics has made...

This work presents a new spatial verification technique for image similarity search.
The proposed algorithm evaluates the geometry of the detected local keypoints by building segments connecting pairs of points and analyzing their intersections in a 2D plane. We show that these intersec...

We present a new audio user interface for communication means based on automatic sound zoning (ASZ). Highly directive audio reproduction and capturing devices are combined with the output of a newest-generation depth sensor. We particularly use parametric loudspeaker and microphone arrays fo...

The project ISAC@OTH-AW will focus on developing an innovative expert system for data visualization and optimization to produce better manufacturing processes. A mandatory part of the project is the appropriation of Industry 4.0 technology benefits, like efficiency, quality and time to marke...

This paper describes the design and implementation of a hardware-software embedded system for face recognition applications in images and/or videos. The system has hardware components to speed up the face detection and recognition stages. It is a system suitable for applications requiri...

For long-range infrared systems, a new method is proposed in this paper to estimate the shutdown point of a ballistic missile. In order to reduce the effect of model error and of the positioning error of the observation point on estimation accuracy, two successive fitting corrections are used...

Virtual reality-simulated environments have been used for training for more than 40 years. In recent years, the active development of 3D technologies dealing with medical training, planning and guidance has become an increasingly important area of interest in both research and health care....

Creating 2D and 3D models in CAD and the subsequent creation of drawings are standard procedures for design in current industrial practice. Most models of industrially produced machine parts can be made by basic modeling processes using construction or hybrid methods. Listed...

3D culling techniques are well established for improving rendering performance, but cannot be applied to 2D games in which the scene is composed of partially transparent textures in a known layer arrangement. Commonly, 2D rendering is achieved with a simple back-to-front blending scheme...
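The spatial-verification abstract above describes building segments between matched keypoint pairs and analysing their intersections. The standard building block for that kind of analysis is an orientation-sign segment-intersection test; the sketch below is our own illustration of that primitive, not the cited paper's implementation.

```python
# Standard 2D segment-intersection test via orientation signs, as one might
# use to analyse intersections of segments connecting matched keypoints.

def orient(p, q, r):
    """Sign of the cross product (q-p) x (r-p): >0 left turn, <0 right, 0 collinear."""
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def segments_intersect(a, b, c, d):
    """True if segment ab properly crosses segment cd (ignoring collinear touches)."""
    return (orient(a, b, c) * orient(a, b, d) < 0 and
            orient(c, d, a) * orient(c, d, b) < 0)

# Two segments between hypothetical keypoint pairs:
print(segments_intersect((0, 0), (4, 4), (0, 4), (4, 0)))  # True: they cross
print(segments_intersect((0, 0), (1, 1), (2, 2), (3, 3)))  # False: disjoint
```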
Telehealth in aged care: does it have a promising future?

During lockdowns, senior Australians made fewer in-person visits to their GPs, and telehealth became a core service for non-urgent care. Now, after months of near-normalcy, the COVID-19 delta variant is sending the number of daily cases climbing to its highest point yet and driving cities back into lockdown. In this latest wave of the pandemic, ensuring that senior Australians continue to have safe access to health care is an important priority.

The government recently approved new Medicare items for telehealth consultations, and in a statement about the new items, Health Minister Greg Hunt cited telehealth’s “important role in supporting Australians through the pandemic”. With these new Medicare guidelines, it’s clear that telehealth is here to stay. But will it be able to effectively meet the needs of elderly Australians? It’s one thing for doctors to offer telephone consultations; it’s another for seniors to utilise them effectively.

As an aged-care provider headquartered in Perth and with additional offices throughout the country, we’ve had a front row seat to the impact of the COVID-19 pandemic on healthcare access. We wanted to know how comfortable Australians are with telemedicine and how likely they are to choose it when other options are available, so we conducted a survey to find out.

More than a temporary service?

Our survey revealed two key findings: most Australians are comfortable using telemedicine, but they prefer visiting their doctor in person.

Our first question asked “Do you feel comfortable using telemedicine technology?”, and we were surprised that a majority in all age groups said yes. Among young adults aged 18 to 39, 78% said they felt comfortable with telehealth software; that percentage decreased to 56% among seniors over age 65. Older people are generally less comfortable using new technology tools, and they may be more concerned about internet privacy, but even among seniors, a majority of those we surveyed said they’re comfortable with telehealth visits.

We speculated, however, that this might be a comfort born from necessity. We know that many Australians, including seniors, have been forced to learn how to use telehealth technology during lockdowns that made it difficult for them to access face-to-face care. This forced use could help them feel more comfortable using telemedicine tools. But do they prefer it, and will they continue using it when in-person care is available? That’s the question we wanted to explore.

Therefore, we also asked directly about our survey participants’ preferences. In our second question, we asked, “Would you prefer an in-person visit or remote telehealth consultation by your doctor?” On this question, Australians of all ages agreed: a significant majority would rather see their doctor in person. Specifically, 79% of ages 18-39, 76% of ages 40-64 and 84% of those 65 and older said they preferred to visit their doctor face to face.

Given this strong preference for face-to-face healthcare, the question still remains: is telehealth here to stay? Or will it grow to become an integral part of aged care?

The future

Telehealth is not a new practice in Australia, although the pandemic has greatly expanded its use. Telemedicine has long been a key component of access to specialist care for the approximately 28% of Australians who live in rural and remote areas.
And even though most people who live in rural areas are below age 65, those communities include a significant percentage of elders, especially Indigenous elders. As the trend continues for more Australians to age in place instead of moving to residential aged-care communities, the number of seniors living in remote areas will continue to grow.

For seniors who are already receiving home care, especially in remote and rural areas, telehealth visits can offer the best of both worlds. Home care packages can be used for virtual doctor visits with a GP as well as with a specialist. When in-person care is necessary, combining a virtual GP video call with a home visit from a nurse could cover many clinical assessments and even some treatments. Combining home care with telemedicine could fill many gaps in access, enabling elderly Australians to live comfortable, healthy lives in remote communities and minimising the need to travel for care.
https://www.hospitalhealth.com.au/content/agedhealth/article/telehealth-in-aged-care-does-it-have-a-promising-future--289071937
In November 2005, Goldman Sachs established our Environmental Policy Framework, which articulated our belief in the importance of a healthy environment and our commitment to addressing critical environmental issues. At that time, we were one of the first financial institutions to acknowledge the scale and urgency of challenges posed by climate change. In the decade since, we have continued to build upon our commitment to the environment across each of our businesses. See our 10-Year Milestones for highlights of our progress. Our ten-year juncture offers an opportunity to review progress both within Goldman Sachs and broadly across the market, and identify opportunities for us to do more. Our commitment to helping address critical environmental challenges and promoting sustainable economic growth remains unchanged, while our initiatives and progress will continue to advance. This updated document serves as a roadmap for us in that journey and a foundation on which we will continue to build as we look to the future. Key Tenets: We believe that a healthy environment is necessary for the well-being of society, our people and our business, and is the foundation for a sustainable and strong economy. We recognize that diverse, healthy natural resources – fresh water, oceans, air, forests, grasslands and agro-systems – are a critical component of our society and economy. We believe that technological and market innovation, driven in large part by the private sector working in concert with the public sector, is central to positive economic growth and environmental progress. Innovation will continue to play a critical role in solving societal challenges, including those relating to the environment. From advancements in clean technology to resource efficiency and the shared, connected economy, innovation can accelerate the transition to a low-carbon economy and sustainable future while creating new jobs and greater economic prosperity. We take seriously our responsibility for environmental stewardship and believe that as a leading global financial institution we must play a constructive role in helping to address environmental challenges. To that end, we will work to ensure that our people, capital and ideas are used to help find innovative and effective market-based solutions to address climate change, ecosystem degradation and other critical environmental issues, and we will seek to create new business opportunities that benefit the environment. In pursuing these objectives, we will not stray from our central business objective of creating long-term value for our shareholders and serving the long-term interests of our clients. Climate Change: Goldman Sachs acknowledges the scientific consensus, led by the Intergovernmental Panel on Climate Change, that climate change is a reality and that human activities are responsible for increasing concentrations of greenhouse gases in the earth’s atmosphere. We believe that climate change is one of the most significant environmental challenges of the 21st century and is linked to other important issues, including economic growth and development, poverty alleviation, access to clean water, food security and adequate energy supplies. Delaying action on climate change will be costly for our natural environment, to humans and to the economy, and we believe that urgent action by government, business, consumers and civil society is necessary to curb greenhouse gas emissions. 
How governments and societies choose to address climate change will fundamentally affect the way present and future generations live their lives. Markets are particularly efficient at allocating capital and determining appropriate prices for goods and services. Governments can help the markets in this regard by establishing a clear policy framework that, among other things, provides transparency around the costs of greenhouse gas (GHG) emissions and creates long-term value for GHG emissions reductions and investments in new technologies that lead to a less carbon-intensive economy. In addition to mitigation, which is a critical component of any strategy, governments and societies need to improve adaptability and strengthen resiliency as part of a comprehensive solution.

We recognize that we have an impact on the environment through our operations, our investments, and the production and services we finance on behalf of our clients. As an institution that brings providers and users of capital together, we believe that capital markets can and should play an important role in addressing environmental challenges, including climate change. To that end, we are committed to catalyzing innovative financial solutions and market opportunities to help address climate change. The Environmental Policy Framework articulates our initiatives across each of our business areas. The following are key highlights:

- Climate Mitigation: We will expand our clean energy target to $150 billion in financings and investments by 2025 to facilitate the transition to a low-carbon economy. To increase access to climate solutions, we will launch a Clean Energy Access Initiative that will target the deployment of clean energy solutions, such as distributed solar and clean cookstoves, to underserved markets. We will look to facilitate the efficient development of carbon markets and other climate-related market mechanisms as opportunities emerge.
- Climate Adaptation: We will help our clients more effectively manage exposure to climate impacts through capital market mechanisms, including weather-related catastrophe bonds, and identify opportunities to facilitate investment in infrastructure resiliency. We will also seek opportunities to promote financings and investments to address growing water and wastewater infrastructure needs. Where feasible, we will look to harness green infrastructure solutions such as forests as a complement to traditional infrastructure.
- Climate Risk Management: We will conduct a carbon footprint analysis across our Fundamental Equity business in Goldman Sachs Asset Management and work with our clients to analyze and understand the impacts of their portfolios. Across relevant advisory, financing and investing transactions, we will continue to apply a high standard of care in our Environmental and Social Risk Management, which includes guidelines and enhanced review of carbon intensive sectors (e.g., coal power generation, coal mining, oil & gas, forestry and palm oil) as well as climate change-related risk factors.
- Climate Approach in Our Operations: We will minimize our operational impact on climate change, strengthen our operational resiliency, and seek smart, sustainable solutions. We will achieve carbon neutrality across our own operations from 2015 onwards and target 100 percent renewable power to meet our global electricity needs by 2020. We will also target $2 billion in green operational investments by 2020.
https://www.goldmansachs.com/s/environmental-policy-framework/?view=mobile
The Technology Development Pilot Program, made possible with funding from the Office of the Vice-Principal, Research (VPR), aims to advance selected inventions with commercial potential, to position them for other funding opportunities and make them attractive to potential licensees or investors. Queen's Partnerships and Innovation (QPI), which supports the VPR's mission to be an essential catalyst for advancing research and knowledge mobilization and strengthening Queen's local, national and global impact, will assess the applications and oversee delivery of the pilot program.

Who is eligible to apply?
Queen's faculty members who:
- are tenured, tenure-track or clinicians with protected research time,
- hold external funding (or have held external funding in FY 2019-20 or 2020-21) from one or more domestic or international sources such as governments (including Tri-council agencies), companies, foundations, or other not-for-profit organizations,
- have research or technology that satisfies or exceeds the definition of Technology Readiness Level 3, and
- have submitted an Invention Disclosure Form to QPI in advance of this competition or as part of the application to this pilot program.

To apply, submit a completed application, along with an Invention Disclosure Form (if not previously submitted), to [email protected]. Deadline to apply: July 9th, 2021 at 4:00 pm EDT.

How much funding is available and when?
Successful applicants may receive $10,000 - $30,000 for their approved project, with funding to be made available in January 2022. Projects are expected to commence in January 2022 and be completed by December 31, 2022. The total pool of available funding is $60,000; specific project allocations will depend on the number of applications received and individual project requirements. It is anticipated that a minimum of two projects and a maximum of six projects will receive funding.

What criteria will be used to assess applications?
QPI will review and assess applications using the following criteria:
- Is the applicant eligible for this program?
- Does the invention currently meet or exceed the definition of Technology Readiness Level 3?
- Will the proposed project and statement of work advance the TRL of the invention and meaningfully strengthen its commercialization potential?
- Is the proposed budget in line with the proposed activities?
- Can the work be completed in one year and the budget spent by December 31, 2022? Are the human resources in place to complete the proposed activities by December 31, 2022?
- Were other organizations (academic, not-for-profit, government or industry) involved in creating the invention? If yes, do they have any rights to the invention or rights to commercialize it?
- Have any third parties expressed interest in the invention?

What is the adjudication process?
QPI will review, evaluate, and rank the applications using the above criteria, and will recommend the most promising applications, and the amount of funding to be awarded to each, to a review committee organized by the VP Research for approval. Recommendations will carefully consider the anticipated positive impacts on the development of the technology and the advancement of its commercialization potential in a reasonable period of time.

When will I be notified of funding decisions?
It is anticipated that faculty members will receive notification of project and funding decisions by November 15th, 2021.

What are my obligations if I am awarded funding?
Successful applicants will be required to submit an interim report (due June 30th, 2022) and a final report (due January 31st, 2023) to QPI, summarizing the progress made relative to the approved statement of work, the technology development and TRL objectives, expenses incurred relative to the approved budget, and any inquiries received from third parties regarding the technology.

Who do I contact if I have questions?
Please contact Queen's Partnerships and Innovation if you have questions about the program or application process.
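The stated expectation of two to six funded projects follows directly from the funding arithmetic. A minimal sketch in Python, using only the figures quoted above, makes the bounds explicit:

```python
# Funding figures quoted in the program description.
POOL = 60_000             # total pool of available funding ($)
PER_PROJECT_MIN = 10_000  # smallest individual award ($)
PER_PROJECT_MAX = 30_000  # largest individual award ($)

# If every award were at the maximum, the pool funds the fewest projects;
# if every award were at the minimum, it funds the most.
min_projects = POOL // PER_PROJECT_MAX  # -> 2
max_projects = POOL // PER_PROJECT_MIN  # -> 6

print(f"Between {min_projects} and {max_projects} projects can be funded.")
```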
https://www.queensu.ca/partnershipsandinnovation/news/qpi-introduces-new-technology-development-pilot-program-and-funding-opportunity-queens-faculty
By Value Hawk

Investment Thesis

The healthcare information technology (HCIT) industry has grown at a 9.3% CAGR since 2009, dragging healthcare providers into the 21st century with the help of the Affordable Care Act. Healthcare providers will take the driver's seat as the benefit of digital systems increases their demand. However, healthcare providers have also become more price sensitive when negotiating HCIT contracts. Well-positioned companies (those who focus on large hospital systems and interoperable solutions) will grow at a 6-12% CAGR through 2017.

Risks to Thesis

· Changes in product requirements: Changing government specifications will significantly increase costs for HCIT companies as they rework products to meet new requirements.
· Significant security issues: Security breaches could create patient mistrust of the system, reducing demand for HCIT solutions and increasing costs for the industry.

Industry Description

The HCIT industry includes companies specializing in software development, technology consulting, medical device integration, records maintenance, and revenue cycle management. The industry had revenues of $35 billion in 2013 and low market share concentration, and it has been the beneficiary of increasing government regulation since 2004.[3] Companies compete by selling software products and winning long-term service contracts. The HCIT industry is widely regarded as a growth industry, one which has benefited significantly from government regulation. Companies that specialize in providing electronic medical record (EMR) services for doctors and medical facilities saw the largest gains, as this was the goal set out by legislators. While most of the growth has occurred in the United States due to the transitioning health care system, opportunities also exist internationally.[1]

With many health care providers meeting the first benchmark established by the HITECH Act, HCIT companies have transitioned to meeting the second benchmark: improving care by using the EMRs established in Stage 1. This transition will shift revenues from software sales to systems management and maintenance. This is because healthcare providers have already adopted EMR software, and HCIT companies will have to provide new services to generate secondary revenue streams. For many companies, this secondary revenue stream is built in to their contracts through maintenance and services. For others, secondary revenues will come from the additional products they market to existing customers.

Companies compete in this industry in several ways. First, some companies benefit from working with health care providers of a specific size. For instance, Allscripts Healthcare Solutions Inc. (MDRX) made gains in market share by working primarily with small and mid-sized medical practices. In contrast, Epic Systems Corporation gained the largest market share by focusing on large health care organizations and academic medical centers. Companies also compete through their product offerings, where successful companies tailor their software to the needs of clients.

HITECH Act

The key topic discussed throughout this report is government regulation. This trend started with the American Recovery and Reinvestment Act (ARRA) of 2009, specifically the provision titled the HITECH Act. The Federal government established this policy to promote the use of EMRs in the United States' health provider system. The ultimate goal was to create a universal electronic health record (EHR) system.
The universal EHR system would enable health care providers to access a database of EMRs not only from their own system, but also from organizations across the country. The HITECH Act is characterized by three stages, which healthcare providers are required to meet. The requirement timeline depends on when a provider initiated the process; for instance, a provider who started the process in 2011 should be in Stage 3 by 2016.

[Chart: Deadlines for Stage Attestation. Source: Centers for Medicare and Medicaid Services]

According to the Centers for Medicare and Medicaid Services, Stage 1 involves adopting EMRs and using them in ways that positively affect patient care.[4] For attesting to Stage 1, providers can receive Medicare incentives of up to $43,720 spread over five years and Medicaid incentives of up to $63,750 spread over six years. Additionally, eligible providers who choose not to participate face Medicare payment reductions of 1% per year (up to a maximum 5% annual adjustment) beginning in 2015. Many healthcare providers initially focused on meeting Stage 1 goals, which represented the first wave of industry demand. However, it is the second wave of demand that will drive the HCIT industry forward.

Stages 2 and 3 represent the second wave of demand, which is characterized by the focus on interoperable systems. Stage 2 meaningful use involves increasing the amount of digitally available data and using secure electronic communication between doctors, labs, and pharmacies. Healthcare providers have until 2016 to meet Stage 2 requirements. The transition to Stage 2 will result in software sales below those recognized during Stage 1. As a result, companies will need to sell new products to their customers. Cerner (CERN) is a company that has made this transition: its revenue cycle management product is offsetting slowing software sales, and it generates recurring revenues, unlike the software sales of Stage 1. Stage 3 meaningful use is focused on improving "quality, safety, and efficiency through decision-support tools and patient self-management tools."[5] The deadline for early adopters to attest to Stage 3 is 2017. Companies that invested early in patient-friendly products will benefit most from Stage 3, as will companies with a proven track record in data security.

[Chart: Transition to Stage 2. Data source: Wells Fargo Equity Report[15]]

As of December 2014, almost 89% of hospitals and 63% of physicians had attested to Stage 1 meaningful use requirements, signaling an industry shift to Stage 2. The December 2014 data also reported that 55% of required hospitals and 8% of physicians had met Stage 2 requirements (physicians have until March 2015 to meet Stage 2). The push to meet Stage 2, and eventually Stage 3, will benefit the HCIT industry because it puts pressure on providers to adopt new technologies. Stage 2 involves installing interoperable programs, which connect hospitals to improve communication of EMRs. Stage 3 will also benefit HCIT companies, because it requires providers to communicate medical information with patients through secure methods. One example of this is eClinicalWorks' Patient Portal, which gives patients access to their medical records and a means to communicate with their doctors.[6] As providers attest to Stage 3 requirements in 2017, expect the maturing industry to grow 5-6% annually.
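To make the non-participation penalty concrete, here is a minimal sketch in Python of one reading of the schedule described above: a cumulative 1% Medicare payment reduction per year starting in 2015, capped at 5%.

```python
def medicare_adjustment(year: int) -> float:
    """Medicare payment reduction for eligible providers who do not
    participate: 1% per year beginning in 2015, capped at 5%."""
    if year < 2015:
        return 0.0
    return min(0.01 * (year - 2014), 0.05)

for year in range(2014, 2021):
    print(year, f"{medicare_adjustment(year):.0%}")
# 2014 0%, 2015 1%, 2016 2%, ... 2019 5%, 2020 5%
```

Under this reading, the full 5% adjustment is reached in 2019 and persists thereafter.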
Companies that provide interoperable systems will see the strongest growth through 2017. However, only companies that can effectively sell their business efficiency products will see above-average growth beyond 2017.

Consolidation of Healthcare Providers

According to IBISWorld, "Healthcare reform will likely lower industry prices and enforce reimbursement models that create powerful incentives for hospitals to form large systems of care… As a result of these changes, reform is expected to increase both the number and size of [hospital] mergers."[13] In 2014, there were 95 hospital mergers, a 44% increase since 2010, in addition to the 105 deals reported in 2012 and 98 reported in 2013.[14] These mergers are a result of the increased efficiency of large hospital systems, which matters in the post-Affordable Care Act environment, where there is strong pressure to reduce healthcare costs. While the industry is still largely fragmented, the increased consolidation will create more and larger contracts for HCIT companies. Additionally, companies that hold contracts with the purchasing healthcare provider increase their market share because of a merger. Ultimately, this trend will lead to more consolidation in the HCIT industry.

SaaS and Cloud Computing Focus

SaaS models involve providing clients with software solutions through a subscription system. Software is made available online, and updates are delivered through the cloud. SaaS is projected to account for 30% of industry revenue in 2014.[1] The resulting effect of SaaS on HCIT companies is a decline in revenues from maintenance services: since providers are able to update their own software through the cloud, their demand for an HCIT company's support will decline. On the other hand, maintaining a cloud is an expensive endeavor, and spending on cloud-based solutions is expected to grow at a compound annual growth rate of 20.5% from 2012 to 2017.[3] The benefits of a SaaS and cloud system are widely recognized by healthcare providers, and the decline in software revenues from SaaS models will be offset by service revenues from cloud maintenance.

Markets and Competition

[Chart. Data source: Company Financials]

The 2009 HITECH Act opened the field of competition in the healthcare technology industry. As healthcare providers work to meet the stage goals, they depend heavily on HCIT companies to provide solutions that meet government requirements, make their businesses more efficient, and improve their patient outcomes. In this market, companies vie for contracts through price, effectiveness of services, support, maintenance of systems, and the ability to tailor solutions to their clients' needs. While some companies benefit from working with large or merging healthcare providers, as discussed above, others are able to grow by working with small to mid-sized providers, for whom they provide more tailored solutions. In general, market concentration is low: while some companies hold a large share of a specific client type, few are able to provide solutions that meet the needs of all client segments.[1] Investing in research and development (R&D) and skilled programmers is very important in this industry, as it enables HCIT companies to meet a client's specific needs and provide new, differentiated software solutions. However, R&D also has a significant impact on cost structures, so a fine balance must be achieved in this regard.
Finally, as with many growing industries, the competition level is very high. As shown in the discussion of recent contract moves, the type of competition HCIT companies face is mostly within the industry. The availability of substitutes gives clients bargaining power, and the poor performance of an HCIT company's software or support could lead to the loss of an important contract.

[Chart. Data source: FactSet]

Epic Systems Corporation

Epic is a privately owned company, headquartered in Verona, WI, that specializes in mid-sized to large hospitals and integrated healthcare organizations. As such, the company focuses on providing enterprise-wide solutions for its clients. Epic leads its competitors with 97% of hospitals attesting to Stage 2 and is second in physician attestation at 26%.[15] Epic is well positioned in the industry with an estimated 22% market share.[1] In 2014, Epic made significant gains by winning new contracts at the expense of competitors including McKesson, Cerner, and Allscripts. Epic focuses on many of the key industry drivers listed above; specifically, it provides interoperable solutions and has a strong hold on the large hospital market. Epic has the best opportunity to increase its market share, and it poses the largest threat to the industry's publicly traded companies.

[Chart. Data source: FactSet]

McKesson Technology Solutions (MCK)

McKesson, located in San Francisco, CA, operates in two major segments. The first, McKesson Distribution Systems, focuses on the distribution of pharmaceuticals for biotech and pharmaceutical companies. The second segment, McKesson Technology Solutions, "delivers enterprise-wide clinical, patient care, financial, supply chain, strategic management software solutions, as well as connectivity, outsourcing and other services, including remote hosting and managed services, to healthcare organization[s]."[16] McKesson's 2014 annual revenues were $137.6 billion, with $3.18 billion coming from the McKesson Technology Solutions segment. The company also reported an operating margin of 1.2% and R&D expenses of $456 million in 2014.[16] Finally, MCK carries a higher proportion of debt than its peers, which will put it at a disadvantage if it wishes to increase market share through debt-financed acquisitions. McKesson's market share will not increase significantly, and I do not see the company as a positive investment opportunity in this industry. Its strengths and efficiencies lie in its distribution business segment, which is a significantly different business model from the typical HCIT company.

Cerner Corporation

Cerner, headquartered in Kansas City, MO, provides HCIT solutions to healthcare providers of all sizes, including hospitals, ambulatory facilities, and small physician practices. The company provides cloud-based systems and statistical algorithms aimed at improving patient outcomes, as well as software that improves billing cycle management, helping hospitals collect payments more effectively. Cerner's 2014 revenues were $3.4 billion and its operating margin was 22.4%.[17] It achieved this high margin thanks to its popular brand name, successful performance with large hospital systems, and comparatively large-scale operation. While CERN is able to compete for major contracts within the United States, it has also seen significant growth in the United Kingdom, the Middle East, and Australia. Cerner is the best-positioned publicly traded company in the HCIT industry because it also focuses on many of the drivers identified above.
Cerner's low debt level enables it to finance future acquisitions with debt, even though previous deals were financed with cash. CERN may nonetheless be stuck in the number two position because of the positive outlook for Epic, the current industry leader.

[Chart. Data source: Company Financials]

CareFusion Corp (CFN)

CareFusion, located in San Diego, CA, focuses on "areas of medical management, infection prevention, operating room effectiveness, respiratory care, and surveillance and analytics."[18] CareFusion operates in a slightly different environment than Epic, Cerner, and McKesson, providing products that improve efficiency and medical performance where its competitors look to provide enterprise-wide EMR solutions. CFN has a market share of 11%, revenues of $3.84 billion, and operating margins of about 18%.[18] While CFN presents an appealing growth opportunity based on its niche offering, the trend towards enterprise-wide HCIT solutions gives its competitors a better long-term strategy. As hospitals merge, they will force their smaller components to adopt their HCIT system, which could decrease CareFusion's market share while strengthening its competitors'.

[Chart. Data source: FactSet]

Catalysts for Growth

The main growth factor in the HCIT industry is the HITECH Act and the increased focus on reducing healthcare costs. The incentives and penalties will continue to drive investment into IT solutions. When new software and system sales growth slows, HCIT companies will need to transition to a maintenance-based model. The other important growth catalyst is the trend towards reducing healthcare costs while improving patient care. With the use of data analytics, increased communication among providers and patients, and the ability to remotely diagnose and treat patients, the HCIT industry will help achieve both of these goals.

Investment Positives

· The HCIT industry will continue to benefit from an increased awareness of healthcare costs in the United States. As pressure is applied to healthcare providers to become more efficient, they will seek HCIT companies to help improve their business model.
· The implementation of new IT solutions will generate immediate and long-term revenues for HCIT companies. As they transition from installing new systems to improving and maintaining those systems, they will continue to generate profits.

Investment Negatives

· As important as the HITECH Act was in spurring a growth opportunity in the HCIT industry, a change in legislation could significantly affect the industry. A change in the incentive/penalty policy would reduce demand from some healthcare providers, changing the outlook for the industry.
· Cloud storage is widely regarded as more secure than the paper-copy system previously used in hospitals. However, the threat of a data breach is very real; any significant event in this regard would cause Americans to reexamine the industry.

Valuation

In the coming years, the industry will transition towards system improvement and maintenance. I expect current growth rates of 6-12% to continue until 2017, when many healthcare providers will meet Stage 3 requirements. Beyond 2017, growth rates will slow to around 6%. Companies that focus on large hospital systems, interoperable products, and additional revenue streams are best positioned to take advantage of industry trends. Companies like Epic and Cerner will increase their market share as hospitals merge and look to reduce costs.
Epic and Cerner will grow closer to 12% per year through 2017, but even they will be subject to increasing pricing pressure and declining demand in the long term. In 2018 and beyond, Epic and Cerner will slow to 6% annual growth.

References

1. "Electronic Medical Record Systems in the US." IBISWorld. 2014.
2. NCHS Data Brief, 2013: http://www.cdc.gov/nchs/data/databriefs/db129.pdf
3. Barr, James. "Healthcare IT Systems Market Leaders." Faulkner Information Services. 2015.
4. Centers for Medicare and Medicaid Services: http://www.cms.gov/Regulations-and-Guidance/Legislation/EHRIncentivePrograms/index.html
5. The Office of the National Coordinator for Health Information Technology: http://www.healthit.gov/providers-professionals/how-attain-meaningful-use
6. Keston, Geoff. "Electronic Medical Records: Trends." Faulkner Information Services. 2013.
7. Wayne, Alex; Webb, Alex. "Cerner to Buy Siemens Health Data Business for $1.3 Billion." Bloomberg. 06 August 2014.
8. Herper, Matthew. "Cerner to buy Siemens Health IT for $1.3 Billion." Forbes. 05 August 2014.
9. Mead, Charles; Rana, Anurag. "Cognizant's TriZetto M&A Feeds Fastest-Growing Health-Care Unit." Bloomberg. 15 September 2014.
10. Boulton, Guy. "Mayo Clinic picks Epic Systems for Electronic health records." Milwaukee Journal Sentinel. 21 January 2015.
11. Armour, Stephanie. "Accenture Wins New HealthCare.gov Contract." The Wall Street Journal. 29 December 2014.
12. "Health IT Law & Industry Rep.: Elsewhere in the News." Bloomberg. 15 January 2015.
13. "Hospitals in the US." IBISWorld. 2015.
14. Hirst, Ellen Jean. "Hospital mergers continued to create larger systems in 2014." Chicago Tribune. 10 February 2015.
15. "Epic and Athena distance themselves in stage 2." Wells Fargo Equity Research. 03 February 2015.
16. McKesson Technology Solutions 2014 10-K report.
17. Company press release, 10 February 2015: http://www.cerner.com/erner_Reports_Fourth_Quarter_and_Full_Year_2014_Results/
18. CareFusion Corp 2013 10-K report.
19. "Total health expenditure." IBISWorld. 2014.

The views and opinions expressed herein are the views and opinions of the author and do not necessarily reflect those of Nasdaq, Inc.
https://www.nasdaq.com/articles/growth-still-possible-some-hcit-industry-2015-04-05
The first mention of the word in English cited in the Oxford English Dictionary was in 1936. The English word is borrowed from Modern Greek πίτα, in turn from the Byzantine Greek πίτα "bread, cake, pie, pitta" (attested in 1108) and possibly from the Ancient Greek πίττα or πίσσα "pitch/resin" (for the gloss), or Ancient Greek πικτή (pikte), "fermented pastry", which may have passed to Latin as "picta", cf. pizza. It was received into Levantine Arabic (as fatteh, since Arabic lacks the sound /p/). Other hypotheses trace the word back to the Classical Hebrew word patt פת (literally "a morsel of bread"). It is spelled like the Aramaic pittəṭā/pittā (פיתה), from which it was received into Byzantine Greek. Hypotheses also exist for Germanic or Illyrian intermediaries. The word has been borrowed by Turkish as pide, and appears in the Balkan languages as Serbo-Croatian pita, Romanian pită, Albanian pite, and Bulgarian pitka or pita. In Arabic, the phrase خبز البيتا (pita bread) is sometimes used; other names are simply خبز khubz 'bread', الخبز العربي 'Arab bread', or خبز الكماج 'al-kimaj bread'. In Egypt, it is called ʿaish (عيش) or ʿaish baladi (عيش بلدي).

Preparation

Most pita are baked at high temperatures (450–475 °F (232–246 °C)), which turns the water in the dough into steam, causing the pita to puff up and form a pocket. When removed from the oven, the layers of baked dough remain separated inside the deflated pita, which allows the bread to be opened to form a pocket. However, pita is sometimes baked without pockets and is called "pocket-less pita". Regardless of whether it is made at home or in a commercial bakery, pita is proofed for a very short time—only 15 minutes. Modern commercial pita bread is prepared on advanced automatic lines. These lines have high production capacities, processing 100,000-pound (45,000 kg) silos of flour at a time and producing thousands of loaves per hour. The ovens used in commercial baking are much hotter than traditional clay ovens—800–900 °F (427–482 °C)—so each loaf is baked for only one minute. The pita are then air-cooled for about 20 minutes on conveyor belts before being shipped immediately or stored in commercial freezers kept at a temperature of 10 °F (−12 °C).

Culinary use

Pita can be used to scoop sauces or dips, such as hummus, or to wrap kebabs, gyros, or falafel in the manner of sandwiches. It can also be cut and baked into crispy pita chips.

In Turkish cuisine, the word pide may refer to three different styles of bread: a flatbread similar to that eaten in Greece and Arab countries, a pizza-like dish where the filling is placed on the (often boat-shaped) dough before baking, and Ramazan pide. The first type of pide is used to wrap various styles of kebab, while the second is topped with cheese, ground meat, or other fresh or cured meats, and/or vegetables. Regional variations in the shape, baking technique, and toppings create distinctive styles for each region.

In Cyprus, pita is typically rounder, fluffier and baked on a cast-iron skillet. It is used for souvlakia, sheftalia, halloumi with lountza, and gyros.

Flat breads rarely appear in Greek cuisine; the Greek word pita means "pastry". Various cakes and pastries are pitas, such as spanakopita (spinach pie) and karydopita (walnut cake). Traditional breads in Greek cuisine are leavened loaves such as the round καρβέλι karvéli or the oblong φραντζόλα frantzóla. The full name of the flat bread known in English as pita bread is aravikē pita (lit. 'Arabic pastry'), though it is also called simply "pita". In Greece, pita bread is almost exclusively used as a component of the pita-souvlaki sandwich, consisting of souvlaki or gyros with tzatziki, tomatoes, onions, french fries, and condiments stuffed into a pita bread pocket.
http://www.morshedgohar.com/fa/blogs/single/20/Pita
Jacobs is one of the world's leading providers of technical, professional and construction services. We specialise in water, architecture, engineering and construction, operations and maintenance, as well as scientific and consulting services. Our client portfolio includes industrial, commercial, and government clients across multiple markets and geographies.

About the opportunity

With a strong pipeline of work, we are looking for a highly motivated historical heritage consultant to join our Cultural Heritage team, which forms a part of our broader Environment and Spatial operations centre in our Melbourne office. The successful candidate will play a key role in the development and successful performance of our historical heritage practice in Victoria, NSW and nationally. This role will provide technical input into historical heritage assessments and management plans, as well as supporting the delivery of projects through the application of project management experience. The role will also be integral in the development and management of client relationships, both internal and external, and provides the opportunity to mentor and train heritage staff within the team in the practice of historical heritage.

About you

You will hold an honours or postgraduate degree in a relevant field (e.g. heritage architecture, history, heritage management, archaeology). You will also have 5-7 years consulting experience, with a focus on historical heritage, and have the following key knowledge and skills:

- Comprehensive technical knowledge of State (preferably Victoria, NSW or Queensland) and Commonwealth heritage protection legislation and guidelines, and the ability to apply this
- Excellent written and verbal communication skills
- Demonstrated ability to complete heritage projects in a timely manner, within budget, and to a high level of quality
- Demonstrated ability to independently manage small technical teams in the delivery of cultural heritage services and projects
- Demonstrated ability to provide heritage advice and guidance to internal and external clients, to fellow team members, and to other technical specialists
- Excellent reporting and research skills
- Demonstrated ability in developing relationships and business with existing and new clients, both internal and external
- Demonstrated ability in the preparation of proposals and successfully bidding for heritage projects

Why Jacobs

The Jacobs Cultural Heritage team comprises 11 heritage specialists with experience in historical archaeology, Aboriginal archaeology, maritime archaeology, and built heritage. We work on projects ranging from large-scale multi-disciplinary infrastructure developments to smaller heritage-specific projects. Our key clients include state government road, rail and transport agencies, infrastructure construction contractors, water utilities companies, local government, and Commonwealth government departments including the Department of Defence. We provide services including preparing historical heritage assessments, statements of significance, heritage impact statements, archival photographic recording and conservation management plans; preparing applications for heritage permits and approvals; and field survey, archaeological excavation and analysis of historical artefacts.

At Jacobs we offer rewarding careers with ongoing development opportunities, flexible working arrangements and a culture that is collaborative and inclusive. We believe in collaboration and knowledge sharing, from global virtual teams to local work sharing options.
http://aus.jobs/melbourne-aus/senior-historical-heritage-consultant/E30607CFE14642E0AA99900A60D7C731/job/?vs=28
Iftar and sahur, the evening and pre-dawn meals of Ramadan, were once symbolised by the pastries known as böreği because they offered sustaining and satisfying snacks at the appropriate moment. The Turks in particular worry that their ancestral iconic snack food is a thing of the past, yet the prevalence of the börek across the ancient Ottoman lands, the Balkans and western Europe would seem to suggest that these pastries are still relevant. Known as byrek in Albania, banica in Bulgaria, buréki in Greece and burek in Serbia, the Turkish böreği tradition epitomises the relationship people have with their produce and place. For example, Turkish pastries include the following.

- Paçanga Böreği (air-dried beef and cheese)
- Hamsili Böreği (Black Sea anchovy)
- Patlicanli Böreği / Börek (aubergine)
- Kiymali Börek (beef)
- Gül Böreği (beef and cheese)
- Peynirli Börek (cheese)
- Muska Böreği (cheese and spinach)
- Sigara Böreği (cheese and parsley)
- Su Böreği (cheese and parsley)
- Pırasalı Alt Üst Böreği (leek)
- Patatesli Börek (potato)
- Ispanaklı Börek (spinach)
- Kol Böreği (various)

The beef and onion pastry of Bosnia and Herzegovina is so popular it is made across the region, despite indigenous versions like the half-moon beef and onion pastries of Anatolia. Böreği are both fried and baked; the baked tray pastries are known as tepsi böreği in Turkey, where the fillings include beef and onion, cheese and parsley, and spinach and onion. Generally böreği are made with yufka, the thin pastry dough, but they are also made with puff pastry, and these are generally filled with beef and onions or with cheese and parsley. In the coastal and central regions of Anatolia these pastries are also defined by the use of wild plants as fillings, such as baldiran (black lovage), gelincik (corn poppy), hardalotu (wild mustard), kazayağı (sickle weed), keçi körmeni (a type of wild garlic), kuşotu (chickweed), iğnelik (cranesbill) and yabani sarımsak (wild garlic).

Yabani Yeşiller Böreği (wild greens pastries)

- 1 kg yufka
- 500 g mixed greens (poppy, nettle, cress, wild garlic)
- 3 eggs
- 150 g white (feta) cheese, grated
- 100 ml milk
- 100 ml olive oil
- 60 g scallions, chopped
- 3 garlic cloves, crushed
- Salt, large pinch

Wash, pick over and chop the greens, then knead them with a little salt. Chop the scallions and garlic finely and mix into the cheese. Combine the milk, eggs and olive oil. Divide the yufka into four pieces. On each piece, spread some of the oil-milk-egg mixture and a spoonful of the cheese-greens mixture. Roll up and place in a round baking pan, starting from the middle. When you have rolled all the pieces, drizzle the remaining oil-milk-egg mixture over the pastries and cook in a medium oven for 40 minutes.

Muska Böreği (triangle pastries with cheese and spinach)

- 20 sheets yufka dough
- 500 g spinach, cooked, liquid squeezed out
- 250 g white (feta) cheese, mashed with a fork
- 3 eggs, beaten
- 45 ml olive oil
- 1 tsp green peppercorns, ground
- Water to seal the pastry

Preheat the oven to 180ºC. In a large bowl combine the cheese, 1 tablespoon of olive oil, two of the beaten eggs and the spinach. In a separate bowl, mix the remaining egg and 2 tablespoons of olive oil. Grease a rectangular baking tray with a little olive oil. Lay the pastry sheets on a clean surface and cut into 10 cm x 25 cm strips. Lay two strips on top of one another. Place 1 heaped tablespoon of the spinach mixture at one end of the pastry strip. Fold to form a triangle, fold over and continue until you are left with a triangular pastry; seal with water. Repeat with the remaining pastry and filling. Put the pastries on the greased tray and brush the top of each one with the egg-oil mixture. Bake for 25 minutes, until golden brown.
https://fricoteurope.com/2019/07/26/legendary-dishes-boregi-filled-pastries/
Alithia Inc. is a registered charity that will provide a space for re-imagining and re-examining learning. Our foundation begins with contemporary scientific understandings of child development, neuroscience and communication techniques. With an understanding of how the brain is wired, we can begin to support a learning experience that equips children with the tools, skills and abilities to continue to learn, adapt and contribute to a rapidly changing world.

Alithia aims to launch a unique learning space in January 2019. This will be the world's first Young Innovators Hub (YIH), where children are encouraged to instigate projects, develop ideas together and put them into action, using collaborative innovation and communication techniques. Skilled mentors and facilitators will run various workshops and hands-on activities (creative, cultural, environmental, educational STEAM, and holistic). We are offering a space to families and the global community where we can evaluate, research and showcase what a new approach to learning looks like. To cultivate the skills that will be needed for the future, children need to have their ideas and curiosities valued.

Alithia Learning will work closely with funding bodies, researchers and parents to document the issues, approaches, learning journey, and overall development. We will work collaboratively with experts and academics to publish information, research results and articles that will increase awareness and understanding of new approaches to learning in Australia and the world.

Alithia Learning was formed in response to research on anxiety and the behaviours that underlie many social issues; major factors lie in the way we interact and the way we currently approach learning. With our support, families can raise children who communicate and collaborate with others on a respectful and compassionate level, creating positive social change. We provide an opportunity to create significant change in the way learning and the socialisation of children is approached.

Alithia Learning is open for investment and we are taking applications from interested researchers. Alithia is financed entirely by donations from individuals and organizations who support the cause. All members of the Board of Directors are volunteers, who receive no financial remuneration for their work for Alithia. Donations to Alithia go to the establishment and running of the world's first Young Innovators Hub.
http://alithialearning.org.au/investors-research/
Kiel [1] provides an interesting analysis of connectivity among bivalve and gastropod assemblages at hydrothermal vents, cold seeps and whale falls, suggesting a role for sedimented vents as evolutionary stepping stones between vents and seeps, but providing no support for whale falls playing a similar role. We caution that the dataset for whale falls used in Kiel [1], as well as further available data, are insufficient for network analysis to yield conclusions regarding lack of connectivity between whale falls, vents and seeps. Although Kiel [1] listed some limitations of the study, we present several more below to highlight the weaknesses of the whale-fall analysis.

A global connectivity analysis based on presence–absence data [1] relies on the validity of at least two assumptions: (i) each vent, seep or whale-fall site included in the analysis has been well sampled; i.e. if a taxon does not occur in the data from a particular site, it is because it is absent from that site, and not a consequence of undersampling. Thus, absence from the data can be assumed to provide 'evidence of absence'; and (ii) the sites representing a particular habitat type are broadly (in fact, globally) distributed, with a substantial sample size within regions (e.g. ocean basins) to avoid geographical biases. In other words, there should be multiple (well sampled) sites for each habitat type in all regions across the globe so that patterns within regions, and within and between habitats, can be revealed.

The datasets for vent and seep habitats in Kiel [1] may well meet these assumptions, with 32 and 37 apparently well-sampled sites, respectively, distributed broadly across the ocean basins. However, the deep-sea whale-fall dataset fails to meet these assumptions owing to a small number (7) of whale-fall sites, with only two sampled thoroughly for epifauna and sediment infauna. The sampling of sediment infauna is particularly important because sedimented vent habitats are identified as a linkage between vents and seeps. To make clear the limitations of the Kiel [1] dataset, it is important to discuss the whale-fall sites used in more detail.

Kiel [1] included data from seven purported deep-sea whale-fall sites. One of the 'whale-fall' sites is actually an artificially implanted cow carcass [2]. Cow carcasses are dramatically smaller, and have different bone sizes and characteristics (e.g. lipid content), than the carcasses of great whales (see Smith *et al.* [3] for a discussion of the important characteristics of great whale falls), so there is little reason to expect that the full suite of species responding to a large whale carcass would be found at a single cow carcass. Two of the 'whale-fall sites' used in Kiel [1], off New Zealand and Iceland, were actually isolated bones recovered in trawls from unobserved locations at the seafloor [4–6]. Because these trawl samples (i) collected only portions of whale skeletons, (ii) lack any indication that the bones came from a large intact whale fall, (iii) probably sustained significant loss of whale-bone fauna during the trauma of trawl recovery (approx. 90% of bone-associated species can fall off whale bones even when carefully collected by submersible; C. Smith 1988–2005, personal observations), and (iv) did not include sediment infauna, only 'presence data' have any meaning for these 'whale-fall' sites. In other words, the absence of a taxon from a site could easily be owing to insufficient sampling. The use of trawled bones as adequate samples of entire whale-fall sites (which can contain tens of thousands of individuals and hundreds of species distributed both in sediments and over hundreds of bones [3,7,8]) is similar to dredging rocks from a hydrothermal vent and interpreting the attached fauna as representative of the full suite of species likely to be found at the vent site.

Finally, the datasets for the whale-fall sites in Monterey Canyon [9] and the Southern Ocean [10] in Kiel [1] included little or no infaunal sampling. Since vesicomyid bivalves (a key vent–seep–whale-fall molluscan taxon) at whale falls typically live buried within sediments underlying the bones and can only be fully identified with substantial sediment-sampling effort [11,12], these two sites are also very likely undersampled, especially with respect to vesicomyids. In fact, very recent infaunal data from the Monterey Canyon whale fall reveal the presence of vesicomyid genera also found at sedimented vents and seeps [13].

This leaves two well-sampled whale-fall sites on the northeast Pacific margin in Kiel's [1] 'global' analysis of connectivity among vent, seep and whale-fall habitats. The small number and restricted distribution of these whale-fall sites provides minimal opportunity to explore global connectivity patterns, yielding little basis for Kiel's [1, p. 1] statement that: 'The hypothesis that decaying whale carcasses are dispersal stepping stones linking these environments is not supported.' Although several other whale-fall sites, not included in Kiel [1], have been sampled well [12,13], the whale-fall dataset is still too sparse to support a global network analysis [3]. However, shared habitats (e.g. sulfidic hard substrates, sediments and bacterial mats [3]), taxa and phylogenetic histories do implicate whale falls as ecological and evolutionary stepping stones for deep-sea reducing habitats such as vents and seeps. For example, one northeast Pacific whale fall harbours 10 genera also known from seeps (Annelida, Dorvilleidae: *Parougia*, *Ophryotrocha*, *Schistomeringos*, *Exallopus*; Mollusca, Mytilidae: *Idas*; Vesicomyidae: *Archivesica*, *Pliocardia*, *Calyptogena*; Hyalogyrinidae: *Hyalogyrina*; Arthropoda, Isopoda: *Illyarachna*) and eight genera also known from vents (Annelida, Dorvilleidae: *Parougia*, *Ophryotrocha*, *Exallopus*; Polynoidae: *Bathykurila*; Mollusca, Mytilidae: *Idas*; Vesicomyidae: *Archivesica*, *Calyptogena*; Hyalogyrinidae: *Hyalogyrina*) [12]. Smith & Baco [7] report 10 species found on northeast Pacific whale falls that also occur at hydrothermal vents, and 20 species that also occur at seeps. On a whale skeleton in the abyssal South Atlantic, Sumida *et al.* [14] collected four genera of annelids shared with hydrothermal vents and/or cold seeps.
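To illustrate the kind of faunal-overlap counts that underlie such statements, here is a minimal sketch in Python; the genus sets are transcribed from the northeast Pacific whale-fall lists above, and the set operations, not the ecology, are the point:

```python
# Genera recorded at one northeast Pacific whale fall that are also
# known from seeps and from vents (transcribed from the lists above).
shared_with_seeps = {
    "Parougia", "Ophryotrocha", "Schistomeringos", "Exallopus",
    "Idas", "Archivesica", "Pliocardia", "Calyptogena",
    "Hyalogyrina", "Illyarachna",
}
shared_with_vents = {
    "Parougia", "Ophryotrocha", "Exallopus", "Bathykurila",
    "Idas", "Archivesica", "Calyptogena", "Hyalogyrina",
}

# Genera this single whale fall shares with both habitat types.
both = shared_with_seeps & shared_with_vents
print(len(shared_with_seeps), len(shared_with_vents), len(both))
# -> 10 8 7: one well-sampled whale fall already links all three habitats.
```

In a presence–absence network analysis of the kind discussed here, these shared genera are exactly the edges that undersampling would erase.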
Many of the genera and species shared between whale falls, vents and seeps are annelid worms, which constitute a substantial portion of the diversity in deep-sea chemosynthetic habitats (e.g. [15–17]), suggesting that a full network analysis of faunal connectivity across deep-sea chemosynthetic habitats should include the Annelida. The faunal overlaps across whale falls, vents and seeps, and the role of whale falls as ecological stepping stones, may well have been greater before the vast reduction of whale populations and the loss of whale falls resulting from human whaling activities [18]. Palaeo-ecological and phylogenetic studies of taxa associated with deep-sea vents and seeps also provide evidence for evolutionary connectivity with whale falls (reviewed in Smith *et al.* [3]), including the occurrence of basal clades of bathymodiolin mussels at whale falls [19] and indications of adaptive radiation at whale falls of taxa common at vents and seeps (e.g. the annelids in Siboglinidae [20] and Dorvilleidae [21,22]).

Because vent, seep and whale-fall faunas share taxa and phylogenetic histories, there is a strong need for intensive sampling of deep-sea whale-fall communities (including the sediment infauna) in multiple ocean basins to support network analyses of the type conducted by Kiel [1]. This will allow us to fully elucidate global connectivity patterns among these deep-sea reducing habitats, and to evaluate the roles that whale falls have played in supporting biodiversity and maintaining other ecosystem functions across the global ocean [23].

The accompanying reply can be viewed at http://dx.doi.org/10.1098/rspb.2017.1644.

Competing interests

We declare we have no competing interests.

Funding

Work on this paper was supported by NSF OCE grant no. 1155703 to C.R.S.

Comment on Kiel (2016): A biogeographic network reveals evolutionary links between deep-sea hydrothermal vent and methane seep faunas. *Proc. R. Soc. B* **283**: 20162337.
Grasses utilizing the C4 photosynthetic pathway have evolved repeatedly over the last ∼32 Ma (Christin et al. 2007, 2008; Vicentini et al. 2008; Bouchenak-Khelladi et al. 2009). These species play a major ecological role at the global scale, dominating warm climate grassland ecosystems (Still et al. 2003), and are important as agricultural crops (e.g. millets, maize, sugarcane), forage (Brown 1999) and biofuel feedstocks (Heaton, Dohleman & Long 2008). The potential importance of contrasts between C3 and C4 photosynthesis in determining ecological patterns, at scales up to and including the continental and global, has long been recognized and debated (Hatch, Osmond & Slatyer 1971; Osmond, Winter & Ziegler 1982; Pearcy & Ehleringer 1984). Controlling for phylogeny is crucial when comparing the ecophysiological traits of C3 and C4 grasses (Edwards, Still & Donoghue 2007; Edwards & Still 2008; Taylor et al. 2010). Molecular phylogenies place most commonly studied C3 grasses from temperate climates into a clade known as BEP (three subfamilies, Bambusoideae, Ehrhartoideae and Pooideae, exclusively C3), whilst C4 photosynthesis has arisen only in its largely tropical sister clade known as PACMAD (six subfamilies, Panicoideae, Aristidoideae, Chloridoideae, Micrairoideae, Arundinoideae and Danthonioideae, including both C3 and C4 photosynthetic types). These two clades diverged more than 50 Ma ago (Christin et al. 2008; Vicentini et al. 2008; Bouchenak-Khelladi et al. 2009), and recent work has shown that evolutionary divergences both between and within these clades may explain ecophysiological differences that were previously attributed to differences between C3 and C4 photosynthetic types and subtypes (Taub 2000; Edwards et al. 2007; Cabido et al. 2008; Edwards & Still 2008; Edwards & Smith 2010). Comparative analyses based on large molecular phylogenies indicate that C4 grasses tend to occupy a drier niche than their C3 relatives, and that the evolution of C4 photosynthesis facilitated ecological transitions into drier, open habitats (Edwards & Still 2008; Osborne & Freckleton 2009; Edwards & Smith 2010). These results are consistent with the experimental observation that, in mesic high irradiance conditions, C4 grasses typically achieve higher rates of net leaf photosynthesis (A) than their closest C3 relatives, whilst their stomatal conductance to water vapour (gs) is markedly lower (Taylor et al. 2010), i.e. C4 grasses exhibit higher intrinsic water-use efficiency (A/gs = iWUE). The ratio of A to leaf nitrogen content per unit area, photosynthetic nitrogen-use efficiency (A/Narea = PNUE), also tends to be greater for C4 grasses. However, recent experiments have suggested that such physiological advantages of C4 photosynthesis may not persist under drought. In two independent comparisons of C3 and C4 grasses from the subfamily Panicoideae, drought eliminated differences in A that were observed between C3 and C4 species under well-watered control conditions (Ibrahim et al. 2008; Ripley, Frole & Gilbert 2010). In pot-based studies, under well-watered conditions, gs in C3 species was initially higher, but under drought declined to values similar to or smaller than those observed in their C4 relatives (Ripley et al. 2007, 2010). Whilst stomatal limitation explained a large proportion of the total decline in A in C3 species, metabolic limitation was proposed to be the dominant effect on A in their C4 relatives. 
Although these experiments considered only a limited range of C4 species, their results undermine the hypothesis that the iWUE advantage of C4 grasses under mesic conditions, which is associated with high A and PNUE (Long 1999), persists under drought. We tested the generality of these findings with a comparative experimental approach, using a grass phylogeny to sample species representing multiple comparisons between independent C4 lineages and C3 sister groups. We concentrated on comparisons between C4 NADP-me and C3 photosynthetic types, which contribute the majority of phylogenetic diversity in photosynthetic types within the Poaceae (Christin et al. 2009). Our design did not control for habitat or life history, but relied on random sampling of species to represent the ecological diversity of the photosynthetic types. We imposed a drought treatment that consisted of a controlled decline in soil water content and addressed the key question: does drought have differential effects on A and gs in C3 and C4 species? In addition, to investigate the extent to which plants differing in photosynthetic type were tolerant of drought, we measured the response to drought of: (1) photosynthetic resource use efficiency (iWUE and PNUE); and (2) leaf senescence and leaf water potential.
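Since the comparisons above turn on two ratio metrics, intrinsic water-use efficiency (iWUE = A/gs) and photosynthetic nitrogen-use efficiency (PNUE = A/Narea), a minimal sketch of their calculation may help. The gas-exchange values below are hypothetical, chosen only to mimic the qualitative well-watered contrast described above (higher A and lower gs in C4 grasses):

```python
def iwue(A: float, gs: float) -> float:
    """Intrinsic water-use efficiency: net photosynthesis A
    (umol CO2 m-2 s-1) over stomatal conductance gs (mol H2O m-2 s-1)."""
    return A / gs

def pnue(A: float, n_area: float) -> float:
    """Photosynthetic nitrogen-use efficiency: A over leaf N
    per unit area (g N m-2)."""
    return A / n_area

# Hypothetical well-watered values mimicking the C3/C4 contrast above.
leaves = {"C3": {"A": 20.0, "gs": 0.30, "Narea": 1.8},
          "C4": {"A": 30.0, "gs": 0.20, "Narea": 1.5}}

for label, leaf in leaves.items():
    print(label,
          f"iWUE = {iwue(leaf['A'], leaf['gs']):.0f} umol mol-1",
          f"PNUE = {pnue(leaf['A'], leaf['Narea']):.1f} umol g-1 s-1")
```

With these illustrative numbers the C4 leaf shows roughly double the iWUE and PNUE of the C3 leaf; the experiments discussed above ask whether that advantage survives drought.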
http://onlinelibrary.wiley.com/doi/10.1111/j.1365-3040.2010.02226.x/full?globalMessage=0
Newcomers moving into the Texas Panhandle may consider our region's climate and conditions harsh due to its windy nature and periods of insufficient and sporadic precipitation. Many people find it difficult to garden within the limits of our climate and soil conditions, having become accustomed to climates that receive more moisture and are not subject to our winds. Our normal climatic conditions are not extreme, especially when viewed in comparison to areas of the Southwest that experience drier and hotter conditions. Extreme climatic conditions occur everywhere from time to time. If our climate continues to warm as many think, we may be forced to be a great deal more efficient in our use of water.

The Texas High Plains region receives approximately 17–21 inches of moisture a year, which classifies our area as a semi-arid region. Many areas of the Texas Panhandle have been able to supplement the annual rainfall with irrigation from underground wells and aquifers, chiefly the Ogallala Aquifer. However, these resources are being depleted faster than they can naturally be replenished. Water is one of the most precious resources on Earth, a commodity more in demand as the world's population and consumers' needs and wants increase. Although population and people's needs and wants are not theoretically finite, our water resources are.

Effective Precipitation Can Be Much Less

Our rainfall is sporadic, never dependable. And when it does come, sometimes not all of it is usable. All precipitation is added into the total annual rainfall amount of 17–21 inches; our effective, actual, usable rainfall is less. How much less depends on gardening practices, soil management and our weather.

- Rainfall in quantities of less than a quarter inch, especially during the heat of summer, does scant good. Much is lost through evaporation or moistens only the very top of the soil.
- If turf has more than a half-inch of thatch, this rainfall may not even reach the soil and the root zone.
- If the ground is hard and compacted, lacking organic matter and a healthy biological system that promotes enhanced water absorption and drainage, much of it may run off instead of soaking in.
- Sometimes our rainfall comes in intense cloudbursts, too quickly for much of it to soak in, and is lost to runoff.

Xeriscape Gardening – Focus on Water Conservation

Greater efficiency in water use can and must be achieved at all levels of our existence. Xeriscape gardening is creative gardening with the goal of water conservation in mind. A successful xeriscape gardener will implement water-conserving techniques in each of the 7 principles, not just the principle pertaining to irrigation. Xeriscape gardening in the Texas Panhandle includes high, medium and low water-use zones. Depending on water availability, you may decide to plant mostly low water-use plants for the greatest water conservation. But there are many more ways in which gardeners can use water more efficiently besides the use of xeric plants, without sacrificing the beauty of their gardens. This principle of xeriscape gardening has actually been termed "Efficient Irrigation," but I think of it as "Efficient Use of Water." It encompasses more than just installing a drip irrigation system (the most efficient irrigation system for our dry, hot, sunny and windy climate). Most books and catalogs I've read regarding our typical medium and high water-use plants tell us to keep the soil "evenly moist" for best results.
Even watering will yield good results, but simply adding water to our Panhandle soils is far from the complete answer for best gardening results. In an attempt to achieve the best results, we often over-water. Over time, this causes salinity problems and waterlogged, leached and/or oxygen-starved soil. Earlier pages of this website show that water use can be greatly curtailed through better garden design, amending the soil, and limiting high water-use areas of bluegrass and fescue turf. The use of mulches, proper maintenance and choosing appropriate low water-use plants will also conserve water. Reduce Voluntarily to Avoid Severe Water Restrictions Many utilities in Colorado (and the Southwest) have, out of necessity, prioritized water use according to three main categories. - Indoor water use is defined as essential (regardless of inefficiency or waste). - Business use is defined as essential (this includes car washes, bottling companies and golf courses). - Landscaping is defined as non-essential (regardless of the impact on plant-related businesses and homeowners' investment in the landscape). (Information from the Colorado Gardener, article by Mikl Brawner, Feb/March 2003.) Water authorities have learned that instituting water restrictions on landscapes isn't the most effective way to reduce water consumption; it is only one way to regulate it. Voluntary reduction of water use through greater efficiency is preferred to mandatory restrictions. However, mandatory restrictions are still instituted when supplies become limited during peak demand periods and drought. Many of our residential landscapes are composed of turf grass that requires an average of an inch of water, or more, per week during the growing season. It has been calculated that for every 1000 square feet of lawn, 626 gallons per week are required, or about 10,000 gallons over the summer. Locally, "during the dry summer of 2002, the city of Amarillo pumped an average of 65 million gallons of water a day," according to Emmett Autrey of the city's water department. "Of that, an average of 40 million gallons a day went to watering lawns." (Amarillo Globe-News, January 18, 2003, www.amarillo.com). Over sixty-one percent of our treated water is used for our home landscapes. It is no wonder restrictions are first placed on irrigating home landscapes during droughts or peak demand periods. Water is a limited resource in our area, as it is in many other areas. Residential landscapes use anywhere from 50-70 percent of the drinkable water during the hot growing season. The Southern Nevada Water Authority has similar water restriction guidelines and drought stages related to water availability and demand. Outside water restrictions are among the first restrictions implemented by the cities within the SNWA. Even though the Southern Nevada Water Authority's campaign and incentives for more efficient water use have reduced consumption, in severe cases this is not enough. SNWA also prohibits certain types of vegetation as the drought stages progress (www.snwa.com/html/drought_stages.html). By implementing water-efficient practices in our residential landscapes, we can all work together to reduce consumption, especially during peak periods, and avoid water use restrictions in our home landscapes.
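As a rough check on the figures above, here is a minimal sketch of the lawn-water arithmetic. The 16-week season length and the 7.48 gallons-per-cubic-foot conversion are illustrative assumptions, not figures taken from the sources quoted above:

```python
# Rough lawn irrigation arithmetic for the figures quoted above.
GALLONS_PER_CUBIC_FOOT = 7.48  # standard US conversion

def weekly_gallons(lawn_sqft: float, inches_per_week: float = 1.0) -> float:
    """Gallons needed to apply `inches_per_week` of water over `lawn_sqft`."""
    cubic_feet = lawn_sqft * (inches_per_week / 12.0)  # inches -> feet of depth
    return cubic_feet * GALLONS_PER_CUBIC_FOOT

weekly = weekly_gallons(1000)  # ~623 gallons, close to the 626 quoted
season = weekly * 16           # roughly 10,000 gallons over a 16-week summer
print(f"{weekly:.0f} gal/week, about {season:,.0f} gal/season")
```

The quoted 626 gallons presumably rests on a slightly different conversion factor; either way, the order of magnitude is the point.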
A home landscape in which the 7 principles of xeriscape gardening are practiced, including drought-tolerant plants and turf, will be better able to survive if watering is severely restricted.
https://www.highplainsgardening.com/efficient-water-use/xeriscape-focus-water-conservation
Aventine operated in a volatile, narrow-margin business environment. Employees needed to be armed with the most up-to-date information so that they could make data-informed decisions on the fly. Unfortunately, decision making and strategic planning capabilities were severely diminished due to limited visibility into financial data. Static reports, which were prepared by the finance department, were: - Delivered once per month, which was not frequent enough - Backward-looking, always a month behind - Limited, excluding critical financial metrics - Summarized, showcasing numbers in aggregate with no ability to drill down into the data Not only were these reports inadequate, they were manual and time-consuming to produce. Further, because the reports did not meet needs, the finance department often had to spend additional time completing ad-hoc reports and analysis for specific requests. Action I led the effort to implement business intelligence software tools. Once the tools were implemented, I developed a series of scorecards, dashboards, and reports to allow teams to monitor operational and financial performance. Specific activities are summarized here and detailed below: - I worked with business teams that needed access to data to understand requirements - I worked with the finance department to understand the data architecture of the financial system and their processes - I partnered with IT to identify and evaluate business intelligence tools that could integrate with the company's financial system (Oracle ERP) - I led the effort to integrate the financial system with the business intelligence software - I developed robust, dynamic scorecards and dashboards for each business audience, which allowed teams to monitor operational & financial performance Results - Increased visibility into real-time, actionable data - Better, faster decision making - Improved strategic planning and financial reporting activities - Reduction in time spent by finance teams; increase in accuracy and usefulness of data Additional Details Step 1: Conducted one-on-one stakeholder interviews with all parties: - C-level officers - Senior executive team members - Management team members (multiple departments) - Front-line employees (multiple departments) Step 2: Organized all information, preparing various cross-referenced documents that articulated business goals and requirements:

| DOCUMENTS | CONTENTS |
| --- | --- |
| GOALS & OBJECTIVES | Enterprise business objectives · Departmental strategic objectives · Supporting KPIs and metrics |
| PERSONAS | Persona details · Goals · List of tasks (What decisions does the persona make with data?) |
| TASK ANALYSIS | Steps involved in each task · Data needed for each task |
| DIMENSIONS & MEASURES | List of dimensions and measures required · For each dimension and measure: related tasks, related personas, data permission level (to restrict visibility of sensitive data), must-have vs. nice-to-have |

Step 3: Created wireframes which illustrated possible dashboard layouts. This was an iterative process which required continuous feedback from stakeholders. Step 1: Conducted stakeholder interviews with select members of the finance team to discuss: - The process used to create monthly reports - Data points included on reports - Financial assumptions - Manual calculations - The process used to conduct ad-hoc analysis - The most common ad-hoc requests - Pain points and challenges in compiling reports and conducting ad-hoc analysis Step 2: Updated previously created requirement documents with the new information Step 1: I developed scoring criteria based on business and technical requirements Step 2: I evaluated multiple business intelligence tools using a Pugh matrix Step 3: I conducted an NPV analysis on the top 3 options (see the sketch after this section) Step 4: I provided analysis and recommendations to C-level executives. Ultimately, my recommendations were selected. Step 1: Worked with members of the IT team to construct the "as-is" data schema. This schema visualized the data architecture of the company's Oracle ERP system (Oracle SQL Modeler) Step 2: Conducted a gap analysis; compared the data schema to requirements to identify: - Missing data points - Missing dimensions - Data that needed more granularity Step 3: Worked with members of IT to construct and document the "to-be" data model. This model included several considerations: Developed 20+ custom dashboards and reports for various stakeholders: - C-level officers - Senior executive team members - Management team members (multiple departments) - Front-line employees (multiple departments) Each dashboard included: - Filters to change dimensions (date, categories, etc.) - Drill-down data for deeper analysis To minimize cognitive load and increase understandability, every dashboard had 4 common sections.
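To make the evaluation step concrete, here is a minimal sketch of how an NPV comparison of competing BI tools might look. Every figure below (license and implementation costs, annual savings, the 10% discount rate) is a hypothetical placeholder, not a number from the actual engagement:

```python
# Hypothetical NPV comparison of BI tool options; all cash flows are
# illustrative placeholders, not figures from the actual evaluation.

def npv(rate: float, cashflows: list[float]) -> float:
    """Net present value of cashflows, where cashflows[0] occurs today."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

options = {
    # year 0: license + implementation cost; years 1-3: net savings
    "Tool A": [-250_000, 120_000, 130_000, 140_000],
    "Tool B": [-180_000, 90_000, 95_000, 100_000],
    "Tool C": [-320_000, 150_000, 160_000, 170_000],
}

for name, flows in sorted(options.items(), key=lambda kv: -npv(0.10, kv[1])):
    print(f"{name}: NPV @ 10% = ${npv(0.10, flows):,.0f}")
```

In practice, the Pugh-matrix scores would narrow the field on business and technical fit first, with NPV serving as the financial tiebreaker among the finalists.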
https://www.mstiffanybritt.com/case-study/brought-it-finance-and-business-teams-together-to-build-business-intelligence-reports-and-dashboards/
This article gives a quick overview of how the Deep Packet Inspection (DPI) analysis tool works on the EdgeRouter. NOTES & REQUIREMENTS: Deep Packet Inspection was introduced in EdgeOS firmware 1.7.0 and is available on all EdgeRouter models running the latest firmware. Deep Packet Inspection on the EdgeRouter Compared to traditional packet analysis tools, which only give a glimpse of packet information such as port number and IP address, the Deep Packet Inspection method analyzes and reports the actual data contents of the IP packet, in some cases even for encrypted traffic. When enabled, the DPI engine drills down to the core of the packet, collecting and reporting information at the Application layer, such as the traffic volume of a particular application used by a host. To omit information about application type, select hosts only. In contrast to the expensive and slow DPI methods in today's router market, Ubiquiti's proprietary DPI tool integrates with the EdgeRouter's hardware offload feature. This means DPI supports the most common network traffic and protocols, including IPv4, VLAN tags, PPPoE, and more. The EdgeRouter works behind the scenes to automatically update its inspection signatures to ensure traffic is categorized as accurately as possible. Note that by default the DPI engine recycles data after 30 minutes of inactivity; however, it retains data for any combination of host and application that passes traffic again within that 30-minute window.
https://help.ubnt.com/hc/en-us/articles/204951104-EdgeMAX-mecanismo-de-inspe%C3%A7%C3%A3o-profunda-de-pacotes-para-EdgeRouter
This movie of datasets is a companion to a flat-screen data visualization, Coral Reefs in Hot Water, produced by the American Museum of Natural History's Science Bulletins program. Corals are extremely sensitive to water that is too warm - even temperatures just 1°C above the highest average summertime temperature. If corals bathe in water above this critical threshold for just four weeks (or at higher temperatures for even shorter durations), the accumulated heat stress can induce coral bleaching, a condition where coral polyps expel their beneficial algae and starve. Bleached coral turns white and can die or remain weakened for years. Although coral can bleach for reasons other than warm water, in recent decades a worrisome pattern has emerged. Episodes of global coral bleaching are becoming more frequent. These widespread events are thought to be among the earliest distinct signs of climate change's effects on Earth's organisms. According to NOAA scientists, 2010 tied with 2005 as the warmest year on record. Throughout the year, satellite monitoring from NOAA's Coral Reef Watch program detected that sea-surface temperatures exceeded the bleaching threshold for several weeks in various regions of the world. Scientists and reef managers soon began to observe extensive bleaching in many of the areas where the satellites had predicted it. The 2010 global bleaching event was the second ever recorded, with the first occurring during 1997-98. The following datasets from the flat-screen visualization are included for spherical display. The movie contains: - A global image showing the location of reef-building corals (indicated in blue) between 35°N and 35°S latitudes. The location data are from the World Resource Institute's Reefs at Risk Revisited report. - A 1-year (January to December 2010) time series of Degree Heating Weeks data from the NOAA Coral Reef Watch program. The dataset is based on sea-surface temperature measurements taken every three days from the AVHRR sensor on NOAA's polar-orbiting satellites. The Degree Heating Weeks dataset shows how much heat stress has accumulated in an area over the past 12 weeks. Scientists have calculated thresholds of accumulated heat stress that put corals at risk for bleaching and death. These thresholds are represented by the indicated colors. (A simplified sketch of the Degree Heating Weeks calculation appears after the standards listings below.) - A single Degree Heating Weeks image that represents the accumulated heat stress for all of 2010. - A map indicating observations of bleached and dead coral in 2010. These observations were reported to ReefBase and the NOAA Coral Reef Watch program. Coral Reefs in Hot Water was produced in collaboration with the NOAA National Environmental Satellite, Data, and Information Service (NESDIS), the NOAA Climate Program Office, and the Coral Reef Watch program at NESDIS. The flat-screen visualization and associated educator resources are available at http://sciencebulletins.amnh.org/?sid=b.v.coral_reefs.20110511 Length of dataset: 1:46 Next Generation Science Standards Cross-cutting Concepts Grades 6–8 C2 Cause and Effect. Students classify relationships as causal or correlational, and recognize that correlation does not necessarily imply causation. They use cause and effect relationships to predict phenomena in natural or designed systems.
They also understand that phenomena may have more than one cause, and some cause and effect relationships in systems can only be described using probability. C3 Scale, Proportion and Quantity. Students observe time, space, and energy phenomena at various scales using models to study systems that are too large or too small. They understand phenomena observed at one scale may not be observable at another scale, and the function of natural and designed systems may change with scale. They use proportional relationships (e.g., speed as the ratio of distance traveled to time taken) to gather information about the magnitude of properties and processes. They represent scientific relationships through the use of algebraic expressions and equations. C7 Stability and Change. Students explain stability and change in natural or designed systems by examining changes over time, and considering forces at different scales, including the atomic scale. Students learn changes in one part of a system might cause large changes in another part, systems in dynamic equilibrium are stable due to a balance of feedback mechanisms, and stability might be disturbed by either sudden events or gradual changes that accumulate over time. Grades 9–12 C1 Patterns. Students observe patterns in systems at different scales and cite patterns as empirical evidence for causality in supporting their explanations of phenomena. They recognize classifications or explanations used at one scale may not be useful or may need revision at a different scale, thus requiring improved investigations and experiments. They use mathematical representations to identify certain patterns and analyze patterns of performance in order to re-engineer and improve a designed system. C2 Cause and Effect. Students understand that empirical evidence is required to differentiate between cause and correlation and to make claims about specific causes and effects. They suggest cause and effect relationships to explain and predict behaviors in complex natural and designed systems. They also propose causal relationships by examining what is known about smaller-scale mechanisms within the system. They recognize changes in systems may have various causes that may not have equal effects. C3 Scale, Proportion and Quantity. Students understand the significance of a phenomenon is dependent on the scale, proportion, and quantity at which it occurs. They recognize patterns observable at one scale may not be observable or exist at other scales, and some systems can only be studied indirectly as they are too small, too large, too fast, or too slow to observe directly. Students use orders of magnitude to understand how a model at one scale relates to a model at another scale. They use algebraic thinking to examine scientific data and predict the effect of a change in one variable on another (e.g., linear growth vs. exponential growth). C7 Stability and Change. Students understand much of science deals with constructing explanations of how things change and how they remain stable. They quantify and model changes in systems over very short or very long periods of time. They see some changes are irreversible, and negative feedback can stabilize a system, while positive feedback can destabilize it. They recognize systems can be designed for greater or lesser stability. Disciplinary Core Ideas Grades 6–8 ESS2.D Weather & Climate. Complex interactions determine local weather patterns and influence climate, including the role of the ocean.
ESS3.C Human Impact on Earth Systems. Human activities have altered the biosphere, sometimes damaging it, although changes to environments can have different impacts for different living things. Activities and technologies can be engineered to reduce people's impacts on Earth. ESS3.D Global Climate Change. Human activities affect global warming. Decisions to reduce the impact of global warming depend on understanding climate science, engineering capabilities, and social dynamics. LS1.C Organization for Matter and Energy Flow in Organisms. Plants use the energy from light to make sugars through photosynthesis. Within individual organisms, food is broken down through a series of chemical reactions that rearrange molecules and release energy. LS2.A Interdependent Relationships in Ecosystems. Organisms and populations are dependent on their environmental interactions both with other living things and with nonliving factors, any of which can limit their growth. Competitive, predatory, and mutually beneficial interactions vary across ecosystems but the patterns are shared. LS4.D Biodiversity & Humans. Changes in biodiversity can influence humans' resources and the ecosystem services they rely on. Grades 9–12 ESS2.A Earth Materials and Systems. Feedback effects exist within and among Earth's systems. The geological record shows that changes to global and regional climate can be caused by interactions among changes in the sun's energy output or Earth's orbit, tectonic events, ocean circulation, volcanic activity, glaciers, vegetation, and human activities. ESS2.D Weather & Climate. The role of radiation from the sun and its interactions with the atmosphere, ocean, and land are the foundation for the global climate system. Global climate models are used to predict future changes, including changes influenced by human behavior and natural factors. ESS3.C Human Impact on Earth Systems. Sustainability of human societies and the biodiversity that supports them requires responsible management of natural resources, including the development of technologies that produce less pollution and waste and that preclude ecosystem degradation. ESS3.D Global Climate Change. Global climate models used to predict changes continue to be improved, although discoveries about the global climate system are ongoing and continually needed. LS1.C Organization for Matter and Energy Flow in Organisms. The hydrocarbon backbones of sugars produced through photosynthesis are used to make amino acids and other molecules that can be assembled into proteins or DNA. Through cellular respiration, matter and energy flow through different organizational levels of an organism as elements are recombined to form different products and transfer energy. LS2.A Interdependent Relationships in Ecosystems. Ecosystems have carrying capacities resulting from biotic and abiotic factors. The fundamental tension between resource availability and organism populations affects the abundance of species in any given ecosystem. LS2.C Ecosystem Dynamics, Functioning and Resilience. If a biological or physical disturbance to an ecosystem occurs, including one induced by human activity, the ecosystem may return to its more or less original state or become a very different ecosystem, depending on the complex set of interactions within the ecosystem. LS4.B Natural Selection. Natural selection occurs only if there is variation in the genes and traits between organisms in a population. Traits that positively affect survival can become more common in a population.
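As referenced above, here is a simplified, weekly-resolution sketch of the Degree Heating Weeks calculation. NOAA's operational product works from satellite composites taken every few days and a per-location maximum-monthly-mean climatology; the weekly series, the sample temperatures, and the variable names below are illustrative simplifications:

```python
# Simplified Degree Heating Weeks (DHW): accumulate sea-surface-temperature
# excursions of at least 1 degC above the local bleaching threshold over a
# rolling 12-week window. Units are degC-weeks.

def degree_heating_weeks(sst_weekly: list[float], mmm: float) -> list[float]:
    """DHW for each week of a weekly SST series; `mmm` is the maximum
    monthly mean, i.e. the highest average summertime temperature."""
    hotspots = [max(0.0, t - mmm) for t in sst_weekly]
    dhw = []
    for i in range(len(hotspots)):
        window = hotspots[max(0, i - 11): i + 1]        # last 12 weeks
        dhw.append(sum(h for h in window if h >= 1.0))  # only stress >= 1 degC accumulates
    return dhw

# Four weeks at 1 degC above the threshold -> DHW = 4 degC-weeks,
# matching the four-week bleaching rule of thumb described above.
series = [27.0] * 8 + [29.0] * 4 + [27.5] * 8
print(degree_heating_weeks(series, mmm=28.0))
```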
https://sos.noaa.gov/catalog/datasets/coral-reefs-in-hot-water/
Lunar Lava Caves Might Offer Shelter For First Moon Colony Lunar caves hollowed out by ancient lava flows might offer humanity the perfect place to establish the first base on the moon, according to researchers from Purdue University. Geophysicist Dave Blair and his university colleagues theorize that there could be huge caverns under the moon's surface, hollowed out by rivers of molten rock, that may be wide enough to fit an entire lunar colony. On Earth, lava tubes formed when rivers of molten rock hollowed out the surrounding stone, leaving behind empty channels. The lava tubes running under Hawaii and Iceland are cylindrical in shape and measure no more than 98 feet in diameter, but the moon's lower gravity means there could be massive underground caverns beneath the lunar surface. Blair's research paper, published in the January 2017 issue of Icarus, presents an analysis of the moon's gravity, based on NASA data, indicating there could be mile-wide lava tubes under the lunar surface. [Image by TannerLewis/Shutterstock] Data from the Gravity Recovery and Interior Laboratory mission indicates the cavernous lava tubes, up to 3 miles wide, could be structurally sound with a roof six feet thick. These cavernous lava tubes could offer lunar explorers protection from the harsh conditions on the moon and enable humanity to establish the first lunar base, science consultant Andrew Daga told National Geographic. "Nothing that we can build on the surface using reasonably available technologies could provide the same protection as the interior of a lava tube." Entrances to the lunar lava caves, known as skylights, were first photographed by the Japanese spacecraft Kaguya in 2009 and indicate the presence of underground caves, aerospace researcher Junichi Haruyama told National Geographic. "Lava tubes … provide ready-made protection from the harsh lunar environment: meteorite bombardment, radiation from space, and the large changes in temperature through the lunar day." The European Space Agency has long expressed a desire to build a lunar colony and international research station, which it has dubbed the "Moon Village." [Image by Stocktrek Images/Shutterstock] Scientists and engineers from across the globe would gather at this international oasis on the moon to study the mysteries of deep space. The ESA is exploring various 3D-printing construction methods that utilize components of the lunar soil to build the lunar base. Russia, China, and the ESA are all planning to return to the moon with rovers and astronauts in order to explore the lunar surface and establish a base of operations, which could be used to coordinate research and commercial efforts. Discussion is underway at the international level to build a space station in orbit around the moon to coordinate lunar research and mining efforts in conjunction with the developing cislunar economy. NASA also has plans to return to the moon, but only as a proving ground for an eventual manned Mars mission sometime in the 2030s. As part of its Journey to Mars agenda, the national space agency plans to land rovers on the lunar surface and experiment with 3D printing technology. There are also several commercial companies looking to establish a presence on the lunar surface. Resources extracted from the moon and from passing asteroids promise to be a potential windfall to the companies that can successfully extract them.
Boosting cargo up from Earth is prohibitively expensive, with each pound of material costing several thousand dollars to launch into space. Material mined from space rocks, however, including water ice (which can be processed into rocket fuel), could be used in orbit at a much lower cost. What do you think about using lunar lava caves to build humanity's first moon base? [Featured Image by mikiell/Thinkstock]
What is Minecraft? Minecraft is a sandbox video game with voxel graphics, developed by Mojang Studios. It is currently available on pretty much any gaming platform imaginable, and it has since become the best-selling game of all time, with its popularity still growing. It was first created using the Java programming language and was officially released to the public in November 2011. ⛏️ What can you do in Minecraft? There are plenty of things that you can do in Minecraft, but the main focus of the game is, as the name itself implies, mining and crafting. Players start off in a randomly and procedurally generated world where they explore, gather resources, farm crops, raise animals, fight enemy NPCs, and more. The end goal of Minecraft is tied to that last part, as the player has to gather enough resources to equip themselves to face the Ender Dragon. However, the story is non-linear, so the player can do pretty much whatever they please. The game itself can be played in both singleplayer and multiplayer mode, with the latter being available in one of two ways: ❓ How can I improve my experience playing Minecraft? There are two versions of Minecraft available: the Java Edition and the Windows 10 Edition. Since the Java Edition is written in Java, its performance relies solely on the power of your machine's CPU, meaning that a powerful gaming CPU is all that is needed to run the game smoothly. On the flip side, the Windows 10 Edition is optimized for the Windows 10 OS and runs like pretty much any other game, so a good balance of the following will provide the best gameplay: Of course, these are all necessary only if you want to use shaders, or turn RTX on and transform Minecraft from a common voxel game into a visual wonder. Lastly, those of you looking to play Minecraft in multiplayer, regardless of which edition you own, will need a stable Internet connection. Using a VPN can also be helpful, since it may reduce lag and help you avoid otherwise bothersome issues like packet loss. There are many VPNs that work well for Minecraft, so picking one should be easy based on what you need. 🔧 What are the main Minecraft issues reported by players? Minecraft is still a game, after all, so the occasional bug or issue is bound to arise at some point. Some are more frequent than others, so we decided to cover them all on our dedicated Minecraft Fix Hub. Here are some of the most frequent issues encountered by Minecraft players:
https://windowsreport.com/minecraft/
Indonesia hit by 873 natural disasters until March 21: BNPB Jakarta - Indonesia experienced 873 natural disasters during the period from January 1 to March 21, 2021, according to the National Disaster Management Agency (BNPB). The natural disasters that struck the country were largely floods, followed by whirlwinds and landslides, the agency noted in a statement on Sunday. The natural disasters comprised 16 earthquakes, 80 incidents of forest and land fires, 369 incidents of floods, 175 incidents of landslides, 220 incidents of whirlwinds, and 12 incidents of tidal waves and coastal abrasion. The disasters affected and displaced a total of 4,138,853 people, claimed the lives of 277 people, left 12 people missing, and injured 12,421 others. Some 54,430 houses were damaged, including 4,984 houses that incurred serious damage, 5,907 that suffered moderate damage, and 43,539 that were slightly damaged. Furthermore, 1,709 public facilities were damaged, including 860 school buildings, 663 worship facilities, 186 health facilities, 290 office buildings, and 106 bridges. Head of BNPB Doni Monardo had earlier pressed for intensifying disaster literacy in order to boost disaster risk preparedness. "Disaster literacy must be conducted starting in elementary school, because the earlier one gains information on disaster management, the better it is," he affirmed. Monardo expressed the belief that imparting literacy from an early age can give the community knowledge of the importance of disaster mitigation. "Information will continue to be disseminated at any time to remind the public, as disasters can occur at any time," he remarked.
https://indonesiatribune.com/2021/03/22/indonesia-hit-by-873-natural-disasters-until-march-21-bnpb/
--- abstract: 'We find that the finite width, i.e., the layer thickness, of experimental quasi-two-dimensional systems produces a physical environment sufficient to stabilize the Moore-Read Pfaffian state thought to describe the fractional quantum Hall effect at filling factor $\nu=5/2$. This conclusion is based on exact calculations performed in the spherical and torus geometries, studying wavefunction overlap and ground state degeneracy.' author: - 'Michael R. Peterson$^1$, Th. Jolicoeur$^2$, and S. Das Sarma$^1$' title: 'Finite Layer Thickness Stabilizes the Pfaffian State for the 5/2 Fractional Quantum Hall Effect: Wavefunction Overlap and Topological Degeneracy' --- [*Introduction*]{}: Two-dimensional (2D) electrons strongly interacting in the presence of a perpendicular magnetic field experience the fractional quantum Hall effect [@tsui] (FQHE) at certain fractional electronic Landau level (LL) filling factors $\nu$, characterized by an incompressible state with fractionally charged quasiparticles obeying anyonic, rather than fermionic, statistics; its observation requires clean (high mobility) samples, low temperatures, and high magnetic fields. The FQHE abounds in the lowest Landau level (LLL) with the observation of over 70 odd-denominator FQHE states, the most famous being the Laughlin state [@laugh] describing the FQHE at $\nu=1/m$ ($m$ odd) – the odd denominator a consequence of the Pauli exclusion principle. We are concerned with the FQHE in the second LL (SLL), where the FQHE is scarce, with only about 8 observed FQHE states which tend to be fragile with low activation energies. The most discussed FQH state in the SLL occurs at the even-denominator filling factor $\nu=5/2$ [@52exp], thought to be described by the Moore-Read Pfaffian [@Moore91] state (Pf), which intriguingly possesses quasiparticle excitations with non-Abelian statistics, providing the tantalizing possibility of topological quantum computation [@tqc]. The presence of this state challenges our understanding and suggests the condensation of bosons (perhaps fermion pairs) in a new type of incompressible fluid. Although the Pf state is the leading candidate for the observed 5/2 FQHE, the [*actual*]{} nature of the state is currently debated [@toke1; @toke2; @wojs]. Considering the importance of this state, our apparent lack of understanding of its precise nature, more than 20 years after its discovery, is both embarrassing and problematic. This is particularly true in view of the existence (for more than 15 years) of a beautiful candidate 5/2 FQHE state, viz. the Pf state [@Moore91]. The Pf is not as successful in describing the FQHE at $\nu=5/2$ as the Laughlin theory is in describing the FQHE in the LLL, as indicated by the modest overlap between the Pf wavefunction and the exact Coulomb Hamiltonian wavefunction [@morf; @Rezayi00] ($\sim0.9$, compared to $\sim 0.999$ for the Laughlin theory wavefunctions). However, changing [@Rezayi00] the short-range components of the Coulomb interaction can produce an exact wavefunction with near-unity overlap with the Pf. Furthermore, the actual electron-electron interaction in the FQHE experimental systems is not purely Coulombic due to additional physical effects such as disorder, LL mixing, finite thickness due to the quasi-2D nature of the system, etc. A natural question arises: can any of these effects be incorporated to produce an exact state that is accurately described by the Pf wavefunction?
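For reference, we recall the standard planar form of the Pf wavefunction at half filling, which is not written out in this text but is standard in the literature [@Moore91]:

$$\Psi_{\rm Pf}(\{z_i\}) = {\rm Pf}\!\left(\frac{1}{z_i-z_j}\right)\prod_{i<j}(z_i-z_j)^2\,\exp\!\Big(-\sum_i |z_i|^2/4l^2\Big)\;,$$

where ${\rm Pf}$ denotes the Pfaffian of the antisymmetric matrix with entries $1/(z_i-z_j)$ and $z_i$ is the complex position of electron $i$; the $1/(z_i-z_j)$ factors are what complicate the torus construction discussed later.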
We answer this question affirmatively with one of the simplest extensions of the pure Coulombic interaction, namely, the inclusion of finite-thickness effects. We find that, by including the finite-thickness effects perpendicular to the 2D plane, the exact ground state is very successfully approximated by the Pf model. We consider two different, complementary compact geometries: the sphere [@haldane-sphere] and the torus [@Haldane85L]. Throughout this work we assume the electrons exactly fill half of the SLL, yielding an electron filling factor of $\nu=2+1/2=5/2$ (the 2 coming from completely filling the lowest spin-up and -down bands). Furthermore, we assume electrons in the SLL to be spin-polarized, since the current consensus supports that conclusion (in any case the Pf describes a spin-polarized state), and ignore disorder and LL mixing effects (neglecting LL mixing effects may not be a very good approximation for the 5/2 FQHE [@dean]). Hence, the Hamiltonian is merely the spin-polarized electron interaction Hamiltonian. Haldane [@haldane-sphere] showed that the Hamiltonian of interacting electrons confined to the SLL can be parameterized by pseudopotentials $V^{(1)}_m$, the interaction energies between any pair of electrons with relative angular momentum $m$, $$\begin{aligned} V^{(1)}_m=\int_{0}^{\infty}dk k [L_1(k^2/2)]^2 L_m(k^2)e^{-k^2}V(k)\;, \label{vk}\end{aligned}$$ with $V(k)$ the Fourier transform of the interaction potential and $L_n(x)$ Laguerre polynomials. We model the quasi-2D nature of the experimental system (finite thickness) by an infinite square-well potential in the direction perpendicular to the electron plane, since the best experimental system for the observation of the 5/2 FQHE is typically the GaAs quantum well structure, which is well described by this model (discussed elsewhere [@longpaper]); the resulting interaction is given by $$\begin{aligned} V(k)=\frac{e^2l}{\epsilon}\frac{1}{k}\frac{\left(3kd+\frac{8\pi^2}{kd}-\frac{32\pi^4(1-\exp(-kd))}{(kd)^2 [(kd)^2+4\pi^2]}\right)}{(kd)^2+4\pi^2}\;,\end{aligned}$$ where $\epsilon$ is the dielectric constant of the host semiconductor and $l=\sqrt{\hbar c/eB}$ is the magnetic length. Eq. (\[vk\]) applies to the planar geometry; hence, for the torus it is exact. We also use it on the sphere since (i) it can be argued to better represent the thermodynamic limit, and (ii) it is convenient. We do not expect any qualitative error to arise from using these pseudopotentials for spherical system diagonalization. [*Spherical Geometry*]{}: This geometry consists of $N_e$ electrons confined to the spherical surface with a radial magnetic field produced by a magnetic monopole of strength $N_\phi/2$ at the center, yielding a total magnetic flux piercing the surface of $N_\phi$ ($N_\phi$ is an integer due to Dirac's quantization condition). The total angular momentum $L$ is a good quantum number, and incompressible states are uniform states with $L=0$ and a non-zero energy gap. The filling factor is $\nu=\lim_{N_e\rightarrow\infty}N_e/N_\phi$. Using Eq. (\[vk\]), we calculate entirely within the LLL. The relationship between $N_e$ and $N_\phi$ for the Pf state, describing filling 1/2 in the SLL, is $N_\phi=2N_e-3$, with the $-3$ known as the “shift”. An appropriate measure to determine the accuracy of the Pf description of the 5/2 FQHE is the overlap between the exact ground state and the variational Pf wavefunctions. An overlap of unity (zero) indicates the two states are completely alike (different).
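For readers who wish to reproduce the interaction used here, the following is a minimal numerical sketch of Eq. (\[vk\]) combined with the square-well $V(k)$ above, working in units of $e^2/\epsilon l$ with lengths in units of $l$. The quadrature settings and sample widths are illustrative choices, not those used for the results reported below:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_laguerre

def v_k(k: float, d: float) -> float:
    # Fourier transform of the infinite square-well interaction (second
    # displayed equation), in units of e^2/(epsilon*l); k in 1/l, d in l.
    # Numerically delicate as k -> 0, where it approaches the Coulomb 1/k.
    kd = k * d
    num = (3*kd + 8*np.pi**2/kd
           - 32*np.pi**4*(1 - np.exp(-kd)) / (kd**2 * (kd**2 + 4*np.pi**2)))
    return num / (k * (kd**2 + 4*np.pi**2))

def pseudopotential(m: int, d: float) -> float:
    # Second-Landau-level pseudopotential V^(1)_m from Eq. (vk).
    integrand = lambda k: (k * eval_laguerre(1, k**2/2)**2
                           * eval_laguerre(m, k**2) * np.exp(-k**2) * v_k(k, d))
    value, _ = quad(integrand, 0.0, np.inf, limit=200)
    return value

for d in (0.5, 2.5, 4.0):  # widths in units of the magnetic length l
    ratio = pseudopotential(1, d) / pseudopotential(3, d)
    print(f"d = {d} l: V1/V3 = {ratio:.4f}")
```

Sweeping $d$ in this way shows directly how finite thickness softens all the pseudopotentials at once, rather than tuning $V^{(1)}_1/V^{(1)}_3$ by hand.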
Overlap calculations have been influential in establishing the nature of the FQHE in the LLL – in particular, the primary reason for the theoretical acceptance of the Laughlin wavefunction as the appropriate description for the 1/3 FQHE is the large ($>99\%$) overlap it has with exact finite size numerical many-body wavefunctions. In the upper, middle, and lower panels of Fig. \[overlaps\] we show the overlap between the exact ground state for some finite-thickness value $d$ ($=$ quantum well width) and the Pf wavefunction for $N_e=$8, 10, and 12 electrons, respectively (note that the $N_e=$12 system is [*aliased*]{} with a FQH state at filling 2/3 and its identification with 1/2 is dubious [@morf]). In the zero width case the overlap is relatively modest but encouraging ($\sim 0.9$). Surprisingly, the inclusion of finite width causes the overlap to [*increase*]{} to a maximum before inevitably decreasing to zero for large $d$. Furthermore, the maximum occurs at nearly the same value of $d=d_0\sim4 l$ for different system sizes, indicating this conclusion survives in the thermodynamic limit. Work by the authors [@longpaper] showed this effect holds for other models of finite thickness with a similar $d_0$. Thus, the increase of the Pf overlap with the well width is a generic qualitative phenomenon, independent of the finite thickness model employed. It appears that weakening the Coulomb coupling by increasing $d$ (to about $4l$) creates an interaction Hamiltonian favorable to the Pf description. We mention (emphasized in Ref. ) that increasing overlap with increasing layer thickness does [*not*]{} happen at all for the LLL FQHE, where it is known [@xie] that increasing layer thickness strongly suppresses the overlap, leading eventually to the destruction of the FQHE – e.g., the Laughlin $1/m$ overlap is always maximum at $d=0$. [*Torus Geometry*]{}: To test the robustness of this conclusion we study the Pf state on the torus. These results are, in a sense, our main results because they (i) have less system-size dependence, and (ii) are more general, i.e., independent of the detailed form of the Pf wavefunction and dependent only on the topological nature of the underlying 5/2 ground state. In fact, the finite $d$ spherical geometry results serve as our motivation and inspiration to investigate the ground state topological degeneracy on the torus at finite $d$, where we find the remarkable topological degeneracy – the hallmark of a non-Abelian state. On the torus, there is no “shift” in the relation between $N_e$ and $N_\phi$, making a direct comparison possible between various quantum phases at a given filling factor, such as a Pf state, composite fermion (CF) Fermi sea, or a stripe phase, all possible at $\nu = 1/2$ in the SLL. These competing states have different spectral signatures, identified by using a periodic rectangular geometry with sides $a$ and $b$. The magnetic field prevents standard translation operators from commuting; however, Haldane [@Haldane85L] showed that one can construct many-body eigenstates with two conserved pseudo-momenta associated with the translations. The two-dimensional pseudo-momentum ${\bf K}$ exists in a Brillouin zone containing only $N_0^2$ points, where $N_0$ is the greatest common divisor of $N_e$ and $N_\phi$, and $K_x$ ($K_y$) is in units of $2\pi\hbar/a$ ($2\pi\hbar/b$). There is an exact (trivial) degeneracy $q$ due to the center of mass motion at filling factor $p/q$, which we ignore as it is unrelated to the physics (independent of the Hamiltonian).
In the rectangular unit cell the discrete symmetries relate states at $(\pm K_x,\pm K_y)$ and, as a consequence, we only consider states with pseudo-momenta in the range $(0\dots N_0/2,0\dots N_0/2)$. ![Low-lying eigenenergies as a function of the pseudo-momentum $\sqrt{K_x^2+K_y^2}$ (in physical units) for an aspect ratio of $a/b=0.75$. The left panels refer to zero width while the right panels correspond to the SQ potential of width $d=4 l$ for (a)-(b) $N_e$=10, (c)-(d) $N_e$=12, (e)-(f) $N_e$=14.[]{data-label="TorusFig"}](fig2.ps){width="65mm"} To render the Laughlin state periodic [@Haldane85], the recipe is to replace the Jastrow factor $(z_i-z_j)^m$ by a Jacobi theta function $\theta_1 (z_i-z_j|\tau)^m$, where $\tau =ib/a$ and $z$ is the complex electron position. The Jacobi theta function has the quasi-periodicity required to construct a Laughlin state at $\nu =1/m$, as observed in numerical studies [@Haldane85]. This recipe fails when applied to the Pf since denominators of the form $(z_i-z_j)$ are present and the quasi-periodicity of the theta function does not appear as an overall factor [@Chung07]. The correct substitution [@Greiter92] is $$1/(z_i-z_j)\rightarrow \theta_a (z_i-z_j|\tau)/\theta_1(z_i-z_j|\tau),\quad a=2,3,4,\label{PfTorus}$$ leading to *three* candidate ground states. This degeneracy is topological in origin and a signature of the special properties of the Pf state. To our knowledge, no earlier work in the literature has directly discovered this topological degeneracy on the torus for the 5/2 state, in spite of its great significance. In the Pf phase of the real system, such as electrons interacting via Coulomb, we expect the degeneracy to be approximate for finite size systems and to become clearer with increasing system size. Note this trend is opposite to the overlap trend shown in Fig. \[overlaps\], where the overlap decreases slowly with increasing system size (i.e., from $N_e=8$ to 12) since the Pf is a variational approximation. The wavefunction of the Pf, when written on the torus using Eq. \[PfTorus\], has pseudo-momenta that are half reciprocal lattice vectors. For electrons at $\nu =1/2$ these states have ${\bf K}=(0,N_0/2),(N_0/2,0),(N_0/2,N_0/2)$. To explicitly separate these states we consider the rectangular unit cell (a hexagonal unit cell has discrete symmetries that render all corners of the magnetic Brillouin zone equivalent, resulting in a trivial Pf degeneracy). We have performed exact diagonalizations for $N_e=10,12,14$ electrons at half filling in the SLL. Using a pure Coulomb potential ($d=0$) we find, for all system sizes, that the spectra are very sensitive to the unit cell aspect ratio and $N_e$, consistent with previous evidence [@Haldane00] for a nearby compressible stripe phase. The CF Fermi sea also displays sensitivity to boundary conditions and changes of the ground state vector ${\bf K}$ with $N_e$. This is what we observe for the same systems with the LLL Coulomb potential at zero width. No obvious ground state degeneracy can be discerned in our $d=0$ results. Switching to nonzero width (using the SQ potential) we find the appearance of a threefold quasidegenerate set of states with the quantum numbers predicted for the Pf. This phenomenon is best seen in the region of maximum overlap found in the spherical geometry, i.e., $d=d_0\sim 4l$; see Fig. \[TorusFig\]. In this regime, the spectra are much less sensitive to the aspect ratio; moreover, this behavior is observed for all finite-width models.
This is the first time the Pf degeneracy has been observed in an electronic topological system, i.e., a system described by a two-body electron-electron interaction Hamiltonian. For bosons at $\nu =1$ with delta function interactions the degeneracy appears at $d=0$ [@Chung07-2]. Another feature pointing to the appearance of the Pf is related to the role of the particle-hole (p-h) symmetry. The Pf wavefunction is not invariant under this symmetry [@Levin07; @Lee07] and its p-h conjugate has been termed the anti-Pfaffian. While the filling factor $\nu =1/2$ is p-h invariant, the consequences depend upon the geometry. On the sphere, while the Coulomb two-body Hamiltonian has exact p-h symmetry, the Pf wavefunction has a nontrivial shift $-3$ implying that its p-h conjugate requires a different flux $N_\phi =2N_e+1$. The zero shift on the torus leads to the coexistence of these two states, each having exactly the same threefold topological degeneracy with the same quantum numbers. In a finite system there is no spontaneous breaking of a discrete symmetry and we expect tunneling to lead to symmetric and antisymmetric combinations as eigenstates. We thus expect a doubling of the Pf states due to the p-h symmetry. This is best observed (cf. Fig. \[massfig\]) on the torus at finite width with a nearly square unit cell where the two states at $(0,N_0/2)$ and $(N_0/2,0)$ (exactly degenerate for the square unit cell) are very close in energy and all three members of the Pf multiplet have exactly one partner at a slightly higher energy. This does not happen at zero width ($d=0$) and is strong evidence for the stabilization of the Pf physics in the SLL by finite width effects. The observation of the topological degeneracy on the torus [*only*]{} for finite thickness, $d\sim 4l$, precisely where the overlap is also a maximum on the sphere is, in our opinion, compelling evidence that the 5/2 FQHE is likely to be a non-Abelian state. [*Conclusion*]{}: Our results show the, often assumed trivial, effects of the quasi-2D nature of the experimental system produce an exact state [*better*]{} described by the Pf. The fact that this conclusion is reached in different finite sized systems for two different geometries (for several models of thickness) is compelling. Our results are not inconsistent with previous work [@toke1; @toke2; @wojs] in the $d=0$ limit showing the absence of the Pf. Further, since we find a robust Pf at finite $d$ the transport gap, seen experimentally, would be weaker than predicted in $d=0$ theoretical studies since finite width “trivially” reduces energy gaps, see the inset of Fig. \[overlaps\]. Thus, the supposed fragility of the 5/2 state may not necessarily be due to it being close to a phase boundary, perhaps between a CF Fermi sea and stripe phase, instead, it may come from the relatively wide quasi-2D system needed to produce a stable Pf. In this context, it is useful to mention that although earlier theoretical work [@morf; @Rezayi00] pointed to the importance of tuning the pseudopotential ratio $V^{(1)}_1/V^{(1)}_3$ in stabilizing the Pf, finite width affects [@longpaper] [*all*]{} pseudopotentials, not just $V^{(1)}_1/V^{(1)}_3$. Tuning $V^{(1)}_1$ and/or $V^{(1)}_3$, while theoretically convenient [@morf; @Rezayi00], is an ambiguous technique for understanding the stability in real quasi-2D systems where pseudopotentials cannot be tuned arbitrarily. 
Therefore, our work establishing the optimal stability of the 5/2 Pf at the relatively large width of $d\sim 4l$ is important in view of the fact that the Pf is an exact eigenstate only of a three-body interaction Hamiltonian not expressible in terms of pseudopotentials. As shown in Fig. \[overlaps\], the current quasi-2D samples typically have $d\sim2.5l$, where the wavefunction overlap is large, yet not optimal, as it would be for thicker samples with $d\sim4l$. Our direct numerical finding of the appropriate topological degeneracy of the 5/2 FQHE state on the torus and the recent experimental demonstration [@dolev] of the expected $e/4$ quasiparticle charge in shot noise measurements at $\nu=5/2$, taken together, provide convincing necessary and sufficient conditions supporting the contention that the 5/2 FQHE state is indeed the Moore-Read Pfaffian wavefunction (or some other equivalent state connected adiabatically) belonging to the (SU(2))$_2$ conformal field theory description, which obeys the non-Abelian anyonic statistics appropriate for topological quantum computation [@tqc], provided the 2D samples are not too thin. MRP and SDS acknowledge support from the Microsoft Q Project. [10]{} D. C. Tsui, H. L. Stormer, and A. C. Gossard, Phys. Rev. Lett. [**48**]{}, 1559 (1982). R. B. Laughlin, Phys. Rev. Lett. [**[50]{}**]{}, 1395 (1983). R. Willett [*[et al]{}*]{}., Phys. Rev. Lett. [**59**]{}, 1776 (1987). G. Moore and N. Read, Nucl. Phys. B**360**, 362 (1991). S. Das Sarma, M. Freedman, and C. Nayak, Phys. Rev. Lett. [**94**]{}, 166802 (2005). C. Toke and J. K. Jain, Phys. Rev. Lett. [**96**]{}, 246805 (2006). C. Toke, N. Regnault, and J. K. Jain, Phys. Rev. Lett. [**98**]{}, 036806 (2007). A. Wojs and J. J. Quinn, Phys. Rev. B[**74**]{}, 235319 (2006). R. H. Morf, Phys. Rev. Lett. [**80**]{}, 1505 (1998). E. H. Rezayi and F. D. M. Haldane, Phys. Rev. Lett. **84**, 4685 (2000). F. D. M. Haldane, Phys. Rev. Lett. [**51**]{}, 605 (1983). F. D. M. Haldane, Phys. Rev. Lett. **55**, 2095 (1985). M. R. Peterson, T. Jolicoeur, and S. Das Sarma (unpublished); see arXiv:0801.4891v1 (2008). C. R. Dean [*et al*]{}., Phys. Rev. Lett. **100**, 146803 (2008). S. He [*et al*]{}., Phys. Rev. B [**42**]{}, 11376 (1990). F. D. M. Haldane and E. H. Rezayi, Phys. Rev. B**31**, 2529 (1985). S. B. Chung and M. Stone, J. Phys. A**40**, 4923 (2007). M. Greiter, X. G. Wen, and F. Wilczek, Nucl. Phys. B**374**, 567 (1992). F. D. M. Haldane, E. H. Rezayi, and K. Yang, Phys. Rev. Lett. **85**, 5396 (2000). B. Chung and Th. Jolicoeur, arXiv:0712.3185 (2007). M. Levin, B. I. Halperin, and B. Rosenow, Phys. Rev. Lett. **99**, 236806 (2007). S.-S. Lee [*et al*]{}., Phys. Rev. Lett. **99**, 236807 (2007). M. Dolev [*et al*]{}., Nature **452**, 829 (2008). W. Pan [*et al*]{}., Phys. Rev. B **77**, 075307 (2008). H. C. Choi [*et al*]{}., Phys. Rev. B **77**, 081301(R) (2008).
There's really no arguing with a time-and-space-bending vortex, especially the supermassive specimen at the heart of our Milky Way galaxy known as Sagittarius A*. So it doesn't much matter what one star did to offend our local black hole. Only that there will be no appeal — and the punishment lasts for a virtual eternity. That's the situation a star recently spotted by astronomers finds itself in. Researchers say it was kicked out of the heart of our galaxy and banished with such ferocity that it's bound to leave the Milky Way altogether. And it's likely that old tyrant Sagittarius A* made the call. In research published this week in the Monthly Notices of the Royal Astronomical Society, astronomers describe the ultimate shooting star — one that appears to have been flung clear across the galaxy. "We traced this star's journey back to the center of our galaxy, which is pretty exciting," notes study co-author Gary Da Costa of The Australian National University in a news release. "This star is travelling at record-breaking speed — 10 times faster than most stars in the Milky Way, including our Sun." In fact, at 3,728,227 mph, it's the third fastest star ever measured — and the first hypervelocity star ever detected exiting the galactic heart. The star, dubbed S5-HVS1, should catapult right out of our galaxy in the next 100 million years. Along the way, scientists may glean a few details from its dramatic banishment. "The two really special features of this star, though, are that its speed is much higher than other similar stars that were previously discovered and it's the only one where we can be almost certain that it has come directly from the center of the Milky Way," Da Costa explains. "Together those facts provide evidence for something called the 'Hills mechanism' which is a theorized way for the supermassive black hole in the center of the Milky Way to eject stars with very high velocity." But this star's crime may forever remain a mystery. Was it something the star did? Maybe. But more likely, astronomers say, it was the company it kept. About 5 million years ago, the star likely had a mate in another star. Together, they formed a binary system, essentially two stars that revolve around each other for life. And let nothing come between them. Except for a black hole. And none shall defy Sagittarius A*. (Photo: National Science Foundation) Scientists suggest the binary system may have wandered a little too close to the cranky chasm at the heart of the Milky Way. And the black hole's punishment was as swift as it was severe. "If such a binary system approaches a black hole too closely, the black hole can capture one of the stars into a close orbit and kick out the other at very high speed," study co-author Thomas Nordlander of Australian National University, explains. Basically, Sagittarius A* broke up that lifelong relationship with devastating authority. It put one of the stars on its dinner plate, and spit the other across the galaxy, where its lonesome, never ending sentence is just beginning. "In astronomical terms, the star will be leaving our galaxy fairly soon," Da Costa adds. "And it will likely travel through the emptiness of intergalactic space for an eternity." You can almost hear a grumble escape the inescapable maw at the heart of our galaxy: Good riddance.
https://www.mnn.com/earth-matters/space/stories/star-kicked-out-black-hole-milky-way
Perfect for a nice summer evening, this meal is one you can share with the family. 2) Marinate the steak with olive oil and Montreal Steak Seasoning. Cooking Tip: Marinate the tri-tip a couple of hours ahead of time to allow the meat to tenderize and absorb the flavors of the marinade. 3) Sear both sides of the tri-tip on your BBQ for about 2 minutes each at about 400°F. 4) Cook the tri-tip for 50 minutes using indirect heat at about 350°F. Flip halfway through. 5) Once the tri-tip is about done, place the ears of corn (with the husks still on) on the grill for about 7-10 minutes. 6) As the meat and corn are cooking, place the broccoli and cauliflower in a covered bowl with a tablespoon of water inside. 7) Microwave the bowl and its contents for 5 minutes on high. Once complete, drain the excess water and portion out the vegetables. 8) Check to see that the meat and corn are fully cooked. Cut accordingly and share this great meal with the family!
http://www.californiacalisthenics.com/grilled-tri-tip--veggies.html
It is now apparent that the Maoists are increasingly dependent on IEDs to further their asymmetric war on the Indian State. In fact, given that India's paramilitary forces seem to be getting their act together on firepower, jungle warfare training and patrolling, contact battles are no longer the best option for relatively lightly equipped Maoist cadres. Instead, remotely triggered IEDs of increasing sophistication and explosive power represent for the Maoists a far superior proposition in the risk-return space. Concomitantly, Indian security forces need to focus their energies on directly countering the proliferating use of IEDs by the Reds. However, any strategy to neutralize the IED threat has to look beyond mere technological solutions and actually needs the elevation of counter-IED methods to the level of a strategic culture within the State security forces. One of the painful lessons of Iraq and Afghanistan learnt by United States (US) forces deployed in those countries was that insurgents quickly switched to IED-driven tactics to cause disproportionate casualties rather than engage in too many direct firefights, just like the experience with Maoists here in India. As such, the Pentagon set up the Joint IED Defeat Organization (JIEDDO) to specifically redress the imbalance on the IED side of things. The idea of setting up a single organization to accomplish this seemed attractive enough - JIEDDO would help pool together a range of ideas under a single umbrella, oversee consolidated research into this critical area, and procure necessary equipment by eliminating unnecessary duplication of expenditure. Accordingly, JIEDDO's mission comprises three vectors: 'Attack the Network', 'Defeat the Device', and 'Train the Force'. Read more here: Saurav Jha's Blog : Countering Improvised Explosive Devices (IEDs) in the Indian Context
https://defenceforumindia.com/threads/countering-improvised-explosive-devices-ieds-in-the-indian-context.60575/
PURPOSE: To provide a coating material feed device that can be cleaned easily and in a short time without dismantling the slit section, and that eliminates contamination with a different kind of material and variation in coating thickness, by making the slit section which discharges the coating material and the coating material feed section which feeds the coating material to the slit section removably separable. CONSTITUTION: A slit section 32 for discharging a coating material and a coating material feed section 30 for feeding the coating material to the slit section 32 are removably separable in the coating material feed device. When the coating material is switched to a different kind, or when the device is cleaned, agglomerates P and residual coating material in the remaining section 31 of the feed section 30 are removed by cleaning with the coating material feed section 30 completely separated. The cleaning can be performed thoroughly and easily because the feed section 30 is separated, and the coating material feed section 30 can afterwards be remounted on the slit section 32 easily. The coating material in the slit 29, on the other hand, can be removed without dismantling the slit section 32 by inserting, for example, a polyethylene terephthalate film 49 into the slit 29 and sliding the film within it. COPYRIGHT: (C)1994,JPO&Japio
This document constitutes a partnership between the moving image production industry and all public and private sector stakeholders affected by location filming in London, including those representing London's citizens. All the signatories to this partnership are working with Film London to achieve the objectives set out below. All signatories are expected to work within the letter and spirit of this document and the Code of Practice and to abide by agreed best practice in these texts. This is in addition to the statutory obligations which apply to all filmmaking activity. The partnership is designed to demonstrate that London is a film-friendly city. This means ensuring that London is a place where location filming can be conducted efficiently and successfully, thereby delivering the significant economic benefits associated with filming (including local employment and tourism) while also being sensitive to the needs of those who live and work in London. The partnership will ensure that London's citizens enjoy the economic benefits of filming in London while minimising inconvenience to them. In signing this document we recognise the importance of the moving image industries to London and we agree to use our best efforts to abide by and maintain its terms.
Status of the Partnership
This partnership will have no legal status, but, in signing up to it, all parties recognise its importance and agree that they will devote best efforts to abide by all its terms.
Objectives of the Partnership
We, the thirty-three Local Authorities who are signatories to the partnership, undertake to make our collective best efforts to ensure that London is film-friendly, wherever possible to provide the appropriate human and financial resources, and to adhere to the shared values set out in this document and the Best Practice Recommendations;
We, the public and private organisations in London who are signatories to this partnership, undertake to make our collective best efforts to ensure that London is film-friendly and wherever possible to provide the appropriate human and financial resources to deliver this objective and adhere to the shared values set out in this document;
We, the moving image production industries who are signatories to this partnership, undertake to make our collective best efforts to ensure that we adhere to the shared values set out in this document and adhere to the Code of Practice;
We, Film London, a signatory to this partnership, undertake to make our best effort to ensure we monitor and maintain the working relationships which constitute the basis of the partnership and to ensure that the skills needs of personnel within the partnership are supported through a comprehensive training and development plan.
Data Collection Protocols
The purpose of data collection is to ensure that all stakeholders have the information and data which enable them to demonstrate the economic and cultural benefits of moving image production in London. The stakeholders will collect the information and supply it to Film London, who will receive, analyse, collate and return the information to the various stakeholders in a consistent form. We undertake to treat information received in strictest confidence where required.
Communication Protocols
All sectors involved in location filming in London will develop a comprehensive understanding of each other's remits and are willing to co-operate freely to ensure that improved communication lies at the heart of this process.
Film London undertakes to play a significant role in brokering improved long-term relationships between the industry, local authorities and organisations. Consultation should be undertaken with local citizens and/or their representatives in respect of location filming in all cases where appropriate. The moving image industries recognise that their production activity impacts on a number of organisations and communities in the capital and undertake to communicate openly with them all in co-operation with the other signatories. The boroughs and agencies recognise that well managed film and television productions provide positive benefits for their area or property, whether through direct financial gain, secondary income generated in the local area, employment, increased tourism or cultural engagement of the local community.
Recognising the Partnership
As signatories, all agree that wherever possible, and subject always to the signatories' own policies and rules, there should be an appropriate recognition/acknowledgement in the credits or on an appropriate website of:
- any agency, organisation or borough which has offered significant support to a production; and,
- any location which had at least one full day's filming during a production.
Monitoring of the Partnership
Film London will act as the focal point for the sharing of knowledge and information relating to the extent to which the letter and spirit of the partnership are being implemented. Film London will develop a mediation service to resolve any disputes arising from location filming in London and set up a 24-hour filming information phone line.
Review of the Partnership
The partnership will be reviewed once a year by a group, chaired by Film London, representing all stakeholders. This group will undertake an annual survey which will examine examples of good and bad practice for a range of different types of production across London.
Shared Values
The moving image production industries, Film London and the private and public stakeholders who are signatories to this partnership are committed to an ethos of partnership which is based on our shared values. We are committed to:
- conducting ourselves in a professional and reasonable manner at all times;
- acting in an honest and ethical fashion;
- acting in the most efficient and timely manner possible;
- openness and transparency in all our dealings;
- developing a skilled and professional workforce by ensuring the highest standards of training and education for all personnel within the partnership supporting location filming in London.
We will:
- accept responsibility for all our actions;
- exercise tolerance at all times;
- ensure that we foster mutual trust so that this is placed at the heart of all our dealings;
- undertake to use best endeavours to be flexible and accommodate the needs of others;
- ensure that our actions and behaviour will be proportionate; and,
- be accountable for our actions.
We accept that we have a duty to respect and to be responsive to the needs and desires of others.
Signatories to the Partnership
All signatories to the partnership have parity of esteem.
http://filmlondon.org.uk/filming_in_london/london-filming-partnership/partnership-agreement
I. Field of the Disclosure
The technology of the disclosure relates generally to memory bitcells, and particularly to bitcells storing decoded values.
II. Background
Processor-based computer systems include digital circuits that employ memory for data storage. Such memory often contains a plurality of bitcells, wherein each bitcell is able to store a single bit value. Memory may also contain other digital circuits that use encoded words to control access to the bitcells according to a memory address in a received memory access request. One example is use of an encoded word to provide way selection in a cache memory. An encoded word of "n" bits enables a digital circuit to store fewer bits to retain the equivalent value of a decoded word, where the decoded word has 2^n bits. Thus, an n-bit encoded word can be decoded into a "one-hot" decoded word of 2^n bits. A word is "one-hot" when only one bit within the word is at a hot logic level, while the remaining bits in the word are each at a non-hot logic level. As a non-limiting example, a 2-bit encoded word "00" may represent a one-hot, 4-bit decoded word "0001," where the value "1" represents a hot logic level. Because an encoded word has fewer bits than its corresponding decoded word, storing an encoded word in memory is effective at minimizing the number of storage elements employed to store the word, thus also minimizing circuit area. For example, while storing an n-bit encoded word requires 'n' storage elements, storing an equivalent 2^n-bit decoded word would require 2^n storage elements. Thus, the area required for storing an encoded word may be less than the area required to store a corresponding decoded word. However, once the encoded word is read from the memory, decoder logic is required to convert the encoded word into a decoded word. Thus, it is common for a digital circuit to read the encoded word from the memory, which is then decoded by a decoder function into a decoded word for use by the circuit. As an example, FIG. 1 illustrates an exemplary cache memory 10 that stores encoded words for use in memory accesses. As illustrated in FIG. 1, the cache memory 10 includes a plurality of sets 12(0)-12(M−1), wherein 'M' is a positive whole number such that the number of the plurality of sets 12 is 'M'. Each set 12(0)-12(M−1) includes a prediction array 14(0)-14(M−1) that receives a 2-bit encoded word 16(0)-16(M−1) from an encoder 18(0)-18(M−1). Each prediction array 14(0)-14(M−1) comprises six-transistor (6T) Static Random Access Memory (SRAM) bitcells (not shown) in this example. A decoder 20(0)-20(M−1) is also included in each set 12(0)-12(M−1), wherein the area of the decoder 20(0)-20(M−1) directly correlates to the number of storage elements within the prediction array 14(0)-14(M−1). Further, each set 12(0)-12(M−1) includes a data array 22(0)-22(M−1), wherein each data array 22(0)-22(M−1) is divided into four ways 24(0)-24(3). The way 24 information (not shown) for each set 12(0)-12(M−1) is stored as 2-bit predicted words 26(0)-26(N−1) within each prediction array 14(0)-14(M−1) (wherein 'N' is a positive whole number such that the number of the plurality of predicted words 26 is 'N'). With continuing reference to FIG. 1, using components relating only to set 12(0) of the cache memory 10 as an example, a 4-bit word 28(0) representing a way 24 within the data array 22(0) of the set 12(0) is provided to the encoder 18(0).
The encoder 18(0) converts the 4-bit word 28(0) into the 2-bit encoded word 16(0) prior to providing the way 24 information to the prediction array 14(0). Such a conversion is performed, because the prediction array 14(0) stores the way 24 information associated with the data array 22(0) as a 2-bit encoded word (e.g., the 2-bit predicted word 26(0)) to save storage area within the cache memory 10. Upon receiving the 2-bit encoded word 16(0), the prediction array 14(0) determines which way 24(0)-24(3) to select, and provides the 2-bit predicted word 26(0) to the decoder 20(0). The decoder 20(0) converts the 2-bit predicted word 26(0) into a one-hot, 4-bit decoded word 30(0), wherein the hot bit within the 4-bit decoded word 30(0) represents the way 24 to be selected within the data array 22(0). For instance, a value of “0001” may represent way 24(0), while a value of “1000” may represent way 24(3) of the data array 22(0). Once the 4-bit decoded word 30(0) has been provided to the data array 22(0), data within the selected way 24 may be provided to a cache output 32(0). As evidenced by this example, the prediction array 14(0) only requires two storage elements for each way 24 entry, because the 2-bit predicted word 26(0) is encoded in 2 bits. However, when reading the 2-bit predicted word 26(0) from the prediction array 14(0), the 2-bit predicted word 26(0) must be decoded into the 4-bit decoded word 30(0) in order to select the desired way 24 in the data array 22(0). Thus, even though the prediction array 14(0) is configured to store 2-bit words rather than 4-bit words in an attempt to save area, the required decode function increases the latency incurred each time the way 24 information is read from the prediction array 14(0). Moreover, in many applications executed by digital circuits, the read path to read memory is often the critical path. As previously described above, when storing encoded words that represent information such as memory addresses for memory access requests, a decoder is placed within the read path in order to generate the decoded word from the stored encoded word. If the read path is the critical path in memory for memory accesses, the time required to decode the encoded word causes an increase in read latency. Therefore, the overall latency of the memory is increased as a result of decoding the stored encoded word for every read operation.
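To make the encode/decode relationship concrete, the following is a minimal sketch in Python (illustrative only; the function names are invented for this example, and the patent describes hardware decoder logic rather than software):

def decode_one_hot(encoded, n):
    # Decode an n-bit encoded word into a 2**n-bit one-hot word.
    # E.g. for n = 2: 0b00 -> 0b0001 and 0b11 -> 0b1000.
    if not 0 <= encoded < 2 ** n:
        raise ValueError("encoded word out of range")
    return 1 << encoded  # exactly one hot bit

def encode_from_one_hot(one_hot):
    # Inverse: recover the encoded word from a one-hot word.
    if one_hot <= 0 or one_hot & (one_hot - 1):
        raise ValueError("word is not one-hot")
    return one_hot.bit_length() - 1

# The trade-off described above: n storage elements for the encoded
# word versus 2**n storage elements for its decoded equivalent.
assert decode_one_hot(0b00, 2) == 0b0001
assert decode_one_hot(0b11, 2) == 0b1000
assert encode_from_one_hot(0b0100) == 0b10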
The City of Boise Office of Community Engagement's role is to foster deeper connections and engagement with the citizens of Boise and with city employees using modern communication best practices. The Office works to establish a strategic, citizen-centric communication culture within city government that reflects Boise's vibrant, dynamic and innovative livability, and builds on residents' high satisfaction with the value of the city services they receive. It is part of a larger effort to bring a first-rate customer service mindset to the city's interactions and transactions with citizens. Using the broad breadth of 21st-century communication tools and technology, we will listen and respond to the feedback of our residents and employees more effectively, while proactively delivering the right messages to the right people at the right time. This effort will prioritize, consolidate and streamline the city's overall citizen and employee communication efforts, while creating more opportunities for residents to engage with their city government on a more personal level. To contact the Office of Community Engagement, please email [email protected] or call 208-972-8500.
https://www.cityofboise.org/community-engagement
Title: A bill to prohibit the issuance of any lease or other authorization by the Federal Government that authorizes exploration, development, or production of oil or natural gas in any marine national monument or national marine sanctuary or in the fishing grounds known as Georges Bank in the waters of the United States.
- H.R. 6057: To amend the Outer Continental Shelf Lands Act to prohibit preleasing, leasing, and related activities in the Beaufort and Chukchi Sea Planning Areas unless certain conditions are met.
- H.R. 6251: To prohibit the Secretary of the Interior from issuing new Federal oil and gas leases to holders of existing leases who do not diligently develop the lands subject to such existing leases or relinquish such leases, and for other purposes.
- H.R. 7051: To prohibit issuance of any lease or other authorization by the Federal Government that authorizes exploration, development, or production of oil or natural gas in any marine national monument or national marine sanctuary or in the fishing grounds known as Georges Bank in the waters of the United States.
- S. 391: A bill to amend the Outer Continental Shelf Lands Act to permanently prohibit the conduct of offshore drilling on the outer Continental Shelf in the Mid-Atlantic and North Atlantic planning areas.
- S. 3239: A bill to prohibit the Secretary of the Interior from issuing new Federal oil and gas leases to holders of existing leases who do not diligently develop the land subject to the existing leases or relinquish the leases, and for other purposes.
- S. 2568: A bill to amend the Outer Continental Shelf Lands Act to prohibit preleasing, leasing, and related activities in the Chukchi and Beaufort Sea Planning Areas unless certain conditions are met.
http://www.bitsofws.com/index.php/2008/09/27/while-we-run-out-of-gas-senator-kerry-submits-bill-to-stop-drilling/
LISTSERV Maestro Tech Tip
Q: How do I best use the team collaboration feature in LISTSERV Maestro?
Answer by Ben Parker, Chief Corporate Consultant, L-Soft
The team collaboration feature in LISTSERV Maestro provides an excellent way to share the work on mailings among a group of users, such as members of a department. However, this feature can also have a few unexpected aspects. Let's take a look at a few scenarios and how to take advantage of team collaboration defaults to make group management smoother.
Unexpected Challenges
At a university, Bob sent the all-campus newsletter job last week. Bob is out this week, so it's Fred's job to send the newsletter this week. But when he logs in to LISTSERV Maestro, he is unable to see last week's job. Bob had forgotten to "share" the job through team collaboration when he set up the job last week.
Carolyn recently joined the Public Affairs Department at a company. She is assigned to work on the weekly intra-company email newsletter managed within LISTSERV Maestro. Her colleague Linda tells her to start by copying the previous week's newsletter job and replacing content articles. Unfortunately, when Carolyn logs in to LISTSERV Maestro, she is unable to see any of the previous newsletter jobs.
Case 1: Bob and Fred
In the first case above, everyone in Bob's workgroup had been trained that they had to remember to "share" their jobs through team collaboration every time they started a new job, even if it was a copy of a previous job. This works fine, until you forget to do it. But there is a better way. Bob (and every other user in the group) can and should set team collaboration defaults that will automatically be applied every time they create a new job. They don't even have to think about it. Log in as a Maestro user in the usual way. Near the upper right corner, select "Preferences" and then click on "Job Definition". Then define the settings that you want for your other team members. In this case, Bob and Fred will have full collaboration rights on all jobs, while George can access their jobs for follow-up reporting and statistical analysis. Note that the "Use in Reports" setting is what also allows Fred to see all of Bob's jobs in the completed job listing and allows him to create a new job by copying from a previous job that Bob completed. There are a variety of other job definition settings that can be made under Preferences to minimize errors and save time when creating new jobs, but one of the most important is the team collaboration defaults. Have you set yours?
Case 2: Carolyn and Linda
In the above case, all LISTSERV Maestro users can set their own personal preferences. Dealing with the second case (Carolyn's) involves the LISTSERV Maestro administrator anticipating that group membership may change over time and taking advance action to prepare for that eventuality. To allow for a new member to join the group later and be able to immediately see all previous completed and open jobs in that group, the administrator adds a new account that can be named anything, but we will call it "user1". It's best to do this when the group and its user accounts are first created, but this can also be done at any later time. For security reasons, this "user1" account is granted minimal user rights.
(These can be changed later when the account is activated.) Then the actual team collaboration defaults are set in preferences by the other group members, as was done in Case 1 above. Note again that the only ongoing permission needed is "Use in Reports", to allow this user to be able to see completed and open jobs when the account is activated sometime in the future. To activate this account, the administrator simply logs in and renames the account from "user1" to a more appropriate name, then assigns user rights as appropriate. Once that is done, other group members will likely need to adjust their team collaboration defaults. If a group member is reassigned out of the group (for example, Linda) and a new person comes in, it is usually easier to simply rename the "Linda" account at that time, holding "user1" in reserve for actual group growth. At that time you may want to add a new account, "user2", to allow further expansion in the future.
A Better Way?
Both cases above share a possible disadvantage in that each relies on all of the group members to set their team collaboration defaults in a similar manner and possibly adjust them again later as the group membership changes. Anything that relies on multiple users taking the same action at the same time may be subject to mistakes. To overcome this, it's important to remember that team collaboration settings can only be made by the owner of any job. So the solution is for the administrator to create a special user account in a group that will be the owner of all jobs created in that group, no matter which user actually creates the job. We'll call this user the "superuser". It has no special powers or differences from any other user, except that it will be the owner of all jobs and thus the one account that can define team collaboration defaults for all other users in the group. This is most easily assigned from Global Settings > Maestro User Interface > User Rights. Then log in to the group as a normal user but using the "superuser" account and assign team collaboration defaults as appropriate. This way all the team collaboration defaults are set by one user and are easily changed by the same one user, whenever changes are necessary. We hope this gives you some insight into how to resolve issues that sometimes seem confusing and will allow you smoother control over managing team collaboration jobs in LISTSERV Maestro groups.
http://www.lsoft.com/news/techtipMAE-issue2-2015.asp
Greetings again from the darkness. I can't recall recently being as thankful for a good laugh as I was after watching this expert 13-minute satire from writer-director Poppy Gordon and co-writer Aldo Arias. The dialogue is sharp, the performances spot on, and the topic couldn't be more relevant to the moment. Heather (Samantha Robinson) is enjoying her daily yoga in the backyard of her Beverly Hills mansion when she arranges a meeting with her two friends, Stacia (Juliette Goglia) and Christa (Ava Capri). The purpose of their meeting, held at a private club, is to create the perfect Oscar-caliber movie using the "Sundance formula" – a message movie addressing contemporary issues. Of course, the best intentions of these three privileged white women are nothing short of cringe-inducing and hilarious. They are simply clueless about how their lot in life skews their perspective, and they are clearly in this for the recognition, rather than to make a difference. Director Gordon divides the film into three Phases (Concepting, Focus Groups, Pre-Production), mirroring the movie development process. During the Focus Group phase, there is a TANGERINE reference, a welcome shout-out to Sean Baker's 2015 film shot with an iPhone. Satire is not an easy form of comedy, as it requires terrific writing on 'touchy' subjects, and a full buy-in from performers. This little film wraps it all into a tight package that makes its point, while delivering laughs.
https://redcarpetcrash.com/movie-short-review-for-your-consideration/
Extensive experimental results on a public database for detection of daily activities in a home environment show that the overall highest recognition accuracy is achieved by the STFT magnitude representations (a short sketch of this representation follows the reference list below).
References (showing 1-10 of 22):
- The SINS Database for Detection of Daily Activities in a Home Environment Using an Acoustic Sensor Network (DCASE 2017): introduces a database recorded in one living home over a period of one week, containing activities performed in a spontaneous manner, captured by an acoustic sensor network and recorded as a continuous stream.
- Sound Event Detection in Domestic Environments with Weakly Labeled Data and Soundscape Synthesis (DCASE 2019): introduces the Domestic Environment Sound Event Detection (DESED) dataset, mixing a part of the previous year's dataset with an additional synthetic, strongly labeled dataset provided that year.
- Deep Convolutional Neural Networks and Data Augmentation for Acoustic Event Recognition (INTERSPEECH 2016): introduces a convolutional neural network (CNN) with a large input field for AED that significantly outperforms state-of-the-art methods including Bag of Audio Words (BoAW) and classical CNNs, achieving a 16% absolute improvement.
- Audio tagging with noisy labels and minimal supervision (DCASE 2019): presents the task setup, the FSDKaggle2019 dataset prepared for this scientific evaluation, and a baseline system consisting of a convolutional neural network.
- FSD50K: An Open Dataset of Human-Labeled Sound Events (IEEE/ACM Transactions on Audio, Speech, and Language Processing 2022): introduces FSD50K, an open dataset containing over 51k audio clips totalling over 100 h of audio manually labeled using 200 classes drawn from the AudioSet Ontology, to provide an alternative benchmark dataset and thus foster SER research.
- Acoustic design guidelines for dementia care facilities (2014): explores the role of noise on the ability of people with dementia to interpret and understand their surroundings, and provides examples of acoustical design and management practices that contribute to increased levels of agitation and aggression among residents who have dementia.
- Scalogram Neural Network Activations with Machine Learning for Domestic Multi-channel Audio Classification (IEEE ISSPIT 2019): examines domestic multi-channel audio classification through a comparison of various combinations of existing pre-trained neural network models with Support Vector Machine (SVM) classification.
- Open-source Multi-speaker Speech Corpora for Building Gujarati, Kannada, Malayalam, Marathi, Tamil and Telugu Speech Synthesis Systems (LREC 2020): presents free high-quality multi-speaker speech corpora for Gujarati, Kannada, Malayalam, Marathi, Tamil and Telugu, six of the twenty-two official languages of India spoken by 374…
- A Dataset and Taxonomy for Urban Sound Research (ACM Multimedia 2014): presents a taxonomy of urban sounds and a new dataset, UrbanSound, containing 27 hours of audio with 18.5 hours of annotated sound event occurrences across 10 sound classes.
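As a rough illustration of the feature type named above, here is a minimal sketch computing an STFT magnitude spectrogram in Python with scipy (the sample rate and frame settings are assumptions for this example, not parameters taken from the paper):

import numpy as np
from scipy.signal import stft

fs = 16000                          # assumed sample rate (Hz)
x = np.random.randn(fs * 2)         # stand-in for a 2-second audio clip

# Short-time Fourier transform: window the signal and FFT each frame.
f, t, Zxx = stft(x, fs=fs, nperseg=512, noverlap=256)

# The magnitude spectrogram |STFT| is the representation referred to above.
magnitude = np.abs(Zxx)             # shape: (freq_bins, time_frames)
print(magnitude.shape)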
https://www.semanticscholar.org/paper/DASEE-A-Synthetic-Database-of-Domestic-Acoustic-and-Copiaco-Ritz/208d028f0af3c6f63fbe2ffc0f86a376c4451e34
25 kms from Mangalore, Kateel Shri Durga Parameshwari Temple is one of the most revered places for Hindus. This temple draws a lot of devotees daily and is on the list of every tourist who is on a pilgrimage tour to Dakshina Kannada.
Routes to reach Kateel:
Mangalore - Bajpe - Kateel - 25 Kms
Udupi - Mulki - Kinnigoli - Kateel - 45 Kms
Moodabidri - Kinnigoli - Kateel - 22 Kms
BC Road - Kaikamba - Bajpe - Kateel - 35 Kms
[Kateel Shri Durgaparameshwari Temple]
Kateel temple, abode of Goddess Shri Durga Parameshwari, is located in the middle of the Nandini river. The place is surrounded by the river and greenery all around, making it a picturesque location. To enter the temple, you have to cross a small bridge. The sound made by the flowing river in the temple premises is enchanting to hear. Just being in the calm temple surroundings gives you a nice experience.
[Small Bridge On Way To Temple]
As per the Kshethra Purana, this place once faced a severe drought. At that time, Maharshi Jaabaali made a request to Nandini, daughter of Kamadhenu, to come to Bhoo Loka (earth) to help him in performing a yajna to get rid of the drought. When she dishonoured his request, he cursed her to flow on the earth as a river. When Nandini pleaded for mercy, the Maharshi asked her to pray to Adi Shakthi to relieve her from the curse. Adi Shakthi, after hearing Nandini, told her that she would take birth as her daughter to relieve her from the curse. Meanwhile, Arunasura, one of the rakshasas, had got a boon from Brahma that he would not be killed by the Thrimurthis (Brahma, Vishnu, Eeshwara), devas, men, women or by any two-legged or four-legged animal.
[Kateel Temple Premises]
He started troubling the sages and spoiled their yajnas. At last, Adi Shakthi, in the form of a big, furious bee called 'Bhramara', stung him to death. Devi then emerged in the form of a 'Linga' in the middle of the river Nandini. From then on, this place has been known as Kateel ('kati' in Kannada means waist and 'ila' means earth/place). Other gods worshipped in the temple are Raktheshwari, Mahaganapathi, Shaastaara, Kshethrapaala, Naaga, Brahmaru and Chaamundi.
[Nandini River]
Annadaana, or free meals for the devotees, is served daily in the afternoon and night. To go to the Bhojana Shaale (or dining hall), which can accommodate up to a thousand people at a time, one has to cross another bridge from the temple. The temple elephant is an attraction among the devotees who come here from far away places. Yakshagaana, a popular folk art of coastal Karnataka, is performed by the temple troupes. Four such troupes are already booked for the next 10 years to perform at various places. These troupes travel mostly in the Dakshina Kannada, Udupi and Kasaragod districts for performances.
[Temple Elephant]
Kateel is a must-visit place for all those visiting coastal Karnataka. The green surroundings appeal to the pious and to nature lovers alike! I am sure you will wish to visit the place a second time if you have come here once.
https://www.raveeshkumar.com/2008/11/kateel-shri-durgaparameshwari-temple.html
WHITE PAPER: This white paper introduces a new class of tools designed to optimize today's virtual infrastructure and combat these challenges. In this paper, discover how IT managers are overcoming concerns regarding Fibre Channel and the cloud, I/O challenges and more. You'll also receive today's best practices to ensure the future success of your cloud.
WHITE PAPER: This white paper outlines the benefits associated with virtualizing Microsoft applications, and discusses how a newly-introduced infrastructure solution can simplify the management and cost associated with this process. Read on to learn more about this emerging technology and its benefits.
WHITE PAPER: This white paper highlights some of the crucial aspects of enterprise-wide file sync and share and proposes a solution that meets the challenges many organizations face.
WHITE PAPER: This white paper is here to help you understand the important factors to be considered when selecting drives for your storage infrastructure. In this paper you'll receive 6 key tips and considerations to examine before deciding on a storage solution, and discover which technologies are better suited for various types of data and/or organizations.
WHITE PAPER: This white paper provides an introduction to the EMC Isilon OneFS operating system, the foundation of the Isilon scale-out storage platform. In this paper, you'll learn about the benefits of storage scaling and gain an overview of the architecture of this platform.
WHITE PAPER: This brief resource breaks down Big Data in a way that's understandable and shows how adopting Big Data analytics early can be beneficial to organizations in any industry.
WHITE PAPER: This white paper is here to break down two of today's cloud offerings - Egnyte and Box. Read on for a full comparison of the features and benefits these two solutions offer, discover best practices to help you ensure a smooth transition to the cloud and more.
https://www.bitpipe.com/rlist/term/type/white+paper/Storage-Encryption.html
My work currently focuses on the collection and analysis of large healthcare databases, including clinical, genomic, and operational data. I especially enjoy developing software and designing systems to accelerate this work. I also specialize in visualizing and communicating insights from complex data, to both interdisciplinary groups of stakeholders and non-experts. I am a proponent of open access and reproducibility in research.
My Background
I'm a data scientist, epidemiologist, and software engineer. I have more than a decade of experience building software, designing and implementing studies, and analyzing data for healthcare-related research. I'm currently an Assistant Professor of Precision Health at Geisinger Research where I focus on risk prediction, communicating and visualizing complex information, and applications of clinical informatics and bioinformatics. I previously led the data science team at a data-driven healthcare software startup. My PhD is from University of Maryland Baltimore's epidemiology department. (Epidemiology is a generalist field related to public health research; my training included statistics, study design, survey methods, and causal inference.) My dissertation involved UX research and data presentation optimization for public health information. Please see my CV for more on my background and experience.
Current Research
My current research includes:
- Predicting germline variant pathogenicity using large genomic and clinical data sources.
- Automated methods for abstracting clinical data from electronic medical records.
Past Research
Some of my past work includes:
- The comet software suite for collecting discrete choice experiment data in low-connectivity settings. This work was done in collaboration with Jan Ostermann, PhD at the University of South Carolina. It is not publicly accessible, but if you would like more information please contact me.
- My dissertation, titled Assessing and Improving Patient Understanding of Publicly Reported Healthcare-Associated Infection-Related Hospital Quality Measures.
Open Source Software
- sinatra-contact-form: A quick, open-source replacement for a service like FormKeep, which can run for free using Heroku and Postmark.
- sasfix: A tool for fixing the formatting of SAS output so it can be used in presentations, emailed, etc. without funny characters or unnecessary white space.
Documentation & Teaching
- Personal Knowledge Base: This is my "outboard brain" for structured notes on both technical and more mundane topics. Some of my favorite pages include my R and SciPy reference notes, my exhaustive analysis of Mac and iOS notetaking apps, and my list of resources for learning git.
- Tech Notes Blog: I post one-offs that are super specific or don't fit cleanly in my knowledge base to my Tech Notes blog. For example:
- Thoughts on Reference Management Software: What reference management software should you use? I wrote the first version in 2015 and have updated it constantly since then.
- Organizing Data Analysis Projects: Best practices for organizing data analysis code, data, and other related documents.
- Data Presentation Tips: Best practices for presenting data, including examples and links to reference materials.
- SQL Joins Explained: A screencast and accompanying written explanation of how the different kinds of SQL joins work.
- Tools for Epidemiologists: A curated list of online resources and software for epidemiologists.
Retired Projects
- Beautify: A rubygem that makes it easier to output pretty tables with Stata.
Useful but not for the faint of heart.
- Pitchfork music reviews + Rdio mashup: An easy way to see what new music is available on Rdio and how good it is (according to Pitchfork).
- The Survey Software Review: A systematic, independent analysis of online survey software for researchers.
- Combine: A quick, open source app for accepting credit card payments for invoices online (read more). No longer maintained because Harvest now has similar functionality built-in.
https://masnick.me/work/
CROSS REFERENCE TO RELATED APPLICATION
This application is a non-provisional claiming the benefit of U.S. Provisional Pat. App. No. 63/081,157, filed Sep. 21, 2020 and also titled "METHODS AND SYSTEMS FOR CROSS-DOMAIN TWO-WAY COMMUNICATION BY DYNAMIC WEB CONTENT," which is hereby incorporated in full by reference.
FIELD OF INVENTION
The present innovations generally address web frame and graphic compositing, and more particularly, systems and methods for a Supra Boundary Web Compositor, directly linking via two-way communication a digital advertisement or other imported content displayed on a first website to server-side functionality of a second website, such as a shopping cart of that website.
BACKGROUND
The "holy grail" of the advertising industry has always been the ability to prove a direct connection between an advertisement and a purchase. But this goal has eluded advertisers. The industry's most famous quote, attributed to John Wanamaker, reads: "I know that half my advertising budget is wasted; the problem is I don't know which half!" Even with the subsequent advent of the Internet, when digital advertising enabled a plethora of types of measurement of consumer behavior in relation to digital advertisements, sophisticated measurements of other consumer behavior on the Internet, and extremely elaborate uses of both online and offline data to target consumers with more effective advertising—all in hopes of documenting a direct connection between an advertisement and the consumer's subsequent action—it has only been possible to draw an indirect, inferred connection between the consumer's exposure to an advertisement and that consumer's subsequent behavior. Further, even when an indirect connection is able to be inferred, it is almost impossible to attribute credit exclusively to the particular advertisement that was viewed, versus acknowledging the possibility that many other sources of motivation may have been at work, such as the viewing of other/different advertisements, exposure to other marketing mechanisms both online and offline (from Linear Television to Roadside Billboards), etc. Additionally, interactive methods of advertising are often hampered by security settings in users' web browsers, which prevent AJAX (Asynchronous Javascript and XML) function calls from one domain to another. These function calls are the backbone of interactive web browsing and enable web pages to dynamically load content from an external source. Because the loading of content from an arbitrary source can be a significant security risk, many modern browsers allow only AJAX calls to the same domain. Thus, all data sources must be stored (or at least appear to be stored) in a single domain, creating an arbitrary restriction on dynamic content, forcing unnecessary architectural design decisions such as server-side workarounds to deal with the limitation, and hampering cooperation/integration from multiple data sources that could be best used to serve the consumer. Alternatively, a user who wants to access a website that attempts to draw from multiple data sources will have to use a browser that permits ultra-low security settings and deal with the inconvenience and danger of exposure to cross-domain scripting attacks while browsing the Internet. Thus, there are advantages to having a system that can create advertisements that more directly link the advertisement to the purchase, as well as to systems that facilitate cross-domain communication for more functional ads.
SUMMARY
Ad-to-Cart ("A2C") communication provides an elegant solution for linking a digital advertisement directly to a purchase on a merchant's website by literally linking the advertisement to the website's shopping cart and vice-versa (creating bidirectional communication and passing of data), such that a consumer can shop the merchant's online store directly within the advertisement itself, selecting products for purchase and adding them directly to a shopping cart within the advertisement that is in fact a direct instantiation of the shopping cart on the merchant's website. The shopping cart within the digital advertisement contains all the functionality and behaviors of the cart on the website itself, such as displaying an automatically-incrementing counter showing the number of items in the cart, and when the consumer clicks or taps the cart icon to proceed to check-out, the consumer is taken directly into the shopping cart on the merchant's website with all the selected products still populating the cart. The direct communication between advertisement and cart is bidirectional, so that if the consumer further modifies the cart on the merchant's website and then returns to the advertisement, all such modifications remain reflected in the cart within the advertisement. And if further modifications are made within the advertisement, they are also reflected in the cart on the website. In this way, the advertisement is linked directly/literally to the purchase, with no room for doubt that the advertisement is directly driving the purchase. Further, this invention represents an orders-of-magnitude leap in advertisement effectiveness, providing the consumer with instant gratification and providing the advertiser with a shorter, direct funnel from advertisement exposure to purchasing. The presently-described systems and methods enable the consumer to add to cart continuously within the advertisement, and then check out once with a single click or tap. This bidirectional communication between advertisement and cart is enabled by an iFrame ("inline frame"), placed on the merchant's website, which is able to communicate directly with the website's shopping cart application in the same way that an actual consumer would be able to communicate directly/manually with the shopping cart. In one aspect of this disclosure, a method for modifying a first webpage to enable cross-domain two-way communication is disclosed. The method includes loading, in the first webpage, dynamic content comprising one or more user interface elements. An iFrame whose source is set to a second webpage is embedded into the first webpage. A script file including an executable program, such as (but not limited to) a Javascript file, is loaded using the iFrame, and one or more functions defined by the script file are bound to the one or more user interface elements. When the user interacts with the one or more user interface elements, the one or more functions are used to send a message to the second webpage, receive a response from the second webpage, and update the one or more user interface elements in response to the received response.
BRIEF DESCRIPTION OF THE DRAWINGS
Other aspects, features and advantages will become more fully apparent from the following detailed description, the appended claims, and the accompanying drawings, provided solely for purposes of illustration without restricting the scope of any embodiment:
FIG. 1 depicts a network of computing devices to be used in a system for providing webpages that comprise interactive content with two-way cross-domain communication capabilities;
FIG. 2 depicts a pair of user interfaces that are modified and combined by means of methods and systems described herein;
FIG. 3 depicts a method for augmenting a web-based user interface to add two-way, cross-domain communication with a different website from the one being augmented;
FIGS. 4A-4G depict a concrete user experience that may be accomplished using the presently described technology; and
FIG. 5 is a high-level block diagram of a representative computing device that may be utilized to implement various features and processes described herein.
DETAILED DESCRIPTION
In order to address the issues described above, methods and systems are provided to facilitate two-way communication between an advertisement or other content dynamically loaded in a first website and a second website that provides a user interface that a user might wish to interact with via the advertisement or other content.
FIG. 1 depicts a network of computing devices to be used in a system for providing webpages that comprise interactive content with two-way cross-domain communication capabilities. A user computing device 100 with software including a web browser is used to connect to the Internet or another similar network 120. The user computing device 100 may be a PC, a mobile phone, a gaming console, or any other device capable of running a web browser to display a webpage to a human user. The browser may be navigated to a URL at a first domain provided by a first web server 105. A webpage at the first domain (200 in FIG. 2) may incorporate instructions to the user's browser to load an advertisement or other dynamic content supplied by an advertisement server or other content server 110. Finally, a second web server 115 may provide a second webpage or other user interface (220 in FIG. 2) that a user would be interested in interacting with. In a preferred embodiment, the second webpage would be a merchant website/shopping cart, but any interactive, web-based user interface could be involved, whether that is a web-based email client, a search engine, a social media page, online banking, or an interface for making an appointment or reservation. Although in most typical use cases the servers 105, 110, and 115 are likely to be different devices, there is no reason that a single device could not serve the function of two or even three of the devices, since each device's primary role is to provide information when the browser on the user's computing device 100 requests it.
FIG. 2 depicts a pair of user interfaces that are modified and combined by means of methods and systems described herein. The first webpage 200 includes a normal region for content 205, as well as, in a preferred embodiment, a top banner or side banner region for displaying the advertisement or other dynamic content 210. The second webpage or user interface 220 provided by the second web server 115 includes some number of buttons or other user interface elements 225. In a preferred embodiment, the advertisement or other dynamic content 210 incorporates an identical set of buttons or user interface elements 215 so that the advertisement or other content almost appears to be a window directly into the second webpage or user interface 220, with the same look and feel.
FIG. 3 depicts a method for augmenting the first webpage to add two-way, cross-domain communication with the second webpage on a different site and domain from the one being augmented. When a user visits the webpage 200 in a browser on the user's computer 100, in accordance with the code of webpage 200, the browser downloads an advertisement or other content 210 from the advertisement server 110 and displays it within the webpage (Step 300). After the content is loaded, a hidden iFrame is added to the webpage 200 (Step 305). In a preferred embodiment, the iFrame is completely hidden from the user and is invisible. In other embodiments, if browser or security settings prevent an invisible element from being created or from loading content, the iFrame may instead be arbitrarily difficult to see (for example, 1 pixel by 1 pixel in size) or placed in a location where it is unlikely to be seen (such as a bottom corner of the webpage). The hidden iFrame loads a webpage from the second webserver 115, from a URL at the second domain. For example, while the implementation is not limited to use of Javascript, this could be accomplished with Javascript code as follows:

var helperIframe = document.createElement("iframe");
document.body.appendChild(helperIframe);
helperIframe.src = "https://www.example.com/";

The newly-created iFrame is then used to load a script file including an executable program, such as (but not limited to) a Javascript file (Step 310), containing custom code for facilitating the cross-domain communication. For example, a function that is within the Javascript file and becomes accessible to the digital advertisement 210 might read:

function sendMessage(itemid, index) {
  var msg = {"itemId": itemid, "itemIndex": index};
  helperIframe.contentWindow.postMessage(msg, "https://www.example.com");
}

The function(s) provided in the custom code are bound to one or more elements of advertisement 210 (Step 315) so that interactions with the advertisement, including clicking, mouseovers, typing, dragging, etc., will cause the function(s) to be called. For example:

var button1 = document.getElementsByClassName("button1")[0];
button1.addEventListener("click", function () {
  sendMessage(123472, 0);
  button1.innerHTML = "Adding...";
});

Additionally, the advertisement or other dynamic content 210 may be updated to include information from the second webpage, such as the items that are already in a shopping cart from a previous visit to the second webpage. For example:

this.getcartContents = function () {
  var xhr = new XMLHttpRequest();
  xhr.open("GET", "https://www.example.com/cartinfo", true);
  xhr.setRequestHeader("Content-Type", "application/json");
  xhr.onreadystatechange = function () {
    if (xhr.readyState == 4 && xhr.status == 200) {
      var json = JSON.parse(xhr.responseText);
      if (typeof json === "undefined" || json == null) {
        return;
      } else if (typeof json.cartInfo.items === "undefined" || json.cartInfo.items == null) {
        return;
      } else {
        that.cartCallBack(json);
      }
    }
  };
  xhr.send(null);
};

Within the second webpage 220 in the iFrame, a "message listener" is set up (Step 320). The message listener waits to receive messages specifically from the dynamic content 210 and only from that source. Any messages sent to the message listener from another source will be ignored.
For example:

window.addEventListener("message", function (event) {
  var cartTotal = 0;
  if (event.origin == "https://www.example.com") {
    event.data.cartInfo.items.forEach(function (item) {
      cartTotal += item.qty;
    });
    that.cartCount.style.display = "block";
    that.cartCount.innerHTML = cartTotal;
    if (that.buttons[event.data.itemIndex] != null)
      that.buttons[event.data.itemIndex].cartButton.innerHTML = "Add to cart";
  }
});

When a user interacts with the dynamic content 210 (for example, clicking on a product within an advertisement that has an "Add to cart" label), a message is sent from the dynamic content 210 to the message listener (Step 325). The message may contain information to be acted upon, such as a product id of the product to be added to the cart, credentials or tracking information from the user, or other relevant data. When the message is received at the message listener, the iFrame containing the second webpage 220 makes an AJAX call or other function call that causes server-side code to be executed (Step 330) to fulfill some command or query that was requested by the user (for example, actually adding items to the user's cart on the second website). From the point of view of the second web server 115, there is no difference between the user navigating directly to the second website to perform the action and the user interacting with the advertisement or dynamic content 210. For example:

this.addTocart = function (itemid, index) {
  var msg = {"items": [{"itemId": itemid, "qty": 0}]};
  var xhr = new XMLHttpRequest();
  var url = "https://www.example.com/addcart";
  xhr.open("POST", url, true);
  xhr.setRequestHeader("Content-Type", "application/json");
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      var json = JSON.parse(xhr.responseText);
      json.itemIndex = index;
      that.cartCallBack(json);
    }
  };
  var data = JSON.stringify(msg);
  xhr.send(data);
};

Code running within the iFrame receives and processes the request:

window.addEventListener("message", function (event) {
  if (event.origin == "http://localhost:5100") {
    cart.addTocart(event.data.itemId, event.data.itemIndex);
  }
});

After the server-side code is executed, a response is provided (Step 335) to indicate the success or failure of processing the request, and any other relevant information in the response. For example, a response to a request to add to a shopping cart may include both an indicator of success and a current number of items in the cart and/or the total value of purchases in the cart. When the response is received, the advertisement or dynamic content 210 is updated (Step 340) to reflect the information received. There may be, for example, a shopping cart displayed that looks identical to how the shopping cart would appear if the user had directly navigated to the second webpage to shop for items. The update may also include a success or error message or other information in addition to mimicking the second webpage. As a result of these features, a user visiting a first webpage can interact with advertisements or other dynamic content that not only have the "look and feel" of a second webpage, but that actually update the user's cart or other session on the second webpage, without the user having to navigate the browser to the second webpage. FIGS. 4A-4G depict a representative concrete user experience that may be accomplished using the presently described technology. In FIG. 4A, a digital advertisement is loaded on a webpage, including a "hamburger" menu icon 500.
In FIG. 4B, upon clicking or tapping the icon 500, multiple product categories are displayed, as well as an exit icon 505 and a cart icon 510. In FIG. 4C, clicking a product category causes a second row of elements 515 to be added, indicating a series of items that can be added to a cart. In FIG. 4D, clicking any of the "Add to cart" buttons will actually cause cart icon 510 to be updated to indicate items have been added. In FIG. 4E, if the user clicks on the cart icon 510, the browser will navigate to the second website (pictured below) and display the traditional cart, containing the items the user had added through the advertisement. In FIG. 4F, the user may change the number of items in the cart using increase or decrease buttons 520. In FIG. 4G, if the user returns to the original webpage and views the advertisement again, the cart icon 510 will have updated to reflect the current number of items in the cart, even though the advertisement was not used to add or remove the items most recently changed. Any changes that are made in either interface are reflected in the other automatically, resulting in a seamless user experience despite the webpages being on two different domains. Other possible applications of the presently-described technology may include interactive polling (where a user can vote on a political/sports/entertainment option and see the votes of others in real time), interactive games with other players, and streamlined communications interfaces such as instant messaging or social media. Although FIG. 1 depicts a preferred configuration of computing devices to accomplish the software-implemented methods described above, those methods do not inherently rely on the use of any particular specialized computing devices, as opposed to standard desktop computers and/or web servers. For the purpose of illustrating possible such computing devices, FIG. 5 is a high-level block diagram of a representative computing device that may be utilized for each of the computing devices and/or systems to implement various features and processes described herein. The computing device may be described in the general context of computer system-executable instructions, such as program modules, being executed by a computer system. Generally, program modules may include routines, programs, objects, components, logic, data structures, and so on that perform particular tasks or implement particular abstract data types. As shown in FIG. 5, the components of the computing device may include (but are not limited to) one or more processors or processing units 500, a system memory 510, and a bus 515 that couples various system components including memory 510 to processor 500. Bus 515 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus. Processing unit(s) 500 may execute computer programs stored in memory 510. Any suitable programming language can be used to implement the routines of particular embodiments including C, C++, Java, assembly language, etc. Different programming techniques can be employed such as procedural or object oriented.
The routines can execute on a single computing device or multiple computing devices. Further, multiple processors may be used.

The computing device typically includes a variety of computer system readable media. Such media may be any available media that is accessible by the computing device, and includes both volatile and non-volatile media, removable and non-removable media.

System memory 510 can include computer system readable media in the form of volatile memory, such as random access memory (RAM) 520 and/or cache memory 530. The computing device may further include other removable/non-removable, volatile/non-volatile computer system storage media. By way of example only, storage system 540 can be provided for reading from and writing to a non-removable, non-volatile magnetic media (not shown and typically referred to as a "hard drive"). Although not shown, a magnetic disk drive for reading from and writing to a removable, non-volatile magnetic disk (e.g., a "floppy disk"), and an optical disk drive for reading from or writing to a removable, non-volatile optical disk such as a CD-ROM, DVD-ROM or other optical media can be provided. In such instances, each can be connected to bus 515 by one or more data media interfaces. As will be further depicted and described below, memory 510 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments described in this disclosure.

Program/utility 550, having a set (at least one) of program modules 555, may be stored in memory 510 by way of example, and not limitation, as well as an operating system, one or more application software, other program modules, and program data. Each of the operating system, one or more application programs, other program modules, and program data or some combination thereof, may include an implementation of a networking environment.

The computing device may also communicate with one or more external devices 570 such as a keyboard, a pointing device, a display, etc.; one or more devices that enable a user to interact with the computing device; and/or any devices (e.g., network card, modem, etc.) that enable the computing device to communicate with one or more other computing devices. Such communication can occur via Input/Output (I/O) interface(s) 560.

In addition, as described above, the computing device can communicate with one or more networks, such as a local area network (LAN), a general wide area network (WAN) and/or a public network (e.g., the Internet) via network adaptor 580. As depicted, network adaptor 580 communicates with other components of the computing device via bus 515. It should be understood that although not shown, other hardware and/or software components could be used in conjunction with the computing device. Examples include (but are not limited to) microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data archival storage systems, etc.

The present invention may be a system, a method, and/or a computer program product at any possible technical detail level of integration. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention. The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device.
The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.

Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may use copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.

Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, configuration data for integrated circuitry, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++, or the like, and procedural programming languages, such as the "C" programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention.

Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It is understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions.

These computer readable program instructions may be provided to a processor of a general-purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks.

The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks.

The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the Figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions.
The descriptions of the various embodiments of the present invention have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
The study of popular literature can enrich the understanding of our life conditions. So says Yvonne Leffler, professor of comparative literature, who heads the research programme Religion, Culture and Health. She argues that today's best-selling vampire and so-called chick lit novels clearly reflect the ideas and ideals of health in our time.

– Modern vampire literature centres on the significance of being human, both morally and physically. For example, Stephenie Meyer's Twilight series reflects a fixation on health and beauty and a wish to overcome death, but also thoughts on the consequences eternal youth might bring. Chick lit, in its turn, functions as a kind of fictitious self-help. It revolves around the woman in crisis – both privately and professionally – who against all odds attains happiness and success. These stories of young career women can function as advisory illustrations of how a woman can deal with her problems, process her romantic disappointments or depression, and turn adversity into success at work.

Your work is part of the research programme Religion, Culture and Health?

– Yes, within the programme there are researchers in comparative literature, film and religion as well as political science, investigating the role of culture and philosophy of life vis-à-vis health and well-being. As I said, I have personally investigated the importance, in this context, of the fictitious popular stories of our time. I analyse how basic values are shaped in best sellers such as Stieg Larsson's Millennium trilogy, the Twilight series and Kajsa Ingemarsson's chick lit novels, as well as how and why this particular type of story feels so important and meaningful to so many people today.

Could you say that literature reflects our time's conceptions of illness and health?

– Fictitious stories always reflect the conceptions of their time, while they also help confirm and shape conceptions of various phenomena. Concerning illness and health, every time period has its "fashionable diseases" – certain disease conditions that are more accentuated than others. This says something about what people are familiar with and want to delve deeper into.

Are there any particularly topical themes of illness and health in popular fiction?

– Where it was TB and syphilis that were described in the late 19th century, the focus today is on mental illness. The fictitious protagonists of our time wrestle with depression or feelings of meaninglessness. Often, the characters' suffering is linked to a general societal state. It is probably no coincidence that popular detectives, such as Kurt Wallander, have to sacrifice family life because of their job. Because of the police work, Wallander neglects his health: he is overstressed, sleeps too little, eats and drinks unhealthily, forgets to exercise, et cetera. Who among us cannot identify with this?

How can the encounter with fiction contribute to people's well-being?

– Fictitious stories' ability to let us see the world from the protagonist's perspective is an effective way to indirectly gain experiences and imagine how something might be. We receive new knowledge of various human conditions, but also of emotional and cognitive experiences, without exposing ourselves to mental and physical risk. One can, for example, learn how it would feel to be afflicted with terminal cancer without having to die from it. Instead, the experience enriches us with new thoughts and experiences.
Could you say that the connection between fiction and health is a research area for the future?

– Absolutely! Modern health care has to a large extent focused on rapidly curing, operating and medicating tangible bodily afflictions such as injuries caused by accidents or cardiac problems. Often one forgets the human aspects of being afflicted with serious illness and being subject to extended treatment. In order to achieve better rehabilitation, we must better understand how people experience disease and what is meaningful at a deeper level in life. It is precisely this that we can learn from fictitious stories, while we can also work more actively with film and literature in health care, so-called bibliotherapy.

Employment: professor of comparative literature, history of ideas and religion
Family: husband and cat, siblings and siblings' children
Research interests: narratology and reception theory; at present our need for fictitious stories in the shape of horror novels, horror movies and crime fiction. Also working with research on the early Swedish 19th-century novel and how it spread across Europe.
https://ckh.gu.se/english/profiles/yvonne-leffler
This protocol is a standardised rapid method for the collection of geomorphological, physical habitat, riparian and basic water quality data. It can be used to assess the physical condition of rivers and streams and to predict the local scale habitat features that should be present at a site.

Method logic
The AusRivAS physical assessment method incorporates aspects of several existing physical assessment methods into a method that can be implemented to construct AusRivAS style predictive models. Predictive models are typically derived from the AusRivAS macroinvertebrate models. Physical, chemical and habitat information collected from reference sites is used to construct the predictive models, which are then used to assess the condition of test sites. Large scale catchment characteristics are used to predict local scale features. The physical assessment protocol does not provide prescribed predictive measures, but contains the provisions and datasheets for various measures of physical habitat. A minimal sketch of this observed/expected style of comparison follows this section.

Criteria groupings of the method
AusRivAS physical criteria are based on Habitat Predictive Modelling, with sampling design, data collection and analytical components from other stream assessment methods (i.e. AusRivAS freshwater biological monitoring). Large scale catchment characteristics are used to predict local scale features. The physical and chemical criteria are grouped under control and response categories.

Data required
Catchment and local scale physical, chemical and habitat variables measured using standardised methods for reference and test sites.

Expertise required
This methodology requires trained operators in field survey, laboratory techniques, assessment and database management.

Materials required
Field sheets, dedicated database, water quality sampling and habitat assessment equipment, access to a laboratory for water quality analysis, the AusRivAS model.

Recommended user
The results would be useful to catchment managers, natural resource managers and government agencies.

Case studies
Australia-Wide Assessment of River Health: Queensland AusRivAS Sampling and Processing Manual, Monitoring River Health Initiative Technical Report no. 12.
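The observed/expected style of comparison described under Method logic can be illustrated with a small sketch. The code below is not part of the AusRivAS protocol or software; the feature names, probabilities and scoring function are assumptions invented purely to illustrate how model-predicted local scale features might be compared with those observed at a test site.

    // Illustrative only: an observed/expected-style score for habitat features.
    // "expected" holds assumed model-derived probabilities that each local
    // scale feature occurs at the site; "observed" records field presence.
    function oeScore(expected, observed) {
      var e = 0, o = 0;
      for (var feature in expected) {
        e += expected[feature];            // sum of predicted probabilities
        if (observed[feature]) o += 1;     // count of features actually found
      }
      return o / e; // values near 1 suggest condition close to reference
    }

    var expected = { riffle: 0.9, undercutBank: 0.6, macrophytes: 0.7 };
    var observed = { riffle: true, undercutBank: false, macrophytes: true };
    console.log(oeScore(expected, observed)); // 2 / 2.2 ≈ 0.91

Under this kind of scheme, scores well below 1 would flag a test site whose local habitat features fall short of what the reference-site models predict.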
https://wetlandinfo.des.qld.gov.au/wetlands/resources/tools/assessment-search-tool/australian-river-assessment-system-ausrivas-geoassessment-physical-chemical/
Many non-believers argue that the only things we can know are what can be proved by science. Yet many believers say that one of the main reasons they believe is their experience of God, something science cannot easily examine. So how can we approach this question reasonably?

How we know

Philosophers say there are several ways we can know things (see, for example, Robert Audi's Epistemology: A Contemporary Introduction to the Theory of Knowledge):
- perception – what we experience via our five senses;
- memory;
- introspection or self knowledge – what we experience in our body, such as happiness or pain;
- intuition – what we can simply 'see' is true, such as 1 + 1 = 2; and
- testimony – what others tell us.
It is clear that science draws on several of these ways of knowing, principally perception via observation and measurement.

Personal experience

When I have a toothache, see a dog run across the road, or remember what it was like to fall into a cold river, I know these things without using science. Yet few would contest that these experiences are, generally, nevertheless real. But few would deny that our memories and perceptions can play tricks on us. It becomes more complicated when the experience is unusual, and more complicated still when we try to explain the experience. Our perception may be correct, but our explanation of what we perceived may be faulty.

When people believe they have experienced God

Many people believe they have experienced God in some way, and I have outlined a few examples on this website:
- Some believe they have been miraculously healed. In most cases, it is clear something unusual has happened, but was it God or was it spontaneous remission? (See Healing miracles and God)
- Some believe God has appeared to them in a vision. But while we may accept that they 'saw' something, was it God or a hallucination? (See Visions of Jesus?)
- For some, God is seen as the source of the power to make changes in their lives that they would have been powerless to make themselves (for example, for Winston to instantly come off hard drugs, in I still keep to Jesus this night).
- And many see God's hand in the help they received when they needed it, and the rescue they experienced from situations and thought patterns that were destructive – see the stories of Jordan and Laura.
What are we to make of these stories?

True perception?

It is impossible for an outsider to fully understand these experiences, for we only observe them second hand. But for the person experiencing them, and often for their close friends and family, these experiences are as real as anything that ever happened to them. And so they believe them just as they believe their other perceptions – and who can blame them? Most of us will trust practical experience more than some theoretical belief.

And in many of these cases, the philosophers would say that they are justified in believing that these experiences came from God (that is, they have good reason to believe this). For example, if the doctor said a recovery was extremely unlikely, yet recovery occurred quickly after prayer, then a reasonable person could conclude that it was the prayer, and the resulting action of God, that led to the unusual outcome.

But can outsiders justifiably believe that God has acted? Some people disbelieve from the start that any such experiences could be from God, but it seems to me that this pre-judges the matter and dismisses the evidence without considering it.
We make it impossible to believe and learn from the experiences, even if they are actually true. I think each case must be decided on its merits – by asking how reliable the person is, how good the evidence is, and so on. We may not be able to be certain about any one experience, but each experience that offers reasons why we might believe it adds to the probability that God is at least behind some of them.

We may not have the perception, memory and self awareness of those who actually experience what seems to be God acting in their lives, but we can, if we are willing, accept their testimony of what has happened.

Read more
- Truth, proof and certainty – can we know the truth about God?
- How should we assess alleged healing miracles? – is it possible to test whether miraculous healings actually occur?
- True life stories – check out a bunch of stories of people finding hope and healing.
https://www.is-there-a-god.info/blog/life/can-we-trust-our-experience/
In the past 20 years, the institute has cultivated a group of world-famous scholars, published a number of highly influential academic works, and maintained a wide academic impact. Over these two decades, the Institute of Chinese Intellectual and Cultural History has developed a philosophy of discipline construction based on innovation and sustainable development: keeping hold of the cutting edge, focusing on the present, stressing distinctive features, and pursuing innovation.

From the late 1980s to the mid-1990s, the institute focused on the macro study of cultural history and published History of Chinese Culture, which became famous at home and abroad. This was the first History of Chinese Culture since the founding of the People's Republic of China. Reflecting the leading level of cultural history study at the turn of the 21st century, it won first prize at the 7th China Book Award, a Nomination Award at the 1st National Book Award, and first prize at the 1st Excellent Works in Social Sciences of Hubei Province.

From the mid-1990s to 2000, building on this solid foundation in cultural history, the institute expanded its research into social history. It published Theory of Chinese Social History, which was highly praised as "inheriting the past and forging ahead into the future". The book won the 13th China Book Award, first prize at the 3rd Excellent Achievements in Social Sciences of Hubei Province, and an Honor Award at the 2nd Book Award of Hubei Province. It was assigned as a reference or compulsory reading for doctoral students by the Chinese Academy of Social Sciences and many key universities, and it retains a lasting academic impact.

Since the turn of the 21st century, to meet the needs of social development in Hubei, the institute has turned its research focus to Hubei social culture. Assembling domestic specialists in the study of Hubei culture, the institute compiled and published the monumental Cultural History of Hubei, described as "the founding work of cultural study of Hubei" by Guangming Daily and China Reading Weekly.

In September 2004, with approval from the Hubei Provincial Department of Education and with the institute's main support, the Center for Contemporary Cultural Studies of Hubei Province was founded. The center is a key research base for the humanities and social sciences in the colleges and universities of Hubei. Based on arduous and solid investigations of grassroots society, the center has studied the contemporary social culture of Hubei Province and produced a series of achievements, such as the Research Report on the Spread and Development of Christianity and Catholicism in Southern Hubei Area. These achievements received high attention and full affirmation from the CPC Hubei Provincial Government and related central departments. The Internal Journal of Chinese Social Science and the Chinese Academy of Social Sciences Review published this series of achievements, which generated substantial social benefits and impact.

By continually refining its research directions through long-term academic work, the discipline has formed three research focuses: the history of academic scholarship in the Qing Dynasty, modern ideology and culture, and contemporary social culture in Hubei Province. Effective research has been carried out on each of these focuses, and the discipline has undertaken dozens of projects at the national, provincial and other levels.
It has published The General Contents of Complete Library in the Four Branches of Literature in the View of Culture, A Study on Chinese Cultural History in 20th Century, The Initial Era – Study on the Early Modernization of China, Shock and Conflict – The Trend of Thought and Society in the Early Modernization of China, A Hundred Years of Hardship – Intellectuals and Modernization in China, Return to the Origin and Innovate – A New Theory on Cultural Conservation in Modern China, The Outline of Anti-Traditional Ideological Trend in the Late Qing Dynasty, Discussion on the Consciousness of Secret Association, A Study on Argumentative Philosophy in the early Qing Dynasty, Southern Society and the Change of Modern Chinese Culture, Culture Area Division and Hubei Culture, Study on the Blending of Hubei Culture, etc. These works are all of a high academic standard. Meanwhile, the discipline has published more than 200 papers in academic journals at home and abroad, including in Japan and Taiwan, of which 50 were reprinted by important national journals such as Xinhua Digest as well as by overseas newspapers and magazines. The discipline's achievements have won dozens of national, provincial and municipal awards for outstanding achievements in social science, as well as book awards.

Professor Zhou Jiming is the leader of the discipline. A famous expert in cultural history, he was named a National Young and Middle-aged Expert with Outstanding Contributions, is a recipient of a special allowance of the State Council, and is a first-level candidate of the New Century High Level Talents Project in Hubei Province. Professor He Xiaoming and Professor Guo Ying have been named Young and Middle-aged Experts with Outstanding Contributions of Hubei Province. The academic team is becoming younger, and more of its members hold doctorates.

The Institute of Chinese Intellectual and Cultural History has formed a talent training system stepping up from bachelor's to master's to doctoral degrees. Since undertaking the Training Mode Reform for Liberal Arts Talents, a major education reform project of the Hubei Provincial Department of Education, in 2002, the institute has persisted in exploring education reform for eight years, and has created a new specialty, International Cultural Communication. For its remarkable achievements in talent training, the institute won second prize in the National Teaching Achievements awards in 2009 and first prize for Outstanding Teaching Achievements in Hubei Province.

The Institute of Chinese Intellectual and Cultural History has extensive contacts with universities at home and abroad. Main members of the discipline have lectured and held academic exchanges in Australia, the U.S.A., New Zealand, South Korea, Singapore, Malaysia, France, Italy and Russia, as well as in Taiwan, Hong Kong and Macau. To promote academic exchange between China and other countries, the institute also hosts international academic conferences and publishes work in journals abroad.

Formerly named International Cultural Communication, the Specialty of International Affairs and International Relations in the School of History and Culture of Hubei University is a full-time, outside-the-directory undergraduate specialty given special approval by the National Ministry of Education to cultivate talents for international cultural exchange. It began to recruit students nationwide in 2006.
When the Ministry of Education adjusted the directory of undergraduate specialties of general colleges and universities in 2012, the title Specialty of International Cultural Communication was changed to Specialty of International Affairs and International Relations, which was designated a Special Specialty. The specialty is a four-year curriculum recruiting students of both liberal arts and science. Based on its talent training objectives, the specialty has set up a new curriculum system with interdisciplinary penetration, consisting of three parts: foreign languages, Chinese and foreign cultural foundation courses, and courses on international affairs and international relations. With culture at its core and cross-boundary application as its distinguishing feature, the specialty aims to cultivate students with proficiency in both English and Chinese and the ability to conduct international cultural exchange. The intensity of the foreign language courses is basically the same as that of a bachelor's degree in an English specialty.

Supported by one of the first group of Hubei provincial key specialties and a first-level discipline doctoral station, the Institute of Chinese Intellectual and Cultural History of Hubei University, the specialty enjoys academic advantages, an outstanding faculty and rich academic achievements. The institute also engages well-known scholars from home and abroad as visiting professors. Through more than 10 years of specialty construction, the institute has achieved remarkable results in talent training: Pattern Research for Training Talents of Complex International Cultural Exchange was selected as a provincial educational reform project and won a Hubei Provincial Award for Teaching Achievement. Students of the specialty have a high comprehensive quality and strong foreign-language skills, and so enjoy a wide range of employment opportunities. Some undergraduates continue their studies at universities at home and abroad for master's and doctoral degrees in disciplines such as international relations, diplomacy, Chinese and foreign culture, translation, foreign languages, cultural heritage, communication, teaching Chinese as a foreign language, international business and economic management. Others work in government, in famous large-scale enterprises, or in public institutions related to culture, propaganda, education, news and tourism.
http://lswh.hubu.edu.cn/Home/Institute_of_Culture_and_Department_of_Chinese_L.htm
et al. Published Web Location: http://dx.doi.org/10.1186/1471-2105-12-380

Abstract

Background: The speed at which biological datasets are being accumulated stands in contrast to our ability to integrate them meaningfully. Large-scale biological databases containing datasets of genes, proteins, cells, organs, and diseases are being created, but they are not connected. Integration of these vast but heterogeneous sources of information will allow the systematic and comprehensive analysis of molecular and clinical datasets, spanning hundreds of dimensions and thousands of individuals. This integration is essential to capitalize on the value of current and future molecular- and cellular-level data on humans to gain novel insights about health and disease.

Results: We describe a new open-source Cytoscape plugin named iCTNet (integrated Complex Traits Networks). iCTNet integrates several data sources to allow automated and systematic creation of networks with up to five layers of omics information: phenotype-SNP association, protein-protein interaction, disease-tissue, tissue-gene, and drug-gene relationships. It facilitates the generation of general or specific network views with diverse options for more than 200 diseases. Built-in tools are provided to prioritize candidate genes and create modules of specific phenotypes.

Conclusions: iCTNet provides a user-friendly interface to search, integrate, visualize, and analyze genome-scale biological networks for human complex traits. We argue this tool is a key instrument that facilitates systematic integration of disparate large-scale data through network visualization, ultimately allowing the identification of disease similarities and the design of novel therapeutic approaches. The online database and Cytoscape plugin are freely available for academic use at: http://www.cs.queensu.ca/ictnet
https://escholarship.org/uc/item/7xz4d5wt
This arrangement is designed as a feature for a soprano soloist, accompanied by choir and orchestra. The first verse features staggered oohs and aahs from the choir, the second verse is the soloist and orchestra alone, and then the choir rejoins for the final verse. Arranged by Sam Cardon and orchestrated by Garrett Breeze. Piano/Choral sheet music sold HERE. (Orchestral accompaniment works with all voicings.) INSTRUMENTATION:
https://holidaychoirmusic.com/shop/silent-night-orchestral-accompaniment/
Healthcare in Ontario: How does it Work and How is it Funded?

The healthcare system in Ontario can be confusing, and most people don't quite understand how it works and how it's funded. We pay our taxes, but then what happens? Where does that money go, and how does it impact our healthcare system? This guide will help answer those questions in a simple, easy-to-understand way.

- Canada Health Act
- Follow the Tax Dollars
  i) Ontario Health Insurance Program – Physicians / Practitioners – OHIP Clinics
  ii) Population and Public Health
  iii) Provincial Programs and Stewardship
  iv) Local Health Integration Networks (LHIN) – Long-Term Care Homes – Community Support Agencies – Hospitals – Home and Community Care
- Understanding Alternate Level of Care (ALC), i.e. 'Bed Blockers'
- Hospitals vs. Home Care: Which is More Expensive?
- Putting the Pieces Together
- References

The following flow chart provides a summary of the major Operating Expenses for the MOHLTC along with their sub-category expenses. Don't worry, this will all be explained below! For a more in-depth flow chart, click here.

Canada Health Act

In order to understand how our healthcare system operates in Ontario, we first need to look at the Canada Health Act. The Canada Health Act might not be the most exciting thing to talk about, but it's needed to set the backdrop for the rest of the information below. This piece of government legislation was adopted in 1984 and provides the conditions and criteria that each province must follow in order to receive federal transfer payments. The key conditions within the act include: public administration, comprehensiveness, universality, portability, and accessibility. To learn more about the Canada Health Act and each of these conditions, click here.

It's important to mention that the Canada Health Act doesn't state how healthcare services should be delivered; rather, it deals with how the system is financed. The Act sets the overall guidelines, but each province within Canada independently determines the service delivery model for its geography. We'll be focusing specifically on Ontario's healthcare system, how it works, and how it's funded.

Follow the Tax Dollars

It all starts with tax dollars. Here we are, good tax-paying citizens of Ontario. We're going to pretend that all of our taxes go to the Ontario government. Some go to the municipality, the federal government, etc., but for simplicity we'll pretend it all goes to the Ontario government. For every dollar that the government collects, what percentage do you think goes towards our healthcare system in Ontario? The correct answer is right around 38.7%. Yes, 38.7% of every tax dollar that we give to the Ontario government goes towards our healthcare system. Based on what you know about healthcare, is this funding enough to sustain our healthcare system in Ontario?

When we say 38.7% of tax dollars goes towards healthcare, where does it actually go? It goes towards a governing body called the MOHLTC. If you've never seen this acronym before, don't worry, you're not alone. It stands for Ministry of Health and Long-Term Care. This is the ministry that takes the 38.7% of tax dollars and decides what to do with it. But the MOHLTC's role is not actual service delivery – its role is to create the legislative landscape for Ontario's healthcare system to operate. It collects the money, and then disperses the money appropriately.
The Ministry of Health takes the money and divides it up into 8 main segments of operating expenses:
- Ontario Health Insurance Program
- Population and Public Health
- Provincial Programs and Stewardship
- Local Health Integration Networks (LHIN)
- Ministry Administration
- Health Policy and Research
- eHealth and Information Management
- Information Systems

We're going to focus on the first four of these segments, since these are the largest expenses for the MOHLTC. Each one of these segments accounts for more than 1 billion dollars in annual spending!

Ontario Health Insurance Program

The three main areas of the Ontario Health Insurance Program include Ontario Health Insurance, Drug Programs, and Assistive Devices. These three categories can be broken down further into several smaller segments, but we're going to keep things simple and focus on our province's Physicians / Practitioners and our OHIP Clinics, which are part of the Ontario Health Insurance category. These two segments combined represent more than 14 billion dollars of the 19 billion dollars allocated to the Ontario Health Insurance Program, and more than 26% of the MOHLTC's overall operating expenses!

Physicians / Practitioners

Everyone knows physicians – you probably don't like seeing them because it often means something is wrong, but you know who they are. There are pediatricians, surgeons, family practitioners, dermatologists, and more. What most people don't know, however, is that most of the time physicians are independent contractors who run their own business. Unless you're on staff at a hospital, like a Chief Medical Officer or the Head of Cardiology, you're just a contractor for the hospital. That's something that hospitals often struggle with: the fact that most doctors aren't actually employees of the hospital. Their billings don't even come out of the hospital's budget, because they bill directly to the Ministry of Health. Remember all those tax dollars we give the Ministry? Well, this is where a portion of those tax dollars goes.

OHIP Clinics

When people think of OHIP Clinics, they often think about physiotherapy clinics, but OHIP clinics include so much more than that. Have you ever gone for an x-ray, ultrasound, MRI or other type of medical imaging? What about blood work? Almost everyone has gotten blood work done at least once in their life, and sometimes even annually. These diagnostic clinics are also OHIP funded, and get funded directly through the Ministry of Health. Depending on the service, your coverage, and eligibility for OHIP, these services can be free or provided at a minimal cost, just like when you see a primary care physician. But behind the scenes, these clinics are billing directly to the Ministry of Health.

Population and Public Health

The Population and Public Health segment is responsible for exactly what it sounds like… the health of the public. For example, Toronto Public Health is responsible for Toronto's general health. They're responsible for things like making sure emission levels are being monitored and maintained. When there are disease outbreaks, they make sure that announcements are made and protocols are put in place to stop the spread of disease. When you go into a restaurant, you hope to see a big green checkmark at the door that says "Approved by Toronto Public Health", which means they don't have a rat infestation. Be cautious when you see a red or yellow sign! When there's risk of cold weather and a Cold Alert is announced, guess what they're doing?
Opening up warming centres, making sure homeless people have access to areas of warmth, and advocating for more shelter beds. From disease outbreaks to infection prevention, healthy eating programs to managing Smoke-Free Ontario, the Population and Public Health section of the MOHLTC's budget is responsible for all of these issues that affect the health of the general public.

Provincial Programs and Stewardship

The funding allocated to Provincial Programs and Stewardship includes Provincial Programs, Emergency Health Services, and Stewardship. Provincial Programs includes transfer payment accountability along with operational policy development for a wide range of special programs, like Cancer Care Ontario, HIV/AIDS and Hepatitis C programs, contributing Ontario's share of funding to Canadian Blood Services, and more. Things like "transfer payment accountability" and "operational policy development" may not sound exciting, but they're important aspects of our healthcare system.

Beyond that, this section of funding also goes towards emergency health services, such as ambulances and ambulance communications. Most land ambulance services are municipally operated or contracted, and the not-for-profit air ambulance Ornge is also included within this funding. These services are a vital part of Ontario's healthcare system, providing safe and timely care and transportation for sick or ill individuals so they can access the healthcare services that they need. A lot of people take for granted the fact that these emergency transportation services are essentially free of charge for Canadian citizens – but the funding has to come from somewhere, and yes, it comes from a small percentage of your tax dollars!

Local Health Integration Networks (LHIN)

Local Health Integration Networks are the "infrastructure" for much of Ontario's healthcare system. There are 14 LHINs in total, all geographically aligned based on the population within Ontario. Since the LHINs are geographically aligned with the population, the larger and denser areas within Ontario have more LHINs. In short, the LHINs are responsible for the local planning of health services. They examine the specific needs of their community, and then design programs to deliver the appropriate care for that community's needs. We can look at each LHIN's main responsibilities by breaking them down into four sections: Long-Term Care Homes, Community Support Agencies, Hospitals, and Home and Community Care.

Long-Term Care Homes

It's important to make a distinction between Long-Term Care Homes and Retirement Homes. Let's look at this in two ways: from a clinical perspective, and a financial perspective. Clinically, in a long-term care home you don't necessarily need to be able to direct your own care, whereas in a retirement home, you do need to be able to direct your own care. There are exceptions, but that's the general rule. In some retirement homes you can get special licenses to do dementia care and things like that, but generally speaking, if you can direct your own care, you belong in a retirement home.

Financially, long-term care homes are usually government funded, and retirement homes are privately funded. But sometimes, when people can't afford a retirement home, they end up in a long-term care home, even if they're fully capable and high-functioning human beings. When living in a long-term care home, there is a co-payment required to help contribute to the costs of the facilities.
These co-payment amounts are set by the government, and are to be paid directly to the long-term care facility. If you don't have the economic means to afford the co-payments, you can apply for a subsidy, which will be decided based on an income means test.

Community Support Agencies

Another thing that the LHINs are responsible for are community support agencies. These community services are free for people that don't have a lot of money, or don't speak English, or are elderly, etc. (depending on the service). For example, there are addiction services that are publicly funded, where you can go in for free and receive treatment. There are also services that most people don't even realize fall under this category. A lot of seniors are lonely and isolated, and can't get to a grocery store. They don't have the proper nourishment to stay healthy. Well, guess what: as part of our publicly funded system, we have support services like Meals on Wheels to deliver food to seniors' homes for a small fee. There are transportation services that bring seniors to appointments; companies like TransCare receive government money just to make sure seniors can get to and from appointments safely. These community support agencies aren't really medical, but they deal with the social and psychosocial aspect of our healthcare system.

Hospitals

Let's think about the local communities within Ontario for a second. What would be the difference between, let's say, the Toronto Central LHIN (Toronto) and the Northeast LHIN (Timiskaming, North Bay, etc.)? There are population age differences, lifestyle differences, population density differences, and much more. For example, think about Kirkland Lake, where the closest hospital is in New Liskeard, a two-hour drive away. Remember, the Canada Health Act requires that we provide equal and universal access to healthcare. When you start looking at the different geographies within our own province, there are very unique needs. Therefore, since the LHINs are responsible for the local planning of healthcare services, hospitals fall within their umbrella.

Home and Community Care

This Home and Community Care category used to be called the Community Care Access Centres (CCACs), which up until May 2017 worked separately from (but aligned with) the LHINs. For each of the 14 LHINs, there was a respective CCAC. In relation to the Patients First Act, 2016, the government did some restructuring and decided that it's a better use of taxpayers' money to get rid of that extra level and instead have the money flow directly from the LHINs into Home and Community Care. From a clinical and medical context, there are several services that the government will fund for home and community care. These services include:
- Nursing
- Physiotherapy
- Personal Support
- Home Support
- Occupational Therapy
- Social Work
- Speech Language Pathology
- Dietetics

Understanding Alternate Level of Care (ALC), i.e. 'Bed Blockers'

One of the biggest problems that Ontario hospitals are facing is that there are simply too many people coming! As hospitals continue to accept and admit patients, there's a physical, finite resource that gets used. That resource is beds. More than 14% of the people in Ontario's hospitals right now don't actually belong in the hospital. Have you heard of the term Alternate Level of Care (ALC), or 'bed blockers'?
These 14% of hospital beds are being 'blocked' by people that don't require acute care and are stable enough to be discharged back into the community, to their home, or someplace else like a long-term care home or retirement home. But lots of these people can't be discharged. They might not have supportive housing, or their family might not want to take care of them. Maybe they think government services won't be enough to support them, or maybe there's too long a waitlist at long-term care homes and they can't afford a retirement home. This creates an interesting paradigm that the hospitals and LHINs are faced with. How do we as a province reduce the number of ALC patients to free up beds for people requiring acute care?

Hospitals vs. Home Care – Which is More Expensive?

What do you think it costs to be in a hospital per day, on average? For your bed, your food, all of your care, the hospital's overhead costs, etc. The numbers can vary greatly depending on the patient's needs, but estimates range from $450 on the low end, to $842, and upwards of over $1,530. Now what do you think it costs for that same person to be on a homecare program, a program that requires public taxpaying dollars only for service delivery and not for food or overhead costs? It's about $42 – $45, but can be upwards of $150 or more if the patient is palliative or requires more specialized care. Taking the midrange of these estimates, it costs the public on average 8 times more in taxpayer dollars for someone to be in the hospital versus the comfort of their own home. Isn't that crazy to think about!? You don't have to be an economics major to make the decision: where would you rather have your money flow? Would you invest your money at $1,530 per day in the hospital, or at $150 in the home and community care sector? Looking at it from a personal lens as well, do you think people would rather recover at home, or would they rather be in the hospital? Probably at home.

What's happening now is the MOHLTC is increasing the amount of money that it's giving to the home and community care sector, providing an additional $100 million in 2017-2018 on top of the existing 5% budget increase since 2013. Hospitals, on the other hand, had a rate freeze for 4 years starting in 2012, but since the rate freeze ended they have seen annual budget increases of approximately 3.2%, with promises of future increases up to 4.6%.

Putting the Pieces Together

We have an aging population and a large number of seniors that are going to be entering our healthcare system soon, so it's important to find ways to make our healthcare system sustainable. The theory is, if we can reduce the number of ALC patients by moving them into home and community care, we can give them better access to services while also increasing the amount of funding available for long-term care homes, creating downstream capacity. It's an interesting problem that our province's healthcare system faces. Where are we going to continue to invest to get better outcomes? We're focusing on preventative medicine, proactive care, looking at the social determinants of health and the psychosocial aspect of health by providing non-clinical services, and trying to figure out how to move people from hospitals into a cheaper setting. This in turn should reduce the cost of our healthcare system and help create a sustainable future for healthcare within Ontario.
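As a rough check on the "8 times" figure, the arithmetic can be worked through with the numbers quoted above. The snippet below only restates the article's own estimates; treating $842 and the middle of the $45–$150 home care range as the "midrange" values is this illustration's assumption.

    // Per-day cost estimates quoted above.
    var hospitalEstimates = [450, 842, 1530];   // low, mid, high
    var homeCareLow = 45, homeCareHigh = 150;   // typical to specialized care

    // One plausible reading of "midrange" for each setting:
    var hospitalMid = hospitalEstimates[1];              // $842
    var homeCareMid = (homeCareLow + homeCareHigh) / 2;  // $97.50
    console.log(hospitalMid / homeCareMid);              // ≈ 8.6, i.e. roughly 8x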
https://www.closingthegap.ca/healthcare-in-ontario-how-does-it-work-and-how-is-it-funded/
Rationale: In June of 2004, Congress passed a law (the Child Nutrition and WIC Reauthorization Act of 2004) which requires school districts to develop a local wellness policy. Additionally, the State of Illinois passed its own similar, more comprehensive legislation. The objectives of both policies are to improve the school nutrition environment, promote student health and reduce childhood obesity.

The link between nutrition and learning is well documented. Healthy eating patterns are essential for students to achieve their full academic potential, full physical and mental growth, and lifelong health and well-being. Healthy eating is demonstrably linked to reduced risk for mortality and development of many chronic diseases. Schools and school communities have a responsibility to help students acquire the knowledge and skills necessary to establish and maintain lifelong healthy eating patterns. Well-planned and well-implemented wellness programs have been shown to positively influence children's health.

This plan supports the mission of Community Unit School District 95 as it promotes lifelong wellness behaviors and links health, nutrition and exercise to students' and staff's overall well-being, scholastic and professional performance, and overall readiness to learn.
https://www.lz95.org/departments/health_services/overview.aspx
Environment and Sustainability

Climate change is one of the greatest threats to our health and well-being. It has the potential to result in human suffering or loss of life due to extreme weather events, the spread of infectious diseases, food and water shortages, and an increased burden of disease. As one of the largest employers in Gateshead, we have a key part to play in reducing carbon emissions and supporting the local community to achieve a sustainable system.

What is sustainable development?

Sustainable development aims to ensure the basic needs and quality of life for everyone are met, both at present and for future generations. Its underlying guiding principles are:
- Ensuring a strong, healthy and just society
- Living within environmental limits
- Achieving a sustainable economy
- Promoting good governance
- Using sound science responsibly

The trust aims to integrate sustainable development into the work we undertake in the management and delivery of our healthcare services. This is set out in our Sustainable Development Management Plan and used in conjunction with our core business strategic plans. The plan demonstrates the actions required to reduce our carbon emissions in line with national targets.

Our Sustainable Development Management Plan

The key focus areas of our strategy include:
- Energy and Carbon Management
- Waste
- Transport and Green Travel
- Water
- Governance and commitment
- Organisational and Workforce Development
- Partnerships and networks

These key areas have formed our action plan, which is reviewed annually so we can continually monitor and review our progress in reducing emissions and meeting both national and internal targets. There is an organisational acknowledgement that we need to be seen as a 'good corporate citizen', and as a result we are constantly striving to improve and to look for innovative ideas that will help reduce our carbon emissions year on year. We work in numerous areas to improve sustainability. You can click on the links below to find out more about what we're doing in those areas:
https://www.qegateshead.nhs.uk/environment
Does Septum Surgery Change Your Voice?

Question: I have had difficulty breathing through my nose for many years. I went to an ENT who told me that my septum was deviated. She recommended I have it repaired with a surgery. I'm really concerned because I am a singer. Will the septum surgery affect my voice?

Answer: Septum surgery should not affect your voice negatively at all. To understand why, you need to understand where your voice comes from. Most singers understand that their vocal cords are the source of their sound. They understand that injuring your vocal cords will worsen the singing voice. The vocal cords are located in the neck, behind the Adam's apple. What few people understand is that the sound of the vocal cords alone does not have any lustre, tone, or color. Healthy cords are necessary for good sound because they are the source of the vibrations in the air. If the vibrations are irregular due to vocal cord damage, you will sound hoarse. But what really gives color and beauty to the voice are the structures above the vocal cords. These structures are called resonators and include the:
- Throat
- Mouth
- Space behind the nose (nasopharynx)
- Nose
- Sinuses

The throat and mouth combined are often referred to as the vocal tract. The shape of your individual resonators, especially your vocal tract, allows sound to bounce around in your head, adding tones and color to the vocal cord vibrations. This is why your voice changes when you have a bad cold and are congested. Your nose is stuffy, so the vibrations cannot bounce around in your nose, and you lose the color that your nasal space would normally add to the voice.

Septoplasty is the procedure that straightens the septum in the nose. The septum is a normal structure that divides the nose into two halves. It is often deviated to one side or the other. When the septum is straightened surgically, it allows air to pass more easily through the nose. As air passes through the nose, it resonates through the nose and sinuses. This adds nasal and sinus color and lustre to the voice. In singers with severe septal deviations, this results in voice improvement. However, in many singers there is no change in the voice, because enough air had been flowing through the nose prior to surgery to permit some degree of nasal resonance. More information on septoplasty can be found here.

Key Points:
- Septum surgery is done to straighten the septum in the nose to allow better airflow through the nose.
- Septoplasty will not worsen, and often can improve, the voice.
- Most often, the voice remains unchanged.

Read patient stories about Dr. Reena Gupta from The Division of Voice at the Osborne Head and Neck Institute. To learn more about Dr. Reena Gupta, click here.
https://www.ohniww.org/septoplasty-voice-los-angeles/
Why World Radio Day?

“In a world changing quickly, we must make the most of radio’s ability to connect people and societies, to share knowledge and information and to strengthen understanding. This World Radio Day is a moment to recognize the marvel of radio and to harness its power for the benefit of all,” said UNESCO Director-General Irina Bokova in her message on the occasion of the first World Radio Day.

On November 3, 2011, during its 36th General Conference, UNESCO recognized the “transformational power of radio” by establishing World Radio Day on 13 February, the day United Nations Radio was launched in 1946. The initial idea came from the Spanish Academy of Radio and was formally presented by the Permanent Delegation of Spain to UNESCO at the 187th session of the Executive Board in September 2011.

Since the first broadcasts over 100 years ago, radio has proven to be a powerful information source for mobilizing social change and a central point of community life. It is the mass medium that reaches the widest audience in the world. In an era of new technologies, it remains the world's most accessible platform, a powerful communication tool and a low-cost medium.

Radio technology, which began as "wireless telegraphy," owes its development to two other inventions, the telegraph and the telephone. From the first successful radio transmissions at the end of the 19th century to this day, radio has remained as important a means of communication as ever. With the advent of new technologies and media convergence, radio is being transformed and is moving onto new delivery platforms, such as broadband internet, mobiles and tablets. In the digital era, radio continues to be relevant, as people digitally tune in via computers, satellite radio and mobile devices.

Radio is especially suited to reaching remote and marginalized communities, while simultaneously offering a platform for information sharing and promoting public debate. It plays an important role in emergency communication and disaster relief, and is also one of the most important ways to widen access to knowledge, promote freedom of expression and encourage mutual respect and multicultural understanding.

World Radio Day aims to raise awareness about the importance of radio, to encourage decision makers to provide access to information through radio and to improve networking and international cooperation among broadcasters. The resolution is being submitted to the United Nations General Assembly, at its 67th session in September 2012, for endorsement.
http://www.yf1ar.com/2012/02/why-world-radio-day.html
Starred packages, sorted by fewest downloads first:

- termination - A terminal package for Atom, complete with themes, API and more. Fork of platformio-ide-terminal. #termination #platformio #terminal-plus #terminal #term (bus-stop; 74,727 downloads, 47 stars)
- atom-jinja2 - Syntax highlighting for jinja2 templates. (danielchatfield; 84,662 downloads, 110 stars)
- ide-yaml - Atom-IDE support for YAML language. #atom-ide #yaml #yaml-parsing #ide #language-server-protocol (liuderchi; 124,593 downloads, 51 stars)
- language-groovy - Groovy language support in Atom. (Jakehp; 168,847 downloads, 138 stars)
- multi-cursor - Atom package to expand your current cursor. (joseramonc; 190,554 downloads, 334 stars)
- language-terraform - Terraform.io support for Atom. #language #grammar #terraform #terraform-0-12 #configuration (cmur2; 237,395 downloads, 118 stars)
- language-docker - Dockerfile syntax highlighting. (jagregory; 320,422 downloads, 561 stars)
- merge-conflicts - Resolve git conflicts within Atom. (smashwilson; 762,564 downloads, 2,201 stars)
- atom-ide-ui - A collection of Atom UIs to support language services.
https://atom.io/users/george-kirillov/stars?direction=asc&sort=downloads
News: The Government of India has given due cognizance to the problem of Antimicrobial Resistance (AMR). The Ministry of Health and Family Welfare (MoHFW) has initiated various activities for the containment of AMR.

Steps taken by the government to prevent and control AMR:
- The National Action Plan for containment of Antimicrobial Resistance (NAP-AMR) was launched on 19th April, 2017, involving stakeholders from various ministries/sectors.
- The National Programme on Containment of AMR was initiated during the 12th Five Year Plan. The National Centre for Disease Control (NCDC) coordinates this programme. Under the programme, a national AMR surveillance network of state medical college labs (NARS-Net) has been established in order to generate quality data on AMR for seven priority bacterial pathogens of public health importance, using WHONET software.
- National Guidelines on Infection Prevention and Control in Healthcare Facilities were released in January 2020. These guidelines have been shared with various stakeholders across the country to be used in training modules for country-wide trainings in a systematic manner.
- Under the programme, NCDC conducts AMR surveillance through a network of 30 state medical college laboratories in 25 states. The network is being expanded across the country in a phased manner.
- The Indian Council of Medical Research (ICMR) coordinates another AMR surveillance network of 20 laboratories located in tertiary care centres (both public and private) in the country.
- Antimicrobial stewardship (AMSP) activities: in order to promote rational use of antibiotics among healthcare providers, a series of sensitization and training workshops have been organized in different healthcare facilities in the country for the benefit of practicing clinicians. Standard treatment guidelines developed by NCDC for rational use of antibiotics have been made available to clinicians across the country.
- To create awareness among the public about AMR, various IEC activities, such as quiz competitions in schools, participation in the Perfect Health Mela, and poster and quiz competitions for healthcare workers at NCDC and the NARS-Net sites during World Antibiotic Awareness Week, have been conducted each year. IEC material (audio/video/print/OD media) to raise awareness about AMR and to prevent misuse of antibiotics has been made available on the NCDC website for use by State/UT governments and other stakeholders.

What is Antimicrobial Resistance?
- Antimicrobial Resistance happens when microorganisms such as bacteria, fungi, viruses, and parasites change when they are exposed to antimicrobial drugs such as antibiotics, antifungals, antivirals, and antimalarials.
- Microorganisms that develop antimicrobial resistance are sometimes referred to as "superbugs".
- With 7 lakh people losing the battle to AMR per year, and another 10 million projected to die from it by 2050, AMR alone is killing more people than cancer and road traffic accidents combined.
- Economic projections suggest that by 2050, AMR would decrease Gross Domestic Product (GDP) by 2-3.5%, with a fall in livestock of 3% to 8%, costing USD 100 trillion to the entire globe.
- India is facing the challenge of combating diseases like tuberculosis, cholera and malaria, which are becoming more and more drug-resistant; on the other hand, the emergence of newer multi-drug resistant organisms poses newer diagnostic and therapeutic challenges.
- Low awareness of infectious diseases and inaccessibility of healthcare often prevent people from seeking medical help.
https://dics.co/antimicrobial-resistance/
As Director of Sales & Marketing, I developed the strategic direction of the Sales and Marketing Department, led brainstorming for idea creation, developed internal and external partnerships, had general oversight of the tactical sales and marketing team, and controlled the process to achieve overall company objectives. I reported to the General Manager/Owner.

As a Product Manager, I conducted FGDs (focus group discussions) and other quantitative research to gather customer insight, developed marketing, promotion and customer retention programs, identified customer behaviour and the factors influencing satisfaction, created a loyalty program, oversaw distribution and logistics, created a retailer program to help retailers increase their sales, created a brand-building program aimed at parents and children as influencers, and developed partnerships with pediatricians, institutions like IDAI, researchers, agencies, event organizers, outlets and distributors.

As Customer Satisfaction Program Supervisor, I refocused Isuzu's CS Program, developed the CS Program concept and implemented it in Isuzu's branches based on customer preferences.

As a Team Leader, I coordinated my team to achieve the project goal by motivating and coaching them to work hard, work smart and work with heart.
http://pmsbe.ac.id/eng/details.php?lang=en&id=Fe44
Q: Finding the minimum number of swaps to convert one string to another, where the strings may have repeated characters

I was looking through a programming question when the following question suddenly seemed related. How do you convert one string to another using as few swaps as possible? The strings are guaranteed to be interconvertible (they have the same multiset of characters; this is given), but the characters can be repeated. I saw web results on the same question, though without the characters being repeated. Any two characters in the string can be swapped.

For instance: "aabbccdd" can be converted to "ddbbccaa" in two swaps, and "abcc" can be converted to "accb" in one swap.

Thanks!

A: This is an expanded and corrected version of Subhasis's answer.

Formally, the problem is, given an n-letter alphabet V and two m-letter words, x and y, for which there exists a permutation p such that p(x) = y, determine the least number of swaps (permutations that fix all but two elements) whose composition q satisfies q(x) = y. Assuming that m-letter words are maps from the set {1, ..., m} to V and that p and q are permutations on {1, ..., m}, the action p(x) is defined as the composition p followed by x.

The least number of swaps whose composition is p can be expressed in terms of the cycle decomposition of p. When j1, ..., jk are pairwise distinct in {1, ..., m}, the cycle (j1 ... jk) is a permutation that maps ji to ji+1 for i in {1, ..., k - 1}, maps jk to j1, and maps every other element to itself. The permutation p is the composition of every distinct cycle (j p(j) p(p(j)) ... j'), where j is arbitrary and p(j') = j. The order of composition does not matter, since each element appears in exactly one of the composed cycles. A k-element cycle (j1 ... jk) can be written as the product (j1 jk) (j1 jk-1) ... (j1 j2) of k - 1 swaps (transpositions). In general, every permutation can be written as a composition of m - c swaps, where c is the number of cycles comprising its cycle decomposition (counting fixed points as one-element cycles). A straightforward induction proof shows that this is optimal.

Now we get to the heart of Subhasis's answer. Instances of the asker's problem correspond one-to-one with Eulerian (for every vertex, in-degree equals out-degree) digraphs G with vertices V and m arcs labeled 1, ..., m. For j in {1, ..., m}, the arc labeled j goes from y(j) to x(j). The problem in terms of G is to determine the greatest number of parts a partition of the arcs of G into directed cycles can have. (Since G is Eulerian, such a partition always exists.) This is because the permutations q such that q(x) = y are in one-to-one correspondence with the partitions, as follows: for each cycle (j1 ... jk) of q, there is a part whose directed cycle is composed of the arcs labeled j1, ..., jk.

The problem with Subhasis's NP-hardness reduction is that arc-disjoint cycle packing on Eulerian digraphs is a special case of arc-disjoint cycle packing on general digraphs, so an NP-hardness result for the latter has no direct implications for the complexity status of the former. In very recent work (see the citation below), however, it has been shown that, indeed, even the Eulerian special case is NP-hard. Thus, by the correspondence above, the asker's problem is as well.

As Subhasis hints, this problem can be solved in polynomial time when n, the size of the alphabet, is fixed (fixed-parameter tractable). Since there are O(n!) distinguishable cycles when the arcs are unlabeled, we can use dynamic programming on a state space of size O(m^n), the number of distinguishable subgraphs.
In practice, that might be sufficient for (let's say) a binary alphabet, but if I were to try to solve this problem exactly on instances with large alphabets, then I likely would try branch and bound, obtaining bounds by using linear programming with column generation to pack cycles fractionally.

@article{DBLP:journals/corr/GutinJSW14,
  author = {Gregory Gutin and Mark Jones and Bin Sheng and Magnus Wahlstr{\"o}m},
  title = {Parameterized Directed \$k\$-Chinese Postman Problem and \$k\$ Arc-Disjoint Cycles Problem on Euler Digraphs},
  journal = {CoRR},
  volume = {abs/1402.2137},
  year = {2014},
  ee = {http://arxiv.org/abs/1402.2137},
  bibsource = {DBLP, http://dblp.uni-trier.de}
}

A: You can construct the "difference" strings S and S', i.e. strings which contain the characters at the differing positions of the two strings, e.g. for acbacb and abcabc they will be cbcb and bcbc. Let us say S contains n characters.

You can now construct a "permutation graph" G which will have n nodes and an edge from i to j if S[i] == S'[j]. In the case of all unique characters, it is easy to see that the required number of swaps will be (n - number of cycles in G), which can be found in O(n) time.

However, in the case where there are any number of duplicate characters, this reduces to the problem of finding the largest number of cycles in a directed graph, which, I think, is NP-hard (e.g. check out: http://www.math.ucsd.edu/~jverstra/dcig.pdf). In that paper a few greedy algorithms are pointed out, one of which is particularly simple:
- At each step, find the minimum-length cycle in the graph (e.g. Find cycle of shortest length in a directed graph with positive weights)
- Delete it
- Repeat until all vertices have been covered.

However, there may be efficient algorithms utilizing the properties of your case (the only one I can think of is that your graphs will be K-partite, where K is the number of unique characters in S). Good luck!

Edit: Please refer to David's answer for a fuller and correct explanation of the problem.
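To make the tractable special case concrete (all mismatched characters pairwise distinct, so the answer is n minus the number of cycles, as both answers state), here is a minimal Python sketch. It is an illustration, not code from either answer: the helper name min_swaps_distinct and the dictionary-based construction of the permutation are my own, and it assumes the inputs are anagrams of each other.

    # Minimal sketch of the cycle-counting formula for the special case where
    # the characters at mismatched positions are pairwise distinct.
    # min_swaps_distinct is a hypothetical helper, not from the answers above.
    def min_swaps_distinct(x: str, y: str) -> int:
        assert sorted(x) == sorted(y), "inputs must be anagrams"
        # Keep only the positions where the strings disagree.
        pairs = [(a, b) for a, b in zip(x, y) if a != b]
        s = [a for a, _ in pairs]  # difference string S
        t = [b for _, b in pairs]  # difference string S'
        # With distinct characters, "where must s[i] go" defines a permutation.
        target = {c: j for j, c in enumerate(t)}
        perm = [target[c] for c in s]
        # Swaps needed = n - (number of cycles in the permutation).
        seen = [False] * len(perm)
        cycles = 0
        for i in range(len(perm)):
            if not seen[i]:
                cycles += 1
                j = i
                while not seen[j]:
                    seen[j] = True
                    j = perm[j]
        return len(perm) - cycles

    print(min_swaps_distinct("abcc", "accb"))  # 1, matching the question's example
    print(min_swaps_distinct("abcd", "badc"))  # 2 (two disjoint 2-cycles)

With repeated characters there are many valid ways to build perm from the duplicate positions, and, per the answers above, choosing the one that maximizes the cycle count is exactly the NP-hard arc-disjoint cycle packing problem.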
“Jeff Soto is a painter, illustrator and muralist who has exhibited in galleries and museums around the world. As a youth, he simultaneously discovered both traditional painting and illegal graffiti – and, ever since, both worlds have informed his work. The artist’s distinct color palette, subject matter and technique resonate with a growing audience and bridge the gap between pop surrealism and street art. Inspired by youthful nostalgia, nature, and popular culture, his bold, representational work is simultaneously accessible and stimulating. He lives and works in California with his wife and two daughters.”

Murals:
- Empellón, at Empellón Taqueria, New York, NY: This whimsical mural embellishing the wall of Empellón Taqueria in New York City's West Village was created by Jeff Soto ...
- The LaBrea Owl: a massive mural painting found on La Brea Avenue. This vibrant artwork was created by Jeff Soto us ...
https://www.wescover.com/creator/jeff-soto
No, it’s not the latest reality TV show, it’s the latest EPI Chapter meeting topic! In a joint meeting with Club Entrepreneur, we’ll be going “behind the scenes” to talk about what really happens during an acquisition. We’ll be convening a panel of business owners and experts, some of whom have experienced both successful and unsuccessful acquisitions during their careers, to talk about the hands-on reality of buying another company. Many mergers and acquisitions are not successful, and we’ll be sharing a checklist of factors you’ll need to consider, including rationale, budgets, culture, staffing and more. For a real-world look into this issue, join Club E and moderator Julie Keyes, President of the Twin Cities EPI Chapter.

Learning Objectives:
- Learn how to vet possible acquisition candidates, and not just from the numbers side of the equation.
- Identify the main advisers when a business owner is in pursuit of buying another company, and the roles they play.
- Identify the obstacles that could derail a successful M&A integration, and the key components of a successful integration.
- Learn the due diligence process of a buy-side transaction.

Meet the Presenters: Hear from a panel of experts including John Long, Pat Rowan, Eddie Eames, Dean Willer and Jeff Porubcan, moderated by Club E and Julie Keyes.

Thank you to our sponsoring firms. This meeting will be held at the Minneapolis Club.
http://exit-planning-institute.org/events/event/epi-chapter-event-twin-cities-sep-real-acquisitions/
Officials consider adjustments to airport security after passengers and pilots complain about rigorous checkpoints even after background checks

Background checks on pilots, opinionated passengers, and even the risk of touching children inappropriately aren't keeping airport security personnel from carrying out rigorous body patdowns, but officials are willing to consider adjustments to the controversial new checkpoint procedures.

By Jeremy Pelofsky, Reuters | November 16, 2010

Transportation Security Administration manager Anthony Crimi (l.) demonstrates a full-body imaging machine on Oct. 7, in St. Louis. Airports across the country have provoked the ire of passengers and pilots, who say they have already gone through exhaustive background checks, with the use of intrusive patdowns on those who refuse the full-body imaging machine.

ARLINGTON, Virginia — Homeland security officials on Monday defended heightened airport security screening measures but said they would consider adjustments to new rigorous patdowns after complaints from travelers.

With the busy holiday travel season about to begin, U.S. Homeland Security Secretary Janet Napolitano made it clear that new full-body scan checks would become routine as hundreds of the machines are installed at U.S. airports, and that the alternative would be physical patdowns.

"If there are adjustments we need to make to these procedures as we move forward, we have an open ear; we will listen," she told reporters during a news conference at Ronald Reagan Washington National Airport. "This is all being done as a process to make sure the traveling public is safe," she said, adding that the scans did not pose health risks and that privacy safeguards have been adopted to prevent the images from being saved or transmitted.

There are almost 400 body scan machines in some 68 U.S. airports; some airports still use only metal detectors. Those who opt out of a body scan are subject to a patdown, which the Transportation Security Administration has made more rigorous in recent weeks and which has provoked the backlash.

The DHS and its TSA have been scrambling to address a public backlash against the new security measures, including a call to boycott body scans on one of the busiest travel days, the day before Thanksgiving. "I really regret that," Napolitano said of the proposed boycott. "Our evaluation of the intelligence and risk indicated that we needed to move more quickly into the non-metal environment, to get liquids and powders and gels off of aircraft."

TSA rushed deployment of body scanners after a foiled plot by a Nigerian man who tried to detonate explosives hidden in his underwear aboard a U.S. flight from Amsterdam to Detroit. Last month authorities discovered explosives hidden in two packages aboard cargo flights to the United States.

To stem the concerns, Napolitano and TSA Administrator John Pistole met last week with travel industry executives who have expressed worries that Americans will cancel their trips and thus hurt the fragile economy, which is still trying to recover from a recession.

Already the TSA has given a little ground after the flood of complaints, announcing that it has eliminated patdowns for children under 12 and will develop alternative procedures for pilots who are already subject to extensive security checks. "We've heard the concerns that have been expressed and agree that children under 12 should not receive that pat-down," Pistole said on NBC's "Today Show".
TSA had been reviewing the issue, and Reuters reported last week on a father upset after his 8-year-old son was subjected to a patdown. TSA also is experimenting with alternative checks for pilots after their unions raised concerns about health risks of the scanners and objected to rigorous patdowns. DHS has said the scans involve less radiation than people receive otherwise on a daily basis. Pilots' unions have said they already have gone through security background checks and have access to the cockpit, making further screening duplicative. Napolitano said she expected more details to resolve that issue soon.
Emma Quayle joined The Age as a cadet journalist in 1999 and has been covering football since 2001. She has won awards from the Australian Football Media Association and AFL Players Association for her feature writing, and specialised for many years in covering junior football and the AFL draft. Emma's two books - The Draft and Nine Lives (the story of former Essendon wingman Adam Ramanauskas' battle with cancer) - were published in 2008 and 2010.

- Clubs can take a rest from faking it: AFL clubs are no longer compelled to invent injuries for players they simply choose to rest because of new injury-allowance provisions that legitimise the growing desire of clubs to rotate players...
- Rest may be best in long season: clubs (AFL): Geelong won its first 13 games in 2011, enough to not only entrench it in the top eight by the midway mark of the season, but to allow the club to move players in and out of the side so that, come...
- Dees to take a punt on Hogan call-up (Football): Melbourne will call the AFL and at least ask the question: Is there any chance Jesse Hogan could play senior football this year?
- Demons give Bail time for full recovery (Football): Rohan Bail must pass a series of concussion checks before Melbourne will clear him.
- Watson to kick off season (AFL): Jobe Watson will make his first appearance of the pre-season in Essendon's match against Richmond at Wangaratta on Saturday, provided his right knee stands up on the training track.
- Clubs turn to doctors for more control: For the first time, AFL clubs will soon start hiring their own full-time doctors.
- Roos' future looking bright as another young gun extends deal: Third North Melbourne youngster backs the club's future.
- Dees ready for the heat of battle: Melbourne to use heat management techniques it tested at a pre-season camp.
- Ex-Bombers better informed on probe: Essendon has comforted the former players who will be interviewed by ASADA.
- Coming of age: Majak Daw has learnt some hard lessons and turned things around.
- Daw gets his chance to shine: Majak Daw will finally play his first senior game in North Melbourne colours on Friday.
- Dees winners? We'll know in about 2015: It's impossible to tell what Melbourne really did win by losing, though.
- Bombers to honour Lloyd and Lucas: Essendon will start its season by glancing to a less-complicated past.
- Scandal a 'wake-up call' for players: Scott Lucas says probe should make players more conscious of what they put into their bodies.
- As Goodes as it gets: Brett Goodes knew he was making a good decision, the best and most responsible one. But it was hard not to think about what he was giving up.
- Swan on target with knee injury: Adam Goodes expects his return from a knee injury to speed up in the next few weeks.
- Brett finds a Dog's life is all Goodes: The rookie is enjoying his new role being one of the boys, not looking after them.
- Hird proud of Dons (Football, with Michael Gleeson): After 10 days of controversy and distraction, Essendon coach James Hird said it was a relief to get his side back playing football and expressed his pride in the players for the way they have handled...
- Hawk passes for a Magpie (with Michael Gleeson): Ex-Hawk Clinton Young moved well on a wing, starting on Shaun Higgins.
- Bombers keep status quo (Football, with Michael Gleeson): The Bombers didn't appear to have implemented any major positional shifts.
http://www.theage.com.au/afl/by/emma-quayle?offset=880
Manus Noble was born in London in 1988. He studied the guitar from the age of seven and was noticed early as an exceptional talent. He studied privately with Craig Ogden for two years before continuing his studies with Gary Ryan at the Royal College of Music in 2006. In 2010 he graduated with first-class honours and was awarded a scholarship to do his Masters in Performance at the Royal Academy of Music. He was also given a substantial Performance Award by the Musicians Benevolent Fund and was accepted onto the Park Lane Group Young Artist's Concert Series at the Purcell Room, London Southbank Centre.

He was awarded first prize in the Ivor Mairants Guitar Competition in 2011, and first prize in the Guitar Competition at the Royal College of Music before going on to the String Player of the Year Competition. He also received the award for 'most outstanding musician' from Magdalen College, Oxford.

Manus Noble gave his debut performance at Cadogan Hall, London at the age of 19. He has performed throughout the UK and abroad, and has had masterclasses with Charles Ramirez, Gary Ryan, Carlos Bonnell, Allan Neave, Eduardo Catemario, Mark Eden, Craig Ogden, Mark Ashford, Andrew York, Chris Stell, Sharon Isbin, Jason Vieaux, David Russell and many more.
https://www.westsussexguitar.org/performers/opus245/
Here are the best new folk songs of 2019. Featuring indie folk, folk pop, and acoustic songs, 2019 folk music has everything you want to hear. What are the best folk songs of the year? Help decide below, and see which songs topped the best-of-2018 folk music list. With so much talent on the folk scene, many songs stood out in 2019. This list is a comprehensive catalog of the year's folk releases. From the newest folk songs topping the charts to more obscure tracks from lesser-known artists, browse the list below if you're in need of new music. Vote your favorite songs to the top of the list, and feel free to add anything you think is missing!
https://www.ranker.com/list/best-folk-songs-2019/ranker-music
Make the object into a block. You will then be able to adjust the scale in X, Y or Z independently by selecting the block, right-clicking and selecting Properties. In the Geometry section of the Properties palette you will have options to change the scale for X, Y and Z.

If you set it as a dynamic block and set it to stretch horizontally and vertically, could you still adjust it to a scale, or is it just freehand?

If your dynamic block is set up with stretch parameters, there's no need to be messing with the X and Y scales in the Properties palette. In fact, if you do change those properties, you will lose your dynamic functionality. You need to leave all of those values at 1.0 and adjust the block with your dynamic grips.

Yeah, but with the dynamic block grips can you enter a scale factor like yiyan wanted, or is it just up to you to pull the grips to a desired spot?

Thanks very much for your answer. What if I want to scale the whole drawing horizontally?

Can you post the drawing or give a little more information about it and what exactly you are trying to achieve? There are many problems that could arise from trying to stretch an entire drawing, depending on how complex it is, whether you have xrefs or images attached, and whether the drawing contains blocks, text, dimensions, etc. I can think of a lot of reasons why you would not want to do this. If your drawing is simple, it could be possible by making the entire drawing a block and adjusting the X scale in the Properties palette, but more information is needed.

If your dynamic block has stretch parameters applied to it, select the block, right-click and select Properties. In the "Custom" section of the Properties palette you will have "Distance" and "Distance1" for your horizontal and vertical dimensions. You can type in whatever dimension you want those to be.
Awesome... dynamic blocks are really something else!
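As an aside for anyone wanting to script this instead of clicking through the palette: the X, Y and Z scales the posts above describe are exposed programmatically as well. Below is a minimal sketch, assuming AutoCAD is running with a drawing open and the third-party pyautocad COM wrapper is installed (pip install pyautocad); the scale value 2.0 is just an illustrative choice, not anything from the thread.

    # Minimal sketch, assuming a running AutoCAD session and pyautocad.
    # XScaleFactor / YScaleFactor / ZScaleFactor are the ActiveX properties
    # behind the Scale X / Y / Z fields of the Properties palette.
    from pyautocad import Autocad

    acad = Autocad()  # attach to the running AutoCAD instance

    # Walk every block reference in the drawing and stretch it horizontally,
    # the scripted equivalent of editing Scale X in the Properties palette.
    for block_ref in acad.iter_objects("BlockReference"):
        block_ref.XScaleFactor = 2.0  # X scale only; Y and Z stay unchanged

As the posts above warn, skip this for dynamic blocks with stretch parameters: leave their scales at 1.0 and drive them through the "Distance"/"Distance1" custom properties or their grips instead.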
Understanding Arthritis in the Elderly

With the aging of the baby boomers, the population of adults over 65 years old is expected to increase to 22% by the year 2050.1 In addition to other chronic conditions - such as diabetes, hypertension, and heart disease - the elderly are more likely than other segments of the population to suffer from rheumatoid arthritis (RA) and osteoarthritis (OA). Arthritis can result in chronic pain (common in older people) and can lead to depression and sleep disturbances, as well as increased health care costs.2

Managing arthritis, or managing chronic pain in general - including choosing an appropriate therapy regimen in the elderly - can be complicated due to many factors. Among these factors are multiple drugs, multiple diseases, potential drug interactions, a decrease in cognitive function, and altered pharmacokinetics.3 All of these factors lead to challenges in achieving good therapeutic outcomes (Table 1).

Rheumatoid Arthritis

RA, a chronic autoimmune disease, is a systemic inflammatory condition that causes joint destruction, pain, swelling, and stiffness.4 The progressive deterioration of the joints can lead to permanent damage and deformity and is a common cause of disability.5 Yet the underlying cause of this autoimmune disorder is unknown.

The cardinal symptoms of RA usually begin between the ages of 25 and 50.1 In addition to swelling and stiffness, they include bilateral pain - for example, in the feet, hands, and wrists initially. Pain also can develop in other areas, such as the hips, knees, shoulders, and neck. Because RA affects the body as a whole, other symptoms - such as fever, weight loss, fatigue, and loss of appetite - may be present.6 The diagnosis of RA comprises a physical examination along with a blood test for the presence of rheumatoid factor (an antibody found in patients with RA); the presence of C-reactive protein (responsible for some inflammatory disorders); and an elevated erythrocyte sedimentation rate.

Osteoarthritis

OA, a common form of joint disease, generally afflicts persons over the age of 60. It often is associated with pain, limitation of motion, and disability.7 OA most commonly affects weight-bearing joints and usually is associated with the deterioration or breakdown of the joint.8 Unlike RA, there is no specific diagnostic test for OA. The usual clinical presentation is pain initially when the joint is used; later, pain may occur at rest. Prolonged activity may aggravate the condition, and rest may ease the pain.9 The goal of therapy is to reduce pain and its impact on the patient and his or her quality of life.7

General Management of Arthritis

The goal of treatment in arthritis is generally achieved through a combination of pharmacologic and nonpharmacologic therapy - aimed essentially at minimizing pain and its impact on a patient's daily function and quality of life.7 Nonpharmacologic approaches consist of patient and caregiver education, exercise, weight control, and thermal modalities (heat or cold applications). Effective pain management is mostly achieved by way of analgesic drugs (nonsteroidal anti-inflammatory drugs [NSAIDs] and acetaminophen); invasive techniques (corticosteroid injections); and even opioids.3 The use of opioids in the elderly, however, remains a controversial subject due to the fear of addiction or illicit drug use.
Nevertheless, the recent guideline from the American Pain Society sanctions the use of opioids in instances of severe arthritis pain.10

Management of Rheumatoid Arthritis

Pharmacologic treatment of RA consists of NSAIDs, with emphasis on the new cyclooxygenase-2 inhibitors,5 which control inflammation with less risk of gastrointestinal (GI) toxicity. Oral and intra-articular corticosteroids have been used with success, but the side-effect profile must be considered, especially in the elderly. Also used in RA are the disease-modifying antirheumatic drugs (DMARDs), which have been widely utilized to control disease progression.4 Some of the early agents include hydroxychloroquine, sulfasalazine, and methotrexate, traditionally the "gold standard"11 and first-line therapy due to its once-weekly dosing and efficacy. The new-generation DMARDs exert their effect by antagonism of the tumor necrosis factor-alpha that is involved in the inflammatory process leading to RA. These newly developed agents include infliximab, etanercept, and adalimumab (Table 2). In addition, anakinra (Kineret), an anti-inflammatory cytokine, is available for once-daily dosing via subcutaneous injection.

Management of Osteoarthritis

Pharmacologic therapy in OA consists of acetaminophen, NSAIDs, and often opioids to treat moderate-to-severe pain.7 The use of NSAIDs in the elderly can result in GI adverse effects. With the use of histamine-2 receptor antagonists, proton pump inhibitors, or misoprostol, these negative symptoms can be alleviated.12,13 Nevertheless, acetaminophen in adequate doses can be a safer alternative. Another option for pharmacists to recommend would be topical therapy with capsaicin, which inhibits substance P, a pain mediator. Patients should be counseled that optimal results with capsaicin are achieved when it is used on a regular basis.

The Pharmacist's Role

There are many ways that pharmacists can ensure that older patients understand and adhere to their drug regimen, thus achieving optimal outcomes. Counseling patients on the names of their drugs, the reasons they are prescribed, how they are to be taken, and their side effects can increase awareness. Yet special considerations should be taken into account, and therapy should be tailored to a patient's individual needs (Table 3). Pharmacists, as integral members of the health care team, are most qualified to educate the elderly on how best to manage pain with minimum adverse effects and toxicity.
https://www.pharmacytimes.com/view/2003-12-7559
This website (www.conceptowl.com) is owned by RankSurge Learning Pvt. Ltd., hereafter referred to as 'ConceptOwl'. For the purpose of this Policy, ConceptOwl is committed to ensuring that any "information" you are asked to provide, which serves as identification when using this website, will only be used in accordance with this privacy statement. When this Policy uses the generic term "information", it is intended to address the general use of information, and not specific course information.

By using ConceptOwl, its products and services, you consent to the collection, use and disclosure of your personally identifiable information, as applicable, in accordance with this Policy. ConceptOwl may change its Policy by updating this page. You should check this page to ensure that changes, if any, are acceptable to you.

We collect information when you register on our site or fill out a form. When registering on our site or requesting any product or information, you may be asked to enter the following information, which is used to complete your request:

You may, however, visit our site anonymously. We do not collect personal information from our visitors without the visitor providing us with this information, as set forth in this Policy.

Apart from this, ConceptOwl may also automatically collect and analyze information about your general usage of the website. We might track your usage patterns to see what features of the website, services and courses you commonly use, site traffic volume, frequency of visits, type and time of transactions, type of browser, browser language, IP address and operating system, and statistical information about how you use the Services and Courses. We only collect, track and analyze such site information in an aggregate manner that does not personally identify you. This aggregate data may be used to assist us in operating the website and the services, and may be provided to other third parties to enable them to better understand the operation of the services and improve their course offerings, but such information will be in aggregate form only and will not contain personally identifiable data.

We require the information we collect to understand your needs and provide you with a better service. Any information that we collect from you may be required for and used in the following:

We are committed to ensuring that your information is secure. We implement a variety of security measures to maintain the safety of your personal information. In order to prevent unauthorized access or disclosure, we have put in place suitable physical, electronic and managerial procedures to safeguard and secure the information we collect online.

We will share your personally identifiable information with third parties only in the ways that are described in this Policy, such as sharing information with service providers to allow them to fulfill your requests. We do not sell your personal information to third parties. We are in no way responsible for the protection and privacy of any information which you provide whilst visiting other websites of interest linked from our website; such websites are not governed by this Policy.

You may choose to restrict the collection or use of your personal information in the following ways:

We will not sell, distribute or lease your personal information to third parties unless we have your permission or are required by law to do so.
We may use your personal information to send you promotional information about third parties which we think you may find interesting, if you tell us that you wish this to happen. If you believe that any information we are holding on you is incorrect or incomplete, please contact us as soon as possible at the above address. We will promptly correct any information found to be incorrect. By using our site, you consent to our Policy.

If there are any questions regarding this Policy, you may contact us using the information below: RankSurge Learning Private Ltd, No. 1502, 19th Main, HSR Layout, Bengaluru 560102, Karnataka. If you have any grievance or complaint, please reach out to our Grievance Officer, Saranya K.
https://conceptowl.com/privacy
Are you ready for the new whistleblower protection laws?

Effective 1 July 2019, whistleblowers have greater protection against harassment and victimisation, with more organisations, across all sectors, captured under the regime. While the requirement to have a compliant whistleblower policy in place takes effect on 1 January 2020, it is important to note that the other provisions relating to the protection of whistleblowers commence immediately. What this means is that your organisation is at risk of exposure until a revised or new whistleblower policy is in place. We therefore recommend that organisations commence development or review of their policy and supporting processes as soon as possible. The following information seeks to provide you with further clarity on the new whistleblower protection laws, what organisations need to consider and how we can help you comply with the new regime.

Expanded protections

The Whistleblower Act expands the existing whistleblower framework by:
- extending the group of people who can make disclosures and be eligible for protection
- broadening the types of wrongdoing that can be the subject of a disclosure
- expanding who can receive a whistleblower's disclosure
- allowing anonymous disclosures
- strengthening immunities for whistleblowers
- extending non-compliance or misconduct to include both corporate and taxation laws

Impacted organisations

Now impacting all sectors (including the corporate, financial and credit sectors), organisations that fit the following profiles will be required to implement a whistleblowing policy in order to comply with the new legislation:
1. Public companies (including companies limited by guarantee)
2. Proprietary companies that are the trustee of a registrable superannuation entity
3. Large proprietary companies

Action required

Organisations that are new to the regime, as well as those previously impacted by it, will need to have a compliant whistleblower policy and framework in place before 1 January 2020. In addition, ASIC has released draft guidance on the new obligation to implement a whistleblower policy. The proposed ASIC Regulatory Guide 000 Whistleblower policies, published on 7 August 2019, explains how companies can establish, implement and maintain a whistleblower policy (the proposed ASIC Guidance). Submissions on the proposed ASIC Guidance are due on 18 September 2019. Therefore:
- if your organisation currently has a whistleblower policy and framework in place, it will need to be reviewed and may need to be amended to ensure it is compliant with the new regime.
- if your organisation is subject to the legislation for the first time, you will need to develop and adopt a compliant whistleblower policy and framework.

Impacted organisations will also need to ensure that appropriate training has been provided to employees, managers and key personnel involved in implementing the whistleblower policy.

Ash St. Can Help You

Have a read of our Whistleblower Essentials Packages, developed by our senior legal, compliance and regulatory experts and designed to ensure you have a compliant policy in place, as well as some key supporting tools to assist with implementing key components of an effective whistleblower framework. Alternatively, you can purchase our comprehensive whistleblower policy precedent (which incorporates guidance issued by ASIC and high-level tailoring for your organisation) for $2,500 plus GST. Reach out to Catherine Tomic on +61 414 088 165 to get started!
We also have training solutions available for:
- employees – refer to our Strategic Partners and leaders in Governance, Risk and Compliance, GRC Solutions, for more information; and
- management and key persons involved in managing and implementing the whistleblower policy – speak to Catherine Tomic on +61 414 088 165.
https://ashstreet.com.au/campaign/whistleblower/