When a patient who suffers from any level of HIV related cognitive impairment reports to the therapist that he or she is feeling increased levels of anxiety and/or depression, it is appropriate for the therapist to reflect back to the patient that his or her current mental and cognitive limitations may be contributing to the emotional distress and an overall sense of feeling overwhelmed. Needing to cease working is often a traumatic loss for the person with AIDS and brings with it a corresponding loss in self-esteem and self-definition as a professional. Exploring options where the patient may be able to feel useful in a safe environment, such as volunteering at a local AIDS service organization or a self-help group for people with HIV or AIDS, can be a useful intervention.
Providing structure and a familiar environment will facilitate greater independence in activities of daily living than novel and ambiguous situations. Whenever possible, demented patients should be in environments that are familiar and that have sufficient structure and support. Unfamiliar environments with no one to assist with activities of daily living may promote increasing confusion in a patient with only mild dementia. Hospitalizations are often a time when a patient with even mild dementia feels easily confused and overwhelmed by being in a new and unfamiliar environment. The therapist should encourage the patient's relatives and friends to bring calendars and other familiar photos or mementos to the hospital room to help orient the patient. In addition, placing important phone numbers next to the hospital phone can help reduce anxiety. One man, a former actor who had been able to remember pages of scripts, was no longer able to remember even his home phone number near the end of his illness. During his final hospitalization his lover taped a note to the telephone that said: "For help call…", on which he had written both his home and office numbers, so that the patient could easily reach him at any time.
Patients with HIV-Associated Cognitive/Motor Complex may have sufficient motivation to undertake activities or tasks but may lack the necessary initiation to actually begin the activity. This is common to other subcortical disturbances (e.g. Parkinson's Disease), and assistance from family members or loved ones in initiating desired activities and tasks may provide the crucial impetus for actually starting a desired activity.
If the patient is a single parent, the therapist must do a careful assessment regarding whether the patient’s reduced cognitive and concentration skills endanger the children, or could contribute to their being neglected or abused. One woman with advanced AIDS was having difficulty finding her way to the therapist’s office where she had been a patient for several years. One day, this woman reported to the therapist that she had kept her young daughter waiting after school because she was wandering around the neighborhood lost and confused. When she had not claimed her daughter after an hour, the school called the woman’s mother who came right away and brought her granddaughter to her apartment. The child was understandably frightened by not having her mother keep to their regular schedule.
Once the therapist learned about this incident she called a family therapy session with the patient, the patient's parent and siblings in order to develop a plan that would ensure that the young daughter would be safely cared for. This practical management approach is not a traditional psychoanalytical or psychotherapeutic one, yet this kind of creative and often non-traditional approach is needed in order to provide a comprehensive level of psychosocial support to patients suffering from HIV/AIDS related cognitive impairment.
Another sensitive issue is at what point a referral to an HIV related day treatment program is an appropriate intervention by the therapist. When introducing this issue, the therapist must be sensitive and skillful in raising a broad variety of concerns. A central issue in attempting to enlist family, friends and other caregivers in helping the patient who is cognitively impaired is treading a fine line: providing help without having people do things for the patient that he or she can still do independently. Developing ways to ensure that the patient is compliant with an extensive schedule of infusions and oral medications is another example of how the therapist or family and friends need to function as a case manager for the individual experiencing HIV related memory loss and disorientation.
In some cases the patient may live too far away from where the therapist either works or lives for home visits to be practical. Where they are a viable option, these home visits serve multiple purposes. They have the potential to provide the patient with a source of support, comfort and continuity in his or her life that may be all too rare due to the complications associated with the advanced and terminal stages of AIDS.
In addition, a home visit provides the therapist with the opportunity to assess what level of care the patient is receiving or is in need of, and whether he or she is still capable of living independently. In some cases, it will be obvious from the condition of the home, apartment, kitchen and undiscarded medical waste and household garbage, that the individual should no longer be living alone. A home visit then gives the therapist an entry point for raising this painful and difficult issue with the patient.
When, due to neuropsychiatric involvement, the patient is no longer able to concentrate long enough to follow the thread of a normal conversation during a psychotherapy session, there are ways that the therapist can be comforting and helpful that are not traditional forms of psychotherapy. One useful option is for the therapist to bring meditations, poetry or visualizations that the therapist can read to the patient, in the hope of calming an agitated state and temporarily reducing fears. One of the authors is trained in hypnotherapy and finds trance induction, relaxation and pain control work while a patient is in a trance to be a very effective clinical tool. Similarly, guiding a patient in visualizations is very useful if the therapist makes some audio tapes and leaves them with the patient, so that the patient can experience this kind of relief between sessions.
In the best of times, gender inequalities in academia create additional barriers for women to succeed. Studies now show that COVID-19 has exacerbated these existing inequalities, resulting in adverse effects on women's research. This impact could have devastating, long-lasting effects for female graduate students, postdocs and young faculty members who are in the early stages of their careers, and who could continue to feel the negative effects of the pandemic in the years to come.
Over the past several months, research has emerged demonstrating a disproportionately gendered impact of COVID-19 on academic research productivity across the population (e.g. in the social sciences, in the health sciences, and more broadly). This is most evident in women’s research productivity as measured by publications and grants, which have been on average disproportionately constrained.
Some of these impacts relate to inadequate access to childcare, inequitable division of domestic labour, and ongoing inequalities in care work and service roles, which, in addition to being exacerbated by COVID-19, are reproducing systemic inequities. These barriers are often felt even more keenly by those at the intersection of gender-based and other biases, including women of colour, as well as women belonging to other marginalized groups relating to sexual identity and disability and accessibility (see for instance, recent CIHR message).
The systemic nature of these inequities makes them difficult to address at the level of a single institution, given the fluid situation during the pandemic. However, we must endeavour to do more than simply acknowledge the unprecedented challenges in academia during this time. UBC is committed to Inclusive Excellence (Strategy 4 of the UBC Strategic Plan) and, as such, we should be particularly attentive to these impacts and work to develop strategies for mitigating systemic barriers for all affected groups. Failure to do so may have much longer-term equity-related consequences for the University.
Graduate students and postdoctoral fellows who are experiencing differential consequences because of the pandemic should receive special attention. We encourage units to consider ways in which their graduate students and postdoctoral fellows have been impacted by COVID-19, and look to opportunities for mitigating against some of these challenges. Potential areas to be considered are:
- Supervision: How do we best support and maintain communication with all of our graduate students and postdoctoral fellows, with particular attention to those who are differentially impacted? Please see the message sent on March 27 on our page on Supervision during COVID-19 for more information.
- Academic progress/productivity: How do we continue to encourage progress, completion and research productivity while recognizing differential impacts? For some suggestions, see the webinar posted on June 5 on our page on Supervision during COVID-19.
- Funding: Are there ways to direct funding at differentially impacted researchers without forcing or expecting self-disclosure from them? Will funding that recognizes delays in program completion support those most heavily impacted?
- Adjudication and admissions: As we develop ways to combat systemic bias, how can we support a more equitable approach? How will we review admissions applications in coming years to acknowledge the differential impact that the pandemic may have had on different groups of applicants? How do we encourage applicants to self-disclose differential impacts on their research, and how do we assist adjudicators in taking such impacts into account?
We welcome your thoughts on supporting our graduate students and postdoctoral fellows who are differentially affected by the pandemic in your units. Additionally, we are interested in hearing from you about ways we at the Dean's Office, Graduate and Postdoctoral Studies, might ameliorate and address these effects to create and sustain a more equitable campus. Please email Julian Dierkes, Associate Dean, Funding, with your comments and suggestions.
FIELD OF THE INVENTION
BACKGROUND OF THE INVENTION
SUMMARY OF THE INVENTION
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION OF THE INVENTION
A. Functional Overview
B. System Overview
C. Synchronization Operations
D. Detecting Operations on Working Files
E. User-Interface
F. Hardware Description
G. Alternative Embodiments
H. Conclusion
The present invention relates to an application for managing network files. In particular, embodiments of the invention pertain to detecting alterations to file systems for the purpose of synchronization.
When multiple file systems contain exactly the same content, the file systems are said to be "in sync". To keep file systems in sync, synchronization applications detect differences between file systems, and then perform operations to eliminate the differences. Typically, synchronization applications are used to synchronize the file systems of different computers that need to access different copies of the same files. A set of file systems that are to be synchronized with each other are referred to herein as a "synchronization set". Each file system that belongs to a given synchronization set is referred to as a "synchronized system".
The act of synchronizing a synchronization set is referred to as a synchronization operation. During each synchronization operation, a synchronization application typically attempts to detect when items have been deleted or added in any of the synchronized systems since the previous synchronization operation.
In general, synchronization applications seek to add to all synchronized systems those items that are detected as being added to any synchronized system since the previous synchronization operation. Similarly, synchronization applications seek to delete from all synchronized systems those items that are detected as being deleted from any synchronized system since the previous synchronization operation.
Typically, the synchronization application will not distinguish between (1) added items that are copies of other items and (2) added items that were created as originals. In addition, when a renamed item has been altered, it appears that the original item was deleted and a new item was added. Consequently, the original item may be deleted in all synchronized systems, and the new altered item will be added to all synchronized systems. Alternatively, the synchronization application may place both the altered item and the non-altered item together in the same file system.
Under current synchronization techniques, if a user performs multiple operations on an item and then seeks to synchronize that item with another, the application will detect that item only as altered or new. The application will not be able to detect the specific operations performed on the item. As a result, the synchronization operation may replace one file with another, or add the altered file alongside the original file in the same file system.
Synchronization operations often involve a significant amount of resource consumption. For example, when a synchronization application detects the addition of a new file to one synchronized system, data transfer of the contents of the new file to all other synchronized systems is required. If the file is large and/or the number of synchronized systems is large, the resource consumption may be significant.
Another problem with current synchronization techniques is that new or replaced files do not retain metadata information from prior to their transfer or recreation on the file system. Thus, if a file created at time T1 is altered, the fact that the file was originally created at time T1 will be lost when the synchronization application treats the altered file as a new file, and the original file as a deleted file.
EP 0 707 263 A1 discloses a method for tracking computer software units in a computer file system wherein the computer file system is monitored for changes to operating system file directories involving the software units, and a tracking directory is updated to reflect the changes in the software units.
The invention is defined by the independent claims. The dependent claims concern preferred embodiments of the invention.
Embodiments of the invention provide an application that can detect one or more operations performed on a first file system that is to be synchronized. The synchronization application updates a second file system using the detected operations of the first file system.
An embodiment of the invention is capable of detecting operations in the first file system, including copying an item, moving an item, creating a new item, deleting an item, and editing an item. An embodiment of the invention also detects multiple operations performed on the first file system. The detected operations may be recreated on the second file system during a synchronization operation.
Synchronization techniques described with embodiments of the invention incur less overhead than other synchronization processes in use. Furthermore, embodiments of the invention provide for synchronization techniques that preserve metadata information about synchronized files, in contrast to other synchronization processes that incur loss of such information.
FIG. 1 is an overview of a system architecture, under an embodiment of the invention.
FIG. 2 is a flowchart describing synchronization on a terminal, under an embodiment of the invention.
FIG. 3 is a flowchart describing synchronization of a file shared by many users on a terminal of the system, under an embodiment of the invention.
FIG. 4 is a flowchart detailing synchronization of a shared file system on multiple terminals, under an embodiment of the invention.
FIG. 5 is a flowchart detailing detection of multiple operations and compound operations on a working version of a file system, under an embodiment of the invention.
FIG. 6 is a flowchart for identifying moved or deleted items during a synchronization operation.
FIG. 7 is a flowchart for identifying edited items during a synchronization operation, under an embodiment of the invention.
FIG. 8 is a flowchart for identifying one or more operations for an item that was edited and/or moved or deleted, under an embodiment of the invention.
FIG. 9 is a flowchart for identifying items that were created as new or copied from other items, and possibly edited, under an embodiment of the invention.
FIG. 10 illustrates a user-interface for use with an embodiment of the invention.
FIG. 11 is a hardware block diagram for use with an embodiment of the invention.
The present invention is illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which like reference numerals refer to similar elements.
A method and apparatus for managing files is described. In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be apparent, however, that the present invention may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to avoid unnecessarily obscuring the present invention.
Among advantages provided, embodiments of the invention enable a synchronization operation to be performed that identifies specific acts performed to file items that belong to synchronized systems. The specific operations are recreated on corresponding file items of other synchronized systems during the synchronization operation. As a result, complete transfers of file items may be avoided in many instances where such file items were merely altered or moved in some way. The result is that communication resources are conserved. In addition, corresponding file items of the other synchronized systems may be updated without losing metadata information for those items.
In an embodiment, one or more client terminals can access a file system on a server. One or more files can be downloaded from the file system and manipulated on the client terminal. In particular, a user may perform certain acts on the content of the downloaded file, including editing documents, deleting items, creating new documents for the file, moving items or copying items within the file, or combinations of these acts. Under an embodiment, a management system detects the acts performed on the downloaded file. The management system then synchronizes the downloaded file with a corresponding portion of the file system.
As described herein, the file system is part of an overall management system that keeps numerous files for multiple clients. A client may download only a portion of the file system. The portion of the file system may include items such as directories, sub-files, applications, executables, documents, and individual resources of different data types.
When portions of the file system are downloaded by a client, the resulting local file is referred to as a working version. The working version copies the items from the portion of the file system selected to be downloaded for the client. Information is recorded in a comparison file about the working version when it is created. The comparison file may also include information about the portion of the file system downloaded. This information includes metadata information that can subsequently be used to identify file items, as well as modifications made to the working version after it is created. After the working version is modified, the working version can be synchronized with the portion of the file system used to download the working version. Information recorded in the comparison file is used to detect the changes made to the working version.
As used herein, reference to the term "item" means data structures that can be maintained and/or managed within file systems. As noted, items include directories, files, applications, executables, documents, and individual resources of different data types. The item may include a document or resource of a particular data type. For example, a first item may be a word processing document, and a second item may be a folder that stores the document with other resources.
Under an embodiment, information included in the comparison file is primarily metadata information. The metadata information may include location information for a particular item, creation times, modification times, item size, and file names.
A location is identifiable by a memory address and computer location. Location information refers to data that can be used to identify the memory location of the item on a computer. Location information may include a file or resource name. Location information may also include a file path locating a particular item in memory.
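As an illustration, the kind of metadata described above can be pictured as one record per item in the comparison file. The following is a minimal sketch in Python; the field names and the dictionary keyed by location information are assumptions made for illustration, not terms from the claims.

```python
from dataclasses import dataclass

@dataclass
class ItemMetadata:
    """One record in the comparison file for a single working version item."""
    location_info: str        # e.g. a file path such as "projects/report.doc"
    name: str                 # file or resource name
    creation_time: float      # static value set when the working item is made
    modification_time: float  # updated whenever the item is edited
    size: int                 # item size in bytes

# The comparison file can then be modeled as a mapping from location
# information to the metadata recorded when the working version was made.
comparison_file: dict[str, ItemMetadata] = {}
```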
Embodiments of the invention include a system and method of managing files. According to one embodiment, after a first synchronization operation, information is mapped from a file system to a comparison file. The information includes information about the status, after the first synchronization operation, of a first item. For example, the information in the comparison file may indicate that the first item was located at a first location after the first synchronization operation. In addition to mapping the file system information to the comparison file, a working version of the file system is made. Initially, the working version indicates that the first item is at the first location. The information contained in the working version is modified to reflect any changes made to the status of the first item after the first synchronization operation. During a second, subsequent synchronization operation, the location indicated in the working version of the file system is compared with the location indicated in the comparison file to determine whether, during the interval between the first synchronization operation and the second synchronization operation, the first item has been moved.
In an embodiment, the first item in the file system may be moved to a new location that is identifiable by new location information. Therefore, the file system does not have to recreate the first working item if the first working item is moved. Rather, the file system can move the first item corresponding to the first working file to a corresponding location in the file system. In contrast to previous synchronization applications, such an embodiment of the invention does not require the first working item to be recreated as a new part of the file system just because it was moved on the working version. As a result, communication resources are preserved because a data transfer of the contents of first working item is not necessary. Furthermore, metadata information for the first item in the file system is preserved.
Another embodiment of the invention maps information about a file system to a comparison file. A working version is made of a portion of the file system. During a synchronization operation, the comparison file and the working version are used to determine whether items have been copied since the last synchronization operation.
Another embodiment of the invention provides a computer system that can operate a network management application. The computer system includes a network interface to exchange communications with a second computer. The communications are to create a working version of a file system portion accessible on the second computer. The first computer includes a memory that stores the working version. A processor on the first computer records a creation time for at least one working item in the working version, where the first working item originates from a first item of the file system. The processor subsequently uses the creation time to determine if an operation was performed on the first working item.
Among other advantages, embodiments of the invention can detect if an item was moved to a new location after the working version was created, if an item was copied from another item existing in the working version when it was created from another file, or if an item was copied from an item added to the working version subsequent to its creation. Other operations that can be detected under an embodiment of the invention include if an item was edited, or deleted from the working version. An embodiment of the invention can also detect multiple operations performed on or for an item in the working version.
The result is that a portion of a file system used to create the working version can be updated to reflect subsequent changes in the working version. However, items in the file system that are subsequently updated by working version items do not need to be entirely replaced, or be recreated by the working version items. Rather, a synchronization method or application can update the file system to reflect changes to corresponding items of the working version, or to additions of items to the working version. Another advantage is that file system items that are operated on in the working version can maintain information that tracks their origin. As a result, the file system can be updated to reflect only those operations performed on the working version.
FIG. 1 illustrates a system for managing files shared between computers, under an embodiment of the invention. The system includes a first terminal 10 coupled to a server 20 via a network 15. A plurality of other terminals 25 may also be coupled to server 20 over network 15. The first terminal 10 may be operated as a client that communicates with server 20. In an embodiment, a client application is operable on first terminal 10 to manage file items and resources that are shared with server 20.
A user may operate first terminal 10 to access a file system 40 containing one or more resources and other items from server 20. The original version of the file system 40 may remain on server 20 while the user works on a borrowed or working version of a portion of the file system 40. A system such as described by FIG. 1 enables the user to work locally on items accessed from a remote server 20, then update file system 40 on the remote server to reflect changes made on first terminal 10 to those items.
In an embodiment, the user on terminal 10 can perform operations on items accessed from server 20. These operations may include editing content, deleting particular items retrieved from the server, moving items to new locations, copying items retrieved from server 20, and adding new items for subsequent inclusion on server 20. In addition, an embodiment of the invention allows a user to update server 20 to reflect combinations of operations performed on items. An embodiment of the invention reduces possible combinations of operations performed by the user into equivalent compound operations. The equivalent compound operations may include editing and copying an item, creating new items then editing them and/or copying them, as well as editing existing items and then moving them.
In an embodiment, terminal 10 exchanges communications with server 20 using a network interface 12. In one implementation, network interface 12 enables Internet Protocol (IP) communications, and specifically Transport Control Protocol (TCP/IP) for enabling communications over networks such as the Internet. Alternatively, embodiments of the invention may signal communications between computers over networks such as Local Area Networks (LANs) and other types of Wide Area Networks (WANs).
The server 20 may be used to store or otherwise manage a file system 40. In an embodiment, file system 40 includes a plurality of portions, where each portion is associated with a user or an account. A first portion 46 of file system 40 may be a file stored on server 20, that is accessible to first terminal 10, or to a user of first terminal 10. The first portion 46 may include a plurality of items, such as files and resources of particular data types.
A first item 44 of first portion 46 is identified in FIG. 1. For illustrative purposes, first item 44 is assumed to be a resource such as a document. Alternatively, first item 44 could be a file containing other items. The first item 44 includes or is otherwise associated with metadata information and content. The metadata information of first item 44 may identify a particular location (L1) on a memory (not shown) of server 20. The metadata information of first item 44 may also include location identification information (LI1) used to locate the first location (L1) on server 20. Since first item 44 is assumed to be a resource, first item 44 also includes content associated with the metadata information.
In an embodiment shown, first terminal 10 receives a first communication 32 from server 20, signaled over network interface 12 and network 15. The first communication 32 includes first portion 46 of file system 40. In one implementation, a user operating first terminal 10 has access rights to first portion 46. The access rights enable the user to download or otherwise retrieve some or all of first portion 46, including first item 44. The user can make a working version 50 of first portion 46 after receiving first communication 32. The working version 50 includes content from items in first portion 46. Certain metadata information for working version 50 may be transferred from server 20 and included in first communication 32. Other metadata information may be generated on first terminal 10 when working version 50 is made. Metadata information transferred over from the file system may include, for example, location information such as file paths and names to locate certain items.
Data signaled with first communication 32 may be used to generate working version 50, including at least a first working item 56. The first working item 56 originates from first item 44 of file system 40. In an embodiment, the first working item 56 originates from first item 44 because a content portion 58 of first working item 56 is copied from a corresponding content portion 48 of first item 44.
Metadata information that may be carried over from file system 40 includes the first location information (LI1) of first item 46. The first location information (LI1) may be used to identify a second location (L2) for first working item 56 on first terminal 10. For example, the first location information (LI1) may include a file path and a name. The file path may be recreated on working version 50 to enable first working item 56 to be located at the second location (L2). The name may be transferred over also as additional location information. In many applications, the name is an inclusive portion of the file path.
When working version 50 is generated on first computer 10, new metadata information is recorded. The new metadata information may include time values that mark certain events for first working item 56. In an embodiment, a first time value 62 may correspond to a creation time for the first working item. A second time value 64 may correspond to a modification time for first working item 56. The first time value 62 and second time value 64 are initialized upon or just after working version 50 is generated on first computer 10. As an example, a user may download a word processing document as first working item 56. When the document is downloaded, first time value 62 (creation time) and second time value 64 (modification time) are recorded by an operating system (or other application) on first terminal 10. For example, first terminal 10 may run a WINDOWS type operating system that automatically records creation time values and modification time values when first working item 56 is generated. The creation time is a value assigned to a particular item marking the time of its creation on a particular computer system. The creation time is stored as a static value that can subsequently be used to identify a corresponding working item, even if that working item has a new address or a new name. The modification time is a value associated with a working item to mark the last instant that the item was edited or created. The modification time can therefore change after working version 50 is downloaded from file system 40.
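As a concrete illustration of how such time values might be captured, the sketch below reads creation and modification times through the operating system. It is a minimal Python sketch assuming a WINDOWS-style platform where `os.stat` reports the creation time in `st_ctime`; the path shown is hypothetical.

```python
import os

def record_time_values(path: str) -> tuple[float, float]:
    """Return (creation_time, modification_time) for a working item.

    On a Windows file system, st_ctime holds the creation time and
    st_mtime the last modification time. Both are captured when the
    working version is generated; the creation time is then kept as a
    static identifier for the item, while the modification time may
    change with later edits.
    """
    info = os.stat(path)
    return info.st_ctime, info.st_mtime

# Example (hypothetical path): record the initial time values for a
# downloaded document when the working version is created.
# created, modified = record_time_values("working/report.doc")
```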
In an embodiment, first computer 10 maintains or otherwise accesses a comparison file 70 to store metadata information. The metadata information stored in comparison file 70 may include new metadata information recorded when first working item 56 is generated on first computer 10, as well as certain metadata information that may be carried over from first item 44 of file system 40.
In the example provided, comparison file 70 stores the first location information (LI1) of the first working item 56 and the first item 44, the second location (L2) of the first working item 56, the first time value 62 (creation time) of the first working item 56, and the second time value 64 (modification time) of the first working item 56. The first location information (LI1) is transferred from server 20, while other metadata information in comparison file 70 is created with generation of working version 50. The metadata information at the initial moment for items in working version 50 is generated and stored in comparison file 70. As shall be described in greater detail hereafter, this metadata information is used to identify the specific operations that are performed on first working item 56 after the working version 50 is made. By knowing the specific operations, synchronization may be performed more efficiently.
The operations that comparison file 70 may be used to detect include the operations of editing items, moving items, making new items, copying items, deleting items and combinations thereof. Comparison file 70 provides access to metadata information for each item signaled with first communication 32. Subsequent to operations being performed on working version 50, an embodiment of the invention provides that metadata of items in working version 50 is compared against comparison file 70. The comparison of metadata information is used to detect the operation(s) performed on working version 50, for the purpose of determining differences between working version items and file system items. In making the comparison, items in working version 50 may be detected as having metadata information that is different from the metadata of corresponding items recorded by comparison file 70. In addition, items in working version 50 may be detected as not having a corresponding item identified by comparison file 70. The differences identified when making the comparisons are noted and used to synchronize the working version 50 with first portion 46 on file system 40.
Under an embodiment of the invention, first portion 46 is a shared file accessible to other terminals 25 from server 20. It is possible that first portion 46 is changed by another computer after portions of it are signaled to the first terminal 10. The other terminal 25 may, for example, access and operate on items in first portion 46 so that first portion 46 is changed from the time it is signaled to first terminal 10. In order to make the comparison for identifying the changes in working version 50 with shared file system 40, a second communication 34 is signaled to first terminal 10 from server 20. The second communication 34 includes metadata information as existing on server 20 at the moment synchronization is to be performed with first terminal 10. In an embodiment, second communication 34 is signaled to first terminal 10 upon a synchronization request being made from the first terminal 10.
In an embodiment, first terminal 10 performs the synchronization operation. The synchronization operation may compare metadata information between changed or added items of working version 50 and items of first portion 46. The changed or added working version items are the result of the first terminal's user performing one or more operations on working version 50. Changed or added file system items are the result of other users performing one or more operations on their versions of file system 40. The differences between working version 50 and items of first portion 46 are identified and reconciled by the user of first terminal 10. The differences are recorded as reconciled metadata information. In an embodiment, a third communication 36 is used to signal the reconciled metadata information from first terminal 10 to server 20. The reconciled metadata information may be signaled to server 20 to cause server 20 to perform one or more operations that update file system 40 so as to reflect changes from operations performed on working version 50. Furthermore, the reconciled information may be displayed to the user so that the user can select changed or altered items that will be used to update file system 40.
In another embodiment, first portion 46 is not shared with other users, but only with the user of first terminal 10. As such, second communication 34 may not be necessary. Rather, comparison file 70 is used to perform the synchronization operation and to identify reconciled metadata information. The reconciled metadata information is then signaled to server 20 after the synchronization operation is performed on first terminal 10. The reconciled metadata information is signaled to server 20 to cause it to update file system 40 with changes of working version 50.
FIG. 2 illustrates a method for making working version 50, and subsequently synchronizing working version 50 (FIG. 1) with a corresponding portion of file system 40. Reference to components of FIG. 1 is intended to communicate exemplary components for use with that embodiment. In an embodiment such as described with FIG. 2, it is assumed that first file system 40 is not shared with other users.
In a step 210, a working version of file system 40 portion is downloaded onto first terminal 10. For example, first terminal 10 may connect with server 20 over the Internet. The user of first terminal 10 may have an account to identify first portion 46 of file system 40. The first portion 46 of file system 40 can be selected by the user to be downloaded onto first terminal 10.
In step 220, comparison file 70 is generated when working version 50 is made. The comparison file records initial metadata information of working version 50. Some of the metadata information may also be transferred from items of first portion 46 of file system 40. Steps 210 and 220 are performed at t = 0, prior to any operations that may affect working version 50. Steps 230-250 occur after some time t = i, so that the user may have performed an operation on working version 50. At that time, the user is making a request to synchronize working version 50 with file system 40.
In step 230, differences are identified between modified working version 50 and working version 50 at the time comparison file 70 was created. The differences may be referred to as delta items. The delta items include items in working version 50 at the later time that are new, copied, moved, or modified. The delta items may also include items identified by comparison file 70 that have no correspondence or counterpart in working version 50. For example, first working item 56 may be operated on through edits and moves, in which case it is a delta item in working version 50. Alternatively, comparison file 70 may identify working item 56, but working item 56 may have been deleted from working version 50. In this case, first working item 56 is a delta item in comparison file 70. Similarly, other working items may be copied or added to working version 50 after comparison file 70 is made, in which case those items are identified as delta items in working version 50.
In step 240, differences between items in working version 50 and comparison file 70 are identified. As discussed, these differences are also referred to as delta items.
In step 250, differences identified between working version 50 and items identified by comparison file 70 are reconciled. To reconcile, delta items may be selected for instructing how file system 40 is to be updated. For example, if a delta item is an edited version of first working item 56, then the selection specifies whether file system 40 is to include the edited or the original version of first item 44. If a delta item is an added item (such as a new or copied item) in working version 50, then the selection determines whether file system 40 is to keep those additions. If the delta item is first working item 56 moved to a new location, then the selection determines whether file system 40 is to use new location information for first item 44, or whether the file system is to maintain the old location. If the delta item is first working item 56 deleted from working version 50, the selection specifies whether file system 40 is to delete first item 44. Similar methods may be performed with combinations of operations, as detailed elsewhere in this application.
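To make the reconciliation step concrete, the sketch below loops over detected delta items and applies only the ones the user selected. The delta-item tuples and the server calls (`upload_content`, `move_item`, `delete_item`, `create_item`) are hypothetical names used for illustration; they stand in for whatever update mechanism server 20 actually exposes.

```python
def reconcile(delta_items, selections, server):
    """Apply the user's selections for detected delta items to the file system.

    delta_items: list of (item_id, operation, detail) tuples produced by
        comparing the working version against the comparison file.
    selections: mapping from item_id to True (apply) or False (keep original).
    server: object exposing hypothetical update calls on the file system.
    """
    for item_id, operation, detail in delta_items:
        if not selections.get(item_id, False):
            continue  # keep the file system's existing version of this item
        if operation == "edit":
            server.upload_content(item_id, detail)   # detail = edited content
        elif operation == "move":
            server.move_item(item_id, detail)        # detail = new location
        elif operation == "delete":
            server.delete_item(item_id)
        elif operation in ("new", "copy"):
            server.create_item(item_id, detail)      # detail = item content
```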
FIG. 3 details a method for synchronizing working version 50 with file system 40 on server 20, when file system 40 is shared with other computers, under an embodiment of the invention. In FIG. 3, working version 50 is made as described with an embodiment of FIG. 2. The working version 50 is downloaded from file system 40 in step 310. Comparison file 70 is created to record metadata information about working version 50 and file system 40 in step 320. Changes are made to working version 50 in step 330. Differences between the working version 50 and items identified by comparison file 70 are identified in step 340. These items, referred to as delta items, may include working version items that have at some point been moved, deleted, edited, or added through one or more operations.
In step 350, the user of first terminal 10 makes a synchronization request with server 20. By this time, working version 50 may have been modified from its original state by one or more operations.
In step 360, new information is received about file system 40 on first terminal 10. The file system 40 may have been accessed and altered by other terminals since the time working version 50 was generated on first terminal 10. Therefore, new information about file system 40 may identify changes made to items of the file system 40 by other users. In one implementation, information about the file system 40 is in the form of metadata, and may be particular to items of first portion 46 downloaded by the user of first terminal 10. The metadata information may include location information of file system items that correspond to the downloaded items. In addition, the new metadata information about file system items may include time values. For example, create time and modification time values of the file system items at the time the synchronization is requested may be signaled to first terminal 10 for purpose of determining delta items of file system 40.
In step 370, differences, or delta items, are detected between the updated file system 40 and the file system at the time the working version was made. These delta items are identified by comparing the new metadata information received in step 360 to items identified by comparison file 70 when the comparison file was created in step 320. The delta items identified in this step may be identified by either comparison file 70 or by the new metadata information received about file system 40. Delta items identified by new metadata information about file system 40 may correspond to items that were moved or edited by other users. In addition, delta items of file system 40 may include items added to first portion 46 by other users, either as new items or copies of other items. Delta items identified by comparison file 70 include items deleted from file system 40 after working version 50 is made on first computer 10.
In step 380, selections are made for delta items identified in step 340 and in step 370. The selections may be made by a user. The selections may specify delta items of comparison file 70, working version 50, and file system 40. For each delta item, the selection may determine whether to keep that delta item or not.
In step 390, conflicts between differences identified in steps 340 and 370 are detected and resolved. For example, an item in the working version 50 may be edited, so that it is identified as a delta item when compared to a corresponding file system item at the time working version 50 was made. That file system item identified in comparison file 70 may be subsequently changed by another computer having access to server 20. Thus, two delta items may be associated with the same item identified by comparison file 70. In an embodiment, the user of first terminal 10 can choose which of the two delta items should be used for inclusion in file system 40.
Alternatively, conflict selections between delta items may be made through a conflict protocol that selects whether each delta item is to be incorporated into the synchronized file system 40.
In step 395, the selected delta items are used to update file system 40. Each delta item identified in steps 340 and 370 may be omitted from or included in the file system's update. The user can select between delta items in conflict.
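One way to picture conflict handling between delta items identified in steps 340 and 370 is a simple selection rule applied when two delta items refer to the same original item. The sketch below uses a last-writer-wins policy purely as an assumed example; as described above, the user may instead choose between the conflicting items.

```python
def resolve_conflict(local_delta: dict, remote_delta: dict) -> dict:
    """Choose between two delta items that refer to the same original item.

    local_delta comes from the working version; remote_delta reflects a
    change made by another client. A last-writer-wins rule is shown only
    as an example policy; the user can instead be asked to choose.
    """
    if local_delta["modified"] >= remote_delta["modified"]:
        return local_delta    # keep the change made in the working version
    return remote_delta       # keep the change made by the other client

# Example: the other client's change is more recent, so it is kept here.
print(resolve_conflict({"item": "report.doc", "modified": 100.0},
                       {"item": "report.doc", "modified": 250.0}))
```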
FIG. 4 illustrates a method for operating server 20 under another embodiment of the invention. In an embodiment such as described with FIG. 4, multiple users are assumed to access shared file systems on server 20. Reference is made to FIG. 1 for descriptive purposes. The first terminal 10 is assumed to be making the synchronization request. Portions of file system 40 are shared with other clients 25 who can access server 20.
In step 410, a portion of the shared file system 40 is signaled to first terminal 10 and clients 25. Each client may be operated separately to access and receive portions of the shared file system.
In step 420, a synchronization request is received from first terminal 10. The synchronization request may correspond to a user wishing to implement their changes to the file system 40. The users may also wish to receive any changes entered from other users who downloaded the portion of the file system 40.
In step 430, updated information about the file system 40 may be signaled to the client requesting the synchronization. The file system 40 may be updated from the time that client downloaded the file system to include changes entered from clients 25.
In step 440, server 20 receives information about changes that are to be made to file system 40 as a result of operations performed on working version 50. The changes may be the result of operations such as edits, additions (new items and copies), deletions, and moves.
In step 450, file system 40 is updated using changes signaled from first terminal 10 (the client making the synchronization request). The updated changes may be selections decided by a particular user after performing one or more operations on that terminal's working version of the file system 40.
In step 460, a determination is made as to whether any other requests are made or will be made from other terminals that access file system 40. If there are other requests to synchronize, then steps 430-460 are repeated for the next client making the request. Under such an implementation, each client having access to the shared file system 40 makes additional changes to it. The changes and alterations made by other users are incorporated in file system 40 when the synchronization request is made. Therefore, file system 40 changes after each synchronization operation with one of the clients, so that the next client is synchronizing with a previously updated file system 40.
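The server-side flow of steps 430-460 can be sketched as a loop that services one synchronization request at a time, so each client reconciles against a file system that already includes earlier clients' changes. The queue of requests, the dictionary model of the file system, and the `send_metadata` helper are assumptions for illustration only.

```python
import queue

def send_metadata(client_id, snapshot):
    """Placeholder: signal updated file system metadata to a client (step 430)."""
    pass

def serve_sync_requests(requests: queue.Queue, file_system: dict) -> None:
    """Service synchronization requests one at a time (steps 430-460).

    Each request carries a client id and the reconciled changes that the
    client selected. Because requests are handled sequentially, each
    client synchronizes against a file system that already reflects the
    changes applied for earlier clients.
    """
    while not requests.empty():
        client_id, reconciled_changes = requests.get()
        # Step 430: send the requesting client the current file system state.
        send_metadata(client_id, dict(file_system))
        # Steps 440-450: receive the client's selected changes and apply them.
        file_system.update(reconciled_changes)
```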
With reference to exemplary components of FIG. 1, embodiments of the invention enable synchronization between items of the working version 50 and items of the file system 40, even after the items undergo multiple and different types of operations. The operations that can be performed on working version 50 may be characterized as a primary operation or a compound operation. Under an embodiment, multiple operations performed on an item can be detected as one of a set of equivalent compound operations.
In an embodiment, the primary functions are edit, delete, copy, move and create new. The edit operation results in a content of an item in working version 50 being changed. The delete operation causes an item to be removed from the working version 50. The copy operation recreates the contents of an item in working version 50 as a new or added item. The move operation causes an item located in one location of the working version 50 to be given a new location. A location may be defined by a name, a memory address and a memory unit. Thus, the move operation may be performed to move an item to a new folder location, to rename an item, or to move the item to a new memory unit. The create new operation is performed on the working version 50 to create an additional item to the working version 50.
A compound operation is a combination of multiple operations performed on the working version 50, to create and/or affect a working version item. In contrast to embodiments of the invention, previous synchronization systems are able to detect performance of some of the primary operations, but are unable to detect certain primary operations, or combinations of operations. Advantages provided by embodiments of the invention enable detection and synchronization of all the primary operations, as well as combinations of multiple operations performed on individual items of working version 50.
Analytical expressions for describing operations of file management may be described using the format aOb, where the capitalized letter stands for the operation, an item preceding the operation represents the source for the operation, and an item following the operation represents the destination for the operation. To summarize the primary operations:

Ex -- Edit file X
Dx -- Delete file X
Nx -- Create new item X
xMy -- X is moved to Y
xCy -- X is copied as Y

Under an embodiment of the invention, compound operations can be reduced and abstracted to a finite number of equivalent compound operations. Some examples of principles used in making these abstractions include: (1) if an item is deleted, previous operations performed on that item can be ignored; (2) multiple moves of an item can be treated as one move, from the initial source to the final destination; and (3) any move operation performed on a combination of operations can be analyzed in any order with respect to other operations, so assuming a move is performed before another operation provides a true and simplified result. Using these principles, it can be assumed that any working item undergoes one of nine possible operations or combinations of operations, where the combinations of operations are equivalents of other operation combinations. The operations performed on items of working version 50 can be replicated for file system 40 as either one of the five primary operations, or one of four equivalent compound operations. In an embodiment, the four equivalent combinations of operations are:

ExMy -- Edit X and move it to Y
(Nx)Cy -- Create X and copy it as Y
E((Nx)Cy) -- Create X, copy it as Y, and edit Y
E(xCy) -- Copy X as Y, and edit Y

Parentheticals are to be performed first in any equivalent compound operation.
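For reference, the unchanged case, the five primary operations, and the four equivalent compound operations above can be written compactly as an enumeration. This is a minimal Python sketch using the notation just introduced; it is illustrative only and not an implementation of the reduction rules.

```python
from enum import Enum

class Outcome(Enum):
    """Possible outcomes for a working version item at synchronization time."""
    UNCHANGED = "unchanged"
    # The five primary operations
    EDIT = "Ex"                      # edit item X
    DELETE = "Dx"                    # delete item X
    NEW = "Nx"                       # create new item X
    MOVE = "xMy"                     # X is moved to Y
    COPY = "xCy"                     # X is copied as Y
    # The four equivalent compound operations
    EDIT_AND_MOVE = "ExMy"           # edit X and move it to Y
    NEW_AND_COPY = "(Nx)Cy"          # create X and copy it as Y
    NEW_COPY_AND_EDIT = "E((Nx)Cy)"  # create X, copy it as Y, and edit Y
    COPY_AND_EDIT = "E(xCy)"         # copy X as Y, and edit Y
```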
FIG. 5 illustrates a method for detecting operations performed for items in working version 50 at the time a user requests to synchronize the working version 50 with file system 40, under an embodiment of the invention. In an embodiment such as shown, there are ten possible outcomes for each item in the working version 50 at the time of synchronization: unchanged, five prime operations, and four equivalent combinations of operations.
In step 502, working version 50 is created from a portion of file system 40. In step 504, a comparison file 70 is made that includes information about the working version 50. Both steps 502 and 504 are assumed to take place before any operations are performed on the working version items (i.e. at t = 0). At a subsequent moment, the user requests synchronization with the file system (i.e. at t = f). Between t = 0 and t = f, the user may perform one or more operations that alter working version 50.
Upon receiving a synchronization request, a determination is made in step 506 as to whether an item identified and located by comparison file 70 has a same location as a corresponding item in working version 50. Initially when comparison file 70 is made, the location of each item is recorded. Thus, step 506 determines whether an item identified in comparison file 70 can still be located using location information initially recorded for that item.
If step 506 determines that the item identified by comparison file 70 still has the same location within working version 50, step 508 follows with another determination as to whether that item was edited subsequent to being recorded in comparison file 70. If step 508 determines the item was not edited, then step 510 concludes that the particular item was unchanged in working version 50. If step 508 determines that the particular item was edited, then step 512 notes the item identified by comparison file 70 as edited.
If step 506 determines that the item identified by comparison file 70 was not located by information recorded for that item, then step 514 makes a determination as to whether the item was moved. If the determination is that the item was not moved, step 516 notes the item as being deleted. If the determination is that the item was moved, then step 518 notes the new location of the item in working version 50. Then a determination is made in step 520 as to whether the moved item was also edited. If the determination is positive, then the item is marked as moved and edited in step 522.
Step 524 makes a determination as to whether any more items identified by comparison file 70 remain to be checked. Step 524 follows step 510 if the previous item was determined to be unchanged. Step 524 follows step 512 if the previous item was noted as being edited. Step 524 follows step 516 if the previous item was noted as deleted. Step 524 follows step 520 if the previous item was noted as moved. Step 524 follows step 522 if the previous item was noted as moved and edited. If step 524 determines items remain that are identified by comparison file 70, and that these items have not been checked, then step 526 provides the next item identified by comparison file 70 is to be checked. For the next item, the method is repeated beginning with step 506.
If step 524 determines no items identified by comparison file 70 remain to be checked, step 528 determines if any items remain in working version 50 unchecked. The unchecked items are working version items that were not checked as a result of comparison file items being checked in steps 506-524. If there are no unchecked items in working version 50, the method is done. Else, the remaining items in working version 50 are marked checked in step 530.
Step 532 determines whether an unchecked item in working version 50 is a copy. If the determination is positive, step 534 notes the item as a copy. Step 536 determines whether the copied item was copied from another item that was created new in working version 50.
If the determination in step 536 is negative, step 538 determines whether the copied item was also edited after being made. If the determination in step 538 is positive, step 540 notes the item as copied and edited.
If the determination in step 536 is positive, the item is marked as new and copied in step 542. In other words, the item is noted as being copied from another item that was created as an original after working version 50 was created from file system 40. In step 544, a determination is made as to whether the new and copied item was also edited. If the determination in step 544 is positive, the item is noted as new, copied and edited in step 546.
If in step 532 the determination is that the item is not a copy, then in step 548 the item is noted as being new.
Step 554 determines whether any items marked as unchecked remain in working version 50. Step 554 follows one of these steps: step 538, if the determination in that step is negative, so that the item is noted only as being copied; step 540, if the item is determined to be copied and edited; step 548, if the item is determined to be only new; step 544, if the determination in that step is negative, so that the item is determined to be new and copied; and step 546, if the item is determined to be new, edited and copied. If step 554 determines that unchecked items remain in working version 50, step 556 iterates to the next unchecked item. Then the method is repeated for the next item, beginning with step 532. If step 554 determines that no unchecked items remain in working version 50, the method is done.
As shown by an embodiment of FIG. 5, a synchronization operation may detect ten possible outcomes for each item being synchronized. Each item may be determined as being unchanged since being downloaded (step 510). Otherwise, each item being synchronized may be determined to be a result of one or more operations performed by a user after working version 50 is created. Five primary operations are detected: edit (step 512), move (step 518), delete (step 516), create new (step 548), and create copy (step 534). In addition, four compound operations are detected: move and edit (step 522); new and copy (step 542); new, edit and copy (step 546); and copy and edit (step 540).
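For illustration only, the ten outcomes could be represented as an enumeration such as the Python sketch below; the identifier names are assumptions and are not part of the described embodiments, while the step numbers follow FIG. 5.

```python
from enum import Enum

class ItemOutcome(Enum):
    """Possible outcomes for an item during synchronization (FIG. 5)."""
    UNCHANGED = "unchanged"                      # step 510
    EDITED = "edited"                            # step 512
    MOVED = "moved"                              # step 518
    DELETED = "deleted"                          # step 516
    NEW = "new"                                  # step 548
    COPIED = "copied"                            # step 534
    MOVED_AND_EDITED = "moved+edited"            # step 522
    NEW_AND_COPIED = "new+copied"                # step 542
    NEW_COPIED_AND_EDITED = "new+copied+edited"  # step 546
    COPIED_AND_EDITED = "copied+edited"          # step 540
```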
In an embodiment, a special case may arise where a working version item is deleted, and then recreated with the same name and location information. Such an item could be identified as a new item rather than a moved item if step 506 incorporates a check for the special case. Specifically, an identification such as the recreated item's creation time value may be used to check that the item was not subject to deletion and recreation in step 506.
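A minimal sketch of this special-case check, assuming the working version at synchronization time can be summarized as a mapping from current paths to creation times; the function and parameter names are hypothetical.

```python
def is_recreated(record_path: str, record_creation_time: float,
                 working_items: dict[str, float]) -> bool:
    """An item still located by its originally recorded path but carrying a
    different creation time was likely deleted and recreated, so it can be
    identified as a new item."""
    return (record_path in working_items
            and working_items[record_path] != record_creation_time)
```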
FIGS. 6-9 illustrate flow charts providing additional details for detecting operations performed in FIG. 5, under embodiments of the invention. FIG. 6 is a method for determining whether an item is moved or deleted. FIG. 6 may correspond to steps 514-518 of FIG. 5, under an embodiment of the invention.
In step 610, a first time value is recorded for each item in working version 50 when working version 50 is made on first terminal 10. In an embodiment, the first time value may correspond to a creation time of an item. The creation time is a property attached to items under certain operating systems, such as WINDOWS. The creation time may record a time value for an item created on first terminal 10 when that item is downloaded from another computer. Thus, when an item of working version 50 is downloaded from file system 40, the first terminal 10 may record the creation time of that item. The creation time may be significant to a thousandth of a second, or to a precision several orders of magnitude finer.
In step 620, location information is created for each item when working version 50 is made. The location information may correspond to segments of file paths or names that can be used to locate an item either on file system 40 or in working version 50. Both steps 610 and 620 occur at t = 0, corresponding to when working version 50 is made, and before any operations are performed. The first time value and the initial location information may be recorded in comparison file 70.
After step 620, the flowchart advances to the time when a synchronization request is made, designated t = f. Step 630 determines whether location information recorded initially (at t = 0) locates the item in working version 50 at t = f. If the determination in step 630 is positive, then step 640 records the item as not being moved. If the determination is negative, step 650 follows. Step 650 determines whether any item in working version 50 at t = f has a corresponding first time value that matches the time value recorded for the unlocated item in step 610. In one embodiment, other items in working version 50 may be checked for a creation time that matches the creation time of the unlocated item.
Given that the creation time may be recorded to a thousandth or even a millionth of a second, another item in working version 50 having the same creation time as the missing item can be assumed to be the unlocated item in a new location. If the determination in step 650 is positive, the item having the same time value is recorded as being moved in step 660. If step 650 determines that no item in working version 50 has the creation time of the missing item, step 670 notes that item as being deleted.
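Assuming the comparison file stores a path and creation time per item, and that the working version at t = f can be summarized as a mapping from current paths to creation times, the logic of FIG. 6 might look like the following sketch; the names and data structures are assumptions, not part of the specification.

```python
def detect_move_or_delete(record_path: str, record_creation_time: float,
                          working_items: dict[str, float]) -> str:
    """Classify a comparison-file item as unmoved, moved, or deleted (steps 630-670).

    working_items maps each current path in working version 50 to the
    creation time of the item found at that path.
    """
    # Step 630: does the initially recorded location still locate the item?
    if record_path in working_items:
        return "not moved"                           # step 640
    # Step 650: look for another item whose creation time matches the missing item's.
    for path, creation_time in working_items.items():
        if creation_time == record_creation_time:
            return f"moved to {path}"                # step 660
    return "deleted"                                 # step 670
```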
FIG. 7 is a flowchart for determining whether any item in working version 50 has been subject to an edit operation, under an embodiment of the invention. A method illustrated by FIG. 7 may correspond to steps 508, 510 and 512 of FIG. 5. In step 710, a first time value is identified in working version 50. The time value may correspond to that item's creation time. In step 720, a second time value is identified for the same item, corresponding to that item's modification time. As mentioned, both the creation time and modification time are time values that are automatically recorded by operating systems such as WINDOWS. Both of these time values may be significant to a thousandth of a second, or even to much greater orders of accuracy. Thus, for an embodiment such as shown by FIG. 7, both the creation time and modification times are assumed to be unique to that item.
In step 730, a determination is made as to whether the modification time is different than the creation time. When an item is created, either as an original, copy or a download, an embodiment provides that the creation time and modification time are the same. Thus, if the creation time and modification times are different, step 740 notes the item as edited. Else, step 750 notes the item as not edited.
It is possible the creation time and the modification time are not initially exactly the same, but within a range of one another. An embodiment may check to determine if the modification time is outside the range of the creation time.
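A minimal sketch of the edit check of FIG. 7, including the tolerance described above; the tolerance parameter and its default value are assumptions.

```python
def is_edited(creation_time: float, modification_time: float,
              tolerance: float = 0.0) -> bool:
    """Steps 730-750: an item is treated as edited when its modification time
    falls outside an allowed range around its creation time."""
    return abs(modification_time - creation_time) > tolerance
```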
FIG. 8 is a method for identifying the compound operation of edit and move, under an embodiment of the invention. In one implementation, a method as shown by FIG. 8 may be used as sub-steps for steps 514, 518, 520 and 522 of FIG. 5.
In step 810, multiple time values are recorded for each item in working version 50 when the working version is downloaded from the file system 40. As noted in other embodiments, a first one of the recorded time values corresponds to a creation time, and a second corresponds to a modification time. The creation time may automatically be created by the operating system of the computer using working version 50. The creation time and the modification time may each be recorded in comparison file 70 and associated with a corresponding item.
In step 820, location information is recorded for each item when the working version 50 is made. The location information may include segments of file paths that can locate the item in working version 50. The location information may also include a name of the item. The initial location information for each item in the working version may be recorded in comparison file 70.
In step 830, a determination is made as to whether initially recorded location information can subsequently locate a corresponding item in working version 50. If the location information does locate the corresponding item, the item is recorded as not moved in step 840. If the location information recorded initially does not locate the corresponding item, then in step 850 another determination is made. Step 850 determines whether another item in the working version 50 has the same creation time as the unlocated item. If this determination is negative, then step 860 notes the unlocated item as deleted.
Otherwise, step 870 makes a determination as to whether the modification time matches the creation time for that item. If the determination in step 870 is positive, the item is noted only as being moved in step 880. If the determination in step 870 is negative, the item is noted as being moved and edited in step 890.
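Combining the two time values, the move-and-edit detection of FIG. 8 might be sketched as follows, assuming the working version is summarized as a mapping from current paths to (creation time, modification time) pairs; the names are hypothetical.

```python
def detect_move_and_edit(record_path: str, record_creation_time: float,
                         working_items: dict[str, tuple[float, float]]) -> str:
    """Steps 830-890: working_items maps current paths in working version 50
    to (creation_time, modification_time) pairs."""
    if record_path in working_items:
        return "not moved"                           # step 840
    for path, (creation_time, modification_time) in working_items.items():
        if creation_time == record_creation_time:    # step 850: match found
            if modification_time == creation_time:
                return "moved"                       # step 880
            return "moved and edited"                # step 890
    return "deleted"                                 # step 860
```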
FIG. 9 illustrates a process for detecting one or more operations on the unchecked items in working version 50, under an embodiment of the invention. With a process shown by FIG. 9, the operations performed on the unchecked items may include at least two operations from a group consisting of create new, copy and edit. In an embodiment, a process such as shown by FIG. 9 may form sub-steps of steps 532-546 in FIG. 5.
The steps 910-980 are performed on individual unchecked items in working version 50, at the time synchronization is requested. Steps 910-980 assume certain other steps have already been performed to detect other operations that may have been performed on working version 50. Specifically, steps 910-980 are performed on unchecked items in working version 50. As illustrated with FIG. 5, the unchecked items are items left over after items identified by comparison file 70 are compared to items of working version 50. The unchecked items can therefore be assumed to have been created after the working version 50 was made. Thus, unchecked items are a copy and/or a new item. The unchecked items may also have been edited after being created.
In step 910, time values are recorded for each unchecked item in the working version. For items created subsequent to working version 50 being downloaded, the creation time may correspond to when a user created that item and stored it with downloaded items in working version 50. The creation time should provide each unchecked item in working version 50 with a unique identifier. In addition, the modification time for each item in working version 50 is recorded. The modification time changes each time the corresponding item is edited. However, if the item is not edited, the modification time should be the same or very close to the creation time for that same item. In an embodiment, it can be assumed that the creation time for each item in working version 50 matches the creation time stored for that item in comparison file 70.
Step 920 determines whether the modification time for each unchecked item matches the modification time of one of the items stored in comparison file 70. An embodiment provides that the modification time of a copy is the same as the modification time of its original. This feature may be implemented through an application operated on first terminal 10. In an embodiment, first terminal 10 operates an operating system that includes this attribute or feature. An example of such an operating system is a WINDOWS-type operating system.
If the determination is positive, step 930 provides that the item is noted as being created as a copy of another item originally downloaded from file system 40. The item can be assumed to not have subsequently been edited because the edit operation alters the modification time. If the determination is negative, then step 940 follows.
Step 940 determines whether the modification time of the unchecked item is before the creation time. If the modification time is before the creation time, step 950 notes the item as a copy of another item. This is because a copy of another item keeps the original's modification time, but is assigned a new creation time when created. Step 940 cannot be used to detect whether an item created as a copy was subsequently edited, as that would alter the modification time to be after the creation time.
If the modification time is not before the creation time, step 960 makes a determination as to whether the modification time matches the creation time. Upon an item being created either as a new item or a copy of another item, the modification time and creation time may be exactly the same, or slightly different depending on the configuration of the operating system or other application affecting the working version 50. If the determination from step 960 is positive, step 970 provides that the item is created as a result of an operation to create a new item.
If the determination from step 960 is negative, step 980 provides that the item is edited, new and possibly also a copy. Thus, step 980 provides for two possibilities. At this point, the modification time and creation time cannot be used to distinguish between the two possibilities. To resolve between the two possibilities, an embodiment may provide that all items identified in step 980 also be subjected to one or more steps of content matching. An algorithm may be used to compare contents of all files in step 980 with contents of other files identified as being new in working version 50, for the purpose of determining if a file is new and edited, or new, copied and edited. The premise may be that the latter would have contents similar to another item identified as being new.
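Under the assumptions above (a copy inherits its original's modification time but receives a new creation time), the classification of an unchecked item in FIG. 9 could be sketched as follows; the function name and return labels are assumptions, and the ambiguous case of step 980 would still require content matching.

```python
def classify_unchecked_item(creation_time: float, modification_time: float,
                            downloaded_modification_times: set[float]) -> str:
    """Steps 920-980, applied to items not identified by comparison file 70.

    downloaded_modification_times holds the modification times stored in the
    comparison file for the items originally downloaded from file system 40.
    """
    # Steps 920-930: a copy of a downloaded item keeps that item's modification time.
    if modification_time in downloaded_modification_times:
        return "copy of a downloaded item"            # step 930
    # Steps 940-950: a copy keeps the old modification time but gets a newer
    # creation time, so its modification time precedes its creation time.
    if modification_time < creation_time:
        return "copy of another item"                 # step 950
    # Steps 960-970: a brand-new, unedited item has matching time values.
    if modification_time == creation_time:
        return "new"                                  # step 970
    # Step 980: ambiguous -- new and edited, or new, copied and edited.
    return "edited; new, possibly also a copy"        # step 980
```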
FIG. 10 illustrates a user-interface 1000 for use with an embodiment of the invention. The user-interface 1000 enables users to elect between versions of the same items changed on different computers. For example, with reference to FIG. 1, a user on first terminal 10 may perform operations on working version items downloaded from file system 40. The file system 40 may be shared, so other users may access it over the network. The other users may operate on an item in file system 40, while a user on first terminal 10 may operate on the corresponding working version item. When the synchronization request is made, a conflict may be presented if the file system item corresponding to the altered item of the working version has been changed by another user who has accessed file system 40.
An embodiment of the invention enables the user making the synchronization request to elect between items in file system 40 and corresponding items in working version 50. An embodiment also allows the user making the request to choose how to decide conflicts between file system items updated by other computers and working version items on the computer making the synchronization request.
The user-interface 1000 includes a first column 1110 and a second column 1120. The first column 1110 provides information about delta items on first terminal 10. The second column 1120 provides information about delta items of file system 40. The delta items of the file system 40 may be identified by comparing the updated file system with comparison file 70. A first portion 1125 of first column 1110 identifies the delta items of working version 50. A first portion of second column 1120 identifies the delta items of file system 40, as updated by other users. A second segment 1118 of first column 1110 identifies the operation or equivalent compound operation performed on the delta item of working version 50. Likewise, a second segment 1128 of second column 1120 identifies the operation or equivalent compound operation performed on the delta item of the updated file system. The operation(s) listed in second segment 1128 are assumed to have been performed by others who access the shared file system 40.
For each delta item listed in first column 1110 and second column 1120, the user may elect to keep the changes or maintain the item as it is on file system 40. If the delta item listed in first column 1110 conflicts with a delta item in second column 1120, the user may decide how to resolve the conflict. For example, an item in file system 40 may be downloaded onto working version 50, and subsequently operated on in working version 50. The same downloaded item may be accessed by another computer and operated on in a different manner. When the synchronization request is made, the computer making that request is presented with a conflict. The user of that computer may be given the ability to resolve the conflict. The user can pick which delta item to keep and use that item when reconciling with file system 40.
FIG. 11 is a block diagram that illustrates a computer system 1100 upon which an embodiment of the invention may be implemented. Computer system 1100 includes a bus 1102 or other communication mechanism for communicating information, and a processor 1104 coupled with bus 1102 for processing information. Computer system 1100 also includes a main memory 1106, such as a random access memory (RAM) or other dynamic storage device, coupled to bus 1102 for storing information and instructions to be executed by processor 1104. Main memory 1106 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 1104. Computer system 1100 further includes a read only memory (ROM) 1108 or other static storage device coupled to bus 1102 for storing static information and instructions for processor 1104. A storage device 1110, such as a magnetic disk or optical disk, is provided and coupled to bus 1102 for storing information and instructions.
Computer system 1100 may be coupled via bus 1102 to a display 1112, such as a cathode ray tube (CRT), for displaying information to a computer user. An input device 1114, including alphanumeric and other keys, is coupled to bus 1102 for communicating information and command selections to processor 1104. Another type of user input device is cursor control 1116, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor 1104 and for controlling cursor movement on display 1112. This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allows the device to specify positions in a plane.
The invention is related to the use of computer system 1100 for implementing the techniques described herein. According to one embodiment of the invention, those techniques are performed by computer system 1100 in response to processor 1104 executing one or more sequences of one or more instructions contained in main memory 1106. Such instructions may be read into main memory 1106 from another computer-readable medium, such as storage device 1110. Execution of the sequences of instructions contained in main memory 1106 causes processor 1104 to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions to implement the invention. Thus, embodiments of the invention are not limited to any specific combination of hardware circuitry and software.
The term "computer-readable medium" as used herein refers to any medium that participates in providing instructions to processor 1104 for execution. Such a medium may take many forms, including but not limited to, non-volatile media, volatile media, and transmission media. Non-volatile media includes, for example, optical or magnetic disks, such as storage device 1110. Volatile media includes dynamic memory, such as main memory 1106. Transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise bus 1102. Transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications.
Common forms of computer-readable media include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, or any other magnetic medium, a CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave as described hereinafter, or any other medium from which a computer can read.
Various forms of computer readable media may be involved in carrying one or more sequences of one or more instructions to processor 1104 for execution. For example, the instructions may initially be carried on a magnetic disk of a remote computer. The remote computer can load the instructions into its dynamic memory and send the instructions over a telephone line using a modem. A modem local to computer system 1100 can receive the data on the telephone line and use an infra-red transmitter to convert the data to an infra-red signal. An infra-red detector can receive the data carried in the infra-red signal and appropriate circuitry can place the data on bus 1102. Bus 1102 carries the data to main memory 1106, from which processor 1104 retrieves and executes the instructions. The instructions received by main memory 1106 may optionally be stored on storage device 1110 either before or after execution by processor 1104.
Computer system 1100 also includes a communication interface 1118 coupled to bus 1102. Communication interface 1118 provides a two-way data communication coupling to a network link 1120 that is connected to a local network 1122. For example, communication interface 1118 may be an integrated services digital network (ISDN) card or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, communication interface 1118 may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, communication interface 1118 sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.
Network link 1120 typically provides data communication through one or more networks to other data devices. For example, network link 1120 may provide a connection through local network 1122 to a host computer 1124 or to data equipment operated by an Internet Service Provider (ISP) 1126. ISP 1126 in turn provides data communication services through the world wide packet data communication network now commonly referred to as the "Internet" 1128. Local network 1122 and Internet 1128 both use electrical, electromagnetic or optical signals that carry digital data streams. The signals through the various networks and the signals on network link 1120 and through communication interface 1118, which carry the digital data to and from computer system 1100, are exemplary forms of carrier waves transporting the information.
Computer system 1100 can send messages and receive data, including program code, through the network(s), network link 1120 and communication interface 1118. In the Internet example, a server 1130 might transmit a requested code for an application program through Internet 1128, ISP 1126, local network 1122 and communication interface 1118.
The received code may be executed by processor 1104 as it is received, and/or stored in storage device 1110, or other non-volatile storage for later execution. In this manner, computer system 1100 may obtain application code in the form of a carrier wave.
While embodiments provided herein (see e.g. FIG. 1) describe reconciliation information as being in the form of metadata information, other embodiments may use portions or all of the content for items in first portion 46 (FIG. 1) to identify changes to items of the working version 50 (FIG. 1). In particular, content matching may be used to determine whether one item was copied from another. An intelligent algorithm may be employed to detect similarities between contents of items using the assumption that items with specific types of similarities are copies of one another.
Content matching may also be used as an additive step in a process such as described with FIG. 9. For example, if an equivalent operation is detected as shown by step 980, it may not be possible to determine whether the item was new, edited and also copied. Content matching may be required to detect whether one item is an edited copy of another item that was new.
Another use for content matching is as a tie-breaker, in case one or both time values of an item are exactly the same as another item's. Considering the significant digits (i.e., one millionth of a second) of time values applied in popular operating systems such as WINDOWS, the chances of two items having exactly the same creation time or modification time are remote. However, if there is an exact match between time values of different items, embodiments of the invention allow for content matching to distinguish between the two items.
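The specification does not prescribe a particular matching algorithm; one plausible sketch, using a standard sequence-similarity measure with an arbitrarily chosen threshold, is shown below.

```python
import difflib

def content_similarity(text_a: str, text_b: str) -> float:
    """Return a similarity ratio in [0, 1] between two items' contents."""
    return difflib.SequenceMatcher(None, text_a, text_b).ratio()

def likely_copy(text_a: str, text_b: str, threshold: float = 0.9) -> bool:
    """Treat two items as probable copies when their contents are highly similar."""
    return content_similarity(text_a, text_b) >= threshold
```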
While embodiments of the invention have been described for synchronizing files operated on different computers, it should be noted that other embodiments may be applied to stand alone or singular computer systems. For example, one application for an embodiment of the invention is to synchronize one file containing multiple entries with a backup file that was created as an archive. No interaction with other computer systems may be needed.
In some applications, it may be more useful to not detect certain equivalent compound operations, but rather assume more simple operations were performed on an item. Alternatively, the equivalent compound operations may be detected, but other operations may be used to update the file system 40. For example, an embodiment of the invention may treat the equivalent compound operation of (Nx)Cy as Nx and Ny. Thus, during synchronization, file system 40 will be instructed to add two new items. Similarly, the compound operation of E(xCy) may be treated as Ny, where file system 40 may be instructed to create one new file, rather than copy X to Y, then edit it.
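As a rough illustration of this simplification, the mapping below treats the compound operations named above as simpler instructions to the file system; the operation labels are hypothetical.

```python
def simplify_operations(compound_op: str) -> list[str]:
    """Map an equivalent compound operation to simpler file-system instructions,
    per the examples above: (Nx)Cy is treated as Nx and Ny, and E(xCy) as Ny."""
    mapping = {
        "new_then_copy": ["create_new_x", "create_new_y"],
        "copy_then_edit": ["create_new_y"],
    }
    return mapping.get(compound_op, [compound_op])
```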
In the foregoing specification, the invention has been described with reference to specific embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the scope of the invention, as defined by the claims. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. | |
Madison Springfield has provided a range of services including stability assessments, communications analysis and survey research to multinational clients from the development sector.
Nationwide Stability Assessment
Development
Ajloun, Amman, Aqaba, Balqa, Irbid, Jarash, Karak, Ma'an, Madaba, Mafraq, Tafiela, and Zarqa, Jordan
Research, Strategy
Challenge
Conflicts in the Levant region were contributing to instability in Jordan. The client required a nationwide assessment to identify which governorates were most susceptible to instability, specify the key drivers of instability, and develop courses of action that could be implemented by the client to address the stability threats.
Solution
A multidisciplinary team composed of survey methodologists, regional and cultural subject matter experts, and indigenous researchers was assembled to conduct the assessment in the governorates of Ajloun, Amman, Aqaba, Balqa, Irbid, Jarash, Karak, Ma'an, Madaba, Mafraq, Tafiela, and Zarqa. A mixed-methods research design that included both quantitative and qualitative data collection techniques was utilized to assess the stability threats caused by economic, security, political, and social factors.
Result
The assessment identified which governorates were most susceptible to stability threats thus enabling the client to prioritize its efforts. In addition, the study identified the specific drivers of instability by population segment and the pathways followed by individuals in those segments that engaged in destabilizing behaviors. Finally, the assessment yielded 34 data-driven communication and non-communication based courses of action the client should implement to address the root causes of the stability threats.
Stability Assessment in the Middle East
Development and Security
Aqaba, Irbid and Zarqa, Jordan
Research
Challenge
The client sought to conduct a baseline assessment of stability conditions in multiple countries across the Middle East. The assessment required the collection and analysis of both qualitative and quantitative data in parallel from 12 locations on a range of economic, political, development and social factors.
Solution
A team of Madison Springfield senior analysts with deep regional expertise developed a framework that provided the basis for data collection during the primary research phase. Madison Springfield then assembled a multinational team of more than 100 local researchers from each of the locations to be surveyed to perform the data collection. Once the data was collected, a Madison Springfield analyst presented the client with the results of the structured analysis at quarterly intervals to identify stability trends in the region.
Result
Previous client attempts at developing a cross-national assessment framework had met with limited success due to the client's inability to identify the indicators most relevant for tracking stability. Experienced Madison Springfield analysts developed the indicator framework and surveys performed by indigenous research teams provided the client with the visibility needed to forecast development trends for the region.
Engagement Assessment
Development and Security
Amman, Aqaba, Irbid and Zarqa, Jordan
Strategy
Challenge
Prior to launching a strategic communication engagement campaign, the client required the establishment of an attitudinal baseline and the development of a measures of effectiveness (MOE) baseline that could be monitored longitudinally to assess the performance of the strategic campaign.
Solution
A multidisciplinary team of social scientists, strategists and local survey researchers carried out a mixed methods qualitative and quantitative research program. The data collected included attitudes towards prevailing events, messages and narratives as well as the primary communication mediums utilized by the population.
Result
The Madison Springfield team provided the client with a measures of effectiveness baseline that included indicator values. This framework enables the client to collect comparative indicator scores in the future, making it possible to actively monitor and measure results as they pertain to specific strategic business objectives.
What is the result of repeated freeze/thaw cycles?
Background: Repeated freezing and thawing of plasma (or serum) may influence the stability of plasma (or serum) constituents. Results: Repeated freeze-thaw cycling resulted in significant and relevant increases of plasma renin activity and a small decrease of adrenocorticotropic hormone.
What happens to starch when frozen?
When a frozen gel melts, the starch chains entangle and force water out of the gel. The food industry prevents this by extracting potato starch and chemically altering it to prevent the long chains from entangling.
How does freeze/thaw damage occurs?
The freeze-thaw cycle is a major cause of damage to construction materials such as concrete and brick assemblies. Freeze-thaw damage occurs when water fills the voids of a rigid, porous material and then freezes and expands.
What does freezing and thawing cause?
Freeze-thaw occurs when water continually seeps into cracks, freezes and expands, eventually breaking the rock apart. Exfoliation occurs as cracks develop parallel to the land surface, a consequence of the reduction in pressure during uplift and erosion.
What is the freeze/thaw cycle?
What is a freeze-thaw cycle? A freeze-thaw cycle is when the temperature fluctuates from above freezing (32°F) to below freezing, and then back to above freezing. This is considered one freeze-thaw cycle, and Minneapolis, Minnesota experiences several freeze-thaw cycles each year.
What is the freeze/thaw method?
The freeze-thaw method is commonly used to lyse bacterial and mammalian cells. The technique involves freezing a cell suspension in a dry ice/ethanol bath or freezer and then thawing the material at room temperature or 37°C.
What starch should be used for better freezing products?
Waxy corn starches, which are essentially composed entirely of the branched-chain molecule amylopectin, are excellent low-cost thickeners for many foods, including frozen sauces, Wicklund says. Dent corn starches consist partly of the straight-chain molecule amylose, with the remainder amylopectin.
Which thickening agent will not deteriorate when frozen?
Arrowroot isn’t affected by freezing; Arrowroot has a very neutral taste, making it good to use with delicate flavours; Tapioca thickened dishes freeze well; Tapioca starch thickens quickly at a lower temperature than other starches.
How do you prevent freeze thaw?
Some common methods for preventing freeze-thaw are:
- Using Deicing Chemicals. One of the simplest ways to prevent concrete freeze-thaw damage is with deicing chemicals.
- Reviewing Concrete Structure and Environment. High-quality concrete can also help to prevent deterioration.
- Applying a Sealer.
How do you do freeze/thaw stability?
Freeze-thaw testing is conducted by exposing the product to freezing temperatures (approximately -10 °C) for 24 hours, and then allowing it to thaw at room temperature for 24 hours.
What are the three steps of freeze/thaw weathering?
Freeze-thaw weathering occurs when rocks are porous (contain holes) or permeable (allow water to pass through). Water enters cracks in the rock. When temperatures drop, the water freezes and expands causing the crack to widen. The ice melts and water makes its way deeper into the cracks.
What is the difference between freezing and thawing?
As nouns, the difference between freezing and thawing is that freezing is the change in state of a substance from liquid to solid by cooling to a critically low temperature, while thawing is the process by which something thaws.
How does the rate of starch retrogradation affect freeze thaw?
Accordingly, the rate of starch retrogradation contributes considerably to freeze–thaw stability. Starch retrogradation is influenced by botanical source, molecular composition and structure, temperature fluctuation, and concentration of starch gel.
How is the stability of a starch gel determined?
► Freeze–thaw stability of twenty-six starch gels was determined and compared. ► Amylose content and distribution of amylopectin short chains influenced syneresis. ► Syneresis of freeze–thawed gel could be predicted from starch structural features.
What kind of things are affected by freeze thaw?
Scree slopes, which are made of fallen rock pieces, often develop at the bases of cliffs and mountains due to freeze-thaw cycles. Freeze-thaw can also affect many human-made items in a variety of climates across regions and seasons.
How long does waxy starch stay stable after freeze?
Waxy starch gels may remain very stable for weeks to months under refrigerated conditions, but the stability breaks down quickly after freeze-thawing ( Schoch, 1968 ). | https://first-law-comic.com/what-is-the-result-of-repeated-freezethaw-cycles/ |
Where is the stress in the word important?
In the word ‘important’, the second syllable is stressed, so it is pronounced more strongly and the word should read imPORtant. The syllables which are not stressed are called the weak or quiet ones.
Where is the stress in the word underSTAND?
Word Stress Rules
| rule | examples |
| --- | --- |
| For compound nouns, the stress is on the first part | BLACKbird, GREENhouse |
| For compound adjectives, the stress is on the second part | bad-TEMpered, old-FASHioned |
| For compound verbs, the stress is on the second part | underSTAND, overFLOW |
What is stress in pronunciation?
Stress is the relative emphasis that may be given to certain syllables in a word, or to certain words in a phrase or sentence. In English, stressed syllables are louder than non-stressed syllables. Also, they are longer and have a higher pitch. If you want to, you can listen to the words to hear the stress.
Why is stress important in speech?
Stress refers to an increased loudness for a syllable in a word or for a word in a phrase, sentence, or question. Intonation and stress are important because they assist in communicating additional meaning to an utterance. It helps to strengthen a specific meaning, attitude, or emotion in an utterance.
What is stress in speaking skills?
In phonetics, stress is the degree of emphasis given a sound or syllable in speech, also called lexical stress or word stress. This means that stress patterns can help distinguish the meanings of two words or phrases that otherwise appear to be the same.
What is the effect of stress on communication?
People who are feeling stressed out may become easily frustrated or angry. This can have a negative effect on your communication skills. A person in a heightened sense of emotion can have trouble choosing their words carefully or expressing things in an appropriate way.
How important is intonation in speech?
Intonation is important in spoken English because it conveys meaning in many ways. Changing the pitch in your voice – making it higher or lower – allows you to show surprise (“Oh, really!”) or boredom (“Oh, really.”). Let’s listen to some intonation patterns used for specific functions.
How can I improve my clear pronunciation?
Speak better: 5 easy ways to improve your pronunciation
- Don’t speak too fast. It’s harder to understand words when they are delivered quickly.
- Listen and observe. Observe how others speak and listen to their pace and intonation.
- Record yourself (audio or video)
- Do exercises.
- Look words up.
How many major functions of intonation are there?
Halliday saw the functions of intonation as depending on choices in three main variables: Tonality (division of speech into intonation units), Tonicity (the placement of the tonic syllable or nucleus) and Tone (choice of nuclear tone); these terms (sometimes referred to as “the three T’s”) have been used more recently.
How can intonation be improved?
The best way to improve your intonation is simply to become more aware of it. By listening carefully to a recorded conversation (YouTube is a good place to start), you will begin noticing how other speakers use intonation to express themselves. Another idea is to record your own voice.
What is intonation of speech?
Intonation, in phonetics, the melodic pattern of an utterance. Intonation is primarily a matter of variation in the pitch level of the voice (see also tone), but in such languages as English, stress and rhythm are also involved. Intonation conveys differences of expressive meaning (e.g., surprise, anger, wariness). | https://easierwithpractice.com/where-is-the-stress-in-the-word-important/ |
Physical activity is defined as any voluntary bodily movement produced by skeletal muscles that requires energy expenditure. These are simple examples, but the point is that the frequently observed discrepancies between measures may not be solely due to bias or recall problems, but rather to inherent differences in reported and measured data or how physical activity intensity is expressed. Try to find the time for some regular, vigorous exercise for extra health and fitness benefits. Students will e valuate activities that can be pursued in the local environment according to their benefits, social support network and participation requirements. Overview of the Individual Physical Activity Measures Guide, Organization and Navigation through the Guide, Description of the Behavioral Epidemiology Framework, Physical Activity and Sedentary Behavior Guidelines, Uniqueness of Assessment in Children and Adolescents, Advanced Concepts in Measurement Research, Case Study 1: Examining the Independent and Joint Associations of Physical Activity and Sedentary Behavior on Body Mass Index Among Middle and High School Students, Case Study 2: Determining Compliance with Physical Activity Recommendations Across Different Grade Levels, Case Study 3: Identifying Predisposing Factors for Active Commuting in Elementary School Children Who Live in Urban and Suburban Settings, Case Study 4: Testing the Potential of a New Recess-Based Physical Activity Program Designed to Increase the Time Children Spend in MVPA During Recess, Background on Accelerometry-Based Activity Monitors, Newer Monitoring Technologies and Methods, Supplemental Considerations for Scaling and Scoring METs in Youth. Caspersen et al. See everyday activities as a good opportunity to be active. previously described physical activity as âAny bodily movement produced by skeletal muscles that result in caloric expenditure.â15 This definition has been widely accepted but a more recent conception, developed through a consensus conference on physical activity research,6,9provides an operational definition to avoid subjectivity and facilitate assessment: "behavior that involves human movement, resulting in phys⦠Work up to at least 150 minutes of aerobic physical activities each week, along with stretching and strengthening exercises at least two days per week, or whatever treatment goal your healthcare team recommends. Here are some ideas for both types of activities. Structured physical activity is more easily reported on recall instruments and is also easier to detect and quantify with monitors so assessments become slightly easier. Regular physical activity is one of the most important things you can do for your health. Therefore, it is important to consider the definition of time frame as the period of time of interest and account for this variability when characterizing the physical activity behavior even though in most scenarios, if not all, the typical behavior is of most interest. This section describes concepts that are important for understanding the remaining sections of the Guide. Sedentary behavior is considered a construct that is independent of physical activity and that can carry different health implications. Cross-training is the combination of various activities to spread the work among various muscle groups. When speaking about or working with individuals with a disability it is essential to put the person first. c Similar to the child vs. 
adult comparison, the differences in body composition and impact on REE will likely lead to systematic misclassifications of activity in children classified as having overweight or obesity (see reference 30). Guidelines for sedentary behavior have been harder to establish and are less consistently endorsed. It helps you maintain balance among your various muscle groups. Those with poor mobility should perform physical activities to enhance balance and prevent falls. Resting energy expenditure is primarily determined by body composition and more particularly by muscle mass, but other important predictors include age, sex, and body fat. Experts say to do regular moderate activity and/or vigorous-intensity activity . Another distinction with assessing physical activity and sedentary behavior in youth is that standard physiologic adaptations and relationships do not always hold when applied to youth. Youâll get the most out of physical activities that you enjoy, and youâll keep coming back for more. These were the first guidelines to specifically address recommendations for sedentary behavior in order to improve and maintain health. Healthwise Staff, Medical Review: [an error occurred while processing this directive]. Rather, âphysical activityâ was thought of as ranging from participation in competitive sports to informal and unstructured physical activity for fitness or fun. It prevents boredom by providing variety. For instance, runners who have developed powerful leg muscles might cross-train to strengthen the upper body, which does not get a good workout from running. This information does not replace the advice of a doctor. The transition to adolescence (i.e., ages 12 to 18 years) is typically characterized by drops in physical activity levels and a greater contribution of team sports toward total physical activity accumulated during the day. A more prominent concern is related to the limited ability of children to provide details of past physical activity events with retrospective recall instruments (e.g., previous week, previous month). A short period of exercise will make students more willing to sit still for another 10 to 15 minutes while you give your lesson. Factors that correlate with physical activities may be used for targeting individuals for interventions and developing interventions that promote such activity in older adults. This definition helps to distinguish physical activity from non-volitional forms of movement (e.g., fidgeting) and focuses attention more on larger contributions to energy expenditure. The term âexerciseâ is viewed as a subcategory of leisure-time physical activity that is more structured (e.g., steady state running) and performed with a well-defined purpose in mind (e.g., improving or maintaining physical fitness). Benefits of Physical Activities in ESL Classes. WHO defines physical activity as any bodily movement produced by skeletal muscles that requires energy expenditure â including activities undertaken while working, playing, carrying out household chores, travelling, and engaging in recreational pursuits. Three other important distinctions with the definition are summarized below. Adults often report physical activity for leisure or recreation but in youth this may be captured as simply âplayâ or unstructured activity. The current U.S. 
When using these measures, ambiguous terms like âphysical activityâ and âmoderate intensityâ can generate confusion as children display a limited understanding of the concept of physical activity and have difficulties reporting the intensity of the activities in which they engage. Second: The new definition of physical activity stipulates that movement needs to be of sufficient magnitude to increase energy expenditure. Healthwise, Incorporated, disclaims any warranty or liability for your use of this information. See more ideas about pe activities, the learning experience, physical education. A critical element in the new definition is the labeling of physical activity as a behavior. However, research has increasingly emphasized the importance of understanding the allocation of time spent in different intensity classifications, as they each contribute directly to overall energy expenditure and health. © 1995-2020 Healthwise, Incorporated. Heather Chambliss PhD - Exercise Science, Author: However, time spent in LPA does not provide benefits that come from participation in MVPA. Long distance track running must not exceed 5,000 metres. Boosts your energy level so you can get more done. In adults aged 18â64, physical activity includes leisure time physical activity (for example: walking, dancing, gardening, hiking, swimming), transportation (e.g. Medical Review: The REE value accounts for about 50 percent to 60 percent of total energy expenditure, but PAEE is usually of more relevance because it is the most variable component of TEE and is highly susceptible to change. However it is clear that children are not just âlittle adults,â24 so special considerations are needed to evaluate and study individual physical activity behavior in this segment of the population. More daily physical activity provides greater health benefits. Children have unique behavioral patterns of physical activity, unique perceptions and cognitions related to physical activity, and distinct physiological and maturational responses and adaptations to physical activity. A wide range of activities has been trialled with older people, from aquarobics through to yoga. If you need to speak to a coworker, walk to that person's office or station rather than using e-mail or the phone. Individuals are encouraged to perform 150 minutes of moderate-intensity physical activity a week but it can be accumulated in different ways. Use your commute to do some extra walking. Benchmark: PE.912.C.2.20 Identify appropriate methods to resolve physical conflict. This increases the amount of oxygen delivered to your heart and muscles. It can help you break out of a slump. Helps you manage stress and tension. Individual Lifetime Activities GVCs: Students will participate several times a week in a self-selected, personally relevant lifetime activity, dance, or fitness activity outside of the school day. 1. Learn how we develop our content . 1. Thus, monitor-based and report-based measures capture different aspects of the same underlying construct of physical activity. Most physical activity research has used a combined indicator that captures both moderate physical activity and vigorous physical activity (MVPA). work), household chores, play, games, sports or planned exercise, in the context of daily, family, and community activities. The current guidelines recommend integration of physical ⦠Jul 21, 2020 - Explore Lindsey Sinclair's board "Individual Activities to Make" on Pinterest. 
Perhaps the most critical distinction is the difference in metabolic cost of physical activity as a result of aging or growth. Explore The Playground 1. Another independent challenge is capturing the representative nature of their behavior (i.e., assessment of habitual physical activity and sedentary behavior patterns). Positive language empowers and therefore inclusive language is used in the article and should always be used. Physical fitness has generally been defined as âthe ability to carry out daily tasks with vigor and alertness, without undue fatigue and with ample energy to enjoy leisure-time pursuits and meet unforeseen emergencies.â15 It can be subdivided into performance-related fitness and health-related fitness but the latter is more relevant for the purpose of this Guide because the majority of physical activity research is focused on health-related outcomes. Nov 25, 2020 - Lesson plans to help enhance the learning experience and engage your students!. Episodic memory refers to experiences of daily living, such as eating breakfast or exercising, and can be replaced by generic memories (i.e., memories of general events or patterns of events) that are used when individual memories or episodic memories are not available. Here are some ideas for both types of activities. It reduces the risk of injuries because the same muscles are not being stressed in the same way during every workout. Key Concepts for Understanding Individual Physical Activity, NCCOR Youth Compendium of Physical Activities. Or you can use exercise machines that give variety to your program by working muscle groups that aren't heavily used in your primary activity. For instance, researchers in the Sedentary Behavior Research Network have come to agreement that sedentary behavior should be defined as âany waking behavior characterized by an energy expenditure â¤1.5 adult-METs while in a sitting or reclining posture.â20 The threshold of 1.5 adult-METs has been generally considered a cutpoint for identifying sedentary behavior in adults. Parents should try to spend as much time as possible outdoors with toddlers at this age. For example, weight-bearing activities such as walking, running, weight training or cycling are good choices for weight management because they help burn kilojoules. In contrast, an unfit person may have very little absolute movement in a day but it may be moderate in intensity. These factors lead to error when using the standardized value of 3.5 ml/kg/min in adults but more significant errors (and systematic bias) when applied in youth populations.24, 29-32 The error is introduced due to known differences in resting energy expenditure for youth. The combination of frequency, duration, and intensity is often referred to as the dose or volume of physical activity and reflects the total amount of movement performed within a specific time period. Attention should be given to determining the time frame needed to obtain reliable indicators of actual behavior because it has important implications for research with youth. Most foundational work on assessing physical activity and energy expenditure has been derived in adults and the simple assumption has been that these also hold in youth. 2. 
Students produced small amounts of VPA during individual and movement activities, although this ⦠The guidelines suggest that children and youth should limit recreational screen time to a maximum of two hours per day and reinforce that lower amounts of screen time can offer additional health benefits.21 Other national and international organizations, such as the Australian Department of Health, also have developed specific guidelines for children and youth while reinforcing the importance of avoiding long continuous periods of sitting time. Depth of Knowledge: Low Complexity, Moderate Complexity . 2. Vary the place.Try a new rout⦠), Stationary bicycling, with vigorous effort, Treading water with fast, vigorous effort, Children's games, like hopscotch, 4-square, and dodge ball, Competitive sports like rugby, field hockey, and soccer, Baling hay or cleaning the barn with vigorous effort. Before you start planning a routine of regular physical activities for yourself, take some time to learn more about the variety of physical activities that you can engage in When engaging in regular physical activity or planning your physical activity routine, it is important for you to know the types of physical activity that you should engage in and the benefits they provide: Third: The new definition of physical activity specifically references its contributions to improving dimensions of physical fitness. See more ideas about elementary pe, teaching, pe activities. ABSTRACT Although physical activity facilitates successful aging, involvement in such activity declines with age. Emphasis is then placed on the unique challenges of assessing physical activity and sedentary behavior in youth, as that population is NCCORâs focus. Health Tools help you make wise health decisions or take action to improve your health. May 5, 2019, Author: Healthwise Staff If you cycle, offer to lead a group of schoolchildren on a bike ride to teach bicycle safety. Cross-training has some important advantages: Some exercise machines, such as elliptical cross-trainers, can help you cross-train. You can boost many of the moderate activities in the left column to a vigorous level by doing them faster or harder. Park several blocks away, or get off the bus a few stops early. Physical activity encompasses all activities, at any intensity, performed during any time of day or night. Levels of physical activity are routinely calculated using established ranges (Rest is 1.0 to 1.4, Light physical activity [LPA] is 1.5 to 2.9, Moderate physical activity [MPA] is 3.0 to 5.9, Vigorous physical activity [VPA] is 6.0+). Understanding Energy Expenditure Terms Total energy expenditure (TEE) is generally divided into three components: Resting energy expenditure (REE), the thermic effect of food (TEF), and the more volitional physical activity energy expenditure (PAEE). In addition to variability in overall behavior, it is important to consider inherent variability within a day (e.g., school-based physical activity vs. home-based physical activity), across days (e.g., days with physical education vs. days with no physical education), between days (e.g., school-day vs. non-school day), and across seasons (e.g., winter vs. summer activity patterns). Here are some activities you can do with your children to keep them active, healthy and happy: Indoor Activities vigorous-intensity activity . Healthwise, Healthwise for every health decision, and the Healthwise logo are trademarks of Healthwise, Incorporated. 
For example, short-term or specific physical activity recall questionnaires (e.g., previous day, previous week, number of exercise bouts) are examples of instruments that refer to episodic memories.27 This can be problematic considering that the ânaturalâ intermittent patterns in youth behavior makes these events quite common and therefore harder to recall or report with sufficient level of detail. Badminton. This captures the volitional nature of physical activity and the various physiologic, psychosocial, and environmental factors that influence it. Energy expenditure is typically expressed in units of kilojoules (kJ) or kilocalories (kcal), but it is also frequently expressed as multiples of resting energy expenditure known as Metabolic Equivalent Tasks (METs).16 Resting energy expenditure is often estimated because it is challenging to measure, and the value of 3.5 ml/kg/min has been widely adopted as the oxygen consumption of a person at rest. Vary the place.Try a new route for walking or biking or even a different room for your exercises or stretc⦠However, different assumptions must be considered for children. For example, the resting energy expenditure in a 13-year-old child can be approximated as 4.2 ml/kg/min or 1.2 adult METs.33 If actual child resting energy expenditure values are used, light intensity would be described as activities eliciting up to 6.3 ml/kg/min (i.e., 1.5 times above their actual resting state) rather than up to 5.3 ml/kg/min (i.e., 1.5 METs x adult resting energy expenditure of 3.5 ml/kg/min). For youth, the movement captured in this behavioral definition can be categorized as either structured (i.e., repetitive, organized activity, often led by an adult and performed in physical education class) or unstructured (i.e., play, unsupervised, activity performed during recess or school breaks). A fit person may report performing very little physical activity but the monitor may record considerable amounts. Comments/restrictions. Suggest holding meetings with colleagues during a walk inside or outside the building. Failure to consider this difference leads to systematic over-estimation of childrenâs physical activity intensity and a misclassification of performed activities.b Error is further compounded due to additional variability associated with differences in lean body mass in children classified as normal weight and children classified as overweight or obese.c. E. Gregory Thompson MD - Internal Medicine Research and public health guidelines have distinguished physical activity and sedentary behavior as independent behavioral constructs and they also may have independent effects on health, although this is less established in youth.19 No universally agreed-upon consensus has yet been achieved on defining sedentary behavior for both children and adults, though concerted efforts have been made for adults. This integrated activity may not be planned, structured, repetitive or purposeful for the improvement of ⦠Competition can be a good motivator because: Helping to plan or organize a competitive event instead of entering it can provide friendship and fun with others interested in the same activity. Recommendations for addressing this issue have been included in Section 10. Minimise the amount of time spent in prolonged sitting and ⦠You can boost many of the moderate activities in the left column to a vigorous level by doing them faster or harder.footnote 1 Adding variety to a fitness program is a good way to keep motivated. 
These recall challenges become clear when children are asked to indicate how many bouts of moderate or vigorous physical activity they performed in the previous day or past week.
| https://chileva.com/unit-season-xcny/page.php?98f663=individual-physical-activities
By Rosalind W. Picard
Part 1 of this book provides the intellectual framework for affective computing. It includes background on human emotions, requirements for emotionally intelligent computers, applications of affective computing, and moral and social questions raised by the technology. Part 2 discusses the design and construction of affective computers. Topics in Part 2 include signal-based representations of emotions, human affect recognition as a pattern recognition and learning problem, recent and ongoing efforts to build models of emotion for synthesizing emotions in computers, and the new application area of affective wearable computers.
Amazon.com review: As a scientist who works in computer development, Rosalind Picard is accustomed to working with what is rational and logical. But in her research on how to enable computers to better perceive the world, she found something surprising: in the human brain, a critical part of our ability to see and perceive is not logical, but emotional. Consequently, for computers to have some of the sophisticated abilities we want, it may be necessary that they perceive and, in some cases, feel emotions. Affective Computing is not about making computers that get grumpy when you enter repeated errors or that can react out of fear like 2001's HAL or The Terminator's SkyNet; it is about incorporating emotional abilities that allow computers to better perform their jobs. At the simplest level, this might mean installing sensors and programming that simply allow a computerized system to determine the emotional state of its user and respond accordingly. The book also mentions techniques such as the ability to include emotional content in computer-mediated communications that work much better than today's emoticons.
The first part of Picard's book introduces the theoretical foundations and principles of affective computing in a completely nontechnical manner. She explores why emotions may soon become part of computing technology and discusses the benefits and the concerns of such a development. Picard raises a number of ethical issues, including the potential for misleading users into thinking they are communicating with another human and the need to build responsible behavior into affective computer programming, along the lines of Isaac Asimov's famous Three Laws of Robotics. In Part 2, the book becomes more technical, although it remains within the comprehension of most laypeople. This section discusses how computers can be designed, constructed, and programmed so that they can recognize, express, and even have emotions. This book is a sound scientific introduction to a subject that feels like a doorway into science fiction.
Similar human-computer interaction books
The Social and Cognitive Impacts of e-Commerce on Modern Organizations
The Social and Cognitive Impacts of E-Commerce on Modern Organizations includes articles addressing the social, cultural, organizational, and cognitive impacts of e-commerce technologies and advances on organizations around the world, looking specifically at the impacts of electronic commerce on consumer behavior, as well as the impact of e-commerce on organizational behavior, development, and management in organizations.
Handbook of Research on Urban Informatics: The Practice and Promise of the Real-Time City
Alive with movement and excitement, cities transmit a rapid flow of exchange facilitated by a meshwork of infrastructure connections. In this environment, the Internet has advanced to become the premier communication medium, creating a vibrant and increasingly researched field of study in urban informatics.
Ubiquitous and Pervasive Computing: Concepts, Methodologies, Tools, and Applications
With the development of ubiquitous and pervasive computing, increased and expanded adaptability to changing needs, preferences, and environments will emerge to further enhance the use of technology between global cultures and populations. Ubiquitous and Pervasive Computing: Concepts, Methodologies, Tools, and Applications covers the latest innovative research findings involved with the incorporation of technologies into everyday aspects of life, from a collaboration of comprehensive field experts.
Beginning CSS Preprocessors: With Sass, Compass, and Less
Find out how preprocessors can make CSS scalable and easy to maintain. You'll see how to write code in a very clean and scalable manner and use CSS preprocessor features such as variables and looping, which are missing in CSS natively. Reading Beginning CSS Preprocessors will make your life much simpler by showing you how to create reusable chunks of code.
- Taking your iPhoto '11 to the max
- Kasparov And Deep Blue
- Enhancing Learning Through Human Computer Interaction
- Computers, phones, and the Internet: domesticating information technology
- Ecscw 2001
Additional resources for Affective computing
Example text
This situation parallels that of another classic signal-processing problem, the problem of constructing "speaker-independent" speech recognition systems. I propose that it can be solved in a similar way. The goal of speaker-independent systems is to recognize what was said regardless of who said it. Even among people who use the same language, this goal is complicated by the fact that two people saying the same sentence produce different sound signals. They may have a different accent, different pitch, and other differing qualities to their speech.
The ability to express emotions is believed to differ among subjects. Actors and musicians, for whom expressing emotions is part of their profession, show some of the greatest fluidity in expression. Most studies on emotion and cognition have been confined to artificial lab scenarios, where they have been severely limited. Not only have they been limited by the environmental context of being in a laboratory, but they have focused on finding universal human patterns of expression rather than finding personal patterns.
Spontaneous Smiles It is said that the attentive observer is always able to recognize a false smile (Duchenne, 1862)—that is, a smile generated by the will instead of by a genuine feeling of happiness. Duchenne observed: The muscle that produces this depression on the lower eyelid does not obey the will; it is only brought into play by a genuine feeling, by an agreeable emotion. Its inertia in smiling unmasks a false friend. Neurological studies indicate that true emotions travel their own special path to the motor system. | http://sundanceresearchinstitute.org/read/affective-computing
The vertebral column, also called the spine or backbone, is made up of multiple vertebrae that are separated by spongy disks and classified into four distinct areas. The cervical area consists of seven bony parts in the neck; the thoracic spine consists of 12 bony parts in the back area; the lumbar spine consists of five bony segments in the lower back area; five sacral* bones; and four coccygeal* bones (the number of coccygeal bones can vary from three to five).
(* By adulthood, the five sacral vertebrae fuse to form one bone, and the four coccygeal vertebrae fuse to form one bone.)
The pelvis is a basin-shaped structure that supports the spinal column and protects the abdominal organs. It contains the following:
Sacrum. A spade-shaped bone that is formed by the fusion of five originally separate sacral vertebrae.
Coccyx (also called the tail bone). This is formed by the fusion of four originally separated coccygeal bones.
Three pelvic (hip) bones. These are:
Ilium. This is the broad, flaring portion of the pelvis.
Pubis. This is the lower, anterior or front part of the pelvis.
Ischium. This is the part of the pelvis that forms the hip joint.
The shoulder is made up of several layers, including the following:
Bones. These include the collarbone (clavicle), the shoulder blade (scapula), and the upper arm bone (humerus).
Joints. These facilitate movement and include the following:
Sternoclavicular joint. This is where the clavicle meets the sternum.
Acromioclavicular (AC) joint. This is where the clavicle meets the acromion.
Shoulder joint (glenohumeral joint). A ball-and-socket joint that facilitates forward, circular, and backward movement of the shoulder.
Ligaments. A white, shiny, flexible band of fibrous tissue that holds joints together and connects the various bones, including the following:
Joint capsule. A group of ligaments that connects the humerus to the socket of the shoulder joint on the scapula to stabilize the shoulder and keep it from dislocating.
Ligaments that attach the clavicle to the acromion
Ligaments that connect the clavicle to the scapula by attaching to the coracoid process
Acromion. The roof (highest point) of the shoulder that is formed by a part of the scapula.
Tendons. The tough cords of tissue that connect muscles to bones. The rotator cuff tendons are a group of tendons that connect the deepest layer of muscles to the humerus.
Muscles. These help support and rotate the shoulder in many directions.
Bursa. A closed space between two moving surfaces that has a small amount of lubricating fluid inside; located between the rotator cuff muscle layer and the outer layer of large, bulky muscles.
Rotator cuff. Composed of tendons, the rotator cuff and associated muscles hold the ball tightly within the glenohumeral joint at the top of the upper arm bone (humerus).
| http://www.stvincent.org/Health-Library/Greystone-Adult/Facts-About-the-Spine,-Shoulder,-and-Pelvis.aspx
Today I want to let you know a bit more about what causes shoulder pain. It is common for a patient to come into a chiropractic clinic complaining of a shoulder pain or injury. Maybe you know the feeling of not being able to lift your arm or put a bra on or reach behind your back all because of a pain in your shoulder.
So again, today you will learn more about the anatomy of the shoulder, including the rotator cuff muscles, and the two most common shoulder injuries we see at the chiropractic clinic: impingement syndrome, and rotator cuff tears & tendonitis.
Table of Contents
- 1 Shoulder Anatomy
- 2 Common Shoulder Pathologies (Injuries)
Shoulder Anatomy
The shoulder girdle is composed the humerus (arm bone), scapula (wing bone) and clavicle (collar bone). These shoulder girdle bones articulate (join) with each other forming the three joints of the shoulder girdle:
- Sternoclavicular joint,
- Acromioclavicular joint,
- Glenohumeral joint.
The primary role of the shoulder complex is to position the hand in space; therefore a large amount of shoulder joint movement is necessary. The increased movement is accompanied with a decrease in bony stability causing an increased demand on the surrounding soft tissues, like the rotator cuff muscles.
The static stabilising mechanisms of the shoulder include:
- The joint capsule,
- Joint adhesion and geometry,
- Ligamentous support.
Dynamically, the shoulder stabilising muscles include the rotator cuff muscles, comprising:
- Supraspinatus,
- Infraspinatus,
- Teres minor,
- Subscapularis muscles,
- Biceps brachii muscle.
The rotator cuff muscles provide stability by inserting into the capsule and have a tightening effect upon contraction. The normal shoulder stabilising mechanisms are compromised by alterations in the normal structural alignment of the scapula and humerus, and by rotator cuff muscle weakness.
Shoulder Pain Anatomy Video
In the following video you can watch why having weak shoulder blade muscles and bad shoulder movements can lead to a shoulder pain problem.
Shoulder Muscles
The shoulder muscles can be broken down into several categories according to their function.
The first category is “the protectors” which are the rotator cuff muscles. These muscles compress the humeral head into the glenoid cavity of the scapula increasing joint adhesion.
The second category is "the pivoters," which pivot the scapula (shoulder blade) under the humeral head during elevation of the arm. The pivoters' function is to maintain muscle/tendon alignment and tension. The pivoting muscles include the serratus anterior and the trapezius.
The deltoid complex and supraspinatus muscles are categorised as “the positioning muscles”. Their main function is to position the shoulder in varying degrees of elevation.
The strength of the shoulder is controlled by the internal and external rotator muscles, which are categorised as "the propellers". The internal rotators, namely the pectoralis, subscapularis, teres major and latissimus dorsi muscles, are much larger than the external rotators of the shoulder. A natural imbalance between the stronger internal rotators and weaker external rotators exists. This imbalance is often made worse by the overdevelopment of the larger muscles (weights at the gym) and by occupational posture (slouched laptop posture). The external rotators are the infraspinatus, teres minor and posterior deltoid muscles.
Common Shoulder Pathologies (Injuries)
1. Impingement Syndrome
Impingement syndrome has the typical presentation of shoulder pain which is worse on overhead activities. Patients suffering from impingement syndrome are usually involved in sports or work that requires overhead positions eg. swimming, tennis, painters, welders etc.
The site of the tenderness depends on the where the actual impingement is occurring. It can be either the front or back of the shoulder.
Shoulder Impingement Risk Factors
There are both structural and functional risk factors in the development of impingement syndrome. Structural causes include a variation in the acromion process which is a structure of the scapula. In some cases this structure is lengthened, hooked or has undergone degenerative changes which decrease the underlying space where the tendons run predisposing them to impingement especially during elevation and internal rotation.
Another risk factor is shoulder instability, which allows excessive upward movement of the humeral head, compressing the tendons.
Shoulder Impingement Prevention
Functional impingement syndrome can easily be prevented by strengthening the rotator cuff muscles, i.e. the supraspinatus, infraspinatus, teres minor and subscapularis. This will prevent excessive movement of the humerus when moving the arm.
Shoulder Impingement Treatment
Management of this condition depends on the severity of the symptoms. In the acute stages, treatment will be aimed at decreasing inflammation and pain; this will be achieved through soft tissue work, shoulder mobilization and manipulation, and exercises.
As the pain decreases the treatment is more focused on increasing the rotator cuff strength while avoiding the impingement position. The final stage of treatment includes training the shoulder through the impingement range.
2. Rotator Cuff Syndrome
Rotator cuff syndrome encompasses many conditions including rotator cuff tears and rotator cuff tendonitis.
A torn rotator cuff muscle commonly occurs due to a traumatic event, for example lifting a heavy object or falling on an outstretched arm; however, in older patients it can also occur insidiously. Repetitive shoulder movement activities with an underlying shoulder problem can lead to chronic degenerative changes in the muscle tendon. These changes then predispose the rotator cuff tendon to tear. A torn rotator cuff muscle can be classified as partial or complete.
Rotator cuff tendonitis occurs in young, active individuals mainly due to the overuse of poorly conditioned soft tissue (e.g. in tennis players), and in older people mainly due to a poor blood supply to the tendon area. It is common in both males and females.
Rotator Cuff Symptoms
The major symptom of a rotator cuff complaint is pain over the shoulder area. Depending on the involved structure the shoulder pain may radiate down the arm, over the deltoid muscle. If there is a tear in the rotator cuff then weakness in your shoulder and difficulty lifting your arm may accompany the shoulder pain.
Rotator Cuff Treatment
Rotator cuff syndrome represents a chronic shoulder impingement syndrome. That means rest alone is often not sufficient for total resolution of the problem, although it might decrease the shoulder pain. It is important to reduce the mechanical stress on the irritated shoulder tendon or soft tissue whilst trying to reduce shoulder pain and inflammation.
The treatment of the rotator cuff syndrome is dependent on its severity. Initial shoulder pain treatment is conservative and is directed towards symptomatic pain relief. This is done by trying to reduce inflammation and promote a mobile scar tissue development. The ultimate objective is to restore normal, pain-free shoulder and shoulder blade movement. As the shoulder pain subsides a gradual strengthening and shoulder rehabilitation exercise program is initiated.
Your chiropractor can help you with the initial shoulder rehab exercises, but it is recommended that you have one-on-one exercise supervision. Consulting a biokineticist or physiotherapist to draw up a shoulder exercise plan for you and watch you do the exercises correctly is important for long-term prevention.
If you have any questions or worries about your shoulder pain and what you should do to help then why not ask a chiropractor a question using our online contact form. | https://chiroclinic.co.za/blog/2010/08/06/shoulder-pain-treatment-rotator-cuff-impingement-syndrome/ |
Complete List of Online Resources (A-Z)
Below are digital media that our libraries subscribe to. They contain loads of information not necessarily found through the Internet. You may need your library card to log into some of these databases.
A to Z World Food – International recipes, hundreds of fascinating culture and ingredient articles, and essential culinary resources from 174 countries.
ArtistWorks – Music & art lessons through free, self-paced videos taught by professionals. ArtistWorks offers 24 instrumental (guitar, piano, violin, banjo, percussion, jazz), voice, and fine-arts classes, from basic to advanced courses.
BC Centre for Disease Control – Investigates and evaluates the occurrence of communicable diseases in BC and is the provincial reporting centre.
BC Laws – Free public access to the Statutes & Regulations of B.C.
BC Open Textbooks – Download free online textbooks from the BC Min. of Advanced Education. The Open Textbook Project provides full post-secondary textbooks on arts, business, health, recreation and tourism, science, social science, trades and upgrading programs.
British Columbia Codes (in Library Use Only) – Includes the current BC Building Code (including Plumbing Services) and the BC Fire Code.
British Columbia Statistics – Statistics on population, business, the economy, labour and society. From BCStats, the central statistical agency of the Province of British Columbia.
Access full (31-page) 2006 Census profiles or any item with a "key" icon beside it here. (Note: Remote access is permitted if your library can provide authentication.)
Clicklaw – An Internet subject guide to authoritative legal resources in BC and Canada. Developed by the Legal Services Society of BC and now maintained by the BC Courthouse Library Society. No library card is required to access this resource on the Internet.
ConsumerSearch – Features product reviews from a variety of quality sources.
DataBC – Provided by B.C. government department, this site contains thousands of data sets on a wide range of topics. Includes a geographic service to help map data, and tools to help conduct research, analyze statistics, develop apps.
ELSANet – English Language Services for Adults (ELSA) lists English language programs in BC and provides links of interest to ESL students and instructors.
EnerGuide – The Canadian guide to energy-efficient appliances, heating and cooling, and more.
Federal Science Library – Search multiple Government of Canada science library collections and repositories from a single place, view or request items from a vast collection of publications in science, technology, engineering and health.
Grant Connect (in Library Use Only) – Grant Connect is a searchable database that you can use in order to find your best funding prospects. Search all Canadian foundations, as well as corporate and government granting programs, to explore which grants may apply to your organization.
Health Canada – Federal department responsible for helping Canadians maintain and improve their health.
HealthLinkBC – Call 8-1-1 to speak with a registered nurse, dietitian, or pharmacist.
Healthy Canadians – A Government of Canada website that gives you access to reliable, easy-to-understand information about health and safety relating to many aspects of our lives.
IndieFlix/RB Digital – Watch award-winning shorts, features, and documentaries from more than 50 countries. Streaming movies available on all Internet-enabled computers, tablets (including iPad and Android), smart phones through the Web browser, and on Roku and Xbox.
Internet Mental Health Encyclopedia – Information on mental disorders, treatments and the research that is taking place.
Khan Academy – “Learn almost anything for free.” 3300 videos explain many subjects.
KnowBC (In Library Use Only) – The leading source of British Columbia information.
LearnNowBC – A single point of entry to distributed learning in British Columbia through the use of Edmark Learning Software.
NewToBC – Find out about services, programs, and resources available to newcomers, offered by BC Libraries to help you succeed in your new community.
RBdigital – A collection of more than 4,000 downloadable audiobooks from Recorded Books, including 100 Canadian titles from Maple Leaf Audio. These audiobooks are all simultaneous use, with no holds or waitlists. Download titles to your home computer, or to a mobile device using the RBdigital app.
Points to the Past – Access to Historical Databases for all BC Residents.
PLOS Medicine – Public Library of Science: Independent medical journal.
Public Health Agency of Canada – The Public Health Agency of Canada aims to promote health, prevent and control chronic diseases and injuries & infectious diseases. They also prepare for and respond to public health emergencies.
Rocket Languages – An award winning interactive online language learning system allows patrons to learn conversational languages (15) at their own pace. The free program allows downloading for easy access at the user’s convenience. 90+ hrs of interactive instruction, with speech comparison, self-testing, and word games. Patrons can track their progress and downloads do not expire.
The Small Business Accelerator – an initiative of the Irving K. Barber Learning Centre at the University of British Columbia, is curated by professional librarians to help you to research your business idea. Use this website to write a business plan, gage your growth potential, learn about market trends and to discover new opportunities. If you’re looking for secondary market research to inform your next step – you’ve come to the right place.
SeniorsBC – Provides BC seniors with resources for planning and living a healthy life as they age. Seniors’ families and caregivers will find great information here, too.
Teen Health & Wellness – Support and self-help for teens on topics including diseases, drugs, alcohol, nutrition, fitness, mental health, diversity, family life, and more.
Tenant Survival Guide – Gives tenants rental tips and a basic understanding of residential tenancy law in BC and what it means to them.
Vehicle Safety and Inspection Standards Online (In Library Use Only) Current vehicle safety and inspection standards and related BC legislation.
WebMD – Excellent information on any health topic in an easy to understand format.
The World Bank – The Data Catalogue provides download access to over 2,000 indicators from World Bank data sources.
World Book Encyclopedia Discover – Online encyclopedia with articles, dictionary, videos, sound and eBooks.
World Book Kids – Educational activities from the World Book online encyclopedia, with videos, sound recordings, and eBooks.
World Health Organization (WHO) – WHO provides information, reports, and data pertaining to a variety of health topics.
Zinio/RB Digital – Online magazines that you can download for free and read on your computer, tablet or smartphone. | https://sgicl.bc.libraries.coop/my-lists/explore/research/databases/complete-list/ |
Although it can’t be stopped, it can be mitigated. No matter what, though, it will significantly burden already-struggling.
Although small kidney stones can pass easily through the urine without you realising, large ones can block the flow of urine.
Kidney pain, or renal pain, is usually felt in your back (under the ribs, to the right or left of the spine). It can spread to other areas, like the sides, upper abdomen or groin. If you have a kidney stone, you usually feel the pain in your back, side, lower belly or groin.
A kidney stone is a hard object that is made from chemicals in the urine. After formation, the stone may stay in the kidney or travel down the urinary tract into the ureter. Stones that don't move may cause a back-up of urine, which causes pain.
May 05, 2020 · Pain caused by a kidney stone may change — for instance, shifting to a different location or increasing in intensity — as the stone moves through your urinary tract. When to see a doctor Make an appointment with your doctor if you have any signs and symptoms that worry you.
Higher temperatures are more likely to cause dehydration, which in turn leads to the formation of kidney stones, report.
While some kidney stones may not produce symptoms (known as "silent" stones), people who have kidney stones often report the sudden onset of excruciating pain.
A kidney stone is a hard object that is made from chemicals in the urine. There are four types of kidney stones: calcium oxalate, uric acid, struvite, and cystine. A kidney stone may be treated with shockwave lithotripsy, ureteroscopy, percutaneous nephrolithotomy or nephrolithotripsy.
Daily coffee consumption may significantly reduce the formation of kidney stones, according to a study recently published in the United States National Kidney Foundation’s (NKF) American.
The patient, Basavaraj Madiwalar, a school teacher in Karnataka’s Hubli, developed sudden pain near his abdomen, and screening showed presence of a large cluster of renal stones (kidney stones).
Throughout California, as COVID-19 infections deplete their staff of nurses, anesthesiologists and other essential workers,
Oct 06, 2021 · A common misconception is that pain is from having a kidney stone when in fact the pain comes from when the stone gets stuck, typically in the ureter. Urine will back up into the kidney, causing the pain.
So you've been told that you have kidney stones. You want to know how much pain you're going to have and when it's going to end. Read on to find out!
Rising temperatures due to climate change will lead to an increase in cases of kidney stones over the next seven decades, even if measures are put in place to reduce greenhouse gas emissions.
Hospital Treatment. Teens whose kidney stones block the urinary tract or cause severe pain or dehydration may need care in a hospital. They might get.
You'll typically feel the pain along your side and back, below your ribs. It may radiate to your belly and groin area as the stone moves down through your.
Easing Pain Of Kidney Stones Jun 2, 2019. #1: Pain Relief. Passing a kidney stone can be a nightmare – even with small kidney stones of less than 5mm. · #2: Anti-Inflammatory · #3. The doctor said to me I’d probably have to go on dialysis, a treatment that does the kidney’s job and removes. Jonathan’s youth would mean he’d
Dec 07, 2021 · "Symptoms of kidney stones vary but may include anything from a mild ache to severe cramping or shooting pain in the lower back, stomach, and even into the groin.
Climate change may cause additional kidney stones – Entornointeligente.com: Climate change in the coming decades could lead to an increase in cases of kidney stones.
Ultrasound technology that moves small, lingering kidney stones toward the ureter to avert episodes of pain or additional surgery may be ready for US Food and Drug Administration (FDA) approval.
Oct 05, 2020 · Kidney stone medications Analgesics. Pain medications are used to manage the pain as a kidney stone passes through the urinary tract. Over-the-counter NSAIDs (ibuprofen or naproxen), or acetaminophen are usually sufficient to manage the pain. On rare occasions, a low dose of opioids including morphine or ketorolac may be used in an emergency.
Sep 8, 2020.
“Kidney stones are almost always on one side, so the pain is either on the right or the left. Usually, you feel it in your back or on your side.
Jun 1, 2018.
Are you experiencing severe pain in your lower back? Is there blood in your urine? Do you have difficulty urinating? You may have a kidney stone.
If you ever have severe pain in your belly or one side of your back that comes and goes suddenly, you may be passing a kidney stone.
May 5, 2020.
Severe, sharp pain in the side and back, below the ribs · Pain that radiates to the lower abdomen and groin · Pain that comes in waves and fluctuates in intensity.
You might have a procedure or surgery to take out kidney stones if: The stone is very large and can't pass on its own. You're in a lot of pain. The stone is blocking the flow of urine out of your kidney.
Sep 17, 2019 · Kidney stone symptoms are typically severe pain in the back and side, and pain that radiates and fluctuates between the lower abdomen and groin. In addition, urination will likely be painful, discolored and odorous.
In 2018, 25% of adults in the United States reported experiencing lower back pain. Because this is a common condition that can be very disruptive to daily life, back pain is a leading reason for people to seek out medical care. Back pain ca.
The pain can be excruciating. In fact, that’s the primary symptom of a kidney stone that’s stuck in the ureter – the tube that carries urine from the kidney to the bladder. “I’ve had women.
Mar 31, 2021 · Kidney stones are one of the main causes of kidney pain. The pain starts when the kidney tries to get rid of the stone and has a problem doing so. This sort of pain generally comes in waves. Kidney stones often manifest in sudden, extreme pain in the lower back, side, groin, or abdomen.
The most common symptom for kidney stone is pain in the mid-back and side (flank). This pain can radiate to the front of the body below the rib cage, and down.
Sep 08, 2020 · Kidney stone pain is not always severe — or easy to identify. The kidneys are located in an area of the torso called the flank. They sit below the lower ribcage in the back. While you can certainly feel intense pain on either side of the back (depending on which kidney the stone is in), that’s not always the case.
Doctors at a prominent Hyderabad hospital have announced that they have removed a record 156 kidney stones through a keyhole opening from a 50-year-old patient. The doctors used laparoscopy and.
The health care provider will perform a physical exam. The belly area (abdomen) or back might feel sore. Tests that may be done include: Blood tests to check.
Kidney Stone Pain Groin Area – Jan 23, 2019 · Intense pain in your abdomen, side, lower back, and groin area is one of the first tell-tale signs that you may have a kidney stone.
Pain is the most common symptom of kidney stones and unfortunately, it is not subtle or tolerable pain. The pain that comes with stones migrating from the kidney through the ureters is excruciating. kidney stone pain is often described as flank pain that begins beneath the rib cage and travels down toward the testicles in men or the labia in women.
May 22, 2015 · INTRODUCTION Kidney Stones, also known as renal calculus or nephrolith, are small, hard deposits of mineral and acid salts on the inner surfaces of the kidneys. If stones grow to sufficient size they can cause blockage of the ureter. Kidney – stone (calcium); gall bladder – stone (cholesterol, oxalates); intestine – jejunum (hard substance).
Jun 27, 2020 · Kidney Stone Pain Relief. You may be able to take steps at home to ease kidney stone pain: Drink plenty of fluids to try to flush out the stone. Aim for 2 to 3 quarts a day. Water is best.
They can be smooth or they can have jagged edges. Small kidney stones can cause pain as they pass through the urinary tract. Large stones can become stuck.
The condition has increased in the last two decades and a high number of cases have been reported in women and adolescents, as per the researchers.
Jul 28, 2021.
However, stones typically do cause symptoms when they pass from the kidneys through the urinary tract. Pain — Pain is the most common symptom. | https://passkidneystone.com/kidney-stone-pain/where-would-kidney-stone-pain-be/ |
Virtual reality (VR) refers to computer and other programmable device technologies that use software to generate realistic images, sounds and other sensations to replicate a real-life or imaginary environment as perceived by a user. VR devices may create an immersive experience, and thereby simulate a user's physical presence in the created environment, to the exclusion of visual, auditory and other sensory data of the actual, physical environment of the user. VR systems are used for gaming, education, simulated travel, remote site tool manipulation and meeting attendance, and other applications.
Excerpted from the article published February 12, 2020. Read the complete paper online at https://doi.org/10.1371/journal.pone.0228169
Many insect populations around the world have faced rapid declines in recent decades due to human-induced landscape changes, including massive increases in the area of land devoted to monocrop agriculture [1,2,3,4]. Declines in bee populations in particular have raised alarm because of the essential pollination services that bees provide [5,6], contributing to the global economy and improving human nutrition by making diverse fruits and vegetables cheaper to grow . The most widely managed crop pollinator species, Apis mellifera L., the European honey bee, contributes an estimated $14 billion yearly in pollination services in the United States alone . In recent years, beekeepers have seen increased honey bee colony mortality in several regions, including the United States [8,9].
While exposure to pathogens, parasites, and pesticides undoubtedly contribute to high colony mortality, poor nutrition likely plays a key role [6,9,10,11,12,13,14]. Like all bee species, honey bees require both pollen, their primary source of proteins and lipids, and nectar, their primary source of carbohydrates [15,16,17]. In temperate regions, colonies may need to collect an estimated 25 kg of pollen [17,18] and potentially over 300 kg of nectar to function during the summer and survive the cold winters. In addition to quantity, diet quality is also very important. Diverse pollen sources help honey bees combat pathogens and parasites [20,21,22,23] and increase their ability to detoxify pesticides . Colonies living in the temperate zone must respond to frequent changes in the species of blooming flowers from spring to fall [25,26] and may experience periods of dearth where temperatures remain high but few rewarding flowers bloom [27,28].
Researcher Morgan K. Carr holds a sugar-water feeder and marks honey bees in order to get them to go back to the observation hives and waggle dance for known distances. “That gave us calibration data, or relationship between distance and waggle run duration in their dances, that we used when mapping waggle dances for flowers,” she says.
Many groups around the world are interested in helping maintain healthy bee populations by planting flowers for bees [29,30,31]. Simultaneously, many organizations are more broadly interested in restoring native habitats that had been converted to agriculture or other human uses . Before European colonization, the Upper Midwest of the United States mainly consisted of prairie lands, defined as temperate grasslands with a moderate rainfall and deep-rooted perennial forbs [33,34]. Today less than 2% of the original prairies remain , and there is great interest in restoring native prairie habitats . Unfortunately, governments and organizations interested in helping bees often have limited information about how different land management schemes or seed mixes will affect bee foraging success. Prairie restoration projects are very likely to benefit native bee species, especially bees that specialize on prairie flowers [37,38,39]. It is less clear to what extent non-native honey bees will be attracted to and use patches of flowers in reconstructed prairies.
Honey bees’ unique life cycle and foraging strategy may affect their use of flowering resources in prairies. Honey bees are generalist foragers that have a very wide foraging range, with most foraging trips occurring within 4 km of the nest [40,41,42,43] but some trips as far as 14 km away [44,45]. Honey bee colonies contain thousands of foragers that can communicate with each other about the locations of the most rewarding patches of flowers using a signal called a waggle dance, a behavior unique to bees in the genus Apis [26,44]. This signal involves repeated figure-eight runs in which the dancer waggles her abdomen back and forth during the straight middle portion of the figure-eight, called the waggle run. The waggle run provides dance follower bees with a vector containing both the direction of the flower patch relative to the azimuth of the sun and the distance to the patch . Foragers will only dance to advertise the locations of flower patches that they perceive as profitable (having a favorable ratio of nutrients gained to energy expended) . These characteristics of honey bee foraging behavior may lead colonies to focus on the densest patches of flowers and ignore sparser flowers within non-rewarding grasses, as is common in reconstructed prairie habitats. However, the density and size of flower patches in prairies change across the season as different species of flowers bloom. Even if honey bee foragers only perceive flowers in prairies as profitable resources during part of the foraging season, access to prairies may boost colony health by supplying nectar or pollen when there is a dearth of non-prairie food sources .
The honey bee waggle dance provides us with a window into how honey bee foragers perceive the resources that they encounter in the landscape around their hives. At the level of the colony, the proportion of dances advertising flower patches in a given habitat serves as a measure of the decision-making process that allocates foragers among habitat types based on their relative profitability. At the level of the individual, if a forager brings back food from a particular species of flowers and dances to advertise the site that she visited, it indicates that she perceived those flowers as sufficiently profitable to recruit nestmates. It is currently feasible to determine which flower taxon was advertised in a dance from the pollen that the dancer carried but not from nectar carried by dancers [48,49]. In addition, characteristics of dances advertising nectar sources can give more nuanced information about the perceived profitability of the resource. The total number of waggle runs that a nectar dancer performs in a dance is correlated with her assessment of the profitability of the resource advertised. This relationship has been demonstrated multiple times with artificial sugar-water feeders [50,51,52], but so far has not been demonstrated with pollen or pollen substitutes. Multiple studies have also shown that colonies tend to advertise sites at greater distances in order to find profitable nectar sources during dearth periods [27,40,42,55].
Therefore, we took advantage of the information in honey bee waggle dances to answer two main questions about how honey bee colonies perceive flowers in prairie habitats: first, is there any part of the season in which the foraging force of a honey bee colony will devote a large proportion of its recruitment efforts (waggle dances) to pollen or nectar sources within prairies? To better understand seasonal changes in the proportion of dances advertising prairies, we asked two additional questions: 1) During times of year when a large proportion of nectar dances advertise sources within prairies, do dances for prairie sites include more waggle runs and thus indicate a higher perceived profitability than dances for sites outside of prairies, and 2) Is there a seasonal change in the average distance of advertised nectar sources? For our second main question, we asked whether honey bee foragers will advertise patches of native prairie flowers as high-quality pollen sources, and, if so, which taxa will they advertise? To answer these questions, we placed honey bee colonies in glass-walled observation hives with access to two large, reconstructed prairies. The glass-walled hives allowed us to record the dances performed by members of these colonies throughout the summer and early fall. From these recordings we decoded the direction and distance information within the dances, mapped them as probability density distributions using a Bayesian modeling approach , and determined what percentage of the dances advertised sites within prairies during different parts of the season. At sites with a seasonal change in the proportion of dances for food sources in prairies, we also quantified waggle runs per dance to determine if the bees perceived the prairie flowers as more profitable compared to flowers outside of prairies. We also explored whether there were seasonal changes in the average distance of advertised nectar sources. Finally, to both map and identify the taxa that the foragers considered to be profitable pollen sources, we captured a subset of dancing bees who were carrying pollen loads and identified the pollen source using microscopy and DNA barcoding. | https://belwin.org/2019/08/honey-bee-dancing-at-belwin/ |
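To make the decoding step concrete, here is a deliberately simplified, hypothetical sketch of how a single waggle run could be turned into a map location. The study itself fit calibration data and mapped dances as probability density distributions with a Bayesian model; the linear calibration constants, the function, and the example inputs below are invented placeholders, not values from the paper.

```python
import math

# Hypothetical linear calibration: distance (m) = slope * waggle-run duration (s) + intercept.
# The real calibration came from bees trained to feeders at known distances.
CAL_SLOPE_M_PER_S = 1300.0  # placeholder value
CAL_INTERCEPT_M = 100.0     # placeholder value

def decode_waggle_run(duration_s: float, angle_from_vertical_deg: float,
                      sun_azimuth_deg: float, hive_xy=(0.0, 0.0)):
    """Convert one waggle run into an (east, north) estimate of the advertised site.

    The run's angle relative to vertical on the comb encodes the flight direction
    relative to the sun's azimuth; its duration encodes distance via the calibration.
    """
    distance_m = CAL_SLOPE_M_PER_S * duration_s + CAL_INTERCEPT_M
    bearing_deg = (sun_azimuth_deg + angle_from_vertical_deg) % 360.0
    bearing_rad = math.radians(bearing_deg)
    east = hive_xy[0] + distance_m * math.sin(bearing_rad)
    north = hive_xy[1] + distance_m * math.cos(bearing_rad)
    return (east, north), distance_m, bearing_deg

# Example: a 1.2 s waggle run, 40 degrees right of vertical, sun at azimuth 200 degrees.
site, dist, bearing = decode_waggle_run(1.2, 40.0, 200.0)
print(f"Advertised site ~{dist:.0f} m away at bearing {bearing:.0f} degrees "
      f"({site[0]:.0f} m east, {site[1]:.0f} m north of the hive)")
```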
Search results from the Ohio State University College of Food, Agricultural, and Environmental Sciences (CFAES) site:
- Farm Science Review announces 2021 Hall of Fame inductees – https://cfaes.osu.edu/news/articles/farm-science-review-announces-2021-hall-fame... (The annual Farm Science Review trade show, now in its 59th year, has announced Jim Karcher of Apple Creek, Gerald Reid of Wooster, and Ken Ulrich of South Charleston as the 2021 inductees into its Hall of Fame.)
- Beneficial Plant-associated Microbiomes and Plant Pathology Research (PLNTPTH 5005) – https://plantpath.osu.edu/courses/plntpth-5005
- Safety Guidelines Related to COVID-19 – https://fsr.osu.edu/visitors/safety-guidelines-related-covid-19
- 2021 Farm Science Review to be live and in person – https://fsr.osu.edu/2021-farm-science-review-be-live-and-person
- Mobile App – https://fsr.osu.edu/visitors/mobile-app
- 2020 Ag Hall of Fame Inductees Recognized – https://fulton.osu.edu/news/2020-ag-hall-fame-inductees-recognized
- Water Quality in the Western Lake Erie Basin – https://fulton.osu.edu/program-areas/agriculture-and-natural-resources/water-quality
- Lots to see, learn at Gwynne Conservation Area: Farm Science Review 2021 – https://cfaes.osu.edu/news/articles/lots-see-learn-gwynne-conservation-area-farm...
- Farm Science Review media credentials available
- 2021 Hall of Fame Inductees Recognized – https://fulton.osu.edu/news/2021-hall-fame-inductees-recognized
| https://researchoperations.cfaes.ohio-state.edu/search/site/wooster%20ag%20facilities%20consolidated%20research%20farms?f%5B0%5D=hash%3Abm1igs&f%5B1%5D=hash%3A608jnc&f%5B2%5D=hash%3Atpfulo&f%5B3%5D=hash%3A1pcg3d&f%5B4%5D=hash%3Apwxd80&f%5B5%5D=hash%3Am25q72&f%5B6%5D=hash%3Asqe1np
Venezuela Begins Debate On Future Without Chavez
Venezuelan President Hugo Chavez, seen talking to the media in Caracas on June 19, says he will win October's election with more than 60 percent of the vote after a poll showed he held a large lead over his opposition rival Henrique Capriles.
In Venezuela, people are beginning to talk about what was once unthinkable: Just who could succeed the all-powerful President Hugo Chavez?
He has been battling cancer, which for much of this year forced him to suspend his once-frequent TV appearances. On Monday, Chavez declared himself free of the cancer, though it's not the first time he's said he was cured.
For 13 years, he has consolidated his hold on power while nationalizing farmland and seizing private companies. And now, despite his infrequent appearances, he remains a political force.
The charismatic populist has lately been projecting that image in fiery speeches ahead of the Oct. 7 presidential election. Chavez is seeking a fourth term even though he has told Venezuelans that he has had three surgeries to remove a tumor.
The cancer treatments have sidelined him for much of the year and sparked speculation that he could die. Aside from Chavez's own declarations, the details of his condition remain a state secret.
But at meetings of the movements that make up Chavez's coalition, there's discussion of how the president's so-called socialist revolution may have to go on without him.
One of those movements, Venezuelan Revolutionary Currents, is led by Ramses Reyes. He says socialism in Venezuela won't end if Chavez is no longer here; it will continue to expand.
Politics professor Luis Alberto Butto says that kind of debate is going on throughout the government's power structure. Obviously Chavez's socialist party needs to consider all possible scenarios that may take place in the short to long term, he says.
Butto adds that if Chavez is forced to drop out before the election, his party would have to choose a successor to run in his place. A six-year term is at stake.
And if Chavez runs and wins, and then dies in the first four years of the new term, new elections would have to be called.
The opposition challenger in the October election, Henrique Capriles, says he wants Chavez to run so he can beat him and overturn his socialist revolution.
We want him to have a long life, Capriles told supporters recently, so he can see the Venezuela of progress that we'll build.
Possible Successors
But if Chavez is forced out by his illness, three aides have emerged as possible successors. They are loyalists who've increasingly been seen inaugurating public projects on state TV or accompanying Chavez during his ordeal.
Diosdado Cabello, the president of Congress, is one of them. In 2002, he worked to ensure Chavez's return after coup plotters briefly ousted the president and read aloud a televised proclamation taking power.
There's yet another possible scenario. Chavez could go with his brother and confidant, Adan. He's close to Cuba's communist government, which supports Chavez. He also introduced a young Chavez to radical political thought.
And then there's Rosa Virginia, one of Chavez's daughters. She doesn't have an official job, but in clips from state TV, she's shown at her father's side on his trips to Cuba for cancer treatments, listening silently as he talks to the cameras.
Eduardo Semtei, who helped take Chavez to power in the 1990s, believes the president will want someone with his last name in the presidential palace.
Semtei says history shows that all-powerful leaders like Chavez often turn to family in time of crisis.
Liquidated damages clauses are commonly upheld to be enforceable, even if the result of that clause may seem “unfair”, for example, where the clause only entitles the injured party to a nominal amount.
In assessing an entitlement, Courts will turn their attention towards whether the effect of the clause is such that it would be considered a “penalty”, and in doing so, will consider whether the quantum of the liquidated damages is a “genuine pre-estimate” of the loss that party is likely to suffer as a result of the other party’s breach.
Like all things, the devil is in the detail – careful consideration must be given to the wording, and amount, contained in a liquidated damages clause to ensure that it remains a valid and enforceable means for compensation. With considered drafting, a liquidated damages clause can both, stimulate performance, and compensate late completion.
Penalty
A “penalty” will arise where the clause aims to (or unintentionally) punishes a party for non-observance of a contractual obligation.
If a clause is found to constitute a penalty, it will have no legal effect and will be unenforceable. Whilst it will be dependent on the circumstances of each contract, the Courts have provided some guidance through various tests and considerations which assist in determining whether or not a liquidated damages clause, constitutes a penalty.
These factors include:
- Where the sum (or rate) of liquidated damages is an extravagant and unconscionable amount when compared to the greatest loss that flows from the breach;
- If the relevant contractor breach (which would raise an entitlement to liquidated damages) relates solely to the non-payment of money, and the principal’s entitlement would exceed the amount which should have been paid by the contractor; and
- The amount of liquidated damages is a single lump sum, which is payable on the occurrence of one or a range of breaches, which vary in seriousness and some only warrant trivial damages.
A key factor in determining whether the entitlement is extravagant and unconscionable, is whether the amount being claimed by the party entitled to compensation, is “out of proportion” with all the loss they are likely to suffer. In other words, the clause will not be unenforceable simply because there is some difference in value between amount a party is entitled to be compensated, and the loss actually suffered – it must instead be totally disproportionate.
Examples where the Courts have found that a liquidated damages clause may constitute a penalty include:
- Where the sum does not compensate the principal for the delay to practical completion because that delay is incapable of causing any financial loss (and is therefore extravagant in comparison with the greatest loss that could have been be suffered by reason of delay);
- Where the rate of liquidated damages for delayed completion of a project was exceptionally high when compared to other similar contracts and entitlements for the same breach;
- Where the amount payable does not protect the principal’s legitimate commercial interests;
- Where the clause punishes the contractor, rather than compensating the principal; and
- Where an entitlement to liquidated damages arises under a contract, and is payable, upon an event that did not amount to a breach of contract.
Plainly, a liquidated damages clause will be found to constitute a penalty where, in all the circumstances, the rate of compensation is not considered to be a “genuine pre-estimate” of the principal’s loss.
Drafting Considerations – Genuine Pre-Estimate
Whilst it is the effect of the clause (rather than the words used) which will determine whether a liquidated damage clause constitutes a penalty, careful consideration should be given to the circumstances in which an entitlement to claim liquidated damages will arise, as well as the compensation rate, to ensure the clause remains effective and enforceable.
As mentioned above, a liquidated damages clause will constitute a penalty if the rate for compensation is not a genuine pre-estimate of the principal’s loss.
When drafting your liquidated damages clause, ensure the rate stipulated reflects the negotiated level of risk that the parties agree to in circumstances of a breach, and that it is otherwise a “genuine pre-estimate” of the principal’s loss, calculated at the time of entering into the contract.
Key items to consider when preparing your clause, include:
- An amount is not a genuine pre-estimate, merely because one party believes it to be – it must objectively be so;
- For commercial projects, this amount can often be estimated by calculating possible heads of loss, which may include loss of rent / delayed profit of sale, additional supervision, and admin costs – it can be helpful where the parties identify the sum / formula, or assumptions applied in calculating the rate of liquidated damages (a simple illustrative calculation follows this list);
- Do not simply insert $1 or "nil", and always check for any default values which may exist in your standard or template contract; and
- Note in your clause that the parties agree the rate of liquidated damages constitutes a “genuine pre-estimate” (although this alone is not sufficient to prevent your clause from being invalidated as a penalty).
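By way of illustration only (the figures and heads of loss below are invented for the example and are not legal or financial advice), the arithmetic behind a genuine pre-estimate is usually no more than a transparent, documented sum of the expected weekly or daily losses:

```python
# Illustrative only: hypothetical weekly heads of loss for a delayed commercial project.
heads_of_loss_per_week = {
    "lost rent / delayed profit on sale": 12_000,
    "additional site supervision": 3_500,
    "extended administration and overheads": 1_500,
    "extended insurance and finance costs": 2_000,
}

weekly_rate = sum(heads_of_loss_per_week.values())
daily_rate = weekly_rate / 7  # if the contract expresses liquidated damages per day

print(f"Pre-estimated loss: ${weekly_rate:,} per week (about ${daily_rate:,.0f} per day)")
# Keeping a record of these heads of loss and the assumptions behind them, made at the
# time of contracting, helps show the rate is a genuine pre-estimate rather than a penalty.
```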
Entitlement to General Damages
As mentioned in Part 1 of this Series, generally, where a valid and enforceable clause exists, parties cannot claim general damages in addition to, or instead of, liquidated damages. However, where the court finds that the clause constitutes a penalty (and is therefore no longer valid and enforceable), the injured party will be entitled to claim general damages that it can prove.
Lamont Project & Construction Lawyers
Our Team have the industry knowledge and experience to assist both Principals and Contractors in all major projects and contractual drafting and contractual disputes. If you would like to discuss any matters raised in the above article or this series as it relates to your specific circumstances, please contact Lamont Project & Construction Lawyers.
The contents of this article are for information purposes only; they do not discuss every important topic or matter of law, and they are not to be relied upon as legal advice. Specialist advice should be sought regarding your specific circumstances. | https://lpclawyers.com/liquidated-damages-part-2-too-good-to-be-true/
ASKING OPEN-ENDED QUESTIONS
from Dwight:
Elvis Costello keeps singing in my head: “What’s so hard about peace, love and understanding?” It starts me thinking: What’s so hard about asking a question?
Basically, we are asking our people to use a Socratic approach when interviewing each other, asking questions that help them think differently and consider new possibilities. We can encourage them to use questions for building and deepening relationships, solving problems, and developing and empowering their practice. Most importantly, the right questions, delivered with sincerity and openness, help them identify new solutions to their real issues.
Types of Questions
Questions can be categorized into three types: 1) leading; 2) open-ended; and 3) closed. Leading questions have an implied or explicit answer or assumption. Trial attorneys use them expertly.
An open-ended question is one that does not lead and is appropriate when you want to engage another in a deep dialogue. True open-ended questions have the following characteristics:
- They are focused on learning more about the individual, not about satisfying a need of the asker.
- They are asked neutrally, with no emotional charge, hidden suggestion, assumption or contradictory body language from the asker.
- They require some thought and reflection.
- They promote insight.
- They uncover opinions and feelings.
- They allow the question asker to be in control by steering the direction and depth of the conversation.
- They usually require more than a one or two word answer.
Closed questions can be answered quickly, usually without significant thought. Use them when you want a shorter conversation.
Asking the Right Questions: How to Deepen Relationships with Open-Ended Questions
“Listening is not batting practice, where you hit back everything that comes your way.” —John Gottman
Have you ever been asked a question and felt the other person wanted a specific answer (theirs)? How you ask questions conveys much more information than just the question itself and has a significant impact on the quality of your relationship. Most of the time, the questions people ask each other are statements, opinions, judgments, or directives rather than genuine questions. Usually, we put our opinion out there in tone, body posture, or language by asking closed-ended questions, questions where the response is limited to “yes” or “no.” An open-ended question invites a very different kind of experience; it is an invitation for a dialogue of ideas and feelings, an invitation to dance. Asking open-ended questions requires certain skills, including a sense of security in yourself, trust and respect for your partner’s answers, and openness to opinions different from your own. Asking open-ended questions may just be the best thing you can do for your relationship.
An open-ended question is just that—the answer is open-ended, you are not trying to predict or instruct the outcome but want an authentic response from your partner. Whereas closed-ended questions ask for a one word response, open-ended questions invite discussion and sharing. Open ended questions convey the feeling, “Your experience is important to me and I would love to hear about it.” In contrast, closed-ended questions are more like a duel than a dance; they convey the message, “My experience is more important than yours.”
Open-ended questions express a desire for communication and a fondness for your partner. Benefits of asking these kinds of questions include communicating a deep feeling of respect for your partner, and opening the door to a synergy of ideas. They convey interest and are a bridge for communication, cooperation, and understanding. Open-ended questions allow your partner to share thoughts or feelings and to get into the flow of their thoughts and feelings, whereas closed-ended questions can put pressure on your partner for a quick decision even though he or she may not have decided yet.
The following suggestions can help you use open-ended questions to deepen intimacy:
Self Manage: Be clear of your motives when asking a question. Is it about your experience and needs or are you curious about the other person’s experience? Separate your wants from your partner’s: Often, communication is shut down when our own wants are prioritized in our questions. By inviting an open-ended response you are increasing the likelihood your partner will reciprocate and ask for your thoughts; you may then reach a compromise on a topic.
Focus Your Questions: If you ask, “What did you do at work today?” you might get “Nothing” as the answer. But if you ask, “Tell me about the project you are working on,” you may get more of a response and can then broaden into the day in general. Too broad a focus can be confusing and disconcerting; starting with specifics often makes it easier for the other person to answer.
Invite an answer: Ask questions that allow for a greater response than a simple “yes” or “no.” Avoid, “Do you...” and “Is this...” when your goal is to connect and share information. This means that the outcome may be an unknown. Use questions like, “What do you think of...” and “How do you find...?”
Use Mindful Listening: When listening, many people are simply gathering evidence for their rebuttal, waiting for their time to speak and not really listening. Instead, focus on the words your partner is saying and be curious: “I wonder what she thinks of this...?” As the saying often attributed to Walt Whitman goes, “Be curious, not judgemental.” This active listening helps your partner respond in more depth.
Be Okay with no answer: If your partner is not ready to talk, you might not get an answer right away. If you respond with anger (“Well, see if I ever ask you about your day again!”), you decrease the likelihood of an answer next time.
Start Small: Practice with topics that are not high-stakes issues. Rather than asking “What do you think about living together?” if this has been a source of contention, talk about the upcoming trip you have been planning together: “How do you feel about staying longer at Disneyland?” Once you have built open-ended questioning skills, you can move to bigger issues.
Examples of open-ended questions:
- What do you think of your job?
- How does this house suit you?
- Tell me more about this issue.
- How are your tennis lessons working out for you?
Compare these to the following closed-ended questions:
- Do you like your job?
- That movie was too long, wasn’t it?
- Do you think I am right or not?
- Do you like tennis?
Closed-ended questions have their use; at the drive-through, asking “Would you like small or medium?” makes more sense than “How do you feel about medium-sized drinks?” Open-ended questions are useful when intimacy, connection, and understanding are the goals. They are the Lego blocks of relationships, the small pieces that, when put together over time, create a sense of intimacy, trust, and closeness. Asking open-ended questions means “Please share your thoughts and emotions with me. I value you and I value your ideas.”
How do you ask open-ended questions in your relationships? (See, it’s easy to do) One way to do so is to remind yourself that some questions are about much more than the answers—they are an invitation to dance. How do you feel about dancing?
References: Gottman, J. (2001). Making marriage work. [audio speech]. Better Life Media. Rogers, C. (1995). On becoming a person. New York: Mariner.
from the IFF Competency Workbook:
As Feldenkrais teachers we know that learning processes need support and respect to be useful and valuable. Feedback can powerfully affect one’s self-image and must be offered considerately and with keen awareness. Following these guidelines for asking questions and providing feedback will ensure that Peer Assessment is a pleasing experience.
Questions
When one is listening and supporting a peer in the assessment process, mirroring and asking the right kind of questions may help one’s colleague discover or bring awareness to that which is out of their habitual perceptual field. Imagine how you might “continuously envelope your partner with awareness by asking respectful questions”.
Some tips for asking questions:
Open questions:
These kinds of questions provoke reflection and associations.
For example:
- “What’s your intention behind this/that...?”
- “What is your thinking about this/that...?”
- “How did you do this/that before you learned this/that?”
- “How did you come to make this decision?”
Closed questions are those that can be answered with a yes or no. They are not as helpful in this process.
Resource-related questions:
These kinds of questions focus attention on the knowledge, abilities, and situational memories that one may access in any situation.
For example:
- “What helped you in this/that situation?”
- “Do you know another way of doing the same?”
- “What knowledge and abilities supported you in this situation?”
- “Could you tell me, step by step, how you managed this situation?”
Questions that focus attention on the center of a problem are not helpful in this process. | https://www.iopsacademy.com/asking-openended-questions |
The history of American art from colonial times to the present. Painting, sculpture, architecture, and crafts will be examined within their historical, political, and sociocultural background. Students learn to identify works by pivotal artists, recognize techniques and formal visual elements, and critically analyze artwork within its contextual framework.
UC/CSU
After successful completion of this course, students will be able to:
Outcome 1: Identify and analyze the formal visual elements and techniques of individual works of art in different media.
Outcome 2: Define the various styles of American Art, and demonstrate the ability to compare and contrast stylistic aspects and trends.
Outcome 3: Evaluate works of art in relation to the sociological, religious, historical, and cultural context in which they were created.
Outcome 4: Identify individual works of art and architecture by pivotal American artists.
Outcome 5: Summarize the concepts that define and distinguish the American visual tradition, and demonstrate the ability to discuss and assess these concepts contextually. | https://www.ccsf.edu/Schedule/CD/ART%20118.htm |
an experiment that confirmed that atoms have a magnetic moment, the projection of which on the direction of an applied magnetic field takes on only specific values, that is, is quantized with respect to direction in space. The experiment was performed in 1922 by the German physicists O. Stern and W. Gerlach. They investigated the passage of a beam of Ag atoms (and, later, beams of atoms of other elements) in a highly inhomogeneous magnetic field (see Figure 1) in order to verify the theoretically obtained equation for the space quantization of the projection μz of the magnetic moment μ0 of an atom on the Z direction. The equation is μz = μ0m, where m = 0, ±1, . . . .
A force F = μz ∂H/∂Z acts on an atom having a magnetic moment and moving in a magnetic field H that is inhomogeneous along the Z direction. The force deflects the atom from its original direction of motion. If the projection of the magnetic moment of an atom could vary continuously, a broad diffuse band would be observed on the photographic plate P. However, in the Stern-Gerlach experiment, the atomic beam was found to be split into two components that were symmetrically shifted by a quantity Δ with respect to the original direction of propagation; that is, two narrow bands appeared on the plate. The results of the experiment indicated that the projection μz of an atom’s magnetic moment on the direction of the field H takes on only two values, ±μ0, that differ in sign. In other words, μ0 is oriented in the direction parallel to H and in the opposite direction. The value of the magnetic moment μ0 that was measured in the experiment from the shift Δ turned out to be equal to the Bohr magneton.
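For convenience, the two relations used in this entry can be restated in LaTeX, together with a standard kinematic estimate of the beam displacement. The field-region length l, the atomic mass M and the beam speed v are not given in the source; they are introduced here only to make the estimate explicit, and the last line is a textbook approximation rather than a result quoted in the entry.

\mu_z = \mu_0 m, \qquad m = 0, \pm 1, \ldots
F_z = \mu_z \, \frac{\partial H}{\partial z}
% displacement accumulated while crossing a field region of length l at speed v (assumed kinematics)
\Delta \approx \frac{1}{2}\,\frac{F_z}{M}\left(\frac{l}{v}\right)^{2} = \frac{\mu_z}{2M}\,\frac{\partial H}{\partial z}\,\frac{l^{2}}{v^{2}}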
The Stern-Gerlach experiment played a major role in the further development of concepts of the electron. According to the Bohr-Sommerfeld theory, the orbital angular momentum and, consequently, the magnetic moment of the atoms used in the experiment, which have a single electron in the outer shell, should be equal to zero. Therefore, such atoms should not, in general, be deflected by a magnetic field. The Stern-Gerlach experiment showed that, contrary to the theory, the atoms used have a magnetic moment. In 1925, the Stern-Gerlach experiment, together with previous experiments, led G. E. Uhlenbeck and S. Goud-smit to the hypothesis that the electron has intrinsic angular momentum, which is called spin.
REFERENCES
Sommerfeld, A. Stroenie atoma i spektry, vol. 1. Moscow, 1956. (Translated from German.)
Shpol’skii, E. V. Atomnaia fizika, 4th ed., vol. 2. Moscow, 1974. | http://encyclopedia2.thefreedictionary.com/Stern-Gerlach+Experiment |
Production agriculture, with its implied ecosystem simplification, pesticide and fertilizer use, and emphasis on yield, often appears to be at odds with conservation biology. From a farmer's perspective, the weight conservation biology places on wildlife may seem overly idealistic and naive, detached from economic and sociopolitical reality. In fact, these endeavors are two sides of the same coin, with a shared heritage in decades of population and community ecological theory and experimentation. Better integration of the two disciplines requires acknowledging their various goals and working to produce mutually beneficial outcomes. The best examples of this type of integrated approach result from careful implementation of sustainable agriculture practices that support biological conservation efforts via habitat amelioration or restructuring. Successful integrated approaches take into account both the environmental and economic costs of different farming schemes and compensate farmers for the costs they incur by implementing environmentally friendly farming strategies. Drawing primarily from examples in insect population dynamics, this paper highlights some innovative programs that are leading the way towards a more holistic integration.
Publication Title: Frontiers in Ecology and the Environment
Volume: 2
Issue: 10
First Page: 537
Last Page: 545
DOI: 10.1890/1540-9295(2004)002[0537:DCIAAC]2.0.CO;2
Publisher Policy: Publisher's PDF
Open Access Status: OA Deposit
Comments: Copyright by the Ecological Society of America.
Recommended Citation
Banks, John, "Divided Culture: Integrating Agriculture and Conservation Biology" (2004). SIAS Faculty Publications. 3. | https://digitalcommons.tacoma.uw.edu/ias_pub/3/
The Mary Black Foundation is not only a funder; we are often a partner, convener, facilitator, and leader. We are actively engaged in a wide variety of community partnerships outside of our traditional grantmaking. As a local funder, it is important that we partner with a variety of community organizations, including other funders.
The Mary Black Foundation, Spartanburg Regional Foundation, The Spartanburg County Foundation, and United Way of the Piedmont, often referred to as the Spartanburg Joint Funders, are committed to intentional collaboration and coordination across our individual organizations. As significant funders in Spartanburg County it is important that we work together and, when appropriate, partner on common community efforts. Each funder has its own funding guidelines and application process so we cannot accept joint solicitation requests; instead, organizations are encouraged to contact each funder directly. For more information about each of the Joint Funders and for contact information, click here.
Additionally, we lead several community-wide initiatives that leverage the many roles we play. Below is more information about our Foundation-led initiatives.
Connect
Connect is an initiative of the Mary Black Foundation, funded through the Office of Population Affairs (OPA). In 2014, the Foundation received a five-year, $5.3 million grant from OPA to reduce teen pregnancy throughout Spartanburg County. Connect seeks to promote adolescent-friendly services, supports, and opportunities throughout Spartanburg. Connect collaborates with local adolescent-serving agencies to build capacity in adolescent-friendly services and implement evidence-based interventions. Partners include health care service providers, schools, faith communities, non-profit organizations, state agencies, law enforcement, employers, teen advocates, and trusted community members. Connect seeks to improve outcomes for our most valuable resource: our teens.
Healthy Families Initiative
The Foundation launched the five-year Healthy Families Initiative (2017-2022) in partnership with the Children’s Trust of South Carolina to expand the evidence-based Triple P (Positive Parenting Program) framework across Spartanburg County. Serving as the local Activating/Coordinating Agency, the Hope Center for Children has taken on the role of supporting Implementing Agencies in the delivery of Triple P. Parents are a child’s first teacher and, through this effort, we are ensuring that parents have access to supports whenever, and however, the support is needed.
Healthy Schools Initiative
A four-year (2016-2020) initiative that includes nine schools throughout Spartanburg County. The schools participate in the Alliance for a Healthier Generation Healthy Schools Program while receiving technical assistance from the Alliance and Partners for Active Living. Additionally, grant funds are provided to each school to implement changes in policies, practices, and environments.
Way to Wellville
Spartanburg is one of five communities to participate in the Way to Wellville, a ten-year national challenge designed to demonstrate the value of investing in health. Alongside the City of Spartanburg, Spartanburg Regional Healthcare System, and the University of South Carolina Upstate, the Foundation is seeking to develop innovative solutions that amplify and accelerate community health. Currently, we are working to fund and implement Hello Family, a comprehensive set of services for families with young children, Neighborhood Engagement efforts, and the Wellville Exchange, an initiative to bring health and well-being services to small employers in Spartanburg.
Child and Adolescent Behavioral Health Study
In 2012, the Joint Funders, consisting of Mary Black Foundation, Spartanburg Regional Foundation, The Spartanburg County Foundation, and United Way of the Piedmont, conducted a detailed needs assessment of behavioral health in Spartanburg County. Since then, the Spartanburg County Behavioral Health Task Force, led by the United Way of the Piedmont, has worked diligently to increase the capacity of the health and human service entities that provide behavioral and mental health services, as well as the capacity of institutions that provide other direct services to the shared population. | https://maryblackfoundation.org/community-initiatives/ |
Performance objectives for any class should be at the forefront of an instructor's mind as he develops a curriculum for a course. Standards are goals that you hope your students master by the time they leave your class. The Common Core State Standards, or CCSS, were implemented in 45 states in the 2012-2013 school year and they provide a clear map of what performance objectives educators should set for students at all grade levels. Reading is an essential component of these standards, and it is important for teachers to be knowledgeable of what is expected for students.
Key Ideas and Details
Key ideas in the CCSS 8th-grade reading standards deal with the overall text and its connection to the world. Students should be able to provide an objective summary of the text and analyze how a theme is developed throughout it. Analysis of the text for character action, plot advancement or real-world connections is another highlight of these standards. Students should be able to analyze a plot while citing textual evidence to support their claims.
Craft and Structure
The craft and structure standards deal with the actual writing style of an author. Using context clues, students should be able to determine the meaning of unknown words, including understanding connotation and denotation. Learners should also be able to compare and contrast the structure of two texts and critique the organization of a paragraph. They should also analyze the point of view of a writer to understand literary elements or conflicting ideas.
Related Articles
Knowledge and Ideas in Literature
The standards stress reading texts that are of adequate complexity for 8th-grade students. While reading literature such as novels or dramas, students are expected to examine differences in film or live production of stories in relation to the original text. Readers should also develop skills to analyze how modern works of fiction draw on myths, legends or religious works to renew themes or character types. These standards are centered around drawing connections between literary texts.
Knowledge and Ideas in Informational Texts
The standards assume a balance of both fiction and nonfiction texts in an 8th-grade language arts classroom. Students should evaluate the advantages and disadvantages of various mediums -- print, digital or multimedia -- and their impact on the overall message. Readers should also assess the relevance and credibility of an author and understand that sources may be opinionated. At some point in the course, students should read two or more texts on the same topic and analyze the differences of opinion and fact.
SMART Goals
To test whether a performance objective is well-written or attainable, use the acronym SMART. A standard should be Specific to clear up any doubt or misinterpretation. The objective must be Measurable by using an active verb that can be monitored and appraised. Eighth-grade reading students must be able to Achieve the performance objective. Expecting students to be able to do things that are above their ability or reading levels is not a valid aim. The goals set should be Relevant to the learner and their growth. All performance objectives should be Time bound, meaning they are set with an end in mind. In a classroom, the typical timeline to meet set goals is the end of the year or semester. | https://www.theclassroom.com/performance-objectives-8th-grade-reading-14120.html |
Sustainable Remediation of a Nitrate Plume:
It’s all About the Bugs
Background
A release of approximately 15,000 gallons of ammonia nitrate and urea (liquid fertilizer) occurred in 2003 next to a railroad track. Initial response activities included the excavation of two recovery trenches, soil sampling from the two trenches, installation of slotted polyvinyl chloride (PVC) pipe at the base of the trenches, excavation of test pits within and downgradient of the spill area to capture liquid fertilizer in the subsurface, and the installation and sampling of three recovery wells. During the initial response, approximately 600 gallons of liquid fertilizer was recovered. Initial response data indicated elevated concentrations of nitrate and ammonia in the subsurface soils and groundwater in the area of the release. Groundwater was monitored periodically between 2003 and 2012 with minimal natural attenuation of nitrate in groundwater. Excavation of source area soils under the tracks was not practicable, and the agency agreed that the remediation should focus on groundwater.
Basis of Remediation
Carbstrate™, a dextrose and micronutrients (potassium phosphate and yeast extract) amendment produced by ETEC, LLC, is used to enhance biodegradation of nitrates in soil and groundwater by naturally-occurring denitrifying bacteria (the “bugs”). The bacteria utilize the carbon substrate as a food source, converting the nitrate to nitrogen gas.
Additionally, the carbon substrate and water mixture can be applied via gravity injection to existing site infrastructure. This limits the energy used and additional infrastructure required.
Implementation
In 2012, KJ Consultants conducted a pilot study as part of an Interim Remedial Measure (IRM) to stimulate the growth of denitrifying bacteria throughout the vadose and saturated zone. The IRM involved gravity injection of Carbstrate™ via an existing recovery trench adjacent to the railroad tracks, followed by quarterly groundwater sampling events (one pre-injection and four post-injection). The existing trench was approximately four feet deep and 10 feet above the groundwater table, which allowed for flushing and treatment of the source area soils. The sampling results indicated that the pilot IRM was reducing nitrate concentrations in vadose zone soils and groundwater.
In March 2014, an additional IRM injection was performed using the recovery trench. Carbstrate™ was also injected into three site monitoring wells in an effort to introduce the substrate directly into groundwater near the original source area. Quarterly groundwater sampling events (one pre-injection and three post-injection) began in March 2014 through April 2015, with additional semi-annual events in October 2015 and March 2016.
Results
Since the IRM was implemented, analytical results have indicated a significant reduction in groundwater nitrate concentrations:
- Prior to the IRM, nitrate concentrations exceeded the groundwater standard (10 milligrams per liter) in six site wells, with maximum concentrations of nitrates exceeding 500 milligrams per liter.
- Currently, nitrate concentrations are below the groundwater standard in all but two site wells. Concentrations of nitrate in the downgradient monitoring point have been reduced by over 65% following the 2014 injection. An IRM Injection was completed in June 2016 in the remaining two site wells that exceeded the groundwater standard. Results of this injection are pending, however, it is likely that only two years of monitoring will be required to close the site.
Green and Sustainable Remediation
This technology was applied using existing infrastructure (recovery trench and monitoring wells) with reliance on biological remediation processes.
According to the United States Environmental Protection Agency (EPA), Green remediation Best Management Practices (BMPs) focus on the core elements of greener cleanups:
- Minimize total energy use and increase the percentage from renewable energy.
- Minimize emission of air pollutants and greenhouse gases.
- Minimize water use and preserve water quality.
- Conserve material resources and minimize waste.
- Protect land and ecosystem services.
This project utilized green remediation BMPs in the following ways:
- Minimized the total energy consumed by gravity injection of the carbon substrate and water mixture.
- Minimized emissions of air pollutants by only using gasoline-powered pumps during mixing of the substrate.
- Minimized water use by batch injecting the substrate at a high concentration.
- Used in-situ bioremediation, therefore did not generate any wastewater.
- Additional infrastructure was not needed, leading to minimal land disturbance.
- Groundwater samples are collected from monitoring wells using GeoInsights’ Hydrasleeve® sampling device. Purge water is not generated with the Hydrasleeve®; therefore, storage and disposal of purge water is not required.
SIGN UP
If you are interested in more information on Kennedy Jenks, don’t forget to subscribe to our blog! | https://www.kennedyjenks.com/2016/08/03/sustainable-remediation-of-a-nitrate-plume/ |
Climate Data Initiative: Chesapeake Bay Hackathon August 2-3, 2014 Virginia Beach, VA
Despite the creation and distribution of numerous climate-related datasets over the last five years, local governments and conservation organizations in the Chesapeake Bay have not taken significant steps towards incorporating this new information into local planning efforts. The implications of climate change and sea-level rise are not always included in the current planning process, and short-term decisions about land use are often made independent of their future consequences. Few, if any, tools are designed to help communities understand and quantify the long-term effects and costs of developing in certain locations, making it difficult for planners to justify avoiding, or limiting, growth in vulnerable areas. The Hampton Roads region, located at the mouth of the Chesapeake Bay, is one of the communities most vulnerable to the impacts of climate change and sea-level rise.
Coastal communities throughout the Chesapeake Bay, and around the world, are constantly making decisions about where to allow construction and new development, as well as where to protect open space for public enjoyment and natural resource protection. Numerous concerns influence these decisions, however they are often focused on short-term benefits that do not consider what the landscape might look like beyond the next thirty years. These decisions can have long-term implications, however, as communities are allowed to expand into areas that will be vulnerable to flooding and expensive investments in public infrastructure are placed in areas that will be impacted by a rising sea level and a changing climate.
Since 2008, the Hampton Roads Planning District Commission (HRPDC) has been engaged in a series of projects, studies, and efforts related to helping the region adapt to more frequent flooding, rising sea levels and other projected impacts of climate change. Throughout these projects, HRPDC has identified a significant need and role for new tools that help incorporate climate adaptation and sea-level rise into local planning efforts. (http://www.hrpdc.org/uploads/docs/07182013-PDC-E9I.pdf)
In addressing these critical needs, the results of the Hackathon will hopefully be transferable to other locations throughout the Chesapeake Bay and around the country. These tools will help local governments and conservation organizations incorporate new climate data into efforts that reduce a community’s vulnerability while offering greater protection for critical coastal ecosystems.
The target audience for the Chesapeake Hackathon will be students and professors from local colleges and universities, local programmers, and members of the Code for Hampton Roads Brigade, as well as other local Code for America groups located in the Mid-Atlantic (Northern Virginia, Washington, DC, Philadelphia, Raleigh/Durham/Cary).
Subject Matter Experts will include experts in climate change and sea-level rise, planning and zoning, community development, and conservation targeting.
Using the model of Intel’s Code for Good Program and their National Day of Civic Hacking, this event will bring together developers, designers, and subject matter experts for two days to focus on creating software-based solutions to a social problem, in this case, climate adaptation and planning in coastal communities.
The Chesapeake Hackathon will focus on creating tools that allow local governments, planning districts, and conservation organizations to better understand where the impacts of climate change and sea level rise will be felt the greatest.
Using ESRI’s ArcGIS software and ArcGIS Online web-mapping platform, Hackathon participants will be able to incorporate geospatial data hosted through the Climate Data Initiative into planning decision support tools and scenario planning tools to better visualize and understand the long-term implications of planning decisions and how proposed, and existing, development will likely interact with the various impacts of climate change.
An additional focus will be put on identifying ways to incorporate Intel’s new Galileo developer board to monitor changing environmental conditions and provide planners and local organizations with a low-cost way to collect additional data that will be helpful in understanding, monitoring, and quantifying the effects of climate change throughout the Hampton Roads region. Potential applications include monitoring water temperature, salinity, and acidity, measuring soil wetness in coastal areas to determine flooding potential, and monitoring air quality. Development work using Galileo during the Hackathon will also help propel the Chesapeake Conservancy’s efforts to establish low-cost water quality monitoring technology that will be used throughout the Chesapeake Bay.
Providing local communities with low-cost, real-time monitoring capabilities, using the Galileo system, will also increase their ability to monitor and quantify changing conditions to better understand when and where the impacts of climate change will be felt hardest. By tracking metrics such as water temperature, salinity, and acidity, measuring soil wetness in coastal areas to determine flooding potential, and monitoring air quality, local partners will have an early indication of when action will need to be taken and will have the data needed to support sometimes difficult management decisions. | https://advancedbiofuelsusa.info/climate-data-initiative-chesapeake-bay-hackathon-august-2-3-2014-virginia-beach-va/ |
I’ve had a few ‘creative phases’ in my life. In high school, it was writing. Later, it was designing and sewing children’s clothes. More recently, I’ve taken a brush to canvas. In each of those creative activities, I have particularly enjoyed the sense of accomplishment that comes when the end product turns out as I’d hoped. If the poem is satisfying, I will repeat it to myself often – just because it pleases me. When a little girl’s dress or a baby’s playsuit is finished, I like to put it on a hanger where I can see it – just for the satisfaction it brings to me. (It doesn’t need to meet anyone else’s standard – just so long as it pleases me.) There is also a creative art to mothering.
Find your creative outlet
There are many different ways people can be creative: maybe cooking, gardening, singing, knitting or crochet, timber craft, sketching, vlogging, blogging, restoration; and, whatever your creative preference, I’m sure we are all familiar with the experience of personal fulfilment that comes through the act of creation.
Finding a creative outlet that you love and that challenges your mind will enrich you as a person. This is super important for us mums and we should not neglect this need during our years of mothering. What are your creative skills?
Creative Mothering
If we take care to nurture our creative side, we may find some of the benefits transfer to how we view mothering. The work of mothering is much more than the sum of our daily tasks! As a mum, you are being creative every day, in ways like:
- fitting your daily plans around baby sleep times.
- finding a balance between adventure and safety in your children’s play.
- strategising non-confrontational ways to get healthy vegetables into resistant kids.
- managing and/or circumventing toddler tantrums.
- creating nutritious meals (and school lunches!) for children with food allergies.
- managing sibling relationships.
- training your child while ensuring their individuality and own personality flourishes.
- keeping the family in clean clothes during an extended wet spell.
- finding ways to help your child learn tasks that are difficult for them.
- coordinating play dates, music lessons, sporting activities and other appointments.
I wonder what you would say is your most creative challenge as a mother?
Pause and think about what you do every day – what plans do you make; what tools do you assemble; what preparations do you make; and, finally, how do you bring them to fruition? Even a necessary activity like making plans requires creativity – sometimes lots!! As you reflect on and enjoy your creative achievements, maybe that same pleasure can be found in reflecting on the creative mothering that you do on a daily basis.
Appreciating
Do you take time to step back and enjoy the results of your work? In the Christian account of the creation of the world, the Bible records that, at the end of each day, God “looked at what he had made, and saw that it was good”. So – if you have ever done something like that, you’re in good company!
Are you taking time to enjoy your children? Do you see how they are developing as people? It’s easy to notice the work that still needs to be done – but it’s important to also allow yourself the satisfaction of noticing what has been achieved. Do you allow yourself credit for the many positive experiences you create for your family day in, day out? Please do. It will feed your soul. Allowing yourself to acknowledge that you have done a good job is one of the rewards that comes with the creative work of mothering. | https://mops.org.au/the-creative-art-of-mothering/ |
To join this network, click here. Note: All Memberships in CES Research Networks are open for CES individual members in good standing.
The new CES Environmental Research Network seeks to expand and intensify the research focus of CES related to major environmental challenges of the twenty-first century and their political and social consequences. We aim to focus initially on the six areas outlined below. However, these areas should be taken in the spirit of an opening set of prompts that will evolve over time depending on member interest:
Populism and Environmental Security – As the climate crisis worsens and resource conflicts intensify, a rise in populist rhetoric is likely to follow. This has already been seen in some countries in the mainstreaming of nativist policies under the guise of environmental security, and the growth of narratives that differentiate a ‘pure’ people from corrupt elites in ecological terms. This phenomenon has not remained within political party lines. This research area seeks to analyze, evaluate and predict how populist politics will impact environmental security policy in the decades to come.
Climate Change, Resilience and Infrastructure – Rising sea levels, extreme weather, wildfires and intense seasonal storms are becoming a ‘new normal’, spreading destructive effects that intensify existing kinds of social inequality and create new kinds of precarity (e.g. climate refugeeism). There is a need to understand how to lessen the impacts of these climatological trends upon critical infrastructure, particularly energy, water, healthcare, citizenship and security. This research area is primarily concerned with how states, regions and localities can and are implementing changes to their critical infrastructure networks to make them more resilient to the burgeoning climate crisis.
Energy Transition and Climate Mitigation – One of the greatest challenges facing society in the twenty-first century is the rapid decarbonization of the global economy to hit international climate mitigation targets. Providing a reliable supply of clean, affordable energy for all presents complex and significant social, political, economic, ethical, and technological challenges, all of which must be solved within a generation. This area focuses on the current energy transition dynamics in Europe and will seek to engage and inform policymakers.
Food and Water Security – Maintaining food and water security in the face of unprecedented and unknown environmental changes counts among the most important existential challenges facing both Europe and the world in the twenty-first century. Climate change and other forms of environmental degradation are drivers of problems such as droughts, flooding and water pollution, problems that also limit nations’ and communities’ capacity to provide both high quantity and quality food. The climate crisis directly impacts the ability to provide sufficient clean water and nutritious food, thus exacerbating illness, hunger and starvation, especially among marginalized populations. This area examines food and water availability in connection to environmental degradation and social inequality with the goal of exploring existing practices and how they can be improved to attain a more sustainable and just distribution of food and water.
Emerging Technology and Environmental Sustainability – Innovations in green technology have considerable potential for making more efficient use of resources, creating new pathways to sustainable development and reducing environmental burdens, including climate change. Technologically informed environmental policy likewise has the potential to drive innovation, the creation of new skills and climate adaptation. This area examines the ways that new technology could provide solutions to environmental issues, while at the same time taking seriously the limits of technology-driven approaches to create positive social change and exploring the ethical questions surrounding technological impacts upon vulnerable groups.
New Economic Models and Practices – In recent years it has become clear that growth-oriented economic thinking, whether neoclassical or Keynesian, contributes significantly to unsustainable economic practices and consumption-oriented ideas of wellbeing. Economic theory has diversified rapidly to try to develop new models—among them, circular economy, solidarity economy, care, doughnut economics, degrowth—that seek to encourage new (or in some cases old) economic practices that better commensurate equity and justice goals with environmental sustainability goals. This area will examine these new, sometimes experimental, economic ideas as they are being implemented in communities across the world, assess their outcomes and implications and consider which ones might be capable of scaling to become effective global solutions to contemporary social and environmental challenges.
In each of these six research areas, we see the opportunity to broaden CES’s academic membership (engaging for example scientists and engineers) and to actively engage many groups of non-academic stakeholders, among others, policymakers, social movement leaders, activists, architects and urban planners, peacekeepers and energy and green tech entrepreneurs. Given the intense interest of younger scholars in environmental issues, we also feel that the ERN would be an excellent platform from which to develop a new CES outreach and mentoring program for younger scholars, helping graduate students and recent PhDs to develop both academic and non-academic networks and skills vital to solving contemporary environmental challenges.
Co-Chairs: | https://councilforeuropeanstudies.org/membership/research-networks/environmental/ |
# in "B" -> out "A"
But honestly we can’t do anything more with it.
The first missing feature is memory management. We have implemented the functions that move the pointer to memory cells left and right, but we’re still stuck with a non-expanding memory tape of one cell only.
Let’s implement memory auto-expansion; it turns out to be very easy.
The second function does not work: it keeps decrementing the memory pointer and can go negative, which is unwanted behaviour (it should stay at zero). We could add another if to make it work, but that is not so good.
We are shadowing the original mem variable by reassigning its value.
One of the main selling point of Elixir is pattern matching in function declaration.
But pattern matching alone could not be enough, as we’ve seen in this example.
This is much better: it requires no branching, it can be added without modifying the existing code, and by looking at the definition we can easily guess when the function is going to be called.
The only limitation is that the Erlang VM allows only a limited set of expressions in guards.
With this simple addition, we have created a complete memory management system that can automatically expand on both sides in a virtually unlimited way.
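The expansion functions themselves are not reproduced in this excerpt, so here is a small, self-contained sketch of what guard-based auto-expansion can look like, assuming the tape is kept as a flat list of cells plus an integer pointer. The module name TapeSketch, the attribute names @inc_p and @dec_p, and the run/3 signature are illustrative assumptions rather than the blog's actual code (which also handles the remaining brainfuck commands and the output accumulator).

defmodule TapeSketch do
  @inc_p ">"
  @dec_p "<"

  # End of the program: return the final pointer and tape.
  def run("", p, mem), do: {p, mem}
  # ">" while standing on the last cell: grow the tape to the right, then move.
  def run(@inc_p <> rest, p, mem) when p == length(mem) - 1, do: run(rest, p + 1, mem ++ [0])
  # ">" anywhere else: just move the pointer.
  def run(@inc_p <> rest, p, mem), do: run(rest, p + 1, mem)
  # "<" while standing on cell zero: grow the tape to the left instead of going negative.
  def run(@dec_p <> rest, 0, mem), do: run(rest, 0, [0 | mem])
  # "<" anywhere else: just move the pointer.
  def run(@dec_p <> rest, p, mem), do: run(rest, p - 1, mem)
end

# TapeSketch.run(">><<<", 0, [0])  #=> {0, [0, 0, 0, 0]}

The point of the guard clause is exactly the one made above: length/1 happens to be among the few functions allowed in guards, so the right-edge case can be expressed without any branching inside the function body.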
Implementing brainfuck operators has been quite linear until now. We just had to follow the language specs and we obtained a working implementation.
Loops are a bit harder task though.
It looks like the brainfuck author overengineered the loops, making it possible to implement them both ways. But when it’s time to implement them, we can choose to check the loop condition only once (at the beginning or at the end) and treat the other end of the loop as an unconditional jump. I’ve chosen to implement the while loop.
To implement the loop in brainfuck, we need a function that matches balanced couples of [ and ] first (did someone say s-expressions?).
We cannot simply match the first ] we find after a [, and the reason is fairly obvious: nested loops would not work ([[-]-], for example).
Remember that in Elixir _ means match everything.
If we reach the end of the input and the depth is non-zero, the square brackets are unbalanced, so we raise an error:
defp match_lend(@empty, _, _), do: raise "unbalanced loop"
This implementation automatically works for nested loops of any depth. Every time a [ command is found, the program is split into a smaller one and executed until the loop condition is met (this does not save you from infinite loops).
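The surrounding clauses of match_lend are not included in this excerpt, so the following self-contained sketch shows the depth-counting idea in the same spirit as the clause quoted above. The module name LoopSketch, the attribute values and the {body, rest} return shape are assumptions made for illustration, not necessarily the author's exact implementation.

defmodule LoopSketch do
  @empty ""
  @lbegin "["
  @lend "]"

  # Reached the end of the program with an open loop: unbalanced brackets.
  def match_lend(@empty, _acc, _depth), do: raise "unbalanced loop"
  # The "]" that closes the loop we started from: return the body and the rest of the program.
  def match_lend(@lend <> rest, acc, 0), do: {acc, rest}
  # A "]" closing a nested loop: keep it in the body and decrease the depth.
  def match_lend(@lend <> rest, acc, depth), do: match_lend(rest, acc <> @lend, depth - 1)
  # A nested "[": keep it in the body and increase the depth.
  def match_lend(@lbegin <> rest, acc, depth), do: match_lend(rest, acc <> @lbegin, depth + 1)
  # Any other command: copy it into the body unchanged.
  def match_lend(<<op::binary-size(1), rest::binary>>, acc, depth), do: match_lend(rest, acc <> op, depth)
end

# LoopSketch.match_lend("+[-]+]>>", "", 0)  #=> {"+[-]+", ">>"}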
We now have a complete implementation of a brainfuck interpreter that can run any brainfuck program.
In the next post I’ll talk about testing the code, creating a project and compiling down to an executable and the command line tools. | https://massimoronca.it/2014/11/10/writing-a-brainfuck-interpreter-in-elixir-part-two.html |
In 2008 the IISH acquired an extensive collection of archival materials relating to the anti-apartheid and Southern Africa solidarity groups in The Netherlands. It concerns the archives and related library and documentary collections of the three former anti-apartheid groups which merged in 1997 into the Netherlands Institute for Southern Africa (NiZA; since 2012 ActionAid-Netherlands): the Holland Committee on Southern Africa (previously Angola Committee), the Eduardo Mondlane Foundation and the Institute for Southern Africa, as well as its predecessors – the Dutch Anti-Apartheid Movement and the South Africa Committee. The NiZA collection also includes the archives of the organisation Broadcasting for Radio Freedom, some local groups, the SADWU Supportgroup and individual activists like David de Beer, Willem van Manen and Karel Roskam.
At that stage the IISH already had the archives of the Workinggroup Kairos, the SA/NAM Association, the Defence and Aid Fund Netherlands and the Southern Africa projects of ICCO, as well as various trade union archives with documents on this subject. In the past few years, additional archives relating to anti-apartheid and Southern Africa solidarity work were acquired, such as the papers of Martin Bailey and the archives of the Azania Committee, the National South Africa group of the ABVA-KABO trade union, the Shipping Research Bureau and the Association of West-European Parliamentarians Against Apartheid.
In this way, the IISH developed an extensive and comprehensive anti-apartheid and Southern Africa solidarity collection, which not only includes materials on the Dutch movement but also on and from elsewhere in Europe and South(ern) Africa. The archives and related collections cover the period from 1950 to 2000. The library and documentation material includes large numbers of books, periodicals, photographs, posters and hundreds of videos, cassettes, badges, flags, T-shirts and other memorabilia. Both the archives and other materials in this collection have been processed in the IISH catalogue.
Almost all the archives in the anti-apartheid and Southern Africa collection have been processed since 2008. The other archives mentioned have been processed only partially; see the detailed overview of the archives in this collection for more information. The archives and documentation in the anti-apartheid and Southern Africa collection can be accessed through the website (catalogue) and the reading room of the IISH. It is also always possible to ask for the assistance of Kier Schuringa, the archivist of this collection: [email protected]
These webpages on the anti-apartheid and Southern Africa collection not only provide a description of the archives, documentation and audio-visual materials of the collection, but also feature a section with background information on the Dutch contribution to the international solidarity campaign in support of the struggle against apartheid and colonialism in Southern Africa. This section includes two comprehensive web dossiers on the anti-apartheid work in The Netherlands from 1948 to 1994 and on the link between Nelson Mandela and The Netherlands. Finally, there is a blog with reports on the various activities with and around these archival materials in and outside The Netherlands. | https://socialhistory.org/en/collections/anti-apartheid-and-southern-africa-collection-guide |
solution and a vial of protease solution. After use, the mineral oil and the lysing solution can be stored at 4ºC.
The protease solution must be stored at -20ºC, avoiding repeated freezing/thawing.
1.1. SECTIONS OF PARAFFIN-EMBEDDED TISSUES
1. Take 1-4 tissue sections of 10 μm thickness (according to the amount of material in each section) and place in a 1.5 ml microcentrifuge tube using a needle or fine tweezers.
2. Add 500 μl of mineral oil (included in the kit) and heat in a heating block for 2 min at 95ºC. Centrifuge 2 min at 8000 rpm in the microcentrifuge.
3. Remove the oil without disturbing the tissue fragments.
4. Repeat steps 2 and 3. (The remaining mineral oil does not interfere with the DNA extraction process.)
5. Prepare a 50:1 mixture of lysing and protease solutions (for every 50 μl of lysing solution add 1 μl of protease solution) in sufficient volume for processing the tissue samples.
6. To the resulting tissue pellet add an adequate volume (50-500 μl) of the mixture above until the tissue is fully in suspension (this is important to ensure a good yield of DNA and degradation of contaminating cell remains which could interfere with subsequent amplification of this DNA).
7. Agitate several times using the micropipette to homogenize, centrifuge for 5 sec to remove bubbles and incubate for 24-48 h at 55ºC in a thermostatic bath or heating block.
8. Heat at 95ºC in the thermal block for 8-10 min to inactivate the protease.
9. Centrifuge for 5 min at maximum speed. After centrifugation there should be 2 phases: the upper containing the remains of the mineral oil and the lower aqueous layer containing the dissolved DNA. There may be a small pellet of undigested tissue at the bottom of the tube. COLLECT THE AQUEOUS PHASE containing the DNA, avoiding any tissue remains at the bottom of the tube.
10. Use 3 μl of this DNA solution for amplification. The sample can be stored in this state, being stable at 4ºC for 1 week, or at -20/-80ºC for several months.
1.2. FROZEN TISSUE
1. Cut 1-2 sections of tissue in the cryostat or using a scalpel blade and place in a 1.5 ml microcentrifuge tube using a needle or fine tweezers.
2. Continue with step 5 of the previous protocol for paraffin sections.
Note: To avoid evaporation of the lysing/protease solution during the incubation at 55ºC, a few drops of the mineral oil included in the kit can be added.
1.3 PERIPHERAL BLOOD OR BONE MARROW SAMPLES
Note: Do not use heparinized whole blood. We recommend the use of EDTA or sodium citrate as anticoagulants.
1. Isolate the white cell population using the Buffy Coat from 1.5 ml of blood/marrow with EDTA. Fractionate by centrifuging at 1500-2000 x g for 10-15 min at room temperature. This procedure separates an upper phase of plasma, a lower phase with the red cell population and a thin interphase with white cells (the buffy coat). (For a typical clinical centrifuge, 1500-2000 x g is equivalent to 3000-3400 rpm; see the conversion note after this section.)
2. Collect the white cell population and place it in a clean tube.
3. Wash with 5 ml of TE 1X buffer and incubate for 10 min at 37ºC to lyse any remaining haemocytes.
4. Centrifuge at 3000 rpm for 5 min to precipitate white cells.
5. Re-suspend the cell pellet in 1 ml of TE 1X buffer and transfer to a 1.5 ml Eppendorf tube.
6. Centrifuge at 8000 rpm for 5 min in a microcentrifuge and discard the supernatant.
7. Continue with step 5 of protocol 1.1 for paraffin-embedded samples.
Note: To avoid evaporation of the lysing/protease solution during the incubation at 55ºC, a few drops of the mineral oil included in the kit can be added.
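As a cross-check on the g-to-rpm equivalence quoted in step 1, the standard relation between relative centrifugal force and rotor speed can be written as follows; the rotor radius is not given in the protocol, and the value of roughly 15 cm used below is only an illustrative assumption chosen because it reproduces the quoted figures.

\mathrm{RCF}\,(\times g) \approx 1.118 \times 10^{-5} \times r_{\mathrm{cm}} \times N_{\mathrm{rpm}}^{2}

With a rotor radius of about 15 cm, N = 3000 rpm gives roughly 1500 x g and N = 3400 rpm roughly 1940 x g, consistent with the 1500-2000 x g range stated in step 1.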
2. AMPLIFICATION REACTION
For each analysis sample of test DNA, four mixtures are amplified: two for IgK, one for IgL and an internal control mix (CI). The mixes should be protected from light since they contain fluorescence-labelled primers.
2.1. SAMPLE PREPARATION
1. Thaw 1 blue tube (IgK-A), 1 orange tube (IgK-B), one green tube (IgL) and one yellow tube (internal control) for each sample, and, keeping them on ice, add the following to each tube:
– 1 μl of enzyme Phire® Hot Start II DNA Polymerase
– 3 μl of the DNA sample*
2. Mix well and centrifuge for 5 sec in the microcentrifuge to remove bubbles.
*If the DNA is of known concentration, 200-500 ng of DNA per tube is recommended.
Note: It is important to keep the samples on ice until they are placed in the thermocycler to avoid nonspecific binding of the primers.
2.2. AMPLIFICATION
1. Place all tubes in the thermocycler and amplify using the following programme:
98ºC 2 min
35 cycles:
  98ºC 10 s
  60ºC 10 s
  72ºC 15 s
72ºC 1 min
2. When the reaction is over, keep the tubes at 12-15ºC. If the samples are not to be processed immediately, they can be stored at 4ºC or -20ºC until use.
2.3. RECOMMENDED AMPLIFICATION CONTROLS
The kit supplies clonal and polyclonal positive control DNAs, which should be amplified with each batch of samples analyzed.
To control for possible cross-contamination between samples or contamination of the common reagents used during sample handling, we recommend processing a blank sample without DNA. This contains all the same reagents employed in the DNA extraction from samples and is prepared using the same processing procedures. Because the blank sample contains no DNA, it should give a negative result for all three mixtures after amplification.
Likewise, although not essential, it is advisable to analyze each sample in duplicate, i.e., dividing the same genomic DNA to carry out the amplifications of all fragments in duplicate. This facilitates interpretation of doubtful results and confirms, using the duplicate, the size of monoclonal peaks in the profile of a reactive polyclonal population.
3. ANALYSIS OF AMPLIFIED PRODUCTS
3.1. GEL ELECTROPHORESIS
WARNING! Given the high sensitivity of the amplification technique, which generates large quantities of a specific DNA fragment, this amplified product represents a potent source of contamination in the laboratory. It is recommended that the handling and electrophoresis of amplified products be carried out in a work area well removed from the sample preparation area to avoid contamination by amplified DNA and consequent false positive results.
Amplified products can be visualised using 4% agarose or 6-8% polyacrylamide gels with half-strength TBE buffer. Polyacrylamide gels give the best resolution.
Master Diagnóstica supplies “ready-to-use” kits for DNA electrophoresis in either agarose or polyacrylamide gels, including all reagents needed for electrophoresis: molecular weight marker, loading buffer, 0.5 x TBE buffer concentrate and EtBr (Cat. Nº MAD-003980M and MAD-003990M for agarose and polyacrylamide, respectively).
Heteroduplex analysis of the PCR products is recommended to differentiate between products derived from mono- and polyclonal populations. If there has been a clonal rearrangement of the IgK/IgL genes, a “homoduplex” will be obtained, whereas in a polyclonal population, a “heteroduplex” with heterogeneous binding will be formed after the denaturing/renaturing procedure.
3.1.1. PROCEDURE
1. Denature the PCR products at 94ºC for 5 min.
2. Renature by transferring rapidly onto ice (4ºC) for 10-60 min.
3. Assemble the agarose or polyacrylamide gel in its tray and cover with electrophoresis buffer TBE 0.5X.
4. Take 20 μl of PCR product and mix with 4 μl of loading buffer 6X.
5. Load the samples into the wells in the gel, placing 10 μl of molecular weight marker in one of the tracks.
6. Carry out electrophoresis for 1-2 h at 100 V. Voltage and running time can vary depending on the type of gel, the electrophoresis tray, and the size of the amplified product, among other factors.
7. Stain with 0.5 μg/ml EtBr in water or TBE 0.5X and visualize with a UV transilluminator.
3.2. CAPILLARY ELECTROPHORESIS
PCR mixes IgK-A, IgK-B, IgL and the internal control mix contain primers labelled with 6-FAM fluorochrome. In addition to visualization by conventional electrophoresis, this allows the PCR products to be analyzed using capillary electrophoresis.
Automatic sequencers based on capillary electrophoresis, e.g., the ABI PRISM® 3100 and 3500 Genetic Analyzers from Life Technologies, are systems with high levels of reproducibility and sensitivity in the sequencing and analysis of genomic fragments. Their performance is superior to that of the majority of sequence analyses based on the use of polyacrylamide gels. Moreover, these systems allow rapid and accurate analysis of the results using specific software.
3.2.1. PREPARATION OF THE PCR PRODUCT
1. Mix in a 0.2-ml Eppendorf tube:
15 μl of deionised formamide + 0.5 μl of 600 LIZ marker + 1 or 2 μl of amplified DNA (this volume of
amplified product can be changed if the fluorescent signal is outside the optimum range).
2. Homogenize with a pulse of the vortex mixer.
3. Denature at 95ºC for 5 min, chill to 4ºC and load into the sequencer.
Electrophoresis conditions:
Gel: POP7 polymer
Buffer: ANODE and CATHODE BUFFER CONTAINER 3500 SERIES
Electrophoresis: approx. 40 min, with an injection time setting of 2-6.
Optimum fluorescence intensity: up to a maximum of 10,000 units
3.2.2. OPTIMUM CONDITIONS FOR SAMPLE LOADING
To test whether the DNA sample has been processed correctly and is of adequate quantity and quality to
generate a valid result in the electropherogram, we recommended loading an aliquot of the PCR products
(including the internal control) and carrying out conventional electrophoresis with EtBr staining before the
capillary electrophoresis analysis.
An excessive load of PCR product in capillary electrophoresis can produce artefact peaks for both the
molecular weight standard and the product, leading to a false reading. Thus, an excess of a clonal sample
can produce peaks that resemble the pattern of a polyclonal sample.
To avoid this problem, the fluorescent signal should be kept between 400 and 4000 fluorescence units in
models 3100 and 3500. | https://samatashkhis.com/product/igk-igl-rearrangements-molecular-analysis-kit/ |
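Where peak heights are exported from the fragment-analysis software, the 400-4000 unit window can be checked programmatically before results are interpreted. The snippet below is a minimal sketch under that assumption; the function name and data format are hypothetical and not part of the kit or the instrument software.

```python
# Minimal QC sketch: flag runs whose peak heights fall outside the
# recommended 400-4000 fluorescence-unit window (models 3100/3500).
# Peak heights are assumed to have been exported from the fragment-
# analysis software; the data format here is hypothetical.
def check_signal(peak_heights, low=400, high=4000):
    """Return a QC verdict for a list of peak heights (fluorescence units)."""
    if not peak_heights:
        return "no peaks detected - check loading and amplification"
    tallest = max(peak_heights)
    if tallest < low:
        return f"signal too weak ({tallest}) - load more PCR product"
    if tallest > high:
        return f"signal too strong ({tallest}) - dilute or load less product"
    return f"signal within range ({tallest})"

print(check_signal([850, 1200, 3100]))   # within range
print(check_signal([5200, 6100]))        # off-scale: risk of artefact peaks
```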
Every child is a unique individual, and children learn best through play and discovery and through being actively involved in their learning.
THE IMAGE OF THE CHILD
Our most important, common goal is the wellbeing of the children. We believe that a happy, loving, trusting atmosphere is conducive to learning, to developing positive self-esteem and friendships, and to acquiring the foundations of life skills.
Our enriched play based curriculum allows children to make independent choices and allows for individual development.
CURRICULUM
At St Margaret’s we are committed to providing a quality educational program that is inclusive of all children in our care.
We plan a curriculum which reflects the needs of both the individual and the group and values each child’s background, culture, interests and strengths. We promote the development of children’s self-esteem, confidence, resilience, problem solving, respect for others, independence and social skills.
ROLE OF ENVIRONMENT
We aim to provide a happy, safe, fun, nurturing, relaxed and enriched environment where children can develop dispositions for learning such as curiosity, confidence, communication, resourcefulness, cooperation, persistence and a willingness to take risks.
We value our outdoor environment and we recognise that children need opportunities to connect to the natural world fostering an understanding and respect of the natural environment.
We will embed sustainable practices into our curriculum to highlight our responsibilities to care for the environment and provide for a sustainable future.
CHILDREN ARE CONNECTED
Children’s learning is a journey. Each child will learn in their own time and in their own way. The learning is based on their life experiences, the interests they have and their individual needs. Children are given opportunities to try new things, take risks, make mistakes, and be free to explore and experiment and to discover things that extend their knowledge. Children need the opportunity to develop life skills which will enable them to thrive in an ever changing world.
FAMILIES AS PARTNERS
We strive to create a welcoming environment for all families who come into our centre, where positive and supportive relationships are formed. We believe that parents are their children’s first and most influential teachers, and that it is vital to work together in partnership with families to provide appropriate learning experiences for the children. We actively encourage parents to be involved in their child’s learning and the everyday operation of the kindergarten.
CULTURE
We acknowledge and respect the diversity of our families and their histories, cultures, languages and traditions. Children are given opportunities to understand Australia’s cultural diversity, and the importance of being inclusive and respectful to all. | https://stmargky.com.au/services/philosophy/ |
The Effect of Building-Related Illness (BRI) in Commercial Settings
Poor indoor air quality can be a contributing factor in many cases of building-related illness. Finding ways to improve the quality of indoor air can provide added protection for workers and customers in commercial environments. By taking steps to control indoor air pollution and to prevent sick building syndrome, businesses can promote the healthiest workspaces and shopping areas for their customers and employees.
What Is Sick Building Syndrome?
According to the U.S. Environmental Protection Agency (EPA), sick building syndrome can have a significant impact on the general health of those who live or work in a specific area or a specific building. Common indicators of building-related illness (BRI) include persistent coughs, muscle aches, fever, chills, and tightness of the chest. These symptoms could subside or persist after the affected person leaves the area or building.
The EPA has identified some of the known or likely causes of BRI:
- Lack of proper ventilation can allow the buildup of contaminants and pollutants in commercial buildings, which can have a negative effect on the quality of indoor air.
- Biological pollution sources include bacteria, fungal growths like mold, pollen and viruses. When concentrated in indoor air, these contaminants can cause health problems and can contribute to cases of BRI.
- Chemical contaminants can originate outside the building and can enter through ventilation or HVAC system intakes. They can also be produced by products like carpeting, furnishings, cleaning materials, and office equipment inside the building.
If not remediated promptly, these indoor air pollutants can cause a number of medical issues for those exposed to these environments for an extended period of time.
Common Issues Associated With Sick Building Syndrome
The National Institute for Occupational Safety and Health (NIOSH) has identified a number of illnesses that are associated with poor air quality inside enclosed spaces and buildings. Some of the most common health issues reported in commercial buildings include the following:
- Respiratory issues and irritation of mucous membranes of the sinuses, nasal passages, and throat
- Increased frequency of headaches and fatigue reported by those who live or work in the building
- Muscle aches and, in some cases, fever
- Increased risks of asthma attacks, pneumonia and sinus infections
Both Pontiac fever and Legionnaires’ disease are associated with sick building syndrome and are caused by exposure to the Legionella bacterium inside commercial buildings.
Resolving Issues With Indoor Air Pollution
As a property owner or business manager, you can take steps to reduce air pollution and concentrated contaminants in your indoor air. Some of the best ways to address problems with BRI include the following:
- Schedule regular maintenance with your local HVAC company. This will reduce the potential for pollutant buildup in your commercial properties and will promote the healthiest environment for your staff members.
- Upgrade your air filter or add an in-duct air purifier to remove more particulates and contaminants from the indoor air of your building.
- Go ductless. Ductwork can provide plenty of places for organic growth, moisture, and dust to hide. By working with your commercial HVAC services provider to convert to ductless heating and cooling, you can eliminate much of the potential for pollution in indoor air.
- Increase ventilation. Your local HVAC company can provide you with practical solutions for ensuring that your commercial buildings receive enough fresh air to maintain a healthy environment for you, your staff members, and those who visit your offices and commercial enterprises.
At Byrum Heating & A/C, Inc., we offer commercial HVAC services designed to help you enjoy the best indoor air quality and the healthiest environment for yourself, your staff members, and your customers. Our NATE-certified technicians work with you to determine the most appropriate and practical solutions for your situation. To learn more about our full lineup of indoor air quality solutions, give us a call today. We look forward to the opportunity to serve you now and for many years to come. | https://www.byrumhvac.com/article/the-effect-of-building-related-illness-bri-in-commercial-settings |
Even the most obtuse, right-wing, head-in-the-sand, consumption-driven, anti-environment yob would at least admit that they’ve heard of forest conservation, the plight of whales (more on that little waste of conservation resources later) and climate change. Whether or not they believe these issues are important (or even occurring) is beside the point – the fact that this particular auto-sodomist I’ve described is aware of the issues is at least testament to growing concern among the general populace.
But so many issues in conservation science go unnoticed even by the most environmentally aware. Today’s post covers just one topic (I’ve covered others, such as mangroves and kelp forests) – freshwater biodiversity.
The issue is brought to light by a paper recently published online in Conservation Letters by Thieme and colleagues entitled Exposure of Africa’s freshwater biodiversity to a changing climate.
Sure, many people are starting to get very worried about freshwater availability for human consumption (and this couldn’t be more of an issue in Australia at the moment) – and I fully agree that we should be worried. However, let’s not forget that so many species other than humans depend on healthy freshwater ecosystems to persist, which feed back in turn to human benefits through freshwater filtering, fisheries production and arable soil accumulation.
Just like for the provision of human uses (irrigation, direct water consumption, etc.), a freshwater system’s flow regime is paramount for maintaining its biodiversity. If you stuff up the flow regime too much, then regardless of the amount of total water available, biodiversity will suffer accordingly.
Thieme and colleagues focus specifically on African freshwater systems, but the same problems are being seen worldwide (e.g., Australia’s Murray-Darling system, North America’s Colorado River system). And this is only going to get worse as climate change robs certain areas of historical rainfall. To address the gap in knowledge, the authors used modelled changes in mean annual runoff and discharge to determine fish species affected by 2050.
The discharge/runoff results were:
- decreased discharges expected along Mediterranean coast, parts of West Africa, southern Madagascar, in many areas of southern Africa
- increases expected in Nile Basin, Lake Chad, Guinean-Congolian forest, coastal Angola, Horn of Africa, several East African freshwater lakes, swamps, and coastal rivers
- runoff had similar pattern except in Nile Delta
- > 80 % of African land surface shows change in long-term average discharge/runoff
- ~ 35 % of region shows changes of > 40 %
- 73 of 87 ecoregions expected to experience absolute change in discharge/runoff of > 10 %
And for the freshwater species:
- of > 3000 described freshwater fishes, ~ 95 % occur in ecoregions with a change in discharge/runoff of > 10 %
- ~ 92 % of endemic species could be affected
- 10 ecoregions expected to experience > 10 % change in discharge/runoff
- ~ 33 % of fish species & 20 % of all endemics occur in ecoregions experiencing extreme (> 40 %) changes in discharge/runoff (a toy version of this ecoregion overlay is sketched below)
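To make the overlay behind these percentages concrete, here is a toy version of the calculation: join a per-ecoregion discharge/runoff change to a species-by-ecoregion table and tally the share of species (and endemics) falling in ecoregions above a change threshold. The column names and example rows are hypothetical; the real analysis by Thieme and colleagues is far more detailed.

```python
# Toy version of the ecoregion overlay; column names and rows are made up.
import pandas as pd

ecoregions = pd.DataFrame({
    "ecoregion": ["A", "B", "C"],
    "discharge_change_pct": [12.0, 45.0, 6.0],   # absolute % change by 2050
})
species = pd.DataFrame({
    "species": ["fish1", "fish2", "fish3", "fish4"],
    "ecoregion": ["A", "B", "B", "C"],
    "endemic": [False, True, True, False],
})

merged = species.merge(ecoregions, on="ecoregion")
affected = merged[merged["discharge_change_pct"] > 10]   # >10 % change threshold

print("share of species in ecoregions with >10% change:",
      len(affected) / len(species))
print("share of endemics in those ecoregions:",
      affected["endemic"].sum() / species["endemic"].sum())
```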
Of course, just because a flow regime shift takes place doesn’t mean these species will necessarily go extinct; however, we do know that range-restricted/endemic species are more likely to experience range reductions with climate change, and these are some of the first species we know have already gone extinct from climate shifts worldwide. Therefore, we can reasonably expect that in areas with the greatest flow regime changes and highest endemism, extinctions will be highest.
What can we do? The take-home message is relatively simple – make sure historical flow regimes are maintained as best as possible, taking into consideration climate projections for each region. Catchment-level predictions of water availability change and flexibility to adjust water allocation/use management accordingly are paramount.
We also need more studies like this for other parts of the world where flow regime changes are already causing havoc. Murray-Darling Basin Authority – are you listening? | https://conservationbytes.com/2010/09/06/freshwater-biodiversity-conservation/?like_comment=4355&_wpnonce=b5c503b1da |
Accounts Payable Supervisor
A leading global private equity firm, located in Boston, is seeking an experienced Accounts Payable Supervisor to join the Finance – Corporate Accounting team. This position will report to the Accounts Payable Manager and will be responsible for the full-cycle accounts payable process including vendor on-boarding, invoice coding, contract review and matching, and payment processing. This position will offer significant room for professional and financial growth. If you are interested, please keep reading.
Responsibilities:
- Provide oversight to team and processing queues to ensure invoices are paid timely, accurately and in accordance with the business terms, firm policy and procedures, and internal controls
- Coordinate the preparation of more complex vendor payments
- Interpret contracts (engagement letters, MSAs, SOWs) and complex business documents
- Review and approve voucher packages (invoices, contracts, etc.) created by the AP team members to ensure proper general ledger coding, contract application, and business approval
- Supervise the team and prioritize daily work to maximize efficiency and timeliness
- Manage customer relationships (both internal and external) to ensure superior service is consistently provided
- Develop team: monitor performance of direct reports and team. Provide prompt and objective coaching and counseling
- Ensure compliance with policy and procedures, Limited Partnership Agreements, Advisory Agreements, and other one-use and long-term contracts
- Embrace a culture of continuous improvement through automation, embracing and leveraging technology, streamlining, and innovation
Qualifications: | https://daleyaa.com/job/21270-accounts-payable-supervisor/ |
SPECIAL FEATURE: Advances in NZ Research
Research into Seismic Design and Performance of High Cut Slopes in New Zealand
Introduction and background
Transportation routes are critical lifelines for the community, particularly in the event of natural hazards. New Zealand has a rugged terrain and high seismicity, which means that high cut slopes are required to form transportation routes. The performance of these slopes in earthquakes is critical to ensure a resilient transportation system. Although the focus of the research was on transport routes, the outcomes will also inform the assessment or design of steep slopes that affect the vulnerability and life safety associated with buildings and other facilities adjacent to steep slopes.
Currently there is very little guidance available for the earthquake design of high cut slopes either in New Zealand or internationally. The potential for topographical amplification of earthquake shaking, and the observation of the large landslides that have affected transportation routes in earthquake events has raised the awareness of the need for research and development of guidelines for the seismic design of high cut slopes. The New Zealand Transport Agency engaged Opus International Consultants to carry out this research and development of guidance.
Research Team
This research has been carried out by Messrs P Brabhaharan, Doug Mason, Eleni Gkeli and Andy Tai of Opus International Consultants, and Graham Hancox from GNS Science, and assisted by other staff from both organisations. Graham Hancox focussed on the landsliding in past earthquakes in New Zealand. The contribution of the peer reviewers, Prof George Bouckovalas, and Dr Trevor Matuschka is gratefully acknowledged. The contribution of Prof Nick Sitar is also gratefully acknowledged.
Research Scope and Objectives
The research included:
1. Review of the performance of high cut slopes in recent worldwide earthquakes
2. Consideration of the influences of the distinctive aspects of New Zealand’s seismicity and topography
3. Review of relevant recent research on topographical effects from New Zealand and overseas
4. Review of current design practice in New Zealand and overseas
5. Development of guidelines for the earthquake design of high cut slopes in New Zealand.
Some limited targeted numerical modelling has been carried out as part of this research to test the topographical effects and variation of seismic shaking over the height of the slope, using specific New Zealand-centric parametric analyses. The objective is to enable design of resilient and cost-effective cut slopes, taking into account the variation of peak ground acceleration over the height of the slope.
The results of the initial stage of the research, including points 1 to 4 discussed above, were presented in the 6th ICEGE held in Christchurch in November 2015 (Brabhaharan et al., 2015). The remaining work of the research is now complete and will be published by the New Zealand Transport Agency. A quick overview of the findings of the research is presented in this article.
Lessons from past earthquakes
Detailed review of coseismic landsliding and the performance of slopes during historical New Zealand earthquakes was carried out as part of this research. This has been supplemented by a literature review to ascertain the effects of large earthquakes on slope performance in past worldwide earthquakes. The main findings were:
The general pattern of damage observed in recent large earthquakes is of widespread slope failures characterised by shallow landslides in the surficial layers of regolith and immediately underlying weak, brittle and dilated rock mass in the upper parts of steep slopes. Large or very large landslides are comparatively rare in number and extent compared to the shallow slides, however these tend to be much larger in volume and consequently cause much greater damage.
Widespread slope failures have occurred on slopes steeper than 40° to 50° that are underlain by young (Miocene or younger) sedimentary rocks, which have been observed to be particularly prone to these types of landslides. However, landslides on much gentler slopes in volcanic soils have also been observed recently, in the April 2016 Kumamoto earthquakes in Japan (Brabhaharan – pers comm).
Steep slopes in competent bedrock are prone to more localised failures (shallow rock slides and rock falls), as well as less common, but larger, defect-controlled failures.
The middle to upper parts of hillslopes are most susceptible to landslides, due to a combination of steep slope angles, weaker rock strength (due to the effects of weathering, dilation, fracturing etc.) and topographic amplification of ground motions in those parts of the slopes.
Hanging wall, topographic amplification and attenuation/directivity effects result in asymmetric patterns of slope failure around the fault rupture, with larger ground motions and consequently more slope failures located at greater distances from the fault in areas that exhibit these effects. The common focal mechanism for earthquakes exhibiting these effects is thrust/reverse faulting.
Characterisation of New Zealand Topography and Seismicity
The New Zealand land mass is tectonically very active due to its position within the plate boundary zone between the Pacific and Australian plates. Consequently, the topography of New Zealand is highly variable, from mountainous alpine areas, to lower relief mountains/hills, rolling hill country, rounded and incised peneplains, young volcanic terrain, recent alluvial plains and uplifted marine terraces. The differing terrains were grouped into three broad categories for areas where transportation corridors pass through hilly terrain, presented in Table 1.
Table 1: Characterisation scheme for New Zealand topography
Observations from past earthquakes in New Zealand and overseas show that seismicity effects, such as topographic amplification, can have a significant effect on the distribution and concentration of earthquake-induced slope failures, particularly in areas of compressional tectonic regimes dominated by reverse or thrust faulting. The land mass of New Zealand has been characterised based on the seismotectonic type of active deformation, as shown in Figure 1. The seismotectonic regimes vary from the subduction zone and fold-thrust belt in the eastern North Island, dextral oblique-slip faults through the central part of New Zealand, reverse faults in the northwest and east of the South Island, normal faults associated with volcanic rifting on the central North Island volcanic plateau, and low seismicity areas in the northwest North Island and southeast South Island, away from the active areas.
Similarly, the review of published literature documenting the performance of slopes in different materials shows that young (Miocene or younger) sedimentary rocks are particularly prone to widespread shallow landslides in large earthquakes, with hard bedrock more prone to localised wedge failures and rock falls. Figure 1 also shows a simplified geological grouping of formations into four broad categories: indurated bedrock, soft rock materials, volcanic deposits and Quaternary soil deposits, and is based on the 1:250,000 QMAP regional geology maps.
Topography effects
Topography effects are a complex phenomenon, and it is acknowledged in the literature that their study and quantification present challenges. The following points summarise the general conclusions extracted from literature review regarding topographic effects:
Topography effects are influenced by shape, inclination and height of slope, as well as wave type, wavelength and angle of wave incidence. They are also influenced by the stratigraphy and geology of the slope, such as the presence of softer/weaker rock or soil layer at the top of the slope and presence of rock defects and faults.
Amplification effects due to topography are mainly concentrated at the top of two-dimensional step-like slopes and at isolated cliffs or steep ridges. De-amplification is observed at the mid-height and near the base of slopes and canyons.
High amplification of the seismic motions at a zone near the crest of the slopes occurs due to a combination of the primary SV waves and reflected P, SV and Rayleigh waves. A parasitic vertical component of these waves superimposed on the incoming seismic excitation appears to be of importance.
Local convexities on the slope profile can introduce important changes to the pattern of surface ground acceleration, such as amplification of the vertical component of incoming S waves.
Topography effects fluctuate intensely above the crest with distance away from the slope, so that detecting them on the basis of field measurements alone becomes a very demanding task (Bouckovalas & Papadimitriou, 2006).
The amplification of the seismic motion near the crest has been demonstrated qualitatively in the numerical studies, but has been underestimated quantitatively. Time-domain crest-to-base amplification ratios from the theoretical and numerical studies do not exceed a value of 2, while values as high as 10 were observed during microtremors (Assimaki et al. 2005).
Numerical Analysis
Limited numerical analyses were carried out as part of the current study to provide a better insight into the variation of ground acceleration along the height of the slope, as well as the effects of cut slopes affecting part of the slope.
The topographies examined include ridge and terrace-like slopes; the geometries examined are shown in Table 2. The slopes were assumed to consist of rock, so that high model complexity is avoided and the interpretation of the results concerns purely the variation of seismic motion due to topography, rather than potential soil amplification effects (Cases 1 and 2 in Table 2). Limited analyses were also carried out taking into account the presence of weaker material near the slope surface, to provide insight into the degree of influence of overburden materials overlying bedrock (Cases 3 and 4 in Table 2). The effect of local convexities present on the slopes, for example those created by a man-made cut at the toe of a high slope, was also examined at a preliminary stage (Case 5 in Table 2).
Table 2: Summary of geometries modelled in the numerical analyses
The conclusions drawn from the analyses are in general agreement with those of the literature review; some of them are summarised below:
The amplification effect is found in our analyses to be mostly influenced by the frequency of the excitation and the slope height, when the slope consists of rock. It is also influenced by the presence of weaker overburden soil material or highly weathered rock overlying unweathered or slightly weathered bedrock. De-amplification of the seismic ground motions is observed at the toe of the slope and inside the slope at high frequencies.
For the terrace topography, the amplification effect at the upper part of the slope is negligible for the small frequencies (≤ 2 Hz) when the slope consists of rock. However, amplification factors of the order of 1.2 and 1.4 are indicated by the numerical analyses for the 50 m high and 100 m high slopes respectively, for higher frequencies, when the slope consists of rock.
High amplification effects are observed at the top part of the ridge for all the cases examined, with higher amplification factors occurring for the high ridges. The amplification of the top of the ridge is higher than that of the corresponding terrace-like slope.
A considerable vertical component of ground acceleration was observed for some of the cases examined, although the harmonic excitations used in the analysis introduced only horizontal motions to the slope. The presence of a significant vertical component has also been indicated by previous researchers, as noted in the literature review.
Amplification of seismic acceleration was observed in the case of a ridge with a cut excavated at its toe, of slope angle 45° to 50°. The amplification factors show a tendency to increase as the irregularity becomes more pronounced.
Current Design Practice
Design standards available for use in the seismic design of high cut slopes have been searched for and reviewed as part of the literature review. This included New Zealand, European, Canadian and United States of America standards.
New Zealand
In New Zealand, it is only the Bridge Manual third edition (New Zealand Transport Agency, 2014) that provides specific recommendations on design standards and selection of seismic design parameters for slopes associated with state highways in New Zealand. The Bridge Manual provides earthquake loading parameters, such as Peak Ground Acceleration (PGA) and the Effective Magnitude by classifying the roads depending on their level of importance.
It should be noted that the Bridge Manual requires cut slopes to be designed to a lower level of earthquake hazard, compared to other structures (bridges, retaining structures and embankments) along state highways. It is not clear why cut slopes are required to have a lower level of earthquake performance although failure of cut slopes can give rise to closure of important primary lifeline routes for six months or more (Mason and Brabhaharan, 2012), or as observed in the failure of slopes along State Highway 3 in the Manawatu Gorge, which was closed for more than 12 months.
Further, no allowance is made in the Bridge Manual for factoring seismic actions, either to allow for amplification (such as due to topographical effects) or reductions (to allow for incoherence of motions where the slopes are of significant height). Where slopes are to be designed for permitting displacement under earthquake loading, specific guidance is provided for engineered fills and retaining walls, and these same clauses are referred to for cut slopes.
The common engineering practice followed by the industry for slope stability analysis is to apply the provisions of NZS 1170.5:2004 with respect to the limit state requirements (Ultimate Limit State and Serviceability Limit State), and NZS 1170.5:2004 or the Bridge Manual for estimating the design actions. When simplified pseudo-static methods of analysis are followed, only the horizontal component of the seismic inertia forces is taken into account, without any factoring to account for topographic amplification near the crest or de-amplification near the base (Brabhaharan and Stewart, 2015). Similar assumptions are made when a displacement-based design is carried out.
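As a concrete, deliberately simplified illustration of this pseudo-static practice (a rigid block on an inclined plane, with the horizontal coefficient taken directly from an unfactored design PGA), consider the sketch below. All geometry and strength values are hypothetical, and this is not a procedure prescribed by the Bridge Manual or the draft guidelines.

```python
# Pseudo-static check of a rigid block on a plane dipping at beta,
# with horizontal seismic coefficient kh applied to the block weight.
# All input values are hypothetical and for illustration only.
import math

def pseudo_static_fos(beta_deg, phi_deg, cohesion_kpa, area_m2, weight_kn, kh):
    """Factor of safety of a dry sliding block under horizontal coefficient kh."""
    beta, phi = math.radians(beta_deg), math.radians(phi_deg)
    driving = weight_kn * (math.sin(beta) + kh * math.cos(beta))
    normal = weight_kn * (math.cos(beta) - kh * math.sin(beta))
    resisting = cohesion_kpa * area_m2 + max(normal, 0.0) * math.tan(phi)
    return resisting / driving

pga_g = 0.45   # design PGA as a fraction of g (hypothetical)
kh = pga_g     # unfactored coefficient, as in the common practice noted above
static = pseudo_static_fos(40, 35, 5.0, 25.0, 600.0, kh=0.0)
seismic = pseudo_static_fos(40, 35, 5.0, 25.0, 600.0, kh=kh)
print(f"static FoS = {static:.2f}, pseudo-static FoS = {seismic:.2f}")
```

The contrast between the static and pseudo-static factors of safety in this toy case illustrates why the choice of coefficient, and any factoring applied to it, matters so much for cut slope design.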
Rouvray et al. (2015) present a performance based design approach used for the design of the Transmission Gully motorway north of Wellington, New Zealand. This relied on the recurrence intervals specified for different serviceability and ultimate limit states in the current Bridge Manual (New Zealand Transport Agency, 2014). Toh and Swarbrick (2015) present a method of assessment of seismic loads considering topographic amplification.
This method involves selection of a slope crest topographical amplification factor based on the slope angle and slope height.
Internationally
A common element in the international design practice for pseudo-static slope stability analysis is the factoring of the horizontal seismic acceleration used to calculate the equivalent force acting on the sliding mass. A factor of 0.5 is used in the Eurocode, whose background we have not been able to clarify, and a factor feq is used in the Californian code (Southern California Earthquake Centre, 2002).
Amplification of seismic waves due to topographic irregularities has generally received limited attention in the context of design codes and standards. Recommendations were found only in the French Code PS-92 (Jalil, 1992) and the Eurocode (European Committee for Standardization, 2004). Both codes have considered step-like slopes and refer to the topography effects as an amplification of the seismic motion at the upper part of the slope. The focus of the Eurocode is in providing topographical amplification for buildings built on slopes, near the crest or at some distance from the crest, rather than providing a method for assessing the effect of topographical amplification on the earthquake performance or design of the slopes themselves.
Development of Design Guidelines
Draft guidelines for the design of high cut slopes have been developed, as part of this research project, to aid design for transportation projects in New Zealand. The concept of resilience is introduced in the new design guidelines. Resilience is the ability to recover readily from adversity and return to the original form. From an infrastructure and building perspective, this requires us to develop our built environment in a way that reduces damage and the consequent loss or reduction in functionality, and that enables quick recovery from such reduction. Brabhaharan (2006) adapts this concept of resilience for application to transportation networks, as conceptually illustrated in Figure 2.
Figure 2: Resilience of route/network
Applying this to transportation routes, the cut slopes should be designed for resilience by adopting a design that has a low vulnerability to failures and closure of the route, by minimising the size and nature of failures, thus enabling the functionality of the transportation route to be restored quickly.
Classification of cut slopes in the new Design Guidelines is initially based on selecting the appropriate Importance Level of the route in order to select appropriate levels of earthquake shaking. The Importance levels (IL) proposed are those defined in AS/NZS 1170.0 supplemented by the Bridge Manual for Highways and arterial slopes. The aim of this selection has been to use the existing importance level framework in the New Zealand Standards and Bridge Manual.
A further classification of the cut slopes has been introduced in the guidelines: the Resilience Importance Category (RIC). It is recognised that resilience expectations, even for the same Importance Level of transportation route, vary significantly depending on the regional context of the routes. For example, IL3 routes in Christchurch or Auckland would have a lot more redundancy because the terrain allows many alternative routes, whereas in places like Wellington, Dunedin or Central Otago there is very little redundancy. Resilience Importance Categories have been developed to incorporate the local context and resilience expectations into the design process.
Table 3: Illustration of the developments introduced by the new design guidelines for cut slope design
A four-level design approach is presented, to suit the design importance level of cut slopes. A fundamental difference of the design approach given compared to that stipulated in NZS 1170 is that the design is for resilience rather than life safety alone. The different design approaches depend on the importance level of the transportation corridor, the resilience expectations for the route and the scale and complexity of the cut slopes.
The guidance also includes assessment of suitable topographical amplification factors for typical hilly terrain where highway cut slopes are commonly formed in New Zealand, and factoring of the peak ground acceleration for derivation of equivalent pseudo-static loads for use in design.
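A minimal sketch of how such factoring might be chained together is given below: the free-field PGA is scaled by a crest amplification factor and then by a code-style pseudo-static reduction factor (such as the 0.5 used in the Eurocode, discussed earlier) to give a design horizontal coefficient. The numerical values are hypothetical illustrations and are not taken from the draft guidelines.

```python
# Sketch of combining a topographic amplification factor with a code-style
# pseudo-static reduction factor to obtain a design horizontal coefficient.
# The values below are hypothetical and not from the draft guidelines.
def design_kh(pga_freefield_g, topo_amplification, reduction_factor=0.5):
    """Design horizontal seismic coefficient for pseudo-static analysis."""
    crest_pga = pga_freefield_g * topo_amplification   # motion near the slope crest
    return reduction_factor * crest_pga                # Eurocode-style 0.5 factoring

# 100 m high cut under higher-frequency shaking: the analyses above indicated
# crest amplification of the order of 1.4.
print(round(design_kh(pga_freefield_g=0.4, topo_amplification=1.4), 3))
```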
Table 3 provides a comparison of how the proposed guidelines address different aspects of the design compared to existing standards. The proposed approach uses a novel resilience-based approach to cut slope design, and addresses significant gaps in current design guidance for cut slopes.
Closure
It is recognised that this is an area of recent research and development, and more research is still required to develop a good understanding of topographical amplification issues. However, these guidelines will help bring up-to-date research and current knowledge to the design of transport infrastructure, at a time when major transportation projects involving high cut slopes are under way or planned for the near future in New Zealand.
These draft guidelines provide an approach based on the research carried out, and may be converted into guidelines that are published for design use, and for eventual incorporation into design manuals such as the Bridge Manual, or into design standards.
References
Australian / New Zealand Standards (2002). Structural design actions – Part 0: General principles. June 2002.
Assimaki D, Gazetas G, Kausel E. (2005). Effects of local soil conditions on the topographic aggravation of seismic motion: Parametric investigation and recorded field evidence from the 1999 Athens Earthquake. Bulletin of the Seismological Society of America 95 (3).
Brabhaharan, P. (pers com) (2016). Comments following the Learning from Earthquakes mission to Japan to observe damage from the Kumamoto Earthquakes in April 2016.
Brabhaharan P., Mason D., Gkeli E. (2015). Research into Seismic Design and Performance of High Cut Slopes in New Zealand, 6th International Conference on Earthquake Geotechnical Engineering, 1-4 November 2015, Christchurch, New Zealand
Brabhaharan, P (2006). Recent Advances in Improving the Resilience of Road Networks. Remembering Napier 1931 – Building on 75 Years of Earthquake Engineering in New Zealand. Annual Conference of the New Zealand Society for Earthquake Engineering. Napier, 10-12 March 2006.
European Committee for Standardization (2004). Eurocode 8. Design of structures for earthquake resistance. Part 1: General rules, seismic actions and rules for buildings. EN 1998-1: and Part 5: Foundations, retaining structures and geotechnical aspects. EN 1998-5. November 2004.
Ministry of Business Innovation and Employment and New Zealand Geotechnical Society (2016). Guidelines for Earthquake Geotechnical Practice in New Zealand, Module 1: Overview of the Guidelines. March, 2016.
| |
Background and Purpose - It is unclear whether disparities in mortality among stroke survivors exist long term. Therefore, the purpose of the current study is to describe rates of longer term mortality among stroke survivors (ie, beyond 30 days) and to determine whether socioeconomic disparities exist. Methods - This analysis included 1329 black and white participants, aged ≥45 years, enrolled between 2003 and 2007 in the REGARDS study (Reasons for Geographic and Racial Differences in Stroke) who suffered a first stroke and survived at least 30 days after the event. Long-term mortality among stroke survivors was defined in person-years as time from 30 days after a first stroke to date of death or censoring. Mortality rate ratios (MRRs) were used to compare rates of poststroke mortality by demographic and socioeconomic characteristics. Results - Among adults who survived ≥30 days poststroke, the age-adjusted rate of mortality was 82.3 per 1000 person-years (95% CI, 75.4-89.2). Long-term mortality among stroke survivors was higher in older individuals (MRR for 75+ versus <65, 3.2; 95% CI, 2.6-4.1) and among men than women (MRR, 1.3; 95% CI, 1.1-1.6). It was also higher among those with less educational attainment (MRR for less than high-school versus college graduate, 1.5; 95% CI, 1.1-1.9), lower income (MRR for <$20k versus >50k, 1.4; 95% CI, 1.1-1.9), and lower neighborhood socioeconomic status (SES; MRR for low versus high neighborhood SES, 1.4; 95% CI, 1.1-1.7). There were no differences in age-adjusted rates of long-term poststroke mortality by race, rurality, or US region. Conclusions - Rates of long-term mortality among stroke survivors were higher among individuals with lower SES and among those residing in neighborhoods of lower SES. These results emphasize the need for improvements in long-term care poststroke, especially among individuals of lower SES.
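For readers unfamiliar with the quantities reported here, the short sketch below shows how a mortality rate per 1000 person-years and a crude mortality rate ratio (MRR) are computed; the counts are invented for illustration and are not REGARDS data, and the paper's MRRs are age-adjusted rather than crude.

```python
# Toy illustration of a mortality rate per 1000 person-years and a crude
# mortality rate ratio (MRR). The counts below are invented, not REGARDS data.
def rate_per_1000_py(deaths, person_years):
    return 1000.0 * deaths / person_years

rate_men = rate_per_1000_py(deaths=260, person_years=2900.0)
rate_women = rate_per_1000_py(deaths=230, person_years=3300.0)
mrr = rate_men / rate_women   # crude ratio; the paper reports age-adjusted MRRs

print(f"men: {rate_men:.1f}/1000 PY, women: {rate_women:.1f}/1000 PY, MRR = {mrr:.2f}")
```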
Original language: English (US)
Pages (from-to): 805-812
Number of pages: 8
Journal: Stroke
Volume: 50
Issue number: 4
State: Published - Apr 1 2019
Bibliographical note - Funding Information:
This research project is supported by a cooperative agreement U01 NS041588 from the National Institute of Neurological Disorders and Stroke, National Institutes of Health (NIH), Department of Health and Human Service. Representatives of the funding agency have been involved in the review of the manuscript but not directly involved in the collection, management, analysis, or interpretation of the data. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institute of Neurological Disorders and Stroke or the NIH. Dr Elfassy was supported by the American Heart Association postdoctoral fellowship (17POST32490000) and is currently supported by the University of Miami Clinical and Translational Science Institute, from the National Center for Advancing Translational Sciences and the National Institute on Minority Health and Health Disparities (KL2TR002737). Dr Zeki Al Hazzouri was supported, in part, by a grant from the NIH, National Institute on Aging (K01AG047273). | https://minnesota-staging.pure.elsevier.com/en/publications/sociodemographic-disparities-in-long-term-mortality-among-stroke- |
Would you like one day to achieve such a level of spiritual evolvement that you yourself are considered a spiritual master? At this spiritual level you not only transform yourself but have the ability to positively influence humankind. This can only happen through spiritual growth. The issue is that the spiritual level of most seekers only increases by 0.25% per year. Under the guidance of H.H. Dr Athavale seekers have surpassed these odds and have achieved spiritual growth rates of up to 10% a year. As of November 2015, there are close to 60 seekers who have attained Sainthood and over 750 seekers who have attained the spiritual level of 60% and above. In this section, we share the underlying principles and teachings behind such astonishing spiritual growth rates so that seekers can make the best use of their life and reach God-realisation in one lifetime itself.
Research articles
Who is a Seeker?
Throughout the SSRF website, we have frequently used the word seeker to refer to a person who is earnestly seeking spiritual growth. In this article we go into detail to explain the qualities of a seeker and how they may differ from those of an average person.
What is Spiritual Level?
The concept of spiritual level is perhaps one of the most intriguing concepts to our readers. We often get asked about spiritual level and how one can gauge one’s spiritual level. In this article we explain this concept in detail and also provide you with some benchmarks to estimate your own spiritual level.
Levels of Spiritual Evolvement
The type of degree we hold can have a lot of importance in the world. In Spirituality, we do not get degrees, but there are spiritual milestones in one’s spiritual evolution that can be compared broadly to worldly degrees.
What is a Spiritual Experience?
Have you ever said a prayer and found that it was answered instantaneously? Have you ever had a dream that was a premonition of an event to come? There are some things that happen to us in our lives that modern science fails to explain. These are examples of spiritual experiences that often accompany spiritual practice. Spiritual experiences are given either by the Divine or by negative energies. This article explores the concept of spiritual experiences in depth and provides guidance on what to do when one experiences the supernatural.
Featured Articles
The Spiritual Science Research Foundation (SSRF) uses the term ‘spiritual level’ to describe a person’s spiritual maturity or spiritual capacity.
The type of degree we hold can have a lot of importance in the world. In Spirituality, we do not get degrees, but there are spiritual milestones that can be loosely compared to worldly degrees.
Research suggested
Spiritual Growth – A Guideline on How to Grow Spiritually
There is a saying that if you always do what you always did then you will always get what you have always got. If there is one thing that is essential for spiritual growth it is increasing one’s level of spiritual practice regularly. Spiritual growth cannot occur if one does the same practice throughout their lives.
Beyond the Spiritual Benefits of Yoga (Asanas and Pranayama)
For the vast majority of those who attend yoga classes, they are nothing more than a form of physical exercise. However for those seeking spiritual growth, will such yoga classes provide a boost to one’s spiritual journey? In this article we explore the options for the seeker who wants God realization.
SSRF’s Bi-annual Spiritual Workshop
SSRF holds a bi-annual spiritual workshop at its premier Spiritual Research Centre in Goa, India. The workshop is conducted free-of-charge for all the seekers attending with the sole purpose of assisting them in their spiritual growth.
After we start our spiritual journey, it is important to gradually increase the level of our spiritual practice on a regular basis. This article gives practical guidance on how to do so in order to ensure continued spiritual growth.
Related sections
Case studies
Positive Changes After Only 4 Months
Marisa started spiritual practice only 4 months ago but has found happiness and inner peace through spiritual growth.
Positive Changes After Only 9 Months
Read how JT overcame drug and alcohol addiction among other problems through spiritual practice.
Seeking the Truth
Adrian witnessed an all-round positive change from lifestyle to relationships after starting spiritual practice.
From Extreme Addiction and Suffering to Bliss
Despite extreme addiction and suffering from early age, due to faith in God and spiritual practice, Annette is now experiencing Bliss.
Finding God Again
Silvia was eager to find God in her life and in this article she shares with us about this quest.
Overcoming Negative Thinking and Anger
Christie faced negative thinking and anger for much of her life. Spiritual growth helped her to overcome it.
Bliss as the Utmost Purpose of Life
When depression consumed AK’s life, only spiritual practice helped him to overcome it.
From Hardships to Bliss
Maria faced many hardships in her life, but has come out happier and blissful due to spiritual practice.
Getting Out of Trouble
Learn how Auritro has overcome drug and alcohol abuse through spiritual practice.
Related videos and tutorials
Spiritual Blog
Overpowered by Time, Yet Empowering Others
People are often afraid of old age and the problems they may face due to possible health issues. People may worry about being a nuisance to others due to not being able to care for themselves. Here we present the case of a spiritually evolved seeker who had transcended fear of disease and old age and was able to conduct spiritual healing sessions to help others in their spiritual practice. | https://www.spiritualresearchfoundation.org/spiritual-practice/spiritual-growth/ |
by Bence Sarkadi
In an article submitted to the Millstone in August 2013, entitled “Puppeteering Shakespeare”, I talked about the possibilities of adapting Shakespeare to the puppet stage. I pointed out the fundamental differences, as well as some common points, between the genres of the live theater and puppetry and attempted at presenting an overview of the specific tools of puppetry. I also discussed ways in which the live theater can make use of the methodology of the puppet theater when presenting a Shakespearean play.
In the present paper I would like to give the reader an insight into the actual workings of the puppet theater when dealing with a Shakespeare drama. This time I will focus on the process of adaptation and discuss some very practical questions a director/creator of the puppet theater is faced with when working on an adaptation. The basis for the analysis will be my own adaptation of Hamlet to the puppet stage, entitled “Re:Hamlet”.
While the aim of this essay is to provide an objective analysis of the process of creating an adaptation, such an analysis could not be completely neutral and thereby ‘correct’ even if it were performed by an impartial critic as opposed to the creator of the adaptation. As Jan Kott, one of the most influential Shakespearean scholars of the 20th Century and a great source of wisdom in Shakespeare studies says,
It’s virtually impossible for a director, and even more so for a critic to be ‘correct’. If someone wants to be ‘correct’, they must become a proof-reader and not a director or critic.
In this particular instance, I stand as director and critic and even though all the key concepts of the production are either supported or deliberately questioned by setting them against the theories of scholars and observations of notable theater practitioners, it is impossible for someone in such a position to perform a completely impartial analysis of his own work. The point of the exercise, however, is not to assess the value or success of the production, or even to decide whether the concept of Re:Hamlet is in any way better than any other concept of adaptation, but to show how one particular system of translating Shakespeare’s text to a nonverbal medium can be applied in practice. Even if such an investigation cannot claim to be absolutely objective, it can and will aim to be thorough (considering multiple aspects of the method applied), diagnostic (examining the problems involved in applying this method) and authentic (providing a practical insight into the workings of the puppet theater). I sincerely hope that the readers of the Millstone will be as open to this investigation as they have been to my previous article.
Adaptation as creative interpretation
Hamlet on the puppet stage
What exactly are we are doing when we are adapting Shakespeare? It is Jan Kott again, who provides an explanation that may serve as a starting point for describing this process. In a recorded conversation with Charles Morowitz, a radical minded and immensely creative director of the theater, Kott makes a statement that addresses some very important aspects of adaptation:
One says to Shakespeare: thank you very much for providing us with this story, but we don’t really need you any longer as we are going off in another direction. In just the same way as Shakespeare might have said ‘thank you’ to Belleforest or Kyd or Holinshed or Boccaccio.
Although Kott’s observations are generally very accurate, this time I believe he is exaggerating a little, even if only to emphasize a perfectly valid point. I will argue for the validity of his statement shortly, but first I would like to point out a very important problem with his interpretation of the process of adaptation. The fact is, the above description of adapting Shakespeare is applicable only as long as one considers himself at least as great, or even greater than Shakespeare. Shakespeare could easily say farewell to his sources because he only needed the stories they had written (or had themselves adapted from other sources), but he did not need their name and fame, that is to say all that is connected to a ‘brand name’.
It is quite obvious that in choosing to stage a Shakespeare play as opposed to, for example, a possibly similarly great Marlowe paly, we are jumping on the bandwagon. Shakespearean adaptations have enjoyed, and will probably continue to enjoy the attention of a much wider audience than productions of any other playwright. If we forget, for now, about the immense burden of expectations connected to a Shakespearean adaptation and concentrate only on the benefits of staging his plays, it becomes clear that we do need Shakespeare, not only for the stories his plays provide, but also for the ‘Shakespeare brand’, that is, all that his name means, and which makes a production appeal to the public more than, say, the ‘Ben Jonson brand’ or even the highly popular ‘Chekov, Shaw, Moliere or Ibsen brands’. This is not to suggest that there is no other motivation for staging Shakespeare than his name being a sure bet to bring in the audience. Shakespeare does speak to all of us and his works not only provide a source of learning about history, culture and the human psyche but they are also immensely entertaining. One should not forget, however, that the reason for producing an adaptation as opposed to an entirely original piece of art is precisely to create something that is innovative, but at the same time recognizably stemming from an existing, and in most cases, already successful work. The truth is that while thanking him for providing us with a story, we continue to need Shakespeare, as him being the most famous playwright of all time is at least one of the reasons for choosing one of his plays to adapt.
With this said, we should recognize that Kott does hit the nail on the head when he talks about “going off in another direction”. Whether it is a literary genius rewriting a story or an average dramaturge, director or puppeteer transforming a play according to the requirements of another medium, what all adaptations will necessarily have to do is find their own direction. In the same interview, Kott says, “we have to force the classical texts to give us new answers. But to obtain new answers, we have to bombard them with new questions”. These new questions will shape the direction a certain adaptation will take and define the answers one can get.
To be very specific then, adapting Hamlet to the puppet stage will require that we: (1) look for and find those elements of the drama which make it relevant for our time and social environment and which can lead us to a new direction for presenting the play; (2) define and follow this newfound direction which differs but also stems from the original in order to produce a creative interpretation of the work and; (3) in following this direction transform, reduce and emend the play, within the limits of recognizability, to render it suitable for the medium of the puppet theater. Based on these three steps three questions can be formulated. These questions are the following: (1) WHY (why is the adaptation justifiable and relevant); (2) WHAT (what is the new direction the adaptation will take) and; (3) HOW (how can puppetry serve the play and how can the play be suited for the puppet stage). These three questions have served as the backbone of the process of staging Re:Hamlet. As the focus of this paper is the process of adaptation, I will, through the analysis of the production, try to provide an answer to the third question.
How can puppetry serve the play and how can the play be suited for the puppet stage?
Puppetry, like every other performing medium, has its own set of tools and finding the ones most suitable for achieving a given effect will not only define the individual style, but ultimately the success of the final product. But how does one start working on an adaptation at all? Firstly, the director of the puppet theater, just as the director of the live theater or of a film, will have to decide where he wants to place the emphasis and how he can best reconcile the values of Shakespeare and the values of the performing medium. Just as in the case of the cinema where the director has to translate some, or even most of the words to pictures and action, the puppet theater will also need to create its own ‘screenplay’. As Syd Field, one of the greatest names in screenwriting today observes: “a screenplay is a story from play-script to screenplay told in pictures, and there will always be some kind of problem when you tell the story through words, and not pictures”. The starting point for the ‘screenplay’ of Re:Hamlet was the understanding that the puppet theater, just like the cinema, has a unique ability to tell a story in pictures and action: to translate verbal poetry into visual imagery and movement.
But what is the reason for this unique ability? The answer is to be found in an area of theater studies concerned with the kinds of signs the art of puppetry uses to transmit its message to the audience: the semiotics of the puppet theater. According to Steve Tillis, a theoretician I have also quoted in my previous article, the puppet theater makes use of three signs: that of design, movement and speech. Tillis talks about the three signs of having equal importance in puppetry, this view, however, cannot account for the evident ability of the puppet theater to translate verbal poetry into visual imagery and movement. I believe that the puppet itself is primarily distinguished from other, non-performing, inanimate objects by the necessity of the signs of design and movement, while the sign of speech is only an optional constituent of the puppet. This hierarchy of signs in the puppet theater becomes more distinct if, in contrast, we look at how the same signs are applied on the live stage.
Although the three basic signs of design, movement and speech are also the basic constituents of the live theater, it is the order of their significance that distinguishes the two genres. While speech is only a complementary tool of puppetry, on the dramatic stage it is the spoken word that generally serves as the main channel of communication. At the same time, movement (of the actors) and design (of the make-up, costumes, props and scenery) are only optional elements that may become important means of communication in many cases, but are not inherently essential to a live theater performance. Any live stage performance can decide to omit the use of props, scenery or costumes and even movement, and build its communication exclusively on the sign of speech. Although the spectator of such a performance will surely benefit from the gestures and facial expressions of the actor and the atmosphere of the space in which the performance takes place, the event will effectively function as a theater play and an audio play at the same time. Such a performance can be recorded and published as an audio book without any editing, and will be fully enjoyable as such. Radio plays or audio drama presentations (performances created for the purpose of being listened to) are also examples of performances without the use of visual elements.
With the essential constituents of puppetry being design and movement, a performance not based in visuality would lose its meaning in the puppet theater. It is just as impractical to imagine a non-visual puppet performance as a non-visual exhibition of paintings, a non-visual pantomime or a non-aural music concert. Puppetry makes use of the capacity of the graphic or fine arts for expressing complex metaphors, allegories or entire stories through visual signs, and as such, it can produce creations of the fine arts. Design in the puppet theater, in the broader sense, includes not only the design of the puppet, but of the scenery and props, as well, and a finely sculpted head, a masterfully painted face or background or a skilfully sewn costume can easily have the expressive strength of a painting. But just as there is no sense in speaking about a painting without ever looking at it, there is no point in discussing a puppet performance without experiencing its visual signs. Without the sign of design, the puppet theater cannot exist.
Movement is the other essential sign of puppetry without which the puppet theater becomes just as meaningless as without design. Puppetry, where movement gives life to design and therefore becomes an essential element of the meaning, is like dance or pantomime, where movement is the meaning, in that none of the three can exist without movement. Pantomime, in fact, is so close in its methodology to the puppet theater that Fijan, Ballard and Starobin in Directing Puppet Theatre declare that
Pantomime is the basis of all puppetry. It is the visual development of the story without words. A person whose hearing is impaired should be able to watch a puppet performance and understand the story because the movements the puppets make, the positions they assume, and their relationship to the stage are all clear and concise.
Although pantomime does not, by definition, use any text in its performances while puppetry can and often does work with the spoken word, it is a fact that puppetry is based on movement more than on anything else. Fijan, Ballard and Starobin make the very important observation that
If words are necessary to explain the action, something is wrong with the blocking. If it hasn’t been said in pantomime, it hasn’t been said. Too many puppeteers rely on dialogue to tell the story when it should be apparent from watching the pantomime.
The creation of Re:Hamlet was based on the understanding that puppetry can work most effectively by employing the tools of the graphic arts as well as those of dance and pantomime to translate verbal poetry into visual imagery and movement. One cannot, however, take all the metaphors, allegories and rhetorical structures in Hamlet, the 184 different types of figures of speech as defined and categorized by Henry Peacham, and simply ‘draw a picture’ or ‘construct a choreography’ from each one. One needs to find suitable methods for the process of translation to the idiom of puppetry. One such method was the use of a technique originally aimed at developing puppeteers’ ability to substitute words with non-verbal signs and borrowed from the teaching repertoire of the Ernst Busch Academy of Dramatic Arts in Berlin.
The process starts out with a puppeteer performing a monologue or selected lines from a longer speech, for example part of the “O, that this too too solid flesh would melt” (I.ii.129-159) soliloquy from Hamlet. Whenever a puppeteer performs a text, it is of course the puppet that is performing. The puppeteer animates the inanimate object and gives his own words to the puppet, placing the puppet in the spotlight and himself in the background. For students of puppetry or actors new to the world of the puppet theater this in itself is immensely difficult, because one has to repress the natural urge to communicate feelings and thoughts through one’s own movements, gestures and facial expressions. Once all the metacommunicative elements of the scene have been successfully given to the puppet, the next exercise is eliminating the text. The actor now speaks only the first half of every line with the aim of expressing the same feelings, the same dramatic content with the puppet as before. When this is achieved, he continues by speaking only the first word of every line, then only the first word of every sentence. Finally, he will act out the whole scene with only the first and last lines of the monologue spoken.
This technique is quite useful when trying to teach students how the tools of puppetry can be effectively applied in translating the signs of one medium into the signs of another. It is, however, difficult enough even when the object is translating only a few lines. How can it work if the material to translate is an entire play? The answer is that the technique could be effective since the objective of Re:Hamlet was never to translate the entire play, as it is rarely the intention of any adaptation to use the complete, uncut text. Any version of the play prepared for any type of performance will be inevitably and intentionally different from the written text, and because of this, differences between adaptations will not only be defined by the visible differences in the interpretative media or by the various ways directors use scenery, lights, puppets or the camera, but also by the differences in the text of their screenplay. Speaking of directing Hamlet for the stage, Trevor Nunn, former Artistic Director of the Royal Shakespeare Company and the Royal National Theatre and currently of the Theatre Royal, explains that “the cutting is virtually the production. What you decide to leave in is your version of the play”.
This is obviously true for all those productions that use some version of the actual text, but in the case of a puppet adaptation which aims to make the most of its potential for translating verbal material into images and movement, it may seem that every single word, every sentence, every soliloquy or dialogue that has been translated to this other medium is effectively cut, since it will not appear as text in the performance. This, however, does not mean that the version we are left with is empty or even non-existent, the reason being that translation and cutting are not the same notions but different processes that can and should be applied consecutively. When the director of the live theater decides to cut lines, those lines are truly left out and will generally not appear in any form in the performance. The fact that only some lines, few lines or even no lines at all are spoken in a puppet performance, however, does not necessarily mean that those lines were cut. Cutting is part of the process of adaptation, but it happens before translation.
The director of the puppet theater will have to prepare a version of the text from which he can start working on the translation to the sign system of the adapting medium. This is what director Balázs Szigeti and I did: we prepared a text variant for Re:Hamlet containing all the elements we intended to include and omitting those scenes, speeches and dialogues which were not central to our interpretation. The version thus achieved was what we may call the script of the performance, a play-book similar to what any dramaturge or director of the live stage or the cinema would prepare for an adaptation. This script essentially kept the structure of acts and scenes (albeit with some rearrangement), it contained dialogues and monologues, and, had it been performed on the live stage, it would have made for a two-hour performance.
The final length of the puppet performance based on this play-book was just under one hour. What accounts for the shortening of performance time is the fact that the language of the puppet theater is more concise than spoken language, because it is able to express complex metaphors, sentences, even entire dialogues or speeches with a few images or movements. This is not to suggest, of course, that every single word of the script was translated to the idiom of puppetry without any loss of information. Even in the case of intralingual or interlingual translation (as described by Jakobson) there will be some loss of information as well as the inevitable addition of new layers of meaning due to differences in the source and target languages. In the case of intersemiotic translation, or translation to a nonverbal sign system, this will be true many times over, and thus, it would be an illusion to expect a perfect translation of the text. The puppet theater cannot metaphrase the text but it can paraphrase the experience, the impact, or the essence, thereby creating an instance of the work, rather than a mirror translation of the words. The most the puppet theater can strive for is to know and understand the text as thoroughly as possible; to find those elements of the drama that are presentable (and at times, presentable only) by the tools of puppetry; and to create an instance of the work that makes the drama relevant and thus, of interest to its audience. This instance of the play will never be completely accurate in its interpretation of the original, but if it remains faithful to the perceived essence of the work it may enrich the Shakespearean experience with something that will be worth the effort.
Producing a creative but still recognizable interpretation of Shakespeare’s greatest tragedy through an adaptation that works by translating the drama to the sign system of a nonverbal medium had seemed quite an intimidating challenge for the creators of Re:Hamlet. Even with the understanding that it was not only the words but also the work, the play with its independent life, the cultural entity existing outside its text, the so-called ‘Hamlet phenomenon’ that we would be working with, the first and most difficult task was letting go of the text without letting go of the meaning, or rather the essence of the text. For someone like Balázs Szigeti, who had directed and played the lead role of Hamlet in the live theater, having a performance without speaking any of the lines must have seemed unorthodox, if not outright insane. After all, what remains of Shakespeare if we take away his words? Charles Marowitz, who never produced a Shakespeare play entirely without text, nor ever mentions such an intention, nevertheless provides an important insight as to what we might look for in Shakespeare, other than the text itself. He writes:
Language itself is no longer the plays’ essential ingredient. It is their metaphysic, their subterranean imagery, that means most to us today. We are interested in Hamlet, not because of what he says, but because of the way the character connects with our own 20th century sense of impotence and confusion.
It is the images of the play, reinterpreted by each generation, and the associations connected to the story and the characters that we are looking for, and which have made Shakespeare’s plays new, exciting and thought-provoking for over four centuries. These images and associations have been separated from the text, and live on as individual elements of our cultural heritage in a repository of motifs that arts of all genres can and will draw on. This repository of motifs is what, in great part, constitutes the dramatic content of the play, and it is the ability of the puppet theater to express the dramatic content using its own unique set of tools that served as the starting point for staging Re:Hamlet. Iconic motifs, such as the poison, the duel or the skull in Hamlet, for example, are images with connotations just as strong as those of the text. These iconic motifs are, as Maynard Mack explains in Everybody’s Shakespeare, “eloquent moments that his plays have etched on the whole world’s consciousness, moments that speak to the human condition as such”. To illustrate one of the most “eloquent moments” he gives the example of Hamlet standing in the graveyard with Yorick’s skull in his hand, about which Szigeti observes that
Even though there is no textual indication of Hamlet ever taking Yorick’s skull into his hand, the picture of a pensive young man with a skull in his hand is just as much a part of our knowledge of the play as the words “to be or not to be”.
Puppetry can make use of our recognition of these iconic images (regardless of whether they are actually in the text or only in our knowledge of the play), and thus, can stage certain textual or cultural images as actual, moving pictures.
To take a concrete example from Re:Hamlet, let us look at its treatment of the fact that Claudius is not only the murderer, but also the only living witness of his crime, a motif understood but unstated in the text of Hamlet. There is no direct textual reference to this fact, and even if the director of the live theater decided that the dual position of Claudius, affecting his reaction to the performance directed by Hamlet, was important to emphasize, it would be very difficult to do so without adding some lines to the Shakespearean text to explain this idea. As the methodology of Re:Hamlet was finding and translating motifs, images and metaphors, whether explicit or implied, the puppet performance was able to make use of the opportunity to express something the reader of the text or the spectator of a live theater production would not immediately see.
After the mousetrap scene in Re:Hamlet, images of the actual murder of Old Hamlet are projected from the puppet Claudius’s eyes onto a white material, which, in a previous scene, had served as the body of Old Hamlet’s ghost. This, combined with sudden darkness and the only light coming from the king’s eyes, and the opening bars of the Commendatore aria from Mozart’s Don Giovanni, effectively expresses Claudius’s state of mind as he rises from his seat and gasps: “Give me some light: away!” (III.ii.269), and at the same time conveys the implied understanding that the memory of the murder only exists within Claudius’s mind, and when he sees the play he is confronted with this knowledge.
Another example of how puppetry can translate textual metaphors to visual signs can be observed in a scene based on these words, spoken by the Ghost in reference to Claudius: “know, thou noble youth, / The serpent that did sting thy father’s life / Now wears his crown” (I.v.38-40). The word ‘serpent’ carries the strong connotations of evil and sin, which are obvious elements in Claudius’s personality, but ‘serpent’ is also connected to sexuality and lust (the snake itself being a phallic symbol); to fear (ophidiophobia being one of the most primal fears of mankind); to our general revulsion against slippery, slimy, wriggling animals; as well as to the identification of the venom of the snake with the poison of lies. This association becomes especially important since lies enter through the ears just as the lies of the Biblical serpent enter Eve’s ears, and just as the poison poured by Claudius enters the old king’s body through his ear. The character of Claudius, if played well, will evoke all these associations in the spectator of the live theater, but few will directly connect the image of the king lying about his sorrow over the old king’s death (I.ii), the image of the “incestuous” and “adulterate beast” seducing Gertrude (I.v), and the image of the stabbed king still trying to wriggle his way out of an obviously hopeless situation when he says “O, yet defend me, friends; I am but hurt” (V.ii.324) with the image of the serpent. Re:Hamlet works with trying to understand the core of these verbal metaphors, capturing the essence of each character’s personality and inner conflicts, and transforming them into material that can be presented on the puppet stage. When Claudius literally appears as a snake and seduces the mourning Gertrude into placing the crown onto his head with her very own hands, we are constructing a scene to which there is no direct textual reference, but to which the whole text serves as a testament. As Szigeti says,
We take a motif or metaphor and deconstruct it to its last bits, then reconstruct it in visual signs, showing more layers of meaning than we would expect. This is both different and more than what the live theater can do.
Another scene that reconstructs the text as visual signs is based on what Hamlet tells his mother in the “closet scene”:
Look here, upon this picture, and on this,
The counterfeit presentment of two brothers.
See, what a grace was seated on this brow;
Hyperion’s curls; the front of Jove himself;
An eye like Mars, to threaten and command;
A station like the herald Mercury
New-lighted on a heaven-kissing hill;
A combination and a form indeed,
Where every god did seem to set his seal,
To give the world assurance of a man:
This was your husband. Look you now, what follows:
Here is your husband; like a mildew’d ear,
Blasting his wholesome brother. Have you eyes?
Could you on this fair mountain leave to feed,
And batten on this moor? Ha! have you eyes? (III.iv.53-67)
After this Hamlet goes on to further pronounce his mother’s guilt and shame in loving a murderer, and questions her virtue, even her judgment, to which Gertrude answers:
O Hamlet, speak no more:
Thou turn’st mine eyes into my very soul;
And there I see such black and grained spots
As will not leave their tinct. (III.iv.88-91)
There are two conflicts in this scene: one is between Hamlet and Gertrude, the other within Gertrude herself. As Re:Hamlet was, from the very first moment, designed as a one-man show (and the marionette technique used in the performance generally allows for the manipulation of a single puppet at a time), one of the basic methods of producing the screenplay of the adaptation was finding the inner conflicts of the characters. The most rewarding moments for puppetry are those points in the drama where a character is faced with a dilemma and has to make a decision. The more difficult the decision, the more interesting the scene will be, as it is in the case of Gertrude, who is in a truly impossible predicament. After marrying Claudius, probably out of love, should she suddenly throw away her happiness in the face of an unproven accusation of murder and be faithful to the memory of a dead man, someone she might never have truly loved, or defy her son’s pleas, warnings and allegations and try to be happy with a man who might or might not be a murderer, but because of whom she will surely lose her son? This is the kind of dilemma that can physically destroy a person, and it is this physicality of the situation that the puppet theater can work with.
In Re:Hamlet, Gertrude enters the scene and kneels for prayer in front of a black curtain. She stands up and reveals a gigantic portrait of her late husband behind the curtain. She begins dusting the picture frame and as she works, her hand snags on another black cloth on the other side, accidentally revealing another portrait, this one of Claudius. At first she only glances across at the second picture, trying to devote her full attention to the dead king’s portrait, but after a while she cannot resist going over to the other side, and she starts wiping the other picture. Now she not only cleans the frame, but rather tenderly, even lovingly strokes the picture itself. Suddenly the portrait of Old Hamlet tilts dangerously, threatening to fall off the wall. Gertrude hurries over to the picture and pushes it back in its place. The cycle repeats: she goes over to Claudius’s portrait; the first picture tilts; she pushes it back. But now, as she leans on the old king’s picture after wrestling it back into position, the portrait of Claudius tilts. As she starts toward it, the first picture tilts again, and she is left in the middle with both gigantic paintings hanging in midair, threatening to crush her. Unable to reach both pictures at the same time, she takes a few tentative steps toward the picture of Old Hamlet, then stops, turns, and without looking back she goes to the painting of Claudius and secures it in its place on the wall. As she stands there panting, practically embracing Claudius’s likeness, the late king’s portrait finally crashes to the ground – revealing behind it another painting of Claudius, identical to the first. Now the ever-intensifying music (In the Hall of the Mountain King from Grieg’s Peer Gynt) reaches its summit with a crash, and the scene ends with Gertrude collapsing beside the fallen ‘corpus’ of Old Hamlet while the two images of Claudius tower over her from both sides.
Is the entire dialogue incorporated in this scene? Certainly not word for word, but in essence, yes. The overpowering presence of the two paintings evokes the force with which Hamlet had demanded that his mother look at the pictures and had not allowed her to look away. The painting of the old king shows him the way Hamlet sees him: majestic in his armour and with his crown on his head. But the eyes reveal a kind of blankness, an expression that is nothing like Hamlet’s recollection of his father with “An eye like Mars, to threaten and command” (III.iv.57). This is the face of a man who does not fully comprehend the world around him, and is, thus, vulnerable to treachery and deceit. The portrait of Claudius is no less imposing in its appearance, and is far from the image of “A king of shreds and patches” (III.iv.102), as Hamlet sees him, but is rather a painting of an assertive man with a regal stature as beheld by Gertrude. But again, it is the eyes that reveal something more and, in this case something only Hamlet can see: there is something undoubtedly cunning and calculating in those eyes.
The three scenes described above by no means represent the only way the puppet theater can work with a Shakespearean text, nor are they examples of ideal solutions. They are merely examples of possible ways of bringing together the worlds of Shakespeare and the puppet stage. The work of any creator or director of the theater, and not only the puppet theater, begins by taking a first step towards that “other direction” Jan Kott talks about, which in this case is the translation of verbal poetry into visual elements. All of the visual details work toward the interpretation of the text: images present in the given dialogue, as well as ideas and metaphors understood from other parts of the drama, are transformed into visual signs. Although the number of possible directions is endless, without this first step one may never be able to transform Shakespeare’s plays from the two dimensions of the page to the three dimensions of the stage. As Marowitz puts it:
“The director who is committed to putting the play on the stage exactly as it is written is the equivalent of the cook who intends to make the omelet without cracking the eggs.”
Jan Kott in: Charles Marowitz, Recycling Shakespeare (New York: Applause Theatre Book Publishers, 1991) 3.
Marowitz 110.
Marowitz 109.
qtd. in Russell Jackson, “From play-script to screenplay,” The Cambridge Companion to Shakespeare on Film, 2nd ed., ed. Russell Jackson (Cambridge: Cambridge University Press, 2007) 16.
See the complete BBC Radio Shakespeare, for example.
I have used the word ‘impractical’ as opposed to ‘impossible’, since the latter would imply that a blind person, for example, could never enjoy an art exhibition, a pantomime, a dance performance or a puppet show, or that a deaf person could never appreciate a concert. There are numerous ways of offering involvement for the visually impaired in the visual arts, such as explanations, descriptions or even hands-on experience, while a deaf person may enjoy music by feeling rather than hearing the rhythm. I have performed for blind audiences, and according to their accounts they not only gained information from touching the puppets after the show, but also from the feel of air currents generated by the movements, the ’music’ of the wind playing on the strings of the marionettes and the pattering of the puppets’ feet on the floor. The fact remains, however, that while the live theater can choose to build on aural experience to communicate its message without needing further support, puppetry generally has to rely on visual signs as its primary channel of communication.
This is true even in the case of object animation where there may be no apparent design to the performing object. Design, in the sense used here, does not necessarily denote the act of drawing, carving, assembling, painting etc. a figure, but rather consciously using an inanimate object to represent something other than itself.
Carol Fijan and Frank Ballard with Christina Starobin, Directing puppet theatre step by step (San Jose, Calif.: Resource Publications, 1989) 17.
Ibid. 17.
Henry Peacham, The Garden of Eloquence, 1577 (London: British Library, Historical Print Editions, 2011)
According to Roman Jakobson, “we distinguish three ways of interpreting a verbal sign: it may be translated into other signs of the same language, into another language, or into another, nonverbal system of symbols. These three kinds of translation are to be differently labeled: (1) Intralingual translation or rewording is an interpretation of verbal signs by means of other signs of the same language; (2) Interlingual translation or translation proper is an interpretation of verbal signs by means of some other language; (3) Intersemiotic translation or transmutation is an interpretation of verbal signs by means of signs of nonverbal sign systems.” (Jakobson, R. (2000). On linguistic aspects of translation. In L. Venuti (Ed.), The translation studies reader (pp. 113-118). London, United Kingdom: Routledge, p. 114) It is important to note that I have been and will continue using the term ‘translation’ in this third sense of the word, since it is precisely the “interpretation of verbal signs by means of signs of nonverbal sign systems” that I am referring to when I speak about the translation of a written text to the nonverbal sign system of puppetry.
This is a method that, with some variation, I now use in my work as a teacher of puppetry. The technique had been explained to me by puppet director Ágnes Kuthy, graduate of Ernst Busch Academy of Dramatic Arts. The version presented here is my own.
Margaret Jane Kidnie, Shakespeare and the Problem of Adaptation (London: Routledge, 2009) 35.
Marowitz 59.
Mack 2.
Szigeti 2011
Szigeti 2011
Marowitz 3. | https://millstonenews.com/puppeteer-bence-sarkadi-discusses-shakespeare-and-the-puppet-stage/
Join us for our InfoSec Seminar on Oct 14
Posted on: Oct 7, 2021
The Information Systems Security and Assurance Management (ISSAM) Department in Mihalcheon School of Management proudly invites you to the monthly InfoSec Seminar. The seminar series will be open to anyone who is interested in security research and technologies, not only to Concordia University of Edmonton (CUE) members.
White-Box Cryptography and Some Implementations
Over the past two decades, the field of cryptanalysis has experienced major changes with the violation of the black-box premise that the attacker has access only to observations external to an implementation of a cryptographic algorithm. Attacks focused on the implementation details of cryptographic primitives in devices are called white-box attacks, and they have become common in recent years. With white-box attacks, one can no longer assume that the operating environment can be trusted. An adversary can directly observe and tamper with the implementation to extract information about the cryptographic key. Traditional cryptographic algorithms, built on the assumption that the adversary has access only to external data as in the black-box model, therefore generally incorporate no countermeasures against such attacks. White-box cryptography (WBC) was proposed to address this problem – it aims to provide security even where the attacker is able to perform invasive, active attacks on the software implementation of the cryptographic algorithm.
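To make the distinction concrete, here is a minimal, purely illustrative Python sketch – not drawn from the talk and unrelated to any actual Irdeto design – contrasting a key stored as plain data (which a white-box attacker can simply read) with the classic white-box idea of folding the key into precomputed lookup tables. The toy cipher, key value, and function names are invented for illustration only.

```python
# Toy illustration of black-box vs. white-box attack models (hypothetical example).

SECRET_KEY = 0x5A  # one-byte key; in the white-box model an attacker can read this directly


def encrypt_reference(plaintext_byte: int) -> int:
    """Reference implementation: a black-box attacker sees only inputs and outputs."""
    return plaintext_byte ^ SECRET_KEY


# White-box-style rewrite: the key never appears as a variable at run time;
# it is baked into a table that is computed once, ahead of time.
ENCRYPT_TABLE = [b ^ SECRET_KEY for b in range(256)]


def encrypt_table_based(plaintext_byte: int) -> int:
    """Same cipher expressed as a lookup, with no explicit key in the code path."""
    return ENCRYPT_TABLE[plaintext_byte]


if __name__ == "__main__":
    # Both versions compute the same function.
    assert all(encrypt_reference(b) == encrypt_table_based(b) for b in range(256))
    # Caveat: for a plain XOR cipher the table trivially leaks the key
    # (ENCRYPT_TABLE[0] == SECRET_KEY). Real white-box constructions instead use
    # networks of randomly encoded tables so that no single table exposes key material.
```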
See you there!
Date: Thursday October 14th, 2021
Time: 12:00 noon – 1:00 PM
Meeting Link: Google Meet
About the Speaker
Tingting Lin, PhD, Software Developer, Irdeto, Ottawa, ON
Tingting graduated from Shanghai Jiao Tong University and received her doctoral degree in 2016. After that, she was funded by Mitacs Accelerate to work on white-box cryptography as a postdoctoral fellow at Queen’s University. Her research focuses on white-box cryptography and software protection. Tingting now works at Irdeto on white-box design and implementation of Chinese ciphers.
Contact
If you have any questions regarding this seminar series, please do not hesitate to email Shawn Thompson or Eslam G. AbdAllah, MISSM, Faculty of Management.
In the last blog post discussing Hallmark #1, we talked a lot about how DNA damage accompanies aging. A particular brand of DNA damage is telomere attrition or loss, the second hallmark.
Here, again, it’s good to understand a few things about DNA:
Chromosomes
DNA is not just one long double helix. Instead, it is divided into a number of sections of uneven length. At certain points in the cell cycle – the series of events that take place for a cell to divide into two daughter cells – the nuclear DNA is organized into chromosomes. Chromosomes are a key part of the process that ensures DNA is accurately copied and distributed in the vast majority of cell divisions. Humans have 46 chromosomes (two sets of 23 chromosomes). If you are envisioning 23 sets of the letter X, you’re not too far off.
What are telomeres?
The ends of each chromosome (the tips of the Xs) are called telomeres, which have a few functions. Every time your cells divide, copies of the chromosomes are made, but a bit of the telomere is lost. The DNA is, to an extent, designed for this loss, as the DNA found in the telomere is noncoding (doesn’t code for a protein) and is essentially a pattern repeated over and over for thousands of base pairs. Eventually, when all of the telomere DNA is lost (after about 50 divisions), the cell can no longer replicate and dies; the limit to the number of divisions, called the Hayflick limit, applies to all cells except stem cells.
Telomeres are bound by a multiprotein complex called shelterin, which prevents DNA repair proteins from accessing the telomeres and mistakenly “repairing” them as though they were DNA breaks. This is important because it prevents the chromosomes from fusing together. Because shelterin prevents the DNA repair proteins from accessing the DNA, damage in the telomere is both persistent and highly efficient in inducing senescence and/or apoptosis.
A reminder that to be named a hallmark, telomere attrition must fulfill three criteria. Telomere attrition happens during normal aging, after every cell division, so it fulfills the first criterion. Demonstrating the next two criteria—that accelerating telomere loss or dysfunction accelerates aging, and that decelerating loss slows aging—takes us back to the lab (at least through some pretty impressive publications).
Telomere dysfunction (due to shelterin deficiency) leads to aging
Dysfunction or deficiency in the shelterin complex causes telomere uncapping (you don’t want that), chromosome fusions (you don’t want that either), and some cases of disease like aplastic anemia, among others. The various loss-of-function models—models in which scientists have modified the shelterin complex so it is nonfunctional—are characterized by the rapid decline of the regenerative capacity of tissues and accelerated aging. Notably, when the shelterin complex is knocked down but the length of the telomere is normal, tissue decline and accelerated aging still occur.
Longer telomeres, longer life
Many animal studies have established causal links between telomere loss, cellular senescence, and organismal aging. For mice at least, shortened telomeres mean decreased lifespans and lengthened telomeres mean increased lifespans. This idea is supported in humans. For example, cells whose telomeres are not as readily lost, like white blood cells, generally live longer.
More divisions, more damage
Because we know from the first hallmark that genetic damage accumulates as we age – or, on a cellular level, the more a cell divides – later-stage divisions are more likely to pass on genetic damage. Longer telomeres afford cells additional divisions during which mutations can creep into the genetic code. One such mutation is the activation of telomerase in differentiated cells.
Telomerase
Telomerase is an enzyme that adds telomere repeat sequences to the end of telomeres. These enzymes are normally activated only in sex cells (eggs and sperm), stem cells, or tumor cells. In the lab, adding telomerase is sufficient to confer immortality to otherwise mortal cells. A 2011 study by Jaskelioff et al. also showed that premature aging in mice can be reversed by telomerase activation. Further, a 2012 study showed that normal aging in wild-type (normal) mice can be delayed, without increasing the incidence of cancer, by regular viral transduction of telomerase. A 2013 meta-analysis supports the strong relation between short telomeres and mortality risk.
A quick recap:
To grow and function properly, an organism’s cells must produce new cells to replace old, worn-out cells. They do this through cell division. The number of times a cell can divide is restricted by the length of its telomeres. When the telomeres are dysfunctional, or unusually short, premature development of diseases and aging occurs. When the telomere length is long or maintained, cells can live longer. All three criteria for a hallmark of aging are fulfilled, and it seems the fight against aging has, at last, a simple answer: Let’s figure out a way to lengthen telomeres and we’ll all live longer!
Why not? Cancer.
The Big C. In the large majority of cancer cells, telomere length is maintained by telomerase. It turns out that telomerase and telomere length are crucial for both cancer initiation and the maintenance of tumors. “The data show that if you’re born with long telomeres, you are at greater risk of getting cancer,” said Titia de Lange, Leon Hess Professor at Rockefeller University, in a 2020 article discussing her work.
The Future
The mechanisms for maintaining telomere length and activating telomerase are, of course, complex. A more in-depth understanding of these mechanisms is the focus of many a cancer biologist (and anti-aging enthusiasts, I imagine). It is possible that regulating telomere length may enable both successful cancer and longevity treatment (or prevention!). | https://oxfordhealthspan.com/blogs/aging-well/hallmark-of-aging-2-telomere-attrition |
This Vietnamese Meatball Soup is a delicious, fuss-free meal that is both nutritious and healing.
Vegetables (optional): sliced very small is best so they cook quickly. I generally add whatever I have at the time – often broccoli, carrot, snow peas and shiitake mushrooms – and a handful of kale or baby spinach added right at the end.
In a large pot, heat the coconut oil. Add the ginger and garlic and sauté for a minute until the garlic just starts softening.
Now add the chicken broth, coconut milk, turmeric, fish sauce, coconut sugar, kaffir lime leaves and spring onion and bring to a rolling simmer.
Whilst that’s heating, make your meatballs by putting the ginger, coriander, grated carrot, chicken mince, coconut flour, sea salt, black pepper and lime zest into a large bowl. Mix together until well combined and form into golf-ball-sized balls.
When the soup comes to a rolling simmer, add the meatballs and allow to cook for 5 minutes or so until cooked through (the larger the meatballs the longer they will need).
If you are adding vegetables, carefully remove the meatballs and divide them between the serving bowls, then add the vegetables to the soup to blanch them for a couple of minutes (take care not to overcook them). Divide the vegetables between the bowls.
Lastly, add the lime juice to the soup, taste and add more seasoning if needed (salt and black pepper). Divide the soup between the bowls, garnish with the coriander and serve with chilli flakes.
Start with the meatballs. If you are going to make your own chicken mince, do this first and set aside.
Chop the ginger and coriander, 5 seconds, speed 8. Add the carrot, 10 seconds, speed 8.
Add the chicken mince, coconut flour, sea salt, black pepper and lime zest; mix 30 seconds, reverse speed 2 (you may need to scrape the sides during mixing). Make sure it is well combined and form into golf-ball-sized balls. Set aside.
Now start the soup. Mince the ginger and garlic, 5 seconds, speed 8.
At this point I transfer it to a pot as the blades will break up the meatballs when cooking them. Continue from step 4 in the method above.
Omit and replace with a pinch of chilli flakes.
Replace the fish sauce with a teaspoon of sea salt. | https://www.havenmagazine.com.au/recipe-vietnamese-meatball-soup/ |
Level 3 Diploma In Pre U Foundation
The Level 3 Diploma in Pre U Foundation Studies is designed to help individuals develop their understanding and knowledge of the pathway they wish to pursue at higher education level.
Progression may take a variety of paths depending on the level of qualification achieved: it may be appropriate for the candidate to progress to a Level 4 qualification in their chosen area of study, a higher-level or vocational qualification, or employment at a supervisory management level.
The LRN Level 3 Diploma in Pre U Foundation Studies is also accepted by various universities in the U.K. for admission onto further programmes. Qualifications at Level 3 represent practical knowledge, skills, capabilities and competences that are assessed in academic terms as being equivalent to GCE AS/A Levels.
Who is it for
This course is designed for individuals
- Who are new to the work environment.
- Have limited experience of work and want to increase their knowledge.
- Do not have a formal qualification to access higher education and want to obtain one.
Content
The Level 3 Diploma in Pre U Foundation Studies consists of a total of 120 credits. To complete the qualification, candidates must complete the 3 mandatory units and 3 optional units of the course.
Mandatory Units:
- Foundation Mathematics
- Foundation Computing
- Study skills
Optional Units:
- Foundation Biology
- Foundation Chemistry
- Foundation Physics
- Further Mathematics
- Foundation Business and Management
- Foundation Economics
- Foundation Accounting
- Foundation Psychology
- Foundation Sociology
- Foundation Law
- Foundation Hospitality
- Foundation Government and Politics
Entry requirements
- At least 4 GCSEs at grades A*–C (or equivalent)
- Specific subject matter expertise or knowledge (e.g. – science, law, hospitality)
- Level 2/ First Diploma (in a relevant subject with merit or distinction)
- NVQ Level 2 or equivalent Level 2 qualification, or relevant experience (for mature applicants)
Candidates should also have a speaking, listening, reading and writing ability that is commensurate to CEFR Level B1 (or equivalent). This is to ensure that they meet the communication requirements for this qualification.
What is Included
All online study materials and student handbooks are supplied. You will be allocated a tutor for academic support who you can contact as often as you like by telephone and email. You will also have access to a Student Support Co-ordinator for administrative support. You will also complete an online induction and have access to an online virtual learning environment.
How is it assessed?
The assessment consists of written assignments that are set by LRN. The marking of the assignments will be carried out in accordance with the assessment criteria listed in the assignments. To ensure a rigorous quality assurance model is applied, each of the marked assignments will be moderated.
How do I study and how long does it take?
The total duration of this course is 1575 hours. | https://www.iqualifyuk.com/course/level-3-diploma-in-pre-u-foundation/ |
This course reviews the various ways to measure both segregation and diversity at the neighborhood scale.
Virtual Reality for Planners 1: Introduction
This course provides an overview of virtual reality (VR) and the elements that lead to effective urban design simulations and, ultimately, how to produce VR applications that enable users to explore urban design scenarios.
Urban Design for Planners 1: Software Tools
This six-course series explores essential urban design concepts using open source software and equips planners with the tools they need to participate fully in the urban design process. The first course introduces the software you’ll use to create analytical maps, 3D models, and 2D graphic designs.
Building a Transit Map Web App
This course examines the entire process of building an interactive, web-based mapping application.
Photoshop CC for Planners 1: Basic Functions
Adobe Photoshop CC is widely recognized among design professionals as the premier image editing software, with a number of useful applications for urban planning. This course gives you a step-by-step introduction to the basic tools of Photoshop CC.
GIS Fundamentals: An Introduction
This first in a series of courses covering Geographic Information Systems (GIS) will guide beginners interested in learning more about GIS, especially through the use of Esri's ArcGIS software.
AutoCAD 101
This course provides an introduction to AutoCAD’s essential functions for first-time users and demonstrates how to create site plans, street sections, and other two-dimensional scaled diagrams.
SketchUp for Planners - An Introduction
Get started using SketchUp, the popular, easy-to-learn 3D digital modeling program. This course provides an introduction to how planners and architects represent three-dimensional objects in two-dimensions, with step-by-step instructions for creating and using simple 3D models.
Drawing in the Landscape: The Adventure Begins
This is the first course in the Drawing series. In this course, you will learn techniques for drawing, sketching and rendering in the field. You will learn how to draw lines and contours, understand perspective, and render tone and value with light and shadow. | https://courses.planetizen.com/courses?f%5B0%5D=%3A141&f%5B1%5D=credit%3A141&f%5B2%5D=credit%3A423&f%5B3%5D=popular%3A1&f%5B4%5D=topic%3A305&f%5B5%5D=topic%3A312&f%5B6%5D=topic%3A317&%3Bf%5B1%5D=topic%3A315&%3Bamp%3Bf%5B1%5D=software%3A340&%3Bamp%3Bf%5B2%5D=topic%3A305&%3Bamp%3Bf%5B3%5D=topic%3A395&%3Bamp%3Bf%5B4%5D=topic%3A411&%3Bamp%3Bkeywords= |
Cultivating a Base of Emotional Tranquility
Recent posts have suggested tools of focused processing to help you attend to and make sense of your emotions, to manage and rewire difficult negative emotions, and to use positive emotions to shift the functioning of your brain and to shift your moods and states of being. You can continue using those tools whenever necessary; you’re building new neural circuitry in the brain that will support a deeper emotional equanimity and thus resilience.
This post offers a tool to come to that emotional equanimity using the spaciousness of defocused processing – the default network of the brain. You learn to let your emotions “dissolve” into a peaceful, easy tranquility – the home base of emotional well-being that all of these practices return us to.
You may have experienced moments of well-being when you feel centered, grounded, at peace, and at ease. These result from the positive activation of our parasympathetic nervous system when there is no danger. We are calm and relaxed, able to REST – relax and enter into safety and trust. “God’s in his heaven; all’s right with the [inner] world.” You can deliberately evoke that sense of well-being with the exercise below and, with practice, sustain it over time.
Resting in Well-Being
1. Lie down on a bed, a couch, the floor, or the ground, somewhere you feel comfortable and safe and won’t be interrupted for five minutes. Slowly let your body relax. Let the weight of your body drop, feeling supported by the surface beneath you. You don’t have to “carry” yourself for a moment.
2. Breathe slowly, gently, and deeply, taking slightly longer on the exhalations. Breathe in a sense of ease, calm and tranquility. Breathe out, one by one, all the worries, thoughts, and feelings you might still be carrying. Breathe in and breathe out, as many times as you need to.
3. Notice a spaciousness inside your body, even inside your skull. Feel the space between any lingering tension, any lingering thoughts, any lingering feelings, like the space between notes of music. Notice the ease, the calm of the spaciousness.
4. Begin to gently focus your awareness on the presence of the spaciousness more; notice the absence, the emptiness around the feelings, thoughts, and tensions. Rest in the presence of this spaciousness for a few moments.
5. Name this feeling of spaciousness to help you evoke it later. Call it peacefulness, tranquility, calm, ease, or well-being – whatever works for you.
6. Return to your focused awareness and reflect on your experience of this exercise.
We often notice the storms of emotions in our lives, but we don’t always pay attention to the blue sky of well-being they are blowing through. Cultivating and deepening an awareness of that background equilibrium is a way of strengthening your resilience. The spaciousness allows your brain to hold bigger, more challenging, more difficult emotions as you move through your life. | https://lindagraham-mft.net/cultivating-a-base-of-emotional-tranquility/ |
BACKGROUND OF THE INVENTION
This invention relates to a combination helmet and body protection device useful for the sports of American football, lacrosse, and other sports or activities where contact with other participants and/or objects may occur. It is well known that many sports and occupations present a high degree of risk to the participants. One of the key areas of risk is trauma to the cervical spine, which often results in injury to the participant's spinal cord, the bony structure of the spine, disks and connective tissue, and results in some level of disability and/or death.
A large number of these injuries result from axial loading, i.e., a high load being placed on the top of the participant's head, as during tackling or blocking in American football. For example, since at least the mid-1970s, football players at all levels have been taught to keep their heads up during tackling and blocking, and not to use their heads as a "battering ram." Use of the shoulders during these activities reduces the chances that the player will suffer a catastrophic spinal cord injury during blocking or tackling. However, such injuries still occur in alarming numbers for a number of possible reasons. For example, it is often reflexive for a player to lower his head during a tackle. In addition, "freak accidents" have occurred at all levels of sport. Improper coaching techniques and equipment which is not designed to offer protection to the participant from such injuries can contribute to these problems, particularly at the lower age levels. In addition, catastrophic cervical spine injuries can occur from contact with the ground during a fall or from other contact that, due to the nature of the sport or activity, cannot be predicted or controlled.
Using American football as an example, currently available equipment is very similar in overall design. The equipment available to the professional is better in many respects than that available at other levels, but these differences are often in the materials with which the products are made, and not in the actual design of such devices.
The currently available protective equipment for the upper body generally includes a helmet and shoulder protection, and, for some, neck braces and rib protectors. When an individual is equipped with the standard helmet and shoulder pads currently available, the force of a blow to the top of the head is transmitted solely through the participant's neck, thus resulting in a high axial or lateral load on the cervical spine and exposing the wearer's neck or spine to potentially crippling forces.
A further problem which is also likely to result in serious damage to the participant's neck is whiplash resulting from a quick and violent wrenching of the head in one direction due to, among other things, the player's face mask being grabbed and pulled from the side or the head being forced violently to the front or back.
There have been other attempts made to create football equipment which eliminates or reduces the above-described problems. However, none of these prior attempts have been ultimately successful, as is demonstrated by the fact that catastrophic cervical spine injuries are still occurring at alarming rates at all levels of sport. See, e.g., Cantu and Mueller, Catastrophic Spine Injuries in Football (1977-1989), Journal of Spinal Disorders (vol. 3, no. 3, 1990).
One of the main reasons that prior attempts to reduce spine injuries have failed is due to the nature of the sport itself. Football, for example, is a fast and violent sport which requires great athletic ability and flexibility. Many prior art devices are either cumbersome and would not be practical in regular usage, or would excessively and unacceptably limit the flexibility and/or visibility of the players; thus making many of these devices difficult for the players to use.
These same problems exist when attempting to provide sufficient protection in other activities such as driving, cycling, riot control, fire fighting, etc.
SUMMARY OF THE INVENTION
It is an object of this invention to create a head and body protection unit for use during football and similar sports, which will protect the head, neck, spine and upper body of the player, while still maintaining a high level of flexibility. This equipment should also be adaptable to meet the different needs of different players. For example, quarterbacks and wide receivers require much greater flexibility and visibility than do linemen. It is a further object of this invention to provide football equipment that protects a player from cervical spine injuries due to axial loading from blows on top of the head by transferring the force of the blow to the player's shoulders and upper body.
It is a further object of this invention to allow the use of a combination helmet and upper body protector which prevents the player's neck from being wrenched violently to one side, while still allowing a full range of both direct and peripheral visibility.
The above objects are satisfied by the use of a combination helmet and body protection device which comprises a semi-rigid shock resistant structure which protects the head, neck, shoulders, chest and lower torso of the player. Any external forces directed to the head, neck and shoulder area are transferred to the musculature of the upper chest and back.
The present invention consists of three major sections, a helmet, an inner helmet, and an upper torso section which includes attached padding that may extend down to the waist. All of the sections except the inner helmet are connected to form a single unit. It is recommended that the gear be made from a high impact plastic material.
The invention requires an inner helmet to be worn with a chin strap. The outer surface of the inner helmet conforms to the inner surface of the high impact outer helmet. This inner helmet should be constructed of high shock absorbent material that can be sized to accommodate various head sizes. The outer surface of the headgear will conform to the inner surface of the outer helmet to allow for free movement of the head with an adequate air space.
This inner helmet will be fabricated from a foam-type material that compresses under pressure and returns to its original shape when pressure is removed. This foam inner shell is covered for hygienic reasons and ease of care and includes a chin strap to offer control and stabilization. The inner helmet may be designed and sized to directly correspond to the user's head size. Between the foam inner helmet and the hard outer helmet will be attached a thin membrane of plastic, or microshield, to allow the inner helmet to rotate within the outer helmet, and to flex and extend to the normal range of motion within the membrane. This membrane will be affixed directly to the foam and will minimize friction between the two parts of the helmet, thus allowing the head to move within the outer shell. The inner padding between the outer shell and the membrane will cushion the two areas and absorb noise and shock. The potential for concussion is dramatically reduced through incorporation of the inner helmet and microshield when used in combination with the outer shell with attached padding strips.
The outer helmet will be similar in shape to the present football headgear, with the following modifications. First, the face opening is higher and wider, extending to the top of the forehead. The sides and back extend down to allow for easy attachment to and removal from the upper torso section. The face mask is attached by flexible strapping to the helmet, and this strapping is composed of a material that can be readily cut so as to allow for the rapid removal of the mask in emergency situations. The mask will generally follow the outer contours of the helmet and cover the entire face opening.
The connection of the helmet to the upper torso section is configured of opposite shelves that abut one another, thus offering resistance to axial loading by preventing further compression or distortion in the axial direction. This design also protects against medial and lateral impacts, to prevent distortion within the helmet. This design also protects against undue amounts of pressure in the helmet resulting from impacts to the anterior or posterior regions of either the helmet or the body shield.
The connection of helmet and upper torso section is strengthened by the use of three positive locking devices to ensure the connection during use. These three keyed lock points include one posterior lock located at the rear of helmet and one lock point by each ear area. The posterior lock would be the first point connected by the user when donning the helmet. The connection is simple, in that the user first inserts the lock point at the rear of the helmet, and swings the helmet over the head in an arc from behind the head, thus bringing the helmet over the head to the remaining two lock points. The connection of these three lock points automatically locks the rear part to the shell of body armor as well as the two points located near the ear area. This also prevents the outer helmet from rotating with respect to the rest of the unit.
The upper torso section of the present invention comprises a semi-rigid shell with sufficient padding to withstand violent impacts and to assist in the distribution of forces throughout the wearer's back and upper chest. This torso section is also provided with arm openings which are shaped to allow sufficiently free movement of the wearer's arms and shoulders. The shoulder area of the torso section includes a hemispherically-shaped shoulder guard which provides a protective shell for the shoulder while still allowing a wide range of motion. This shoulder guard includes a padded interior surface and is attached to the torso section by means of a wide flexible strap at the edge of the top of the shoulder section closest to the edge of the wearer's shoulder. A further curved shoulder protection plate is provided which is attached over the torso section and above-described guard at the juncture thereof, to provide further protection to the wearer's shoulder. This curved plate is also attached to the upper torso section by means of a flexible strap.
The upper torso section is open on either side, with the opening extending from the arm openings down the entire side of the section. This design facilitates the donning of the entire unit, and the front and back of the torso section are secured to one another by a set of flexible straps.
Attached to and extending out from the upper torso section is a cloth-encased system of padding that will serve as a means of transferring and softening impact to the musculature of the chest and back. This padding may extend down as far as the waist to provide additional protection to the wearer's rib cage, lower back and abdomen as may be desired by the participant. It may also be secured by strapping at the sides if desired.
The upper torso section extends around the neck area in such a manner so as not to constrict movement of the shoulder areas where the padding will ultimately be attached. This will prevent any contact on top of the shoulders, thus eliminating the possibility of direct contact with the wearer's clavicle area or proximal scapula. These connections are formed so as not to interfere with any shoulder joint function (glenohumeral joint). The shell will fit snugly on the wearer's posterior scapula (back) and chest (pectoralis) area.
The equipment may be further modified to include a bullet-proof vest, filters and gaskets to allow the wearer to perform in a gas or smoke environment, and a clear protective mask to shield the face from debris and gases, if sealed to the helmet with gaskets.
Further benefits and embodiments of this invention will be obvious upon review of the drawings and descriptions thereof.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a front view of the combination helmet and body protection unit in accordance with the present invention.
FIG. 2 is a side view of the combination helmet and body protection unit in accordance with the present invention.
FIG. 3 is a top view of the body protection unit in accordance with the present invention without the helmet attached thereto.
FIG. 4 is a partial cross-sectional side view of the body protection unit and the helmet.
FIG. 5 is a detail view of the connection of the helmet to the body protection unit at the inside rear of the helmet.
FIG. 6 is a cross-sectional side view of the helmet-body protection unit connection along the line A--A in FIG. 5.
FIG. 7 is a detail view of the connection of the helmet to the body protection unit at the side of the helmet.
FIG. 8 is a cross-sectional side view of the helmet-body protection unit connection along the line B--B in FIG. 7.
FIG. 9 is a front view of the inner helmet liner in accordance with the present invention.
FIG. 10 is a side view of the inner helmet liner in accordance with the present invention.
FIG. 11 is a cross-sectional side view of the outer helmet section in accordance with the present invention.
FIG. 12 is a cross-sectional side view of the outer helmet and the inner helmet combination in accordance with the present invention.
DETAILED DESCRIPTION OF THE DRAWINGS
FIG. 1 is a front view of body protection unit 10 comprised of helmet 12, upper torso unit 14 and lower torso unit 16, while FIG. 2 is a side view of the same elements. Helmet 12 includes face mask 15 which is securely attached thereto. Upper torso unit 14 includes two means for protecting the wearer's shoulder, including upper shoulder shield 17 and shoulder guard 19. Also shown is upper arm pad 18, which may be securely attached to shoulder guard 19 on both sides to afford protection to the upper arm of the wearer without limiting his mobility. This upper armpad includes an arm strap 28 which extends entirely around the wearer's arm to securely mount the arm pad 18 and to prevent unwanted movement thereof. All of these shoulder and arm protection elements may be connected to the upper torso unit 14 through the use of standard rivets and straps 25, which may be made of webbing of, e.g., dacron and/or leather.
As shown particularly in FIG. 2, the upper torso unit 14 and lower torso unit 16 comprise a one-piece unit having a front and back portion, with an opening therebetween to enable the wearer to easily put on or remove the unit. A plurality of flexible, adjustable straps 20 connect the front and back portions of these two units to secure the entire unit on the wearer.
As shown in FIGS. 1 and 2, the helmet is similar to helmets currently used in American football, with some significant exceptions. First, the face opening 11 is much larger than in current standard helmets to give the wearer a wider range of peripheral vision. Second, the face mask 15 extends up higher to provide full protection to the head and face, and it is secured by means of strong, flexible straps 51, which are preferably made of a rubber/plastic type of material or leather, which can be readily cut to facilitate removal of the face mask 15 in an emergency.
Third, the helmet incorporates a separate inner helmet section 40 which fits securely on the wearer's head and includes pads or bladders 42 to absorb much of the shock from direct blows to the head. The general shape of inner helmet 40 is shown in FIGS. 9 and 10, while FIG. 12 shows the placement of inner helmet 40 in outer helmet 12. Inner helmet 40 is covered by a plastic surface or microshield 41 to facilitate movement of the head in the helmet. This lining surrounds inner pads 42, and incorporates chin strap 43, which functions to hold the inner helmet 40 in place relative to the wearer's head. As shown in FIG. 12, there is an air space 44 between microshield 41 and the inside of outer helmet 12, providing adequate room to facilitate mobility and protection. Air space 44 is exaggerated in FIG. 12. Inner helmet 40 also incorporates a plurality of air/compression holes 46 to facilitate air circulation, protection and material expansion.
The connection of the helmet to the body protection unit is critical to the operation of this unit. As shown in FIGS. 3, 4, 6 and 8, upper torso unit 14 includes a generally flat collar surface 21 having a top or external surface and a bottom or internal surface, and having two side collar openings 22a and 22b and rear collar opening 23. Upper torso section 14 has a lip 27, while the helmet has a corresponding lip 31 shaped so as to fit on lip 27.
FIGS. 5 and 6 show the connection of the rear of helmet 12 to the upper torso unit 14, including rear tab 29, which is inserted into rear collar opening 23 at an angle as shown by reference numerals 12' and 29'. After the helmet is moved into its standard position, tab 29 acts to lock the helmet in place through the interaction of notch 30 with the bottom surface of collar 21. This notch prevents the helmet from being moved vertically, and the rear tab 29 cannot be removed from rear collar opening 23 until it is moved to the position shown as 29' in FIGS. 4 and 6.
FIGS. 7 and 8 show the connection of the sides of the helmet 12 to upper torso unit 14. Specifically, helmet 12 includes at least two side tabs 32 which may be inserted into side collar openings 22a or 22b. After the rear connection described above is made, the helmet is moved into its proper position so that side tabs 32 can slide into side collar openings 22. The interaction of notch 33 and the bottom surface of collar 21 prevents the vertical movement of the helmet, and the interaction of tabs 32 and the sides of collar openings 22 also prevents rotation of the helmet. The helmet may be easily removed from the rest of the unit by the user inserting a finger or thumb on either side of his head to contact tabs 32 and to move them away from the upper torso unit so that notch 33 is no longer directly below the bottom surface of collar 21. This enables the user to apply an upward force to the helmet and remove it from the upper torso unit. Since the helmet is still connected at the rear thereof, it tends to swing at the proper angle to allow removal of rear tab 29 from rear collar opening 23 as described above. Thus, the entire removal operation can be easily performed in a matter of seconds.
Outer helmet 12 also incorporates a plurality of pads 38 to cushion blows received to the wearer's head, and these pads may be formed out of materials generally known in the art for such padding. The pattern of such pads 38 is shown in FIG. 11.
As can be seen in FIG. 8, the upper torso unit includes a ledge 35 in addition to lip 27. Ledge 35 generally extends about the rear and sides of the unit and corresponds to lip 31 on helmet 12. When a large force is imposed on the top of helmet 12, the interaction of lip 31 on ledge 35 acts to disperse this force over the entire upper torso unit instead of allowing that force to be transferred solely through the wearer's neck, as is the case with prior equipment. The location of these elements with regard to the rest of the helmet is shown in FIGS. 11 and 12.
The rear tab 29 and side tabs 32 may be composed of the same material as helmet 12 and integrally formed therewith, or they may be formed from a different material such as metal and attached to the helmet through the use of rivets or other connection means which can provide sufficient strength to withstand the lateral and vertical forces applied to these elements. This latter embodiment is believed to provide the stronger connection.
The upper torso unit is preferably composed of a strong hard material such as Kevlar, ABS or polycarbonate, or a combination of all three. The preferred embodiment is made of ABS. Another unique feature of this invention is the incorporation of lower torso unit 16, which is made of a softer material, such as T-foam, Aliplast or Pelite. Lower torso unit 16 acts to absorb much of the energy from blows to the lower regions and, if extended down far enough, can protect against blows to vital organs such as the kidneys.
What many people don’t fully recognize is that depression can be both chemical and biological.
Depression Is A Medical Condition
Clinical depression is linked to a chemical imbalance in the brain, in which the brain does not produce enough of a particular chemical for normal functioning (1).
To correct this imbalance, most individuals who are diagnosed with depression are prescribed anti-depressants.
In many cases, the imbalance is due to a reduction of a particular neurotransmitter in the brain. A neurotransmitter is a chemical in the brain that sends chemical messages from one nerve cell to the next. When those messages aren’t being delivered effectively, symptoms of depression can arise. (2)
Thus, neurotransmitter production is important in that any chemical imbalance, due to a decrease in production, can result in multiple symptoms of depression.
In particular, most people who are depressed show a reduction in the following neurotransmitters: serotonin, dopamine, norepinephrine.
Do Hormonal Imbalances Cause Depression?
Science has suggested that a chemical imbalance in the brain doesn’t necessarily lead to depression. Rather, it is a contributor to several symptoms of depression. (3)
3 Major Neurotransmitters Involved In Depression
As mentioned before, depression has been linked to chemical imbalances in the brain.
But what chemicals are specifically associated with depression?
Serotonin — a neurotransmitter that is involved in controlling many bodily functions such as sleep, aggression, eating, sexual behavior, and mood (4). A selective serotonin reuptake inhibitor (SSRI) is a kind of antidepressant used to treat someone with a serotonin imbalance. You can read more about it here.
Norepinephrine — a neurotransmitter that helps our bodies recognize and react to stressful situations. Thus, those who show a decreased production of norepinephrine have difficulty dealing with stress. (5)
Dopamine — a neurotransmitter involved in controlling one’s desire to seek rewards and the ability to obtain a sense of pleasure. Those who show a decreased production of this neurotransmitter show a low interest in activities compared to before they were depressed. (6)
These are the three major chemical neurotransmitters that are commonly associated with depression. Other neurotransmitters involved include acetylcholine, glutamate, and Gamma-aminobutyric acid (GABA).
Using Anti-Depressants
As mentioned above, a decreased production of certain neurotransmitters is highly linked to depression.
This is when anti-depressants are prescribed.
Anti-depressants are prescribed to counter an individual’s inability to produce enough of a particular neurotransmitter.
Anti-Depressants Aren’t Always The Solution
Anti-depressed don’t work all the time. In fact, they work on only 1/3 of the population that takes them. They work partially in another 1/3 and for the other third…anti-depressants don’t work at all. (7)
In addition, many people who take anti-depressants choose to stop taking them due to the side effects they produce. Sometimes the side effects can be much worse than their initial depressive symptoms.
Also, it is important to discuss with your doctor any plans to come off antidepressants, as there may be consequences. Please read "Don't Stop Taking Your Antidepressants: At Least, Not Yet" to learn more.
In Conclusion
Anti-depressants are not the ultimate solution to depression. Rather, they are a way to manage it chemically.
Everyone is different and anti-depressants are not one-size-fits-all. Depression doesn’t always need to be treated medically; a combination of therapy, ongoing management, and medication (if necessary) is often the most effective approach. | https://www.wellnite.com/post/treating-depression-with-anti-depressants
05 Oct 2017
This is Part Two of a two-part blog series on the role your human resources department can play in implementing your company’s IP policy. Read Part One here.
When it comes to protecting your company’s IP assets, it’s important for you to take a proactive stance — starting as early as the interview and hiring process. As a first step, employees should sign employment agreements, setting out (among other things) their obligation to protect the company’s confidential information and other IP assets. And when an employment agreement isn’t used, employees should sign a written statement agreeing to the company’s policies on confidential information and IP protection.
But you must also follow through on these policies to ensure that confidentiality concerns remain top of mind for your employees. In this post, we’ll outline the best practices you can take to consistently reinforce your company’s expectations and IP policy over the course of an employee’s tenure, and when offboarding an employee.
COMMUNICATING IP POLICIES DURING EMPLOYMENT
It’s important to set up ongoing awareness and education campaigns, and also to take precautionary measures against any technology-related oversights.
1. TRAINING AND EDUCATION IN COMPANY IP POLICY
Consistently reminding your employees of your company’s IP policy may help prevent accidental disclosures. On top of being laid out in employment agreements and written training materials, your company’s IP policy should be emphasized through a formal training course for all new employees.
Subsequently, your IP policy should be communicated in written documents available to employees for reference, and further reinforced through at least one “refresher” course per year.
2. ENACTING “BRING YOUR OWN DEVICE” POLICIES
One of the most common ways confidential information gets “leaked” is through your employees’ personal devices. If your employees are bringing their own phones or laptops to work, you need to make sure that they don’t accidentally or intentionally use their device in a way that could compromise your company’s proprietary information.
Policies you could enforce include:
- Encrypting or password-protecting all devices
- Monitoring all file-sharing methods, like external drives and cloud-based platforms
- Monitoring email accounts
- Prohibiting the use of cameras, including those on personal phones
- Disabling USB ports on devices
- Implementing a social media policy to determine what work-related information can or cannot be shared
- Immediate reporting in the case that the device is lost or stolen
- Reserving the right to wipe devices, whether on-site or remotely
3. RESTRICTING EMPLOYEES’ ACCESS TO CONFIDENTIAL INFORMATION
Employees should only have access to the information they need for their job. In other words, access to highly sensitive information should be limited to people who really need it. For example, your marketing team may not need to receive information about your latest invention, especially if it hasn’t been implemented in a product yet.
If an employee can access trade secret information, they should be told that it’s a trade secret, and instructed on how to handle trade secrecy.
4. OFFERING INCENTIVES AND REWARDS FOR EMPLOYEES WHO CREATE VALUABLE IP
The most effective way for your company to track the IP that an employee has created is by encouraging employees to document their contributions by submitting invention disclosure records (IDRs), which often go by other names such as “invention memos” or “invention disclosure forms.” Basically, it’s an internal document where employees provide all the relevant details about their company-related inventions.
Completing an IDR often requires a lot of work, so your IP policy should include some incentive or rewards for employees who create and report valuable IP. For example, companies often provide cash bonuses, stock options, or recognition for employees who submit IDRs, are listed as inventors on patent applications, or are listed as inventors on issued patents.
To help employees who are engaged in research and development figure out when to submit IDRs, you may want to educate these employees about the inventions your company has already patented or obtained trade secret protection for, as well as what ideas may generally be patentable or protectable by trade secrets.
HOW TO ENFORCE EFFECTIVE OFFBOARDING STRATEGIES
Whether your employees depart voluntarily or not, take steps to ensure that confidential information doesn’t leave with them!
1. REVOKING ACCESS TO ANY CONFIDENTIAL INFORMATION
Depending on the employee’s role and ability to access information, you’ll need to monitor all locations where the employee could have stored information.
At the most basic level, the employee must remove all company e-mail, documents, and information from their personal devices, like their smartphone and laptop. They’ll also need to return all physical materials that belong to the company (like work laptops), and destroy any physical materials in their possession (such as by shredding documents or disks).
2. PERFORMING AN EXIT INTERVIEW
The exit interview is your best opportunity to check in with a departing employee, and ensure that they no longer possess or have access to any confidential information.
In the exit interview, you should remind the employee of their obligations as outlined in their original employment agreement or company IP policy, and review a checklist of the materials and data that need to be returned or destroyed. To conclude, have them sign a statement affirming their continued confidentiality obligations.
If you aren’t able to conduct an exit interview and have the employee sign this statement in person, you could send them a letter reminding them of their obligations instead. If necessary, you may even include a copy of their signed employment contract.
3. FOLLOWING UP WITH SUBSEQUENT, COMPETING EMPLOYERS
In the event that the employee leaves to join a competitor, you may want to send a letter to the new employer to inform them of the employee’s confidentiality obligations to your company.
MAINTAINING A COMPANY CULTURE OF CONFIDENTIALITY
While you should establish expectations upfront about how employees should handle confidential information, it’s also critical for you to continue enforcing an internal culture that values confidentiality through education, training, and strong offboarding policies.
It also doesn’t hurt to make the documentation process straightforward to use: You can streamline the process of completing an IDR by offering a template that covers all the bases. If you don’t have one, download our FREE template to immediately get started with documenting your inventions!
PROTECT YOUR INTELLECTUAL PROPERTY
Tracking the intellectual property that your employees create is not only a good business practice — it also helps to streamline the patent process itself.
Our FREE invention disclosure template is a simple document that helps you:
- Record essential details about your invention
- Provide evidence of important dates
- Speed up the process of conceiving an invention and filing a patent application
- Craft stronger patent claims
Fill out the short form on this page to get the template now.
GET THE TEMPLATE
Michael K. Henry, Ph.D.
Michael K. Henry, Ph.D., is a principal and the firm’s founding member. He specializes in creating comprehensive, growth-oriented IP strategies for early-stage tech companies. | https://henry.law/blog/how-to-implement-confidentiality-policies/ |
For years, I have seen how the variety, volume, and velocity of change and innovation have impacted businesses all over the world. Organizations are responding to such disruption by rethinking, reimagining, and resetting their business models, organizational structures, business processes, and products and services. These shifts – no matter how small – are influencing how people live, work, and relate to one another.
In this digital era, employees and leaders who are deeply empathetic, collaborative, agile, and accepting of risk, failure, and ambiguity are in high demand. These talented people are fit, fast, and focused. When they combine their strong cognitive health with an insatiable curiosity for learning, they embrace change and new ways of working, while building trusted connections with their colleagues. But more important, they are humans just like you and me.
Everyone has the power to be such a digital business superhero. It’s just a matter of knowing how to tap into that part of ourselves that allows us to rethink, reimagine, and reset in ways that will help our businesses succeed in the digital economy. Here are a few essential truths about the future of work that can help you get started.
1. Today’s digital world is the new normal
New technologies bring the promise of higher productivity, increased efficiency and effectiveness, consistent quality, and improved safety and convenience. However, this is only a part of the larger digital transformation picture. Organizations still need an agile, skilled, and creative workforce that can leverage these innovations to innovate continuously, communicate effectively, think critically, work collaboratively, and interact empathetically with others.
How can you upskill your workforce to take on the challenge of the digital age? Discover how this new digital normal is impacting the future of work.
2. Mindset matters just as much as technology in the digital age
When individuals and teams operate with an inward mindset, they invite resistance that undermines collaboration and creativity, ultimately dooming projects. We’ve all seen this at one time or another. Yet, as soon as people demonstrate empathetic concern and operate with an outward mindset, such friction is defused and resistance gives way to collaboration that enables project success.
According to research conducted by McKinsey and Company, organizations that identify and address pervasive mindsets are four times more likely to succeed in organizational change efforts than companies that overlook this stage. Consider the practical power of outward mindset approaches and the correlation of outward mindset efforts to enhanced collaboration, innovation, productivity, and organizational change success.
3. Emotional intelligence is a critical power skill
For years, organizations thought that bright, intelligent people were the secret to superior performance. However, recent research has concluded that high IQs or stellar grade point averages do not always translate into equally exemplary job performance. Instead, the ability to recognize, understand, use, and manage emotions in yourself and others is emerging as an accurate predictor of high performance.
Here’s the good news: We all have the capacity to learn and develop this emotional intelligence.
Review the four emotional intelligence skills critical for superior employee and customer relations. Understand the lifestyle habits that can sabotage emotional intelligence and offer practical advice for continually growing and improving this power skill.
4. Mindfulness promotes a high-performance business culture
More and more of the leading executives are creating high-performing cultures by raising their leaders’ and workforce’s awareness of mindfulness. This approach is not just a way to keep stress under control and improve attention, but also empowers people to readily see patterns and connections and make better decisions.
Find out how you can use mindfulness training to unleash leadership potential in your employees and prepare your next generation of millennial leadership.
5. Digital fluency depends on digital IQ
The magnitude of training and re-skilling is enormous for workers who are not digital natives. The shortage of digital skills in the current marketplace is unprecedented. In fact, 50% of employees believe they have the skills they need to be successful, and only 34% believe their company can provide adequate training. However, no matter the generation, all employees need to learn skills such as building a personal online brand, virtual facilitation, social media outreach, content curation, and online etiquette.
How is your business addressing the shortage of digital skills while building digital fluency? Determine which digital skills are most relevant to today’s business landscape and how to further your organization’s digital fluency.
6. Innovation, creativity, and design thinking lead to long-term success
Innovation is about finding the right problem to solve. But to succeed in today’s digital economy, companies need disruptive innovation.
How? Through design thinking.
Dive into the processes, skill sets, and mindsets that will revolutionize your business culture. Learn about the value of design thinking when implementing an effective framework for you as an individual and within your organization.
7. It is important to future-proof your career now for tomorrow’s workplace
Up to 47% of jobs are highly susceptible to computer replacement within the next decade, according to one landmark study. What can you do to prepare your career for tomorrow?
There’s no better time than now to future-proof our careers. Reflect on the trends affecting the future of work and identify the best practices that successful people use to stay current.
Get the practical tips and resources you need to prepare for tomorrow
Future success requires curiosity, continuous learning, and personal growth. In other words, people who regularly monitor their career – present and future – will be rewarded as they search for skills and roles that are worth investing and avoid those that are fruitless.
This is the primary reason why Americas’ SAP Users’ Group (ASUG) decided to dedicate a full day at the ASUG Annual Conference to introduce a framework that will allow you, your team, and your leaders to explore how your beliefs, personal wellness, and human skills play a critical role in digital transformation. Throughout this event, we will explore these essential truths about the future of work and connect with other attendees and build personal learning networks and mentor/mentee relationships that can further your career and personal dreams.
As Oliver Wendell Holmes so perfectly said, “A mind that is stretched by a new experience can never go back to its old dimensions.”
Stretch your mind, and you will stretch your career.
Arrive at SAPPHIRE NOW and ASUG Annual Conference on May 15, 2017, in Orlando, Florida, to join the discussion. Register today for the special full-day ASUG pre-conference event. | https://www.digitalistmag.com/future-of-work/2017/03/07/rethink-reimagine-reset-for-digital-age-04950496 |
Z Score Definition – Updated 06/12/20
The z-score is a statistical measure of the difference between two randomly drawn samples. It is expressed as the number of standard deviations a test result is from the mean in the standard normal distribution. The larger the z-score, the higher the probability that the two means have not been drawn from the same population, and the more likely it is that the difference between the samples is statistically significant.
When you standardise a score (subtract the mean and divide by the standard deviation), the mean is always 0 and the standard deviation is 1. A z-score can in principle take any value, but z-score tables typically cover the range from -3 standard deviations (on the left of the normal distribution curve) to +3 standard deviations (on the right side of the normal distribution curve).
To use a z-score for a statistical test you will often be asked for the mean and the standard deviation of the population distribution. In practice you may have to use the sample mean and standard deviation instead if the statistics for the population are unknown.
Different Z-Score Tables:
There are a number of different z score tables. This can be confusing because they are all called z-score tables, but they are calculated differently and are used for specific purposes.
Each table is explained in the sections below, along with how it is used. The z score table above, also known as the standard normal distribution table, shows the area between the mean (0) and a positive z-score. This allows us to estimate the area within a certain number of standard deviations, which we use when calculating a confidence level or interval.
To estimate the percentage of the area which falls within a certain number of standard deviations of the mean (±) we would need to multiply the table value by 2, because the above table only shows figures for one side of the normal distribution. If we want to know the percentage below (to the left of) a z-score across the whole normal distribution, we need to add 0.5 to the table value.
When you see a z score table with values starting at 0.500 (see below in section 1. – How is the z score used), this is a positive z score table. The scores in the positive table have been adjusted to represent the percentage of values below (to the left of) a positive z score in the standard normal distribution (the whole normal distribution down to – 3 standard deviations). We will use this kind of z score table later to estimate the values to the left of our test score.
1. How is the Z Score Used?
The z-score and table allow you to compare the results of a test or survey to a “normal” population (i.e. the mean). To achieve a “normal” distribution your sample needs to be drawn randomly to ensure it is representative, and it needs to be a large sample (i.e. over 100 as an absolute minimum). The z-score indicates how your test result compares to the population’s mean for the success metric.
Example:
Consider we have a test result which generates a z score of 1.59. To find the percentage value to the left of a positive z score we use the table below which uses decimal figures to display the percentage of the area below the test score.
Look down the left-hand side column to find the one decimal place z-score (1.5) and then look up the second decimal place along the top of the table (.09). Then look across the table to where they converge and you will see a value of 0.94408.
Standard Normal Distribution: Area to the Left of Z score
This corresponds to 94.41% of the standard normal distribution being below (to the left of) the test score. That’s because we multiply the decimal value by 100 to get the percentage and round to two decimal places.
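If you don’t have a printed table to hand, the same left-tail area can be computed directly. The snippet below is a minimal sketch assuming Python with the SciPy library is available; it reproduces the 94.41% figure for a z-score of 1.59.

```python
from scipy.stats import norm

z = 1.59

# Area under the standard normal curve to the left of z
# (equivalent to looking up 1.5 down the side and .09 along the top of the table)
area_left = norm.cdf(z)

print(f"P(Z < {z}) = {area_left:.5f}")  # 0.94408 -> about 94.41%
```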
2. The 68-95-99.7 Rule:
There is an easy way of remembering what proportion of the area under the curve is accounted for by different standard deviations from the mean (0).
- Approximately 68% of the population are within ± one standard deviation of the mean.
- Approximately 95% of the population are within ± two standard deviations of the mean.
- Approximately 99.7% of the population are within ± three standard deviations of the mean.
You can calculate these figures by using the z score table below, which shows the area between the mean (0) and the z-score. However, this only gives you the score for positive standard deviations. To convert the figure to both sides of the distribution (i.e. ±) you will need to multiply the score by 2. For example, going to 1 standard deviation in the table (.3413) and multiplying it by 2 gives you .6826, or 68%, for ± 1 standard deviation.
The most common z score used to determine a confidence interval is 1.96. Let’s use the table below to estimate the proportion of values that are accounted for by 1.96 standard deviations. As you can see, this generates a value of 0.4750, or 47.5%. However, this only represents one side (tail) of the curve, so we need to multiply it by 2 to give us 0.95, or 95%, for ± 1.96 standard deviations.
You can’t do this with the other z table we use, as that gives the area to the left of the z-score, all the way down to – 3 standard deviations. That’s why you need to be careful to use the appropriate z score table depending upon what you are trying to calculate.
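These rule-of-thumb figures (and the ± 1.96 value used for a 95% confidence level) can be verified without a table. A minimal sketch, again assuming Python with SciPy:

```python
from scipy.stats import norm

# Proportion of the standard normal distribution within ± k standard deviations.
# norm.cdf(k) - norm.cdf(-k) covers both tails at once, which is the same as
# taking the one-sided table value and multiplying it by 2.
for k in (1, 2, 3, 1.96):
    within = norm.cdf(k) - norm.cdf(-k)
    print(f"± {k} standard deviations: {within:.4f} ({within:.1%})")

# Expected output (approximately):
# ± 1    -> 0.6827 (68.3%)
# ± 2    -> 0.9545 (95.4%)
# ± 3    -> 0.9973 (99.7%)
# ± 1.96 -> 0.9500 (95.0%)
```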
3. A Negative Z Score Table:
When we want to estimate the area under the normal distribution to the left of a negative z score we can use a different table, the negative z score table. This gives the values in the shaded area of the curve to the left of the negative z score. We would use this if our test score was below the population mean and we wanted to test whether the score was significantly lower than the mean (i.e. a one-tail test).
4. The Z-Score Formula – Single Sample:
The formula to calculate the z-score for a single sample is as follows:
- Z = ( x – µ ) / σ
- Where x = test score
- µ = population mean
- σ = population standard deviation
For example, let’s say your experiment generated a score of 200, the mean is 140 and the standard deviation is 30.
- Z = (200 – 140) / 30 = 2
This shows that your test score is 2 standard deviations above the mean. In many instances we don’t know the population mean or the standard deviation. For this reason we are often forced to use the sample mean and standard deviation instead. However, you should only use the sample statistics if you are confident the sample has been drawn using an appropriate random method and is sufficiently large to assume a normal distribution. Be careful not to fall into the trap of the law of small numbers.
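The worked example above can be checked with a few lines of code. This is only an illustrative sketch; the function and variable names are my own and not part of any particular statistics package:

```python
def z_score(x, mu, sigma):
    """Number of standard deviations the score x lies from the mean mu."""
    return (x - mu) / sigma

# Worked example from the text: score 200, mean 140, standard deviation 30
z = z_score(x=200, mu=140, sigma=30)
print(z)  # 2.0 -> the test score is 2 standard deviations above the mean
```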
5. The Z-Score Formula – Two Samples:
With most experiments we have multiple samples because we want to compare our test result with that of the default (the existing experience). In such instances we need to measure the standard deviation of the different sample means (i.e. the standard error).
Z = ( x – µ ) / ( σ /√n )
Here the z-score describes how many standard errors lie between the sample mean and the population mean.
It is also used as the critical value in the calculation of the margin of error in an experiment. The margin of error is calculated using either of these equations.
Margin of error = Critical value x Standard deviation of the statistic
Margin of error = Critical value x Standard error of the statistic
When you know the standard deviation of the statistic you can use the first equation. Otherwise, use the second equation with the standard error.
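As a quick illustration of the first equation (all numbers here are hypothetical), the margin of error for a sample mean at a 95% confidence level can be computed as:

```python
from math import sqrt

critical_value = 1.96   # z-score for a 95% confidence level
sigma = 30              # population standard deviation (hypothetical)
n = 400                 # sample size (hypothetical)

standard_error = sigma / sqrt(n)            # 30 / 20 = 1.5
margin_of_error = critical_value * standard_error
print(margin_of_error)                      # 1.96 * 1.5 = 2.94
```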
6. Calculating the Z-Score For Multiple Samples:
Let’s say we conduct an experiment involving a new registration form. The mean conversion rate for the existing form is 20%, with a standard deviation of 5%. What is the probability of the form achieving a conversion rate of 35% with a sample of 100 sessions?
- Z = ( x – µ ) / ( σ /√n )
- Z = (35 – 20) / (5 / √100) = 15 / 0.5 = 30
We already know that 99.7% of values within a normal distribution fall within 3 standard deviations of the mean, so a score this far beyond 3 standard errors tells us there is far less than a 0.3% probability that the existing registration form could achieve a conversion rate of 35% by chance. We can, therefore, be confident that such an increase in conversion is likely to be down to the new experience and not a random result.
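The registration-form example can be scripted as follows. This is a sketch assuming SciPy is available; norm.sf() gives the right-tail probability of seeing a sample mean this high by chance:

```python
from math import sqrt
from scipy.stats import norm

def z_for_sample_mean(x, mu, sigma, n):
    """z-score of a sample mean x against a population mean mu,
    where sigma is the population standard deviation and n is the sample size."""
    standard_error = sigma / sqrt(n)
    return (x - mu) / standard_error

z = z_for_sample_mean(x=35, mu=20, sigma=5, n=100)
p_right_tail = norm.sf(z)  # probability of a value at least this high under the null

print(z)             # 30.0
print(p_right_tail)  # effectively zero
```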
Statistical significance is dependent upon more than a formula and its outputs. The formula we use for calculating the z-score is based upon certain assumptions being met. These relate to the sample size being large (i.e. over 100) and the sample being drawn using an appropriate random method (e.g. random or cluster sampling). Unless these assumptions are met, a high level of statistical significance can be meaningless and misleading.
7. Skewness and Kurtosis:
Despite the distribution of many phenomena following a bell-like curve, in reality the vast majority of data sets don’t create a perfect normal distribution. It’s more common for a data set to display some skewness to the left or the right and not to have a symmetrical peak.
Skewness is a measure of the asymmetry of the distribution of a variable. The skewness of a normal distribution is zero, indicating a symmetrical distribution. When the skewness is less than zero (negative skewness) it means the left-hand tail is longer than the right-hand tail, and the bulk of observations lie to the right of the mean. A positive skewness indicates that the right-hand tail is longer than the left side of the curve and the majority of observations lie to the left of the mean.
Kurtosis is an indication of the peakedness of a distribution and the thickness of its tails. A distribution with a large kurtosis will have tails longer than the normal distribution (e.g. extending five or more standard deviations from the mean). The normal distribution has a kurtosis of three, and so many statistical packages report the ‘excess kurtosis’, which is obtained by subtracting 3 from the kurtosis (proper). Thus the excess kurtosis for a standard normal distribution will be zero.
A positive excess kurtosis distribution is referred to as a leptokurtic distribution, meaning a high peak. A negative excess kurtosis distribution is called a platykurtic distribution, indicating a flat topped curve.
8. Normality Test Using Skewness and Kurtosis:
We can conduct a normality test using skewness and kurtosis to identify whether our data is close to a normal distribution. A z score can be calculated by dividing the skew value or the excess kurtosis by its standard error.
Z = Skew value / SE skewness
Z = Excess kurtosis / SE excess kurtosis
However, because the standard error becomes smaller as the sample size increases, z-tests with a null hypothesis of normal distribution are often rejected with large sample sizes even though the distribution is not much different from normality. On the other hand, with small samples the null hypothesis of normal distribution is more likely to be accepted than it probably should be. This means we should use different critical values for rejecting the null hypothesis according to the sample size.
- Small sample sizes (n < 50). When the absolute z-score for either skewness or kurtosis is greater than 1.96 (or 95% confidence level), we can reject the null hypothesis and assume the sample distribution is non-normal.
- Medium sized samples (n = 50 to < 300). If the absolute z-score for either skewness or kurtosis is larger than 3.29 (or 99.9% confidence level) we can reject the null hypothesis and decide the sample distribution is non-normal.
- Large sample sizes (n > 300). Here we can use the absolute values of skewness and kurtosis without consulting the z-value. An absolute skew value of greater than 2 or a kurtosis (proper) above 7 (4 for excess kurtosis) would indicate the sample distribution is non-normal.
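The z-scores above can be computed and compared against these critical values. A sketch in Python; note that the standard-error formulas below are the common large-sample approximations (SE of skewness ≈ √(6/n), SE of excess kurtosis ≈ √(24/n)) rather than the exact small-sample versions, and the function name is my own:

```python
import numpy as np
from scipy.stats import skew, kurtosis

def normality_z_scores(data):
    """Approximate z-scores for skewness and excess kurtosis.

    Uses the large-sample standard-error approximations
    SE(skewness) ~ sqrt(6/n) and SE(excess kurtosis) ~ sqrt(24/n).
    """
    n = len(data)
    z_skew = skew(data) / np.sqrt(6 / n)
    z_kurt = kurtosis(data) / np.sqrt(24 / n)  # kurtosis() returns excess kurtosis
    return z_skew, z_kurt

rng = np.random.default_rng(42)
sample = rng.normal(loc=0, scale=1, size=200)

z_skew, z_kurt = normality_z_scores(sample)
print(z_skew, z_kurt)  # typically well inside ±3.29 for a medium-sized normal sample
```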
9. What to do when your data in non-normal:
When you discover that your data is non-normal the automatic response is to seek a nonparametric test. Before going down this route it’s important to consider that a number of the tests that assume normality are in fact not very sensitive to departures from normality. This includes t-tests (1-sample, 2-sample and paired t-tests) which are often used by A/B testing software, Analysis of Variance (ANOVA), Regression and Design of Experiments (DoE).
You should also consider why your data is non-normal. Some causes of data not being normal include:
- Large samples will often diverge from normality.
- Outliers or mixed distributions are present.
- Skewness is observed in the data.
- The underlying distribution is non-normal.
- A low discrimination gauge is used with too few discernible categories.
Tests which are not robust to non-normality include:
- Capability Analysis and determining Cpk and Ppk.
- Tolerance Intervals.
- Acceptance Sampling for variable data.
- Reliability Analysis for estimating high and low percentiles.
10. Non-Parametric Tests:
The most common non-parametric tests are Pearson’s chi-squared test, the Mann-Whitney U-test and Fisher’s exact test. Pearson’s chi-squared is designed for categorical data (two or more categories) to determine if there is a significant difference between them. For example, this would be useful to look at differences between new and returning visitors.
Chi-square tests work by comparing the difference between observed (i.e. the data from the test) and the expected values in each cell in a table. Expected values are estimated based upon the null-hypothesis being correct (i.e. that there is no difference between the control and test groups).
Fisher’s exact test is frequently applied instead of the chi-square test for small samples or when the categories are imbalanced. Unlike the chi-square test, Fisher’s exact test estimates the exact probability of observing the distribution seen in the table rather than expected values.
The Mann-Whitney U-test is a non-parametric test which requires continuous data. This means it’s necessary to be able to distinguish between values at the nth decimal place and data needs to be from an ordinal, interval or ratio scale of measurement. For example, a conversion rate is a ratio and so the Mann-Whitney U-test could be used to measure statistical significance.
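For illustration, here is how these three tests are typically run with SciPy. The 2×2 table and revenue figures below are made-up numbers, not data from any real experiment:

```python
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact, mannwhitneyu

# Hypothetical 2x2 table: conversions vs non-conversions for control and variant
table = np.array([[180, 1820],    # control: 180 conversions out of 2,000
                  [220, 1780]])   # variant: 220 conversions out of 2,000

chi2, p_chi2, dof, expected = chi2_contingency(table)
odds_ratio, p_fisher = fisher_exact(table)

print(f"Chi-square p-value: {p_chi2:.4f}")
print(f"Fisher's exact p-value: {p_fisher:.4f}")

# The Mann-Whitney U-test needs continuous (ordinal/interval/ratio) observations,
# e.g. revenue per visitor in each group (again, made-up numbers).
control_revenue = [0, 0, 12.5, 0, 30.0, 0, 8.0, 0, 0, 22.0]
variant_revenue = [0, 15.0, 0, 27.5, 0, 0, 40.0, 9.5, 0, 18.0]
u_stat, p_mwu = mannwhitneyu(control_revenue, variant_revenue)
print(f"Mann-Whitney U p-value: {p_mwu:.4f}")
```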
11. The Null Hypothesis:
We have briefly mentioned the null hypothesis and how it is often based upon the variant being no better than the control. This is what we call a superiority test: we want to ensure we don’t implement an experience we think is better than the control when in fact it is not.
However, with A/B testing, we often just want to avoid implementing an experience which is significantly worse than the existing design. For example, we may believe the new experience offers improved usability or it better informs our users about a legal aspect of our service. We might not expect this to improve our conversion rate, but we will be prepared to implement it provided it doesn’t significantly reduce conversions.
This second type of null hypothesis is called a non-inferiority test. As a consequence of this type of test we might set the null hypothesis at around – 2% rather than zero with a superiority test. This means that a small uplift in conversion might become statistically significant with a non-inferiority test and yet it would have been dismissed as not significant had we set our null hypothesis at zero.
12. When to use Non-Inferiority Tests?
You now understand the importance of setting the right kind of null hypothesis. That doesn’t mean it is always appropriate to use a non-inferiority test. When the stakes are high or you have a senior stakeholder who is dead against implementing an experience you will probably have to use a superiority test. We have to evaluate each test separately and consider some of the following factors:
- High risk of even a small decline in conversion having a relatively large impact on revenues or conversions (e.g. ecommerce checkout or cashier in gaming).
- High implementation costs and a change that is difficult to roll back.
- Significant ongoing maintenance and related costs to support the new experience.
Non-inferiority tests tend to be more appropriate when we don’t see these issues and so we should consider them when:
- Implementation costs are low.
- Little if any maintenance costs.
- Easy to roll back the change if it needs to be removed.
- Very little opposition to the new experience from internal stakeholders.
- Low risk to revenues or other important conversions.
This means that you can use non-inferiority tests for many small and trivial changes such as button colour, image changes, copy updates, live chat, and email capture forms. We don’t expect many small changes to improve conversions, and so it is totally appropriate to set the bar lower and use a non-inferiority test.
13. Online Experiments (A/B Tests and MVTs):
With an online experiment, when we observe an outcome whose probability under the null hypothesis is very low and we therefore reject the null hypothesis (i.e. we conclude there has been a significant improvement), one of the following must be true:
- There has been a real improvement in our variant compared to the default experience.
- There has been no real improvement and we recorded a rare outcome that happens by chance (e.g. at 95% statistical significance we would expect this to be observed one in twenty times).
- Our statistical model is incorrect.
When we are measuring changes in proportions, such as comparing two conversion rates, we can ignore 3 because we will be testing a classical binomial distribution. If we tried to test a continuous variable, such as revenue, this would be problematic because the distribution would clearly not be of a binomial nature.
Statistical significance (e.g. 0.05 or 95%) is a measure of uncertainty. At a statistical confidence of 95% we know there is a one in twenty chance that what we observed was just a natural variation from the mean. This is not an error, but simply a rare event. The level of uncertainty you are willing to accept should be related to the risk of making a wrong decision (i.e. dismissing the null hypothesis when the null hypothesis is true).
The p-value threshold you intend to use for your experiment should be agreed before proceeding with a test. Otherwise there is a danger that you will adjust the level of statistical confidence to obtain a positive outcome (i.e. dismiss the null hypothesis). It’s also important not to simply set a default level of statistical confidence (e.g. 95%) without considering the nature of the experiment and the risks involved.
The same is true for experiments relying on confidence intervals. They use the same logic as p-values, and it’s good practice to look at both p-values and confidence intervals to understand the level of uncertainty an A/B test or MVT generates.
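For completeness, here is a hedched-free sketch of the pooled two-proportion z-test that many A/B testing calculators are based on; exact implementations vary by tool, and the conversion counts below are hypothetical:

```python
from math import sqrt
from scipy.stats import norm

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test comparing two conversion rates (pooled standard error)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * norm.sf(abs(z))  # compare against the agreed significance level
    return z, p_value

# Hypothetical experiment: control 400/10,000 vs variant 460/10,000 conversions
z, p = two_proportion_z_test(400, 10_000, 460, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```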
As part of your preparations for any A/B test always estimate the required duration of an experiment. The VWO test duration calculator allows you to estimate the length of the test so that you won’t let your test run for longer than necessary.
This is important because it will improve your chances of getting a conclusive test result and help set expectations with your stakeholders. There is always an opportunity cost with running an A/B test because we could use the time and resources for something else (e.g. another experiment). If we fail to set the duration before an experiment begins there is a danger we will be subject to the sunk cost fallacy and allow the test to run for longer than it should.
14. History of Standard Normal Distribution:
The French mathematician Abraham de Moivre (1667 to 1754) is credited with being the first to propose the central limit theorem, which established the concept of the standard normal distribution. He was fascinated with gambling and was often employed by gamblers to estimate probabilities. De Moivre discovered a bell shaped distribution whilst observing the probabilities of coin flips. It is suggested he was trying to identify a mathematical expression for the probability of x number of tails out of 100 coin flips.
This turned out to be a breakthrough in helping to discover that a large number of phenomena (e.g. height, weight and strength) approximately form a bell shaped distribution. Lambert Adolphe Jacques Quetelet (1796 to 1874), a Belgian astronomer, first noticed a ratio between weight and height which became the basis for the body mass index (BMI).
The normal distribution has often been used in the measurement of errors. When it was first discovered it was used to analyse errors of measurement in astronomical observations. Galileo observed that such errors were systematic, and it was later noted that errors also follow the normal distribution. Today scientists and social scientists use the normal distribution extensively. In finance, traders have tried to use the normal distribution to model asset prices. However, real life data rarely, if ever, follows a perfect normal distribution.
Resources:
Google Optimize: How to set up and run experiments with Google Optimize.
Law of small numbers: Does the law of small numbers explain many myths?
A/B testing software – Which A/B testing tools should you choose?
Types of A/B tests – How to optimise your website’s performance using A/B testing. | https://www.conversion-uplift.co.uk/glossary-of-conversion-marketing/z-score/ |
In marketing, everything comes back to the customers. Whether it’s attracting new business, addressing pain points, or supporting recent clients, marketers need an understanding of their customers (and potential customers) to be successful.
Buyer personas are the go-to approach to document and distribute information about a specific segment of an audience. They act as a sort of checklist, helping marketers to flesh out their understanding of their customers.
This week’s infographic, which comes from GetCRM, walks you through the process of creating buyer personas. It covers everything from the “ingredients” you need for a buyer persona to the ways you can “serve and enjoy” personas.
A few key points from the infographic include:
- Start your personas with a list of important fields you want to define
- Persona information can be gathered from studying current customers or performing outside research
- Give your persona a name (it might seem silly, but it helps to humanize the persona)
Let us know what you think:
- Do you use buyer personas?
- How do you apply buyer personas to your marketing efforts?
- Would you add any information to this guide? | https://www.hipb2b.com/blog/infographic-build-buyer-persona-recipe-success |
Plagiarism is defined in dictionaries as the “wrongful appropriation”, “close imitation” or “purloining and publication” of another author's “language, thoughts, ideas, or expressions” and the representation of them as one's own original work, but the notion remains problematic with nebulous boundaries. There is no rigorous and precise distinction between imitation, stylistic plagiarism, copy, replica and forgery. The modern concept of plagiarism as immoral and originality as an ideal emerged in Europe only in the 18th century, particularly with the Romantic movement, while in the previous centuries authors and artists were encouraged to “copy the masters as closely as possible” and avoid “unnecessary invention”.
Since the 18th century, these new morals have been institutionalized and enforced prominently in the sectors of academia and journalism, where plagiarism is now considered academic dishonesty and a breach of journalistic ethics, subject to sanctions like expulsion and other severe career damage. Not so in the arts, which not only have resisted in their long-established tradition of copying as a fundamental practice of the creative process, but, with the boom of the modernist and postmodern movements in the 20th century, have heightened this practice as the central and representative artistic device. Plagiarism remains tolerated by 21st-century artists.
Plagiarism is not a crime per se, but is disapproved more on the grounds of moral offence, and cases of plagiarism can involve liability for copyright infringement.
Etymology and history
In the 1st century, the use of the Latin word plagiarius, literally kidnapper, to denote someone stealing someone else's work was pioneered by the Roman poet Martial, who complained that another poet had “kidnapped his verses”. This use of the word was introduced into English in 1601 by the dramatist Ben Jonson, to describe as a plagiary someone guilty of literary theft.
The derived form plagiarism was introduced into English around 1620. The Latin plagiārius (kidnapper) and plagium (kidnapping) have the root plaga (snare/net), based on the Indo-European root plak (to weave), seen for instance in Greek plekein, Bulgarian плета (pleta) and Latin plectere, all meaning “to weave”.
The modern concept of “plagiarism as immoral” and “originality as an ideal” emerged in Europe in the 18th century, with the Romantic Movement. For centuries before, not only was literature considered “publica materies”, a common property from which anybody could borrow at will, but authors and artists were actually encouraged to “copy the masters as closely as possible”, and the closer the copy, the finer the work was considered. This was the same in literature, music, painting and sculpture. In some cases, for a writer to invent their own plots was reproached as “presumptuous”. This held true at the time of Shakespeare too, when it was common to appreciate more the similarity with an admired classical work, and the ideal was to avoid “unnecessary invention”.
The modern ideals of originality and against plagiarism appeared in the 18th century, in the context of the economic and political history of the book trade, which would prove exemplary and influential for the subsequent broader introduction of capitalism. Originality, which traditionally had been deemed impossible, was turned into an obligation by the emerging ideology of individualism. In 1755 the word made it into Johnson's influential A Dictionary of the English Language, which cites it in the entry for copier (“One that imitates; a plagiary; an imitator. Without invention a painter is but a copier, and a poet is but a plagiary of others”), and gives it its own entry, denoting both “a thief in literature, one who steals the thoughts or writings of another” and “the crime of literary theft”.
Later in the 18th century, the Romantic Movement completed the transformation of the previous ideas about literature, developing the Romantic myth of artistic inspiration, which believes in the “individualized, inimitable act of literary creation", in the ideology of the “creation from nothingness” of a text which is an “autonomous object produced by an individual genius”.
Despite the new morals of the 18th century, and their current enforcement in the ethical codes of academia and journalism, the arts, by contrast, not only have resisted in their long-established tradition of copying as a fundamental practice of the creative process, but with the boom of the modernist and postmodern movements, this practice has been accelerated, spread, and dramatically amplified to an unprecedented degree, to the point that it has been heightened as the central and representative artistic device of these movements. Plagiarism remains tolerated by 21st-century artists.
Self-plagiarism:
Self-plagiarism, also known as “recycling fraud”, is the “reuse of significant, identical, or nearly identical portions of one's own work without acknowledging that one is doing so or without citing the original work”. Articles of this nature are often referred to as “duplicate or multiple publications”. In addition to the ethical issue, this can be illegal if copyright of the prior work has been transferred to another entity. Typically, self-plagiarism is only considered to be a serious ethical issue in settings where a publication is asserted to consist of new material, such as in academic publishing or educational assignments. It does not apply, except in the legal sense, to public-interest texts, such as social, professional, and cultural opinions usually published in newspapers and magazines.
In academic fields, identifying self-plagiarism is often difficult because limited reuse of material is both legally accepted as fair use and ethically accepted.
It is common for university researchers to rephrase and republish their own work, tailoring it for different academic journals and newspaper articles, to disseminate their work to the widest possible interested public. However, it must be borne in mind that these researchers also observe limits: if half an article is the same as a previous one, it will usually be rejected. One of the functions of the process of peer review in academic writing is to prevent this type of recycling.
The concept of self-plagiarism
The concept of "self-plagiarism" has been challenged as self-contradictory or an oxymoron.
For example, Stephanie J. Bird argues that self-plagiarism is a misnomer, since by definition plagiarism concerns the use of others' material.
However, the phrase is used to refer to specific forms of potentially unethical publication. Bird identifies the ethical issues sometimes called “self-plagiarism" as those of “dual or redundant publication". She also notes that in an educational context, “self-plagiarism" may refer to the case of a student who resubmits “the same essay for credit in two different courses". David B. Resnik clarifies, “self-plagiarism involves dishonesty but not intellectual theft”.
According to Patrick M. Scanlon, “self-plagiarism" is a term with some specialized currency. Most prominently, it is used in discussions of research and publishing integrity in biomedicine, where heavy publish-or-perish demands have led to a rash of “duplicate” and “salami-slicing" publication, “the reporting of a single study's results in least publishable units within multiple articles” (Blancett, Flanagin, & Young, 1995; Jefferson, 1998; Kassirer & Angell, 1995; Lowe, 2003; McCarthy, 1993; Schein & Paladugu, 2001; Wheeler, 1989). Roig (2002) offers a useful classification system including “four types of self-plagiarism”: (1) duplicate publication of an article in more than one journal, (2) partitioning of one study into multiple publications, often called salami-slicing, (3) text recycling, and (4) copyright infringement.
| http://uro-egypt.com/?p=612
"Social engineering" is a term for manipulating interactions with other people to create a favorable outcome for yourself. Through manipulation, phishers try to trick someone into making mistakes; human error is the key that makes social engineering work. Rather than trying to outsmart well-built, carefully thought-out layers of security, an attacker only needs to trick the one layer that can make a mistake in order to get access to everything. There are various "attacks" involved in social engineering, where the goal is to manipulate the target into revealing information without realizing they are handing it out voluntarily.
The final product turned into hundreds of lines of code connected on one page, organized with unique IDs to keep it all working. One semester later, I had successfully created a phishing website hidden behind a blog site that lures the user with multiple traps/chances to provide bits of information for me to "steal" and then use to inform the user about the dangers of phishing scams in an informative presentation. | https://www.anthonylum.ca/projects/phishing-for-gold
How Do Gasoline Prices Affect Fleet Fuel Economy?
Shanjun Li, Christopher Timmins, and Roger H. von Haefen*
Exploiting a rich data set of passenger vehicle registrations in twenty U.S. metropolitan statistical areas from 1997 to 2005, we examine the effects of gasoline prices on the automotive fleet's composition. We find that high gasoline prices affect fleet fuel economy through two channels: (1) shifting new auto purchases towards more fuel-efficient vehicles, and (2) speeding the scrappage of older, less fuel-efficient used vehicles. Policy simulations based on our econometric estimates suggest that a 10 percent increase in gasoline prices from 2005 levels will generate a 0.22 percent increase in fleet fuel economy in the short run and a 2.04 percent increase in the long run.
(JEL H23, L62, Q31)
How the composition of the U.S. vehicle fleet responds to changes in gasoline prices has important implications for policies that aim to address climate change, local air pollution, and a host of other externalities related to gasoline consumption.1 We investigate this response.
Exploiting a unique and detailed data set, we decompose changes in the vehicle fleet into changes in vehicle scrappage and new vehicle purchase decisions, and analyze how gasoline prices influence each of these choice margins. We then recover the fuel economy elasticities with respect to gasoline prices in both the short and long run. In 2007, the United States consumed 7.6 billion barrels of oil. This represents roughly one-quarter of global production, with gasoline consumption accounting for 45 percent. The combustion of gasoline in automobiles generates carbon dioxide emissions, the predominant greenhouse gas linked to global warming, as well as local air pollutants such as nitrogen oxides and volatile organic compounds that harm human health and impair visibility.
The costs associated with these environmental effects are generally external to gasoline consumers, leading many analysts to argue for corrective policies.
1 See Parry, Harrington, and Walls (2007) for a comprehensive review of externalities associated with vehicle usage and federal policies addressing those externalities.
To address these externalities, a suite of policy instruments have been advanced, such as increasing the federal gasoline tax, tightening Corporate Average Fuel Economy (CAFE) standards, subsidizing the purchase of fuel efficient vehicles such as hybrids, and taxing fuel-inefficient "gas guzzling" vehicles. Several recent studies have compared gasoline taxes and CAFE standards and have concluded that increasing the gasoline tax is more cost-effective (National Research Council, 2002; Congressional Budget Office, 2003; Austin and Dinan, 2005; Jacobsen, 2007).
When evaluating the policy options, two important behavioral drivers are: (1) the utilization effect, or the responsiveness of vehicle miles traveled (VMT) to fluctuations in gasoline prices, and (2) the compositional effect, or the responsiveness of fleet fuel economy to gasoline price changes. Although a large body of empirical evidence on the magnitude of the utilization effect now exists (see Small and Van Dender (2007) and Hughes, Knittel, and Sperling (2008) for summaries and recent contributions), less evidence exists on the size of the compositional effect. This is the focus of our paper.
Existing studies that have attempted to quantify the elasticity of fuel economy to gasoline prices can be divided into two categories based on the methods and data used. Studies in the first group estimate reduced-form models where the average MPG of the vehicle fleet is regressed on gasoline prices and other variables by exploiting aggregate time-series data (Dahl, 1979; Agras and Chapman, 1999), cross-national data (Wheaton, 1982), or panel data at the U.S. state level (Haughton and Sakar, 1996; Small and Van Dender, 2007). Studies using time-series data or cross-sectional data are not able to control for unobserved effects that might exist in both temporal and geographic dimensions.
Although a panel-data structure allows for that possibility, the average MPG used in panel-data studies (as well as other studies in this group) suffers from measurement errors. Because fleet composition data are not readily available, most of the previous studies infer the average MPG of the vehicle fleet by dividing total gasoline consumption by total vehicle miles traveled. However, data on vehicle miles traveled have well-known problems such as irregular estimation methodologies both across years and states (see Small and Van Dender (2007) for a discussion). The second group of studies models household-level vehicle ownership decisions (Goldberg, 1998; Bento, Goulder, Jacobsen, and von Haefen, 2008). A significant modeling challenge is to consistently incorporate both new and used vehicle holdings. Despite significant modeling efforts, Bento et al.
(2008) have to aggregate different vintage models into fairly large categories in order to reduce the number of choices a household faces to an econometrically tractable quantity. 2 This aggregation masks substitution possibilities within composite automotive categories and therefore may bias the estimates of both demand elasticities with respect to price and the response of fleet fuel economy to gasoline prices toward zero. Our study employs a qualitatively different data set and adopts an alternative estimation strategy. First, our data set measures the stock of virtually all vintage models across nine years and twenty Metropolitan Statistical Areas (MSAs) and hence provides a complete picture of the vehicle fleet's evolution.
In contrast to the aforementioned reduced-form studies, the fuel economy distribution of vehicle fleet is therefore observed in our data. Second, the data allow us to control for both geographic and temporal unobservables, both of which are found to be important in our study. In contrast to the structural methods alluded to above, we make no effort to recover the household preference parameters that drive vehicle ownership decisions, and hence our results are robust to many assumptions made in those analyses. Finally, since we observe the fleet composition over time, we are able to examine how the inflow and outflow of the vehicle fleet are influenced by gasoline prices.
2 Goldberg (1998) estimates a nested logit model by aggregating all used vehicles into one choice.
We first examine the effect of gasoline prices on the fuel economy of new vehicles (i.e., the inflow of the vehicle fleet) based on a partial adjustment model. We find that an increase in the gasoline price shifts the demand for new vehicles toward fuel-efficient vehicles. We then study the extent to which gasoline prices affect vehicle scrappage (i.e., the outflow of the vehicle fleet). To our knowledge, this is the first empirical study that focuses on the relationship between gasoline prices and vehicle scrappage.3 We find that an increase in gasoline prices induces a fuel-efficient vehicle to stay in service longer while a fuel-inefficient vehicle is more likely to be scrapped, ceteris paribus.
With both empirical models estimated, we conduct simulations to examine how the fuel economy of the entire vehicle fleet responds to gasoline prices. Based on the simulation results, we estimate that a 10 percent increase in gasoline prices will generate a 0.22 percent increase in fleet fuel economy in the short run (one year) and a 2.04 percent increase in the long run (after the current vehicle stock is replaced). We also find that sustained $4.00 per gallon gasoline prices will generate a 14 percent long-run increase in fleet fuel economy relative to 2005 levels, although this prediction should be interpreted cautiously in light of the relatively large out-of-sample price change considered and the Lucas critique (Lucas, 1976).4 The remainder of this paper is organized as follows.
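Read literally, the simulation results above imply small arc elasticities of fleet fuel economy with respect to the gasoline price. The back-of-the-envelope calculation below simply restates the quoted percentages; it is not taken from the paper's estimation tables:

gasoline_price_increase = 0.10    # a 10 percent increase from 2005 levels
mpg_gain_short_run = 0.0022       # 0.22 percent fleet fuel economy gain (one year)
mpg_gain_long_run = 0.0204        # 2.04 percent gain once the current stock is replaced

short_run_elasticity = mpg_gain_short_run / gasoline_price_increase   # = 0.022
long_run_elasticity = mpg_gain_long_run / gasoline_price_increase     # = 0.204
print(short_run_elasticity, long_run_elasticity)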
Section I describes the data. Section II investigates the effect of gasoline prices on fleet fuel economy of new vehicles. Section III examines how vehicle scrappage responds to changes in gasoline prices. Section IV conducts simulations and discusses caveats of our study. Section V concludes.
3 Previous papers on vehicle scrappage have focused on factors such as age, vehicle price and government subsidies to retirements of old gas-guzzling vehicles (Walker, 1968; Manski and Goldin, 1983; Berkovec, 1985; Alberini, Harrington, and McConnell, 1995; Hahn, 1995; Greenspan and Cohen, 1999).
4 In particular, large and sustained gasoline price increases may introduce new demand and supply-side responses that would change the model parameters, which themselves might be functions of policy variables.
I. DATA
Our empirical analysis and policy simulations are based on vehicle fleet information in twenty MSAs listed in Table 1. The geographic coverage for each MSA is based on the 1999 definition by the Office of Management and Budget. These MSAs, well-representative of the nation as discussed at the end of this section, are drawn from all nine Census regions and exhibit large variation in size and household demographics.
There are three data sets used in this study and we discuss them in turn. The first data set, purchased from R.L. Polk & Company, contains vehicle registration data for the twenty MSAs. This data set has three components. The first component is new vehicle sales data by vehicle model in each MSA from 1999 to 2005. The second component is vehicle registration data (including all vehicles) at the model level (e.g., a 1995 Ford Escort) in each MSA from 1997 to 2000. Therefore, we observe the evolution of the fleet composition at the model level over these 4 years.5 This part of the data includes 533,395 model-level records representing over 135 million vehicles registered during this period.
The third component is registration data at the segment level (there are twenty-two segments) in each MSA from 2001 to 2005.6 We observe 59,647 segment-level records representing over 170 million registrations during this period.7 Our new vehicle analysis is based on the first component of this data set while our vehicle scrappage analysis is based on the second and third components.
5 We ignore medium and heavy duty trucks and vehicles older than 1978 because the fuel economy information is not available for them. Since these vehicles account for less than 1 percent of total vehicle stock, our finding should not be significantly altered.
6 There are 22 segments including 12 for cars (basic economy, lower mid-size, upper mid-size, traditional large, basic luxury, prestige luxury, basic sporty, middle sporty, upper specialty, prestige sporty), 4 for vans (cargo minivan, passenger minivan, passenger van, cargo van), 3 for SUVs (mini SUV, mid-size SUV, full size SUV), and 3 for pickup trucks (compact pickup, mid-size pickup and full size pickup). 7 The stock data at the model level are very expensive. Facing the tradeoff of the level of aggregation and the length of the panel, we decided to purchase the stock data at the model level for years 1997 to 2000 and the stock data at the segment level for years 2001 to 2005.
We discuss in Section 3 how we integrate the model and segment level data.
Based on the changes in vehicle stock across time, we can compute vehicle survival probabilities at the model level from 1998 to 2000 and at the segment level from 2001 to 2005. The survival probability of a model in year t is defined as the number of registrations in year t over that in year t-1. The survival probability of a segment in a given year is defined similarly. These survival probabilities reflect two types of changes in the vehicle fleet. One is the physical scrappage of a vehicle and the other is the net migration of a vehicle in and out of the MSA, which might induce a survival probability larger than one.
The average survival probability weighted by the number of registrations at the model level (369,507 observations) is 0.9504 with a standard deviation of 0.1098, while the average survival probability at the segment level (48,370 observations) is 0.9542 with a standard deviation of 0.0835. The standard deviations of the survival probabilities at the segment level being smaller reflect the aggregate nature of data at the segment level.
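To make the survival-probability definition above concrete, here is a minimal sketch that computes model-level survival probabilities from registration counts. The data frame, its column names, and the counts are illustrative assumptions rather than the Polk data used in the paper:

import pandas as pd

# Hypothetical registration counts for one vehicle model in one MSA (made-up numbers).
reg = pd.DataFrame({
    "msa": ["Houston"] * 4,
    "model": ["1995 Ford Escort"] * 4,
    "year": [1997, 1998, 1999, 2000],
    "registrations": [12000, 11400, 10700, 9900],
})

# Survival probability in year t = registrations in year t / registrations in year t-1.
reg = reg.sort_values(["msa", "model", "year"])
reg["survival_prob"] = reg.groupby(["msa", "model"])["registrations"].transform(
    lambda s: s / s.shift(1)
)
print(reg)  # a value above one would indicate net in-migration of that model into the MSA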
The second data set includes MSA demographic and geographic characteristics from various sources (observations in year 2000 are shown in Table 1). We collect median household income, population, and average household size from the annual American Community Survey. Data on annual snow depth in inches are from the National Climate Data Center. From the American Chamber of Commerce Research Association (ACCRA) data base, we collect annual gasoline prices for each MSA from 1997 to 2005. During this period, we observe large variations in gasoline prices both across years and MSAs. For example, the average annual gasoline price is $1.66, while the minimum was $1.09 experienced in Atlanta in 1998 and the maximum was $2.62 in San Francisco in 2005.
Figure 1 plots gasoline prices in San Francisco, Las Vegas, Albany, and Houston during the period. Both temporal and geographic variation is observed in the figure although geographic variation is relatively stable over time. The general upward trend
in gasoline prices during this period can be attributed to strong demand together with tight supply. Global demand for oil, driven primarily by the surging economies of China and India, has increased significantly in recent years and is predicted to continue to grow. On the supply side, interruptions in the global oil supply chain in Iraq, Nigeria, and Venezuela, tight U.S. refining capacities, damage to that capacity as a result of gulf hurricanes, and the rise of boutique fuel blends in response to the 1990 Clean Air Act Amendments (Brown, Hastings, Mansur, and Villas-Boas, 2008) have all contributed to rising and volatile gasoline prices.
The third data set includes vehicle attribute data such as model vintage, segment, make, and vehicle fuel efficiency measured by the combined city and highway MPG. The MPG data are from the fuel economy database compiled by the Environmental Protection Agency (EPA).8 We combine city and highway MPGs following the weighted harmonic mean formula provided by the EPA to measure the fuel economy of a model: MPG=1/[(0.55/city MPG) + (0.45/highway MPG)].9 The average MPG is 21.04 with a standard deviation of 6.30. The least fuel-efficient vehicle – the 1987 Lamborghini Countach (a prestige sporty car) – has an MPG of 7.32, while the most fuel-efficient one – the 2004 Toyota Prius (a compact hybrid) – has an MPG of 55.59.
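The combined-MPG formula quoted above is a weighted harmonic mean, as the short sketch below shows. The city and highway figures passed in are invented examples, not entries from the EPA database:

def combined_mpg(city_mpg: float, highway_mpg: float) -> float:
    """EPA weighting: 55 percent city driving, 45 percent highway driving."""
    # Equivalent to the reciprocal of the weighted average gallons-per-mile.
    return 1.0 / (0.55 / city_mpg + 0.45 / highway_mpg)

print(round(combined_mpg(20.0, 28.0), 2))   # ~22.95 for a hypothetical vehicle
print(round(combined_mpg(48.0, 45.0), 2))   # ~46.60; hybrids can score higher in the city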
With vehicle stock data and MPG data, we can recover the fleet fuel economy distribution in each MSA in each year.10 The left panel of Figure 2 depicts the kernel densities of fuel economy distributions in Houston and San Francisco in 2005. The difference is pronounced, with Houston having a less fuel-efficient fleet. This is consistent with, among other things, the fact that San Francisco residents face higher gasoline prices, have less parking spaces for large autos, and tend to support more environmental causes. The right panel plots the kernel densities of fuel economy distributions in Houston in 1997 and 2005. The vehicle fleet in 1997 was more fuel-efficient than that in 2005 despite much lower gasoline prices in 1997. This phenomenon is largely driven by the increased market share of SUVs and heavier, more powerful and less fuel-efficient vehicles in recent years. For example, the market share of SUVs increased from 16 percent to over 27 percent from 1997 to 2005 despite high gasoline prices from 2001. The long-run trend of rising SUV demand and declining fleet fuel economy at the national level only started to reverse from 2006, mostly due to record high gasoline prices. To examine whether the twenty MSAs under study are reasonably representative of the entire country, we compare the average MSA demographics and vehicle fleet characteristics with national data. As shown in Table 1, there is significant heterogeneity across the twenty MSAs in both demographics and vehicle fleet attributes. Nevertheless, the means of these variables for the twenty MSAs are very close to their national counterparts. In Section IV(B), we examine how variation in demographics and gasoline prices affect the transferability of our MSA-level results to the entire nation.
8 These MPGs are adjusted to reflect road conditions and are roughly 15 percent lower than EPA test measures. EPA test measures are obtained under ideal driving conditions and are used for the purpose of compliance with CAFE standards.
9 Alternatively, the arithmetic mean can be used on Gallon per Mile (GPM, equals 1/MPG) to capture the gallon used per mile by a vehicle traveling on both highway and local roads: GPM = 0.55 city GPM + 0.45 highway GPM. The arithmetic mean directly applied to MPG, however, does not provide the correct measure of vehicle fuel efficiency.
10 Although we only observe segment level stock data from 2001 to 2005, we can impute stock data at the model level during this period for vehicles introduced before 2001 based on the vehicle scrappage model estimated in Section 3. Along with these imputed model level stock data, the third component, which tells us the stock data for vehicles introduced after 2000, completes the vehicle stock data at the model level for years from 2001 to 2005.
II. FUEL ECONOMY OF NEW VEHICLES
Our empirical strategy is to: (1) estimate the effect of gasoline prices on both the inflow (new vehicle purchase) and the outflow (vehicle scrappage) of the vehicle fleet, and (2) simulate the would-be fleet of both used and new vehicles under several counterfactual gasoline tax alternatives. In this section, we study how gasoline prices affect the fleet fuel economy of new vehicles. We examine the effect of gasoline prices on vehicle scrappage in the next section. We separately investigate new vehicle purchase and vehicle scrappage decisions.
To preserve robustness, our approach allows each choice margin to be driven by different factors (e.g., credit availability, macroeconomic conditions) through different empirical models. Although these two decisions may very well be inter-related, modeling both the new vehicle market and used vehicle market simultaneously presents a significant empirical challenge as we discussed in the introduction. For example, Bento et al. (2008) have to aggregate different vehicle models into fairly large categories for the sake of computational feasibility in an effort to model new vehicle purchase and used vehicle scrappage simultaneously.
However, in doing so, potentially useful information about within-category substitution has to be discarded. Although we perform separate analyses of new vehicle purchase and used vehicle scrappage decisions, we attempt to control for possible interactions between the choice margins in our tax policy simulations.
A. Empirical Model
Because of behavioral inertia arising from adjustment costs or imperfect information, we specify a partial adjustment process that allows the dependent variable (i.e., new purchases of a particular vehicle type) to move gradually in response to a policy change to the new target value. Specifically, the one-year lagged dependent variable is included amongst the explanatory variables. This type of model can be carried out straightforwardly in a panel data setting and has been employed previously in the study of the effect of gasoline prices on travel and fleet fuel economy.
For example, both Haughton and Sarkar (1996) and Small and Van Dender (2007) apply this type of model to the panel data of average fleet MPG at the U.S. state level, and
Hughes et al. (2007) employ it in a model of U.S. gasoline consumption. Compared with these previous studies, our data set is much richer in that we have registration data at the vehicle model level that provide valuable information about how changes in the gasoline price affect substitution across vehicle model. However, the empirical model based on the partial adjustment process cannot be applied directly to the vehicle model-level data because vehicle models change over time. On the other hand, aggregating data across models at the MSA level would discard useful information on vehicle substitution.
With this in mind, we generate an aggregated data panel in the following way. In each of the four vehicle categories (cars, vans, SUVs, and pickup trucks), we pool all the vehicles in the segment in each of the twenty MSAs from 1999 to 2005 and find the q-quantiles of the MPG distribution. Denote c as an MPG-segment cell that defines the range of MPGs corresponding to the particular quantile and denote t as year. Further denote Nc,t as the total number of vehicles in the MPG-segment cell c at year t, with MSA index m suppressed.
With this panel data, we specify the following model:

ln(N_c,t) = θ_0 + θ_1 ln(N_c,t-1) + (θ_2 + θ_3 / MPG_c,t) GasP_t + Other Controls_c,t + ξ_c,t,

where the coefficient on the price of gasoline (GasP_t) is allowed to vary with the inverse of the cell's MPG. Noting that GasP_t / MPG_c,t equals dollars-per-mile (DPM_c,t), we can re-write this expression as follows:

ln(N_c,t) = θ_0 + θ_1 ln(N_c,t-1) + θ_2 GasP_t + θ_3 DPM_c,t + Other Controls_c,t + ξ_c,t.   (1)

Since the lagged dependent variable is one of the explanatory variables, serial correlation in the error term would render this variable endogenous.
We allow the error term to be first-order serially correlated with correlation parameter γ:

ξ_c,t = γ ξ_c,t-1 + ν_c,t,

where ν_c,t is assumed to be independent across t. With this serial correlation structure, the model can be transformed into:

ln(N_c,t) = γ ln(N_c,t-1) + θ (Z_c,t - γ Z_c,t-1) + ν_c,t,   (2)

where Z_c,t is a vector of all the explanatory variables in equation (1), including the lagged dependent variable, and θ represents the vector of all coefficients. θ and γ can be estimated simultaneously in this transformed model using the least squares method, where we take into account both heteroskedasticity and cross-cell correlation of ν_c,t in estimating the standard errors.
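As a rough numerical sketch of the estimation idea described above (simultaneous least-squares estimation of θ and γ from the quasi-differenced equation), the code below fits the transformed model to simulated data. The regressor matrix Z is a stand-in for the explanatory variables, the true parameter values are invented, and the sketch ignores the panel structure and the lagged-dependent-variable issues handled in the paper:

import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# Simulated series for a single MPG-segment cell (illustrative only).
T = 200
Z = np.column_stack([np.ones(T), rng.normal(size=(T, 2))])  # constant plus two stand-in regressors
theta_true = np.array([1.0, -0.5, 0.3])
gamma_true = 0.4

# Generate y_t = gamma*y_{t-1} + (Z_t - gamma*Z_{t-1})'theta + nu_t, as in equation (2).
y = np.zeros(T)
y[0] = Z[0] @ theta_true
for t in range(1, T):
    y[t] = gamma_true * y[t - 1] + (Z[t] - gamma_true * Z[t - 1]) @ theta_true + 0.05 * rng.normal()

def residuals(params):
    theta, gamma = params[:-1], params[-1]
    fitted = gamma * y[:-1] + (Z[1:] - gamma * Z[:-1]) @ theta
    return y[1:] - fitted

fit = least_squares(residuals, x0=np.zeros(Z.shape[1] + 1))
print(fit.x)  # recovers (theta, gamma) up to sampling noise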
Another specification concern is how current and past gasoline prices affect consumer decisions. An implicit assumption in the literature on new vehicle demand (Berry, Levinsohn, and Pakes, 1995; Goldberg, 1995; Bento et al., 2008) is that gasoline prices follow a random walk, which implies that only current gasoline prices matter in purchase decisions. This assumption can have important implications on long-run policy analysis. For example, should past gasoline prices matter (as would be the case if gasoline prices exhibited mean-reversion), studies with the random walk assumption would under-estimate the long-run effect of a permanent gasoline tax increase. | https://www.readkong.com/page/how-do-gasoline-prices-affect-fleet-fuel-economy-2843271 |
Greetings scouts and parents of Troop 171!
My name is Jacob Goldstein and I will be holding my Eagle Scout Project workday this upcoming Saturday, March 23rd, at Mount Mourne IB Middle School in Mooresville. During this workday, we will be assembling 3 park benches and landscaping a flower bed to put the benches in. I am planning on completing my project this weekend because of upcoming events scheduled at Mount Mourne in the following 2 weeks. I believe we can finish it in one day, but if we don't, I will host a second workday on Sunday the 24th for however long it takes us to finish.
With that being said, I would greatly appreciate any help from adults and scouts in completing my project. I will need about 2-3 adults, 3 senior scouts, and around 6-8 younger scouts to help me with the project. Since I am planning on working all day Saturday, I understand if you will not be able to come for all 8 hours. However, coming for the first few hours or in the middle of the day will really help my project progress. Water, snacks, and pizza will be provided throughout the day. All you need to bring is yourselves and some work gloves.
Also, as a quick note, if there were to be a weather issue (not likely), I will still hold my project workday because I have keys to the school and we can assemble the benches in the gym.
What: Assembling 3 park benches and landscaping a flower bed to hold them along with planting flowers.
When: Saturday, March 23rd, from 9:00 am to 5:00 pm and Sunday, March 24th, only if necessary. Snacks, pizza, and water will be provided each day.
Where: Mount Mourne IB Middle School, 1431 Mecklenburg Highway, Mooresville, NC. We will be working close to the front office (specifically to the right) so you can park in the main teacher/visitor parking lot.
Thank you for all of the support and hope to see you all there on Saturday! | https://www.eventbrite.com/e/jacob-goldstein-eagle-project-workday-tickets-59071554610?aff=ebdssbdestsearch |
There is a clear change compared to a decade ago and teachers say their gifted pupils seem to lack the motivation to reach the highest levels of excellence.
Experts at the Frederick Chopin National School of Music in Warsaw blame the problem on 10 years of social and economic change, a mixture of the impact of the lifting of martial law, the fall of communism, the triumph and decline of Lech Walesa, and the effect of the rapid privatisation of state assets.
This flux in economic, social and ideological values has had a bad effect on the stability of family and educational life despite the fact that economic growth in Poland ranks among the highest in Europe (forecast at 10 per cent this year).
Bronislawa Kawalla, a teacher at the National School, has been witnessing the decrease in the concentration needed for excellence. Ms Kawalla, renowned for her interpretations of Bach's piano pieces, has given masterclasses at Britain's Royal College of Music, and is also a lecturer at the affiliated Frederick Chopin Academy of Music.
"I think that the young people are (just as) talented, but we have a lot of problems with their mental stamina. These days, our youth are definitely less resilient than they used to be. They find it very difficult to make the effort to drive themselves competitively, and this requires a lot of mental stamina. This somewhat cancels out their technical abilities."
Ms Kawalla does not think it is just music students who are affected. The wider social, economic, and ideological unrest is also affecting concentration among pupils, at primary and secondary school levels as well as on university courses in general. She has seen this in her own pupils of 10 years ago who are now at university.
The National School flourished during the communist period when it was generously funded. It remains entirely state-run, offering basically free tuition. But it faces the same budget cuts as every other sector.
The school remains dedicated to an age-integrated system of education, taking pupils from seven (exceptionally six) to 19. Most of them then go on to the affiliated Chopin academy.
Like Ms Kawalla, many staff double as lecturers at the academy. This means a very close one-to-one relationship can have lasted for up to 16 years, when students leave the academy at 24.
The school's teachers still use East European as opposed to Western methods. Though pupils are grouped by age, their progress is not held back, thanks to the almost one-to-one tuition. Technique is developed according to the needs of each pupil, rather than on a particular overall method (in contrast to approaches such as the Suzuki method for violinists).
It remains to be seen how long children caught up in the rarefied world of young musical talent can cope with the financial and other pressures. What does seem certain is that the debate about how far one judges the health of a society as a whole against the health of an artistic elite will only intensify here. | https://www.tes.com/news/discord-down-music-academy
Studying abroad can be a very valuable part of any student’s education. Brown offers study abroad programs in many countries in Africa and the Middle East for students to enrich the MES coursework.
Middle East Studies faculty and students have access to a wide array of library services and resources. These include:
1) Library resource/research skill instruction sessions during classes
2) Research consultations for faculty and students
3) Specialized workshops in citation managers, GIS, statistical programs, data plans, and many more topics.
4) Acquisitions of new materials (journals, books, databases, etc.)
5) Course specific library resource pages
For more info or any questions, please contact Maan Alsahoui, Joukowsky Family Middle East Studies Librarian, at [email protected] or visit the MES Resource Guide
Access the historical digital archives of the Palestine/Jerusalem Post through ProQuest
Access the new database of materials for the Brown Library
Twentieth Century Religious Thought: Volume II, Islam
Coverage: late 19th century – present. Twentieth Century Religious Thought: Islam is a research database focusing on modern Islamic theology and tradition. The online text resource details Islam's evolution from the late 19th century by examining printed works, rare documents, and important articles by Muslim writers, both non-Western and Western voices. Content represents a wide range of Islamic thought, bringing together such key thinkers as Muhammad Abduh, Fethullah Gülen, Sayyid Ahmad Khan, Bediüzzaman Said Nursî, Abdolkarim Soroush, and Rifā‘ah Rāf‘i al-Ṭahṭāwī.
Index Islamicus
Index Islamicus is an international bibliography of publications in European languages covering all aspects of Islam and the Muslim world, including history, beliefs, societies, cultures, languages, and literature. The database includes material published by Western orientalists, social scientists and Muslims.
The Office of the Registrar is dedicated to providing service of the highest caliber to all of our constituents – current and former students, faculty, administrative staff, and the general public.
An academic concentration is the focal point of your undergraduate experience. It is an in-depth study centering on one or more disciplines, a problem or theme, or a broad question.
| https://watson.brown.edu/cmes/academics/undergraduate-concentration/resources
Nonprofit organizations have worked to diversify their staff and increase representation of people across race, class, gender, sexual orientation, ability, and religious and other culturally diverse backgrounds.
But how do organizations create an environment where people feel they can actively participate, where their contributions are respected and valued? This workshop is focused on racial diversity and taking the next step toward moving beyond the individualism of diversity to the systemic and cultural change needed to build inclusive organizations.
Learning Objectives
Discussion-based learning
Target Audience
Staff at all levels of an organization can benefit from this training, which examines the basic concepts and distinctions between diversity, inclusion and equity. Organizations that are at the early stages of considering how diversity, inclusion, and equity are integrated into their work or beginning an internal process will especially benefit. | https://members.bostonchamber.com/events/details/moving-beyond-diversity-to-inclusion-3913 |
The Second Law of Thermodynamics is Invalid in a Repetitive Cyclic Wave Universe of Equally Interchanging Effect.
Heat Radiates into Cold
Cold Regenerates into Heat
The Heartbeat of the Universe is Continuous.
10th June 2007 update:
The Walter Russell chart could be titled "The Other Side of the Coin" in relation to classic thermodynamics. It illustrates the continuous, cyclic and simultaneous running up and running down of all polarization/energy/phenomena in the universe.
Note that as division multiplies, multiplication divides (something and/or everything), and as discharge charges, charging discharges.
Compression is shown with both male and female polarity colors, underscoring the fact that both polarities do charge. The discharge is colored red, emphasizing that this process of heat radiation ultimately and simultaneously charges. The charging is colored blue which suggests that this process of cooling generation ultimately and simultaneously discharges.
The simultaneous occurrence of both complementary/antagonistic/opposite forces and directions is literally and figuratively illustrated in this chart that summarizes the essence of the entire two-way expression of the gravity cycle. This reveals that Walter Russell's concept of gravity as a whole cycle is always simultaneously an imploding, sucking, pulling, and an exploding, blowing, pushing, two-way force.
(from: In the Wave lies the Secret of Creation - courtesy of University of Science and Philosophy.)
"Periodicity is an absolute characteristic of all effects of motion" (Walter Russell)
"Nature is not served by rigid laws, but by rhythmical, reciprocal processes. Nature uses none of the preconditions of the chemist or the physicist for the purposes of evolution. Nature excludes all fire, on principle, for purposes of growth; therefore all contemporary machines are unnatural and constructed according to false premises. Nature avails herself of the biodynamic form of motion [self-organization?] through which the biological prerequisite for the emergence of life is provided. Its purpose is to ur-procreate 'higher' conditions of matter out of the originally inferior raw materials, which afford the evolutionally older, or the numerically greater rising generation, the possibility of a constant capacity to evolve, for without any growing and increasing reserves of energy there would be no evolution or development. This results first and foremost in the collapse of the so-called Law of the Conservation of Energy, and in further consequence the Law of Gravity, and all other dogmatic lose any rational or practical basis." - Viktor Schauberger
from frankgermano.com
Indeed at the Stuttgart University of Technology, West Germany in 1952 these theories were tested under strict scientific and laboratory conditions by Professor Dr. Ing. Franz Pöpel, a hydraulics specialist. These tests showed that, when water is allowed to flow in its naturally ordained manner, it actually generates certain energies, ultimately achieving a condition that could be termed 'negative friction'. Checked and double-checked, this well-documented, but largely unpublicized, pioneering discovery not only vindicated Viktor Schauberger's theories. It also over-turned the hitherto scientifically sacred 'Second Law of Thermodynamics' in which, without further or continuous input of energy, all (closed) systems must degenerate into a condition of total chaos or entropy. These experiments proved that this law, whilst it applies to all mechanical systems, does not apply wholly to living organisms.
Schappeller had something to say about the Second Law of Thermodynamics. He said there was another and unknown thermodynamic cycle which runs opposite the Second Law. To name this idea we will call it "Reverse Thermodynamics". It is the reverse of the Second Law of Thermodynamics in that it leads to a decrease in entropy. Not only is there an increase in order by there is an increase in cold! Schappeller....built his spherical device primarily to demonstrate the principles behind this Reverse thermodynamics. It was not designed as a practical machine.54
Both Schappeller and Schauberger were implying a physics based, not on inanimate lifeless processes, the physics we have come to know, but on animate, creative processes but Schappeller's views on thermodynamics were truly revolutionary, and some decades ahead of their time, until Ilya Prigogine won the Nobel prize in chemistry precisely for his pioneering work on self-organizing principles evident in systems driven to a high state of non-equilibrium in 1977.55 The new paradigm, a breathtakingly simple, and yet far-reaching one, was simply that equilibrium had been replaced with non-equilibrium in physics, especially for systems analysis...
http://educate-yourself.org/tjc/visualray21may0.shtml
The Second "Law"
Orgone energy defies many of the accepted 'laws' of standard physics, such as the Second Law of Thermodynamics, which, roughly translated, says that energy always diffuses from a higher state towards a lower one (higher concentration towards lower). Once you understand the nature of orgone energy, however, you soon realize that the Second Law of Thermodynamics isn't really a 'law' at all, since orgone defiantly demonstrates that its energy can move in the opposite direction (lower concentration towards higher). Before we attach the name 'law', to what is really an assumption based on mechanical observations, it should be immutable and applicable 100% of the time, but if it fails in even ONE instance, it's not much of a 'law'.
Orgone busts the rivets right off this bulwark of establishment science with the same ease that it busts clouds. The reader should be aware, however, that such thoughts are tantamount to high heresy in the minds of conventional physicists and academicians (our article on the Joe Energy Cell, an orgone free energy device, also makes a monkey out of Second Law of Thermodynamics. You might want to take a look at it: http://educate-yourself.org/fe/fejoewatercell.shtml )
http://www.wilhelmreichmuseum.org/books.html#ether
In Cosmic Superimposition, (Wilhelm) Reich steps out of our current framework of mechanistic-mystical thinking and comes to a radically different understanding of how man is rooted in nature. He shows clearly how the superimposition of two orgone energy streams - demonstrable in the human genital embrace and in the formation of spiral galaxies - is the common functioning principle in all of nature.
"Ilya Prigogine once stated that all natural movement arises out of a state of imbalance, of non-equilibrium. Non-equilibrium is a pre-requisite for movement and evolution in all its forms, and a state of equilibrium is therefore impossible in Nature. Yet we find that certain symmetries do occur, nevertheless." | https://merlib.org/node/5134 |
We, at ISGOLD Gold Refinery Istanbul, aim to combine our past knowledge and experience with present-day knowledge and technology, with the great contribution of our expert staff and engineering team.
To ensure our company policy fully meets international standards, we shall:
- Refine the precious metals properly and effectively
- Produce high quality gold bars, silver, platinum and palladium
- Manufacture gold bullion bars, taking quality and delivery time seriously into account
- Adopt our Quality Policy and Environmental Policy as main principles and conduct all operations with an eye to these policies
- Provide assurance in every sense by taking customer satisfaction and relations as a prime concern
Within the scope of our company's policies and strategies, we strongly believe that, in order to reach the main targets that will provide the best sustainable performance for our company, all activities and operations related to those targets should be well understood and systematically conducted, and all ideas contributed by our partners should be incorporated into our system in a trustworthy manner.
We, at ISGOLD Gold Refinery Istanbul, are committed to achieve ever-increasing levels of customer satisfaction through pioneer and continual improvements in the quality of our products & services.
To enhance customer satisfaction by applying our Quality Policy, we shall:
- Improve continuously the quality of our products and the process during the production of our products to gain customer satisfaction and trust
- Upgrade required methods or applications to minimise the possibility of any fault that may occur during the process of manufacturing our products
- Encourage our employees so that they are ready to take on responsibility and enjoy their work, to consider improvements to the operations they are responsible for, and to take the initiative in decision-making with loyalty to the company
- Properly implement the required training and development procedures for our employees
- Commit continual improvement of our service quality and become a leading company of gold business industry
- Comply with the applicable laws and pay careful attention to the rules of morality and human rights during our business operations, raise awareness, and minimise the possibility of environmental pollution arising from our operations
- Make no compromises on the requirements of our system, in order to continuously improve the efficiency and effectiveness of our quality system
"Quality improvement is an everlasting journey" is one of the prime principles of our company and reflects the Quality Policy that we aim to adopt in all departments of our company.
We, at ISGOLD Gold Refinery Istanbul, manufacturer of standard minted gold bars, gold refinery, and authorised assayer, are committed to preserving the environment, and we shall endeavour to adhere to this policy in all our business operations.
To be able to achieve our environmental policy, we shall:
- Reduce the sludge content to minimum level, reuse and/or recycle metals within the sludge if possible; if not, then dispose of sludge by using the most convenient method
- Conserve natural resources, energy and raw materials, and use them efficiently
- Give priority to global warming concerns and encourage our employees to use energy as efficiently as possible in every operation
- Inform this policy to all employees and increase awareness of environmental responsibilities among employees
- Reduce the use of packaging, and therefore waste, and make packaging recyclable as far as possible
- Enhance the preservation methods for our natural resources such as energy, water etc.
- Strive to make use of energy and natural resources efficiently
We, at ISGOLD Gold Refinery Istanbul, are committed to the health, safety and welfare at work of our employees, to complying with applicable health and safety legal requirements, and to the continual improvement of our health and safety control arrangements and performance.
To achieve Occupational Health & Safety Policy, We shall;
- Create and improve systems that minimise the risk of accident or casualty in our facility and its neighbourhood
- Plan and maintain a safe and healthy working environment, and provide the necessary information, instruction, training and supervision to protect safety and health at work
- Enhance the awareness, skill and competence of our employees while steadily improving our health and safety systems
- Fulfil our legal obligations and comply with all relevant environmental and occupational health and safety (ISG) legislation in force, and with the requirements of any affiliated organisation
- Take essential precautions to protect the health of our employees and of anyone who may be affected by our operations, and to control at the earliest stage any possible incident, casualty or danger that may threaten our own or any other person's property
Isgold Altin Rafinerisi A.S. has carried out precious metals refining operations and gold bullion production in Turkey since 2011.
We have three sites located in Istanbul and employ 20 personnel.
Isgold Altin Rafinerisi A.S. is a Member of the Responsible Jewellery Council (RJC).
The RJC is a standards-setting organisation that has been established to advance responsible ethical, human rights, social and environmental practices throughout the diamond, gold and platinum group metals jewellery supply chain.
The RJC has developed a benchmark standard for the jewellery supply chain and credible mechanisms for verifying responsible business practices through third party auditing.
As an RJC Member, we commit to operating our business in accordance with the RJC Code of Practices. We commit to integrating ethical, human rights, social and environmental considerations into our day-to-day operations, business planning activities and decision making processes.
We commit to understanding, monitoring and managing our Social Responsibility Policy across our business activities.
Forced Labour
Not employing any forced or involuntary labour, including labour bonded by contract.
Child Labour
Acting according to the methods and principles governing the employment of children and young workers, and not employing anyone under the age of 18.
Preventing Abuse
Not applying any corporal punishment and not allowing any verbal, physical or psychological harassment or coercion.
Salaries and Payments
Paying the minimum salary determined according to the department or otherwise by the laws.
Preventing Discrimination
Isgold believes that employment should be based on ability and not on people's beliefs or personal characteristics. Therefore, we do not consider race, gender, religion or disability, and we do not discriminate when creating employment.
Work Health and Safety
Adopting a proactive approach based on risk analysis, ensuring the participation of all employees in occupational health and safety practices, and adopting a work system that puts the general health of employees first.
Environment
Preventing environmental pollution and reducing pollution at its source, based on an evaluation of environmental aspects and impacts.
Stakeholder Relations
Evaluating the social compliance activities of our supplier companies, following up the evaluation results with action plans, and gradually improving their social compliance levels.
Research & Development
Research and development (R&D) activities are essential to our company and always a priority. They are carried out successfully in close connection with distinguished professors in the country, whose invaluable guidance and advice we rely on, and with the company's well-equipped engineering team.
The development of new manufacturing systems and brand-new products is carried out in our R&D facility. Becoming a leading company in the gold business, and our medium- to long-term competitiveness, depend predominantly on closely following and adopting cutting-edge technology in the gold industry, on the capability to integrate it into our own manufacturing systems, and on creating our own technology where required. Our company has adopted these principles in the field of research and development and aims to raise well-informed and well-experienced employees in the company. | http://isgold.com.tr/?page_id=942
United States District Court, M.D. Pennsylvania
July 3, 2019
ELBOW ENERGY, LLC, Plaintiff, v. EQUINOR USA ONSHORE PROPERTIES INC., Defendant.
ORDER
MATTHEW W. BRANN, UNITED STATES DISTRICT JUDGE
Defendant Equinor USA Onshore Properties Inc. ("Equinor") moves to dismiss the complaint filed by Plaintiff Elbow Energy, LLC ("Elbow"). For the following reasons, that motion will be granted.
Background
The complaint alleges that Equinor and Elbow are parties to an oil and gas lease under which Equinor was obligated to pay certain royalties to Elbow. Elbow claims that Equinor breached the terms of the lease by paying royalties in insufficient sums. Elbow filed a two-count complaint in the Court of Common Pleas of Lycoming County, Pennsylvania. Count I of Elbow's complaint seeks damages for breach of contract, and Count II seeks an accounting of certain records as well as damages for breach of contract.
Equinor removed the action to this Court and presently moves to dismiss Elbow's complaint under Federal Rule of Civil Procedure 12(b)(6). Equinor argues that Elbow has not complied with an express lease provision proscribing the commencement of a judicial action for damages until one year after Equinor was notified and given an opportunity to cure its alleged breach.
Discussion
When considering a motion to dismiss for failure to state a claim upon which relief may be granted, a court assumes the truth of all factual allegations in the plaintiff's complaint and draws all inferences in favor of that party; the court does not, however, assume the truth of any of the complaint's legal conclusions. If a complaint's factual allegations, so treated, state a claim that is plausible-i.e., if they allow the court to infer the defendant's liability-the motion is denied; if they fail to do so, the motion is granted.
Equinor argues that paragraph six of the lease requires Elbow to notify Equinor of its purported breach and provide Equinor with an opportunity to cure prior to commencing a civil action for damages. Paragraph six's notice-and-cure provision provides as follows:
This Lease shall not be terminated, in whole or in part, nor Lessee held liable for any failure to perform unless such obligation, covenant or condition remains unsatisfied and unperformed for a period of one year following the express and specific written demand upon Lessee by Lessor for such satisfaction and performance. Neither the service of said notice nor the doing of any acts by Lessee intended to satisfy any of the alleged obligations shall be deemed an admission or presumption that Lessee has failed to perform all its obligations hereunder. No judicial action may be commenced for forfeiture of this lease or for damages until after said period. Lessee shall be given a reasonable opportunity after judicial ascertainment to prevent forfeiture by discharging its expressed or implied obligation as established by the court.
Here, Elbow sent a letter to Equinor on July 31, 2018 demanding payment of additional royalties. Inferring in Elbow's favor that the letter satisfied the lease's notice requirements, Equinor had until July 31, 2019 to cure its purported deficient performance. But Elbow commenced this action on March 28, 2019. Therefore, Elbow's present action is premature and violates the express terms of the notice-and-cure provision.
Elbow's three arguments asking this Court to look the other way are unavailing. First, Elbow argues that "no adjudication actually holding Equinor liable for any breach of its duties under the Lease will occur until after July 31, 2019." But the notice-and-cure provision expressly prohibits judicial actions from being commenced until one year after Elbow notified Equinor that Equinor was in breach; the provision does not predicate a judicial action's propriety on the date at which time Equinor may be held liable.
Second, Elbow argues that the notice-and-cure provision does not apply to its claim for accounting. But Elbow is the master of its complaint and Count II therein is styled as a claim for "accounting and damages." The lease makes clear that any judicial action for damages invokes the notice-and-cure provision's one year bar.
Third, Elbow argues that its alleged non-compliance with the one year cure period is an immaterial failure. But Pennsylvania law requires a lease "to be construed in accordance with the terms of the agreement as manifestly expressed," and I decline Elbow's invitation to conclude otherwise when the notice-and-cure provision's plain meaning prohibits the commencement of judicial action until the one year cure period expires.
In sum, by commencing the present action before July 31, 2019, Elbow has violated the lease, and Elbow's claims must be dismissed. That dismissal, however, is without prejudice to Elbow to reassert its claims ...
| http://pa.findacase.com/research/wfrmDocViewer.aspx/xq/fac.20190703_0001790.MPA.htm/qx
Inferences Based On Sample Data Marketing Research (Essay Sample)
Submit a paper identifying and commenting on the ethical issues involved in using sample data to make inferences about populations. Use Biblical support where appropriate. At least two COMPLETE pages.
Ethical Issues Involved in Using Sample Data to Make Inferences about Population
Student's Name
Institutional Affiliation
During the research process, researchers are not able to carry out direct observations of each individual in the population of study. They therefore collect data from a section of the total population, called a sample, that represents the entire population. This implies that the conclusions the researcher draws from the sample are applicable to the total population. Various ethical issues arise in using a sample of the population. The ethical issues may revolve around the types of questions asked of the population, the type of information studied, and the methods of data collection. Crucial ethical matters include the voluntary participation of the interviewees and informed consent, the need for anonymity and confidentiality, and accuracy in analysis and reporting (Lawrence Neuman, 2009).
The selected sample may be too small and therefore fail to provide an accurate representation of the population. The researcher will proceed to make inferences about the entire population using the selected sample and fail to capture accurate characteristics of the population. This can prove to be a source of challenge if the research was necessary to provide further assistance to the population, such as through the introduction of medical services, the introduction of food crops, and other measures. The research can seem to be successful, but fail to satisfy the entire population. Such ethical issues can be solved by using a variety of sampling techniques inst
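A tiny simulation illustrates the sample-size point made above; the population proportion and sample sizes are made-up values used purely for demonstration:

import random

random.seed(42)

# Hypothetical population of 100,000 people, 30% of whom hold some attribute of interest.
population = [1] * 30_000 + [0] * 70_000

def average_error(sample_size: int, trials: int = 1000) -> float:
    """Average absolute error of the sample proportion across repeated random samples."""
    errors = []
    for _ in range(trials):
        sample = random.sample(population, sample_size)
        errors.append(abs(sum(sample) / sample_size - 0.30))
    return sum(errors) / trials

for n in (10, 100, 1000):
    print(n, round(average_error(n), 4))  # the error shrinks as the sample grows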
| https://essayzoo.org/essay/apa/business-and-marketing/inferences-based-sample-data.php
Purification and characterization of a chloride ion-dependent α-glucosidase from the midgut gland of Japanese scallop (Patinopecten yessoensis).
Marine glycoside hydrolases hold enormous potential due to their habitat-related characteristics such as salt tolerance, barophilicity, and cold tolerance. We purified an α-glucosidase (PYG) from the midgut gland of the Japanese scallop (Patinopecten yessoensis) and found that this enzyme has unique characteristics. The use of acarbose affinity chromatography during the purification was particularly effective, increasing the specific activity 570-fold. PYG is an interesting chloride ion-dependent enzyme. Chloride ion causes distinctive changes in its enzymatic properties, increasing its hydrolysis rate, changing the pH profile of its enzyme activity, shifting the range of its pH stability to the alkaline region, and raising its optimal temperature from 37 to 55 °C. Furthermore, chloride ion altered PYG's substrate specificity. PYG exhibited the highest Vmax/Km value toward maltooctaose in the absence of chloride ion and toward maltotriose in the presence of chloride ion.
| |
Consumer Price Index CPI in Canada increased to 138.90 points in February from 138.20 points in January of 2021.
source:
Statistics Canada
Consumer Price Index CPI in Canada averaged 63.95 points from 1950 until 2021, reaching an all time high of 138.90 points in February of 2021 and a record low of 12.10 points in January of 1950. This page provides the latest reported value for - Canada Consumer Price Index (CPI) - plus previous releases, historical high and low, short-term forecast and long-term prediction, economic calendar, survey consensus and news. Canada Consumer Price Index (CPI) - values, historical data and charts - was last updated in April of 2021.
Consumer Price Index CPI in Canada is expected to be 139.26 points by the end of this quarter, according to Trading Economics global macro models and analysts' expectations. Looking forward, we estimate Consumer Price Index CPI in Canada to stand at 141.54 in 12 months time. In the long-term, the Canada Consumer Price Index (CPI) is projected to trend around 142.67 points in 2022 and 145.52 points in 2023, according to our econometric models.
Trading Economics members can view, download and compare data from nearly 200 countries, including more than 20 million economic indicators, exchange rates, government bond yields, stock indexes and commodity prices.
The Trading Economics Application Programming Interface (API) provides direct access to our data. It allows API clients to download millions of rows of historical data, to query our real-time economic calendar, subscribe to updates and receive quotes for currencies, commodities, stocks and bonds.
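The snippet below is only a hypothetical sketch of how such a client might pull historical index values over HTTP with Python's requests library; the base URL, query parameters and response shape are assumptions for illustration, not the documented Trading Economics API, so the provider's own documentation should be consulted for the real endpoints and authentication scheme.

import requests

# Hypothetical endpoint and key -- placeholders, not the provider's real API.
BASE_URL = "https://api.example-economics-data.com/historical"
API_KEY = "your-api-key-here"

def fetch_series(country="canada", indicator="consumer-price-index-cpi"):
    # Request historical values for one country/indicator pair (assumed JSON response).
    response = requests.get(
        f"{BASE_URL}/{country}/{indicator}",
        params={"api_key": API_KEY, "format": "json"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()  # assumed to be a list of {"date": ..., "value": ...} records

if __name__ == "__main__":
    for row in fetch_series()[-3:]:
        print(row["date"], row["value"])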
Actual: 138.90
Previous: 138.20
Highest: 138.90
Lowest: 12.10
Dates: 1950 - 2021
Unit: points
Frequency: Monthly
Reference: 2002=100, NSA
Canada Prices (Last / Previous / Highest / Lowest / Unit):
Inflation Rate: 1.10 / 1.00 / 21.60 / -17.80 / percent
Inflation Rate MoM: 0.50 / 0.60 / 2.60 / -1.30 / percent
Consumer Price Index CPI: 138.90 / 138.20 / 138.90 / 12.10 / points
Core Consumer Prices: 137.10 / 136.70 / 137.10 / 61.70 / points
Core Inflation Rate: 1.20 / 1.60 / 5.40 / 0.00 / percent
GDP Deflator: 112.80 / 111.60 / 112.80 / 11.80 / points
Producer Prices: 108.80 / 107.10 / 108.80 / 13.30 / points
Producer Prices Change: 9.40 / 7.10 / 21.46 / -7.14 / percent
Export Prices: 116.50 / 113.50 / 116.50 / 62.60 / points
Import Prices: 122.50 / 120.90 / 8964.00 / 75.90 / points
Food Inflation: 1.80 / 1.00 / 20.18 / -7.14 / percent
Wholesale Prices: 112.80 / 105.80 / 130.70 / 38.50 / points
CPI Housing Utilities: 148.80 / 148.40 / 148.80 / 39.10 / points
CPI Transportation: 147.10 / 144.90 / 147.10 / 10.90 / points
Canada Consumer Price Index (CPI)
In Canada, the Consumer Price Index or CPI measures changes in the prices paid by consumers for a basket of goods and services.
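As a worked example of how the month-over-month change is derived from the index values reported above (138.20 points in January 2021 and 138.90 points in February 2021), the short Python calculation below reproduces the figure: it comes out at roughly 0.51%, which matches the 0.50% month-over-month rate shown in the table after rounding.

# Month-over-month change from two consecutive CPI readings (index base 2002 = 100).
cpi_january = 138.20
cpi_february = 138.90

mom_change = (cpi_february - cpi_january) / cpi_january * 100
print(f"Month-over-month change: {mom_change:.2f}%")  # ~0.51%, reported as 0.5%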
| https://tradingeconomics.com/canada/consumer-price-index-cpi
Small companies conduct telephone interviews or surveys to determine interest in new products, or measure the customer satisfaction of existing products. They also determine the needs and wants of customers through phone interviews. There are certain advantages and disadvantages to using telephone interviews in business research, so it is up to individual business owners to determine whether telephone surveys are the most effective way to collect the necessary information.
Advantages and disadvantages of interviews. Structured interviews can also be used to identify respondents whose views you may want to explore in more detail through the use of focused interviews, for example. Interview schedules have a standardized format, which means the same questions are asked of each interviewee in the same order. Unstructured interviews are difficult to repeat if you need to test the reliability of the data. When using a thematic analysis, it is not possible to transform these data quantitatively.
Face-to-face interviews have long been the dominant interview technique in the field of qualitative research. In the last two decades, telephone interviewing became more and more common. Due to the explosive growth of new communication forms, such as computer mediated communication (for example e-mail and chat boxes), other interview techniques can be introduced and used within the field of qualitative research.
Advantages & Disadvantages of Telephone Interviews in Business Research
Both methods are quite useful depending on the type of study. All qualitative research interviews are structured to varying degrees, but structured interviews are the most rigid. The qualitative methodology intends to understand a complex reality and the meaning of actions in a given context.
Take a look at the advantages and disadvantages of the face-to-face data collection method. As with any research project, data collection is incredibly important. However, several aspects come into play in the data collection process. The three most crucial aspects include: the cost of the selected data collection method; the accuracy of data collected; and the efficiency of data collection. Despite the rise in popularity of online and mobile surveys, face-to-face in-person interviews still remain a popular data collection method.
After reading this article you will learn about the advantages and disadvantages of the interview method of conducting social research. It will be remembered that the questionnaire approach is severely limited by the fact that only literate persons can be covered by it. Again, the observational approach is also subject to limitations because many things or facts cannot be observed on the spot. The interviewer who is present on the spot can clear up seemingly inaccurate or irrelevant answers by explaining the questions to the informant. If the informant deliberately falsifies replies, the interviewer is able to effectively check them and use special devices to verify the replies. The interview is a much more flexible approach, allowing for the posing of new questions or check-questions if such a need arises.
Advantages and disadvantages of the interview
There are some limitations of the interview process. It is not free from defects. The disadvantages of the interview are discussed below. Thanks, Leah McManus. This article has really helped me because I was doing my research on observation and the interview method of data collection.
Research is about gathering data so that it can inform meaningful decisions.
He sat on her stomach, his legs spread apart. His tailbone pressed painfully into her lower abdomen through the thin fabric of her skirt. Blood dripped from his nostrils straight onto her, and she was smeared all over with it. She felt nausea rising in her throat. | https://alfabia.org/and-pdf/1382-advantages-and-disadvantages-of-interviews-in-research-pdf-574-880.php
Location: Agia Paraskevi, Athens, Greece
Year: 2021
Type: Urban green space, Multi-purpose space, Urban Regeneration, Community Hub
Award: First Prize
Status: Planning Permission
Size: 30.000 sqm
The project is located at the city of Agia Paraskevi, a densely populated residential suburb of Athens at the foot of mount Imittos and involves the design of the masterplan of the former Spiroudi military camp, which covers an area of 30.000 sqm. The new project maintains part of the park’s history, infusing new uses and creating a sustainable park that acts as a hub for the local community.
The park is designed as a continuity of the mountain, optimizing the much needed green space and enhancing local biodiversity. It is accessible to people of all ages encouraging them to participate into a variety of athletic, artistic and cultural activities while promoting environmental awareness.
The existing park is rearranged inside a large loop between areas of different landscape qualities. The loop consists of two parallel routes: a cyclist and a pedestrian route. On the south side, the pedestrian route transforms into a bridge to highlight the main entrance to the park and frame the most important access to the sports field. The bridge offers a panoramic view to the park.
Visitors’ thermal comfort was a priority, so regulating the microclimate was a key element of our design. For this reason, environmental strategies were applied to provide the necessary shading and protection from the wind to outdoor seating areas and to ensure passive cooling and energy efficiency for buildings during the park’s use. Water features were incorporated into the design in the most crowded places, reducing the park’s temperature through evaporative cooling, while green roofs are applied, turning the spaces into almost zero-carbon ones.
It is an evolved type of contemporary urban park, addressing users of different interests and ages. The skatepark, the outdoor gym and the playgrounds, besides their athletic purpose, act as spaces for communication and socialization. The spaces’ organic forms are integrated into the site, as if the light structures had always stood next to the trees. The skylights, placed next to the sports field, are positioned according to the sun’s path, providing daylight to the underground basketball and archery fields. They also act as climbing walls. Beyond their functional value, the skylights establish a landmark for the area.
A creative hub is formed on the east side of the park with a multi-purpose cinema and workshop areas. With viewers using headphones, four projectors that can operate simultaneously make the most of the space. The old military camp buildings are preserved and transformed into activity workshops.
The space extends to the underground floor, doubling the area to accommodate more visitors and events. Atriums are created between the buildings to enhance the underground floor’s daylighting and natural ventilation.
The park aims to breed a new leisure lifestyle within the existing local community. | https://pierisarchitects.com/projects/public-use/eco-park-spiroudi-agia-paraskevi-2/ |
To benefit from the lessons of school reform, policy makers and researchers will need consistent information about what works from classroom to classroom. By finding ways to compare results from successful individual improvement projects, researchers will have a much more powerful set of data to draw upon to develop new approaches that can be generalized for schools across the country.
The center, funded by a $6 million, five-year grant from the National Science Foundation (NSF), will look specifically at projects that have been shown to be effective in improving reading, mathematics and science. "Often investigators are unaware of how their findings relate to work in other areas, or whether their findings are consistent with results from more comprehensive samples," said Barbara Schneider, senior research scientist at NORC, and a principal investigator for the project.
The center will work with researchers who have received grants through the Interagency Educational Research Initiative (IERI), a collaboration of NSF, the U.S. Department of Education and the National Institute of Child Health and Human Development, and an effort to improve pre-k-12 learning through integration of information technologies. The center will try to identify key features of successful educational programs. Ideally, this knowledge could be applied to other programs so that they, too, can be widely effective.
The new center at NORC will enable researchers on these projects to become a community of scholars able to share their knowledge about how to "scale up" effective programs, through seminars, workshops, and intensive courses for senior education professionals and policymakers, graduate students, postdoctoral fellows, faculty and other researchers, Schneider said. The center's research agenda will provide new insights into learning, child and adolescent development, and instruction. It will also chart changes over time and help scholars learn how to extend effective approaches to larger and more diverse populations.
Joining Schneider as co-principal investigators are University of Chicago colleagues Larry Hedges, professor of sociology; Colm O'Muircheartaigh, professor of public policy, and David Sallach, Director of Social Science Research Computing.
Media contact:
William Harms
(703) 292-8070/[email protected]
Program contact: | https://www.eurekalert.org/pub_releases/2002-08/nsf-nct082102.php |
If you are going to cultivate cauliflower, then keep in mind that cauliflower needs more nutrients. Cauliflower is a heavy feeder crop.
The fertilizer dose depends upon the soil. For every crop, fertilizers and manures should be applied after soil testing, because if too little manure and fertilizer is given, plants do not develop well.
If too much is given, plants grow quickly and later fall over.
15-20 tonnes of cow dung should be added to the field 4 weeks before transplanting the seedlings.
Cauliflower requires 120 kg of nitrogen, 60 kg of phosphorus and 60 kg of potash per hectare.
Half of the nitrogen and the full amounts of phosphorus and potash should be applied to the field at the last ploughing, during field preparation.
Divide the remaining nitrogen into two parts and apply it 30 days after transplanting and at the time of flowering.
If crop growth appears weak, spray 1-1.5 kg of urea per 100 litres of water.
By doing this, the crop will grow properly. | https://www.digitrac.in/agricare-blog/nutrient-management-in-cauliflower-crop |
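To make the split schedule easier to apply to plots of other sizes, the small Python sketch below (an illustrative helper, not part of the original recommendation) scales the 120-60-60 kg per hectare doses and the nitrogen split described above to a given field area:

def fertilizer_schedule(area_ha):
    # Recommended cauliflower doses per hectare: 120 kg N, 60 kg P, 60 kg K.
    n_total, p_total, k_total = 120 * area_ha, 60 * area_ha, 60 * area_ha  # kg
    basal_n = n_total / 2          # half of the nitrogen at the last ploughing
    top_dress_n = n_total / 4      # remaining nitrogen in two equal splits
    return {
        "basal dose (last ploughing)": {"N": basal_n, "P": p_total, "K": k_total},
        "30 days after transplanting": {"N": top_dress_n},
        "at flowering": {"N": top_dress_n},
    }

# Example: a half-hectare plot.
for stage, doses in fertilizer_schedule(0.5).items():
    print(stage, {nutrient: f"{kg:g} kg" for nutrient, kg in doses.items()})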
Acacia species are found in various arid and semiarid regions. Among the tree genera of the family Fabaceae, Acacia contains the highest number of species. We collected different species of Acacia from various places in Saudi Arabia and reconstructed their phylogeny to evaluate the genetic relationships among them. The internal transcribed spacer sequence of nrDNA (nrDNA-ITS) locus was used for the reconstruction of the phylogeny of these species. Based on the phylogenetic tree, Acacia etbaica and A. johnwoodii were close to each other. Similarly, A. ehrenbergiana and A. tortilis were close to each other. Thus, this gene locus was helpful in evaluating the genetic relationship among these species.
Trees and shrubs are important flora for maintaining the ecosystem of any habitat where the vegetation is rich. The number of tree species in Saudi Arabia is more than the floristic element. Around 80% are found in the south-western and western regions, including the Taif region. Among the tree genera, Acacia contains the highest number of species, followed by the genus Ficus. The genus Acacia (commonly known as wattle, thorntree or whistling thorn) (family: Fabaceae) contains around 1300 species, and of these, around 960 species are native to Australia. The remaining species are found in tropical to warm temperate regions of both hemispheres. Plants of this genus nodulate well under drought stress conditions.
Most Acacia species grow in the arid and semi-arid regions of the world. Acacia species have social and economic importance throughout the warm and tropical regions of the world. These species withstand harsh climatic conditions and are considered the most successful trees in arid regions. Xerophytic morphological characteristics of Acacia species help them cope with harsh environmental conditions.
Different species of Acacia have been used for various human purposes. Most Acacia species produce tannins, which are widely used in the leather industry. Acacia species are also used for forage, fuel, soil fertility and soil conservation. Many of them, such as A. nilotica and A. polyacantha, are used for local house furniture. Some species are used for medicinal purposes; for example, A. nilotica pods are used for treating diarrhea, wounds, gum and kidney diseases.
Plant genetic diversity has an important role in crop improvement for desirable characteristics such as yield and quality traits. This genetic diversity can be preserved in plant genetic resources such as DNA libraries, GenBank and so forth. It can be used in breeding programs for the development of potential cultivars for improving food production.
A number of molecular markers have been used for phylogenetic studies, such as random amplified polymorphic DNA (RAPD), amplified fragment length polymorphism (AFLP), restriction fragment length polymorphism (RFLP), etc. Molecular and biochemical research has been performed on Australian and African Acacia species to provide useful markers for plant breeding and conservation programs.
Many polymorphic molecular markers have been used for genetic diversity studies. DNA markers are more reproducible than morphological and biochemical markers. Different chloroplast markers are also used for phylogenetic studies, as these markers are short and easily amplifiable. Both nuclear and chloroplast sequences have been used for the phylogenetic study of Acacia species. Gene dispersal by pollen as well as by seeds has a high impact on genetic diversity. Self-pollinated plants have narrower genetic diversity than cross-pollinated plant species (http://www.crestonfoodaction.ca/site/preventing-cross-pollination-in-seed-plants/). Populations of one species found at different places also show genetic variation, as environmental factors create insertions or deletions in the genome.
Based on the available literature about Acacia species, in the current study we used the nrDNA-ITS locus for a phylogenetic study of different Acacia species found in the Al-Baha province.
The study was done at the Department of Biology, Al-Baha University, Saudi Arabia. The important Acacia species, viz. Acacia etbaica, A. johnwoodii, A. ehrenbergiana and A. tortilis, were collected from different places in Al-Baha, Saudi Arabia. Genomic DNA was isolated from 200 mg of fresh leaves according to the modified DNeasy Plant Mini DNA isolation kit protocol (Qiagen). The extracted genomic DNA was quantified and diluted in double-distilled water for the PCR reaction. Polymerase chain reaction (PCR) amplification was performed in a total volume of 25 µl. The reaction mix contained 40 ng of template DNA, 15 pmol/l of each forward/reverse ITS primer, 10 mmol/l Tris-Cl (pH 8.3), 0.5 U Taq DNA polymerase, 200 µmol/l of each deoxyribonucleotide triphosphate, 50 mmol/l KCl and 25 mmol/l MgCl2. The PCR program consisted of: 94°C for 3 min; 41 cycles of 94°C for 30 s, annealing at 54.3°C for 30 s, and 72°C for 1 min; and a final extension at 72°C for 10 min. The PCR product was electrophoresed on a 1.2 percent agarose gel for confirmation of the amplified sequence. After amplification of the ITS loci, they were purified before sequencing.
The nrDNA-ITS sequences were checked by BLAST against the GenBank database (http://blast.ncbi.nlm.nih.gov/Blast.cgi) for confirmation. These sequences were then processed further for phylogeny reconstruction using MEGA 5. Sequence alignment was done using ClustalX. Branch support of the phylogenetic tree was evaluated using 1000 bootstrap (BS) replicates with random sequence addition, equal weighting and TBR branch swapping, holding one tree at each replicate.
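The analysis itself was done with ClustalX and MEGA 5 (maximum parsimony, 1000 bootstrap replicates); purely as a rough illustration of the same general workflow in Python, the Biopython sketch below builds a simple distance-based (neighbor-joining, not parsimony) tree from an already aligned set of ITS sequences, where the input filename acacia_its_aligned.fasta is a hypothetical placeholder.

from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# Hypothetical input: nrDNA-ITS sequences already aligned and saved as FASTA.
alignment = AlignIO.read("acacia_its_aligned.fasta", "fasta")

# Identity-based distance matrix, then a neighbor-joining tree.
distances = DistanceCalculator("identity").get_distance(alignment)
tree = DistanceTreeConstructor().nj(distances)

# Optionally root on an outgroup taxon if one is present in the alignment,
# e.g. tree.root_with_outgroup("Paraserianthes_lophantha")

Phylo.draw_ascii(tree)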
Acacia species were collected from different places in the Al-Baha province, Saudi Arabia, for phylogenetic study (Figure 1). All Acacia species were identified by a taxonomist at the Department of Biology, Al-Baha University. The identified taxa were processed further for genomic DNA isolation. The extracted DNA was good in quality and yield, as it was isolated using the Qiagen mini kit.
Figure 1. Acacia species collected from Al-Baha province of Saudi Arabia. (a) Acacia ehrenbergiana; (b) Acacia johnwoodii; (c) Acacia tortilis; (d) Acacia etbaica.
Figure 2. Phylogenetic tree was constructed using the Maximum Parsimony method. The percentage of replicate trees in which the associated taxa clustered together in the bootstrap test (1000 replicates) are shown next to the branches.
of poor and eroded soils. These legumes produce an extensive, deep root system, in addition to their potential to fix atmospheric N2. In Tunisia, Acacia tortilis ssp. raddiana is the only wild and native Acacia, which grows spontaneously in arid and Saharan areas.
Different species are found in Saudi Arabia, and some species have overlapping morphological characteristics. Therefore, their phylogenetic study is very necessary to establish the relationships among them based on genetic markers. These markers are more informative than non-sequencing-based markers. We collected different species of Acacia from various places in Al-Baha (Figure 1) for phylogenetic study. The internal transcribed spacer sequence of nrDNA (nrDNA-ITS) was used for the phylogenetic study of the Acacia species. After amplification and sequencing of the nrDNA-ITS loci, all sequences were checked by BLAST against the GenBank database for sequence similarity to the concerned genus/species. After confirmation of the nrDNA-ITS sequences, the phylogeny was reconstructed among these species to evaluate the genetic similarity among them. The generated nrDNA-ITS sequences have been submitted to the GenBank database (accession numbers: KU168956, KU168957, KU168958). Paraserianthes lophantha and Pithecellobium clypearia were used as outgroups for the phylogenetic study of these Acacia species, as shown at the base of the phylogenetic tree. All species were clustered according to sequence similarity (Figure 2).
sequencing both are easy. Thus, the nrDNA-ITS locus was a useful marker for the phylogenetic study of Acacia species growing in Saudi Arabia. This marker is more useful for phylogenetic study than the other polymorphic markers described above.
We thank Dr. Abdul Wali Al Khulaidi for providing the photographs of the Acacia species.
1. Ibrahim, A.A.M. and Aref, I.M. (2000) Host Status of Thirteen Acacia Species to Meloidogyne jovanica. Journal of Nematology, 32, 609-613.
2. Aref, I.M. (2000) Morphological Characteristics of Seeds and Seedling Growth of Some Native Acacia Trees in Saudi Arabia. Journal of King Saud University, Agricultural Sciences, 12, 31-95.
3. Sahni, M. (1968) Important Trees of the Northern Sudan. Khartoum University Press, 40-63.
4. Elkhalifa, K.F. (1996) Forest Botany. Khartoum University Press, 79-94.
5. Williames, J.K., Kubelik, A., Livak, K., Rafalsky, J. and Tingey, S. (1990) DNA Polymorphisms Amplified by Arbitrary Primer and Useful as Genetic Markers. Nucleic Acids Research, 18, 6531-6535.
6. Vos, R., Hogers, R., Bleeker, M., Reijans, M., Lee, T., Hornes, M., Frijters, A., Pot, J., Peleman, J. and Kuiper, M. (1995) AFLP: A New Technique for DNA Fingerprinting. Nucleic Acids Research, 23, 4407-4414.
7. Beckman, J.S. and Soller, M. (1983) Restriction Fragment Length Polymorphisms in Genetic Improvement: Methodologies, Mapping and Costs. Theoretical and Applied Genetics, 67, 35-43.
8. Butcher, P.A., Moran, G.F. and Perkins, H.D. (1998) RFLP Diversity in the Nuclear Genome of Acacia mangium. Heredit, 81, 205-213.
9. McGranahan, M., Bell, J.C., Moran, G.F. and Slee, M. (1997) High Genetic Divergence between Geographic Regions in the Highly out Crossing Species Acacia aulacocarpa (Cunn. Ex Benth.). Forest Genetics, 4, 1-13.
10. Playford, J., Bell, J.C. and Moran, G.F. (1993) A Major Disjunction in Genetic Diversity over the Geographic Range of Acacia Melanoxylon R. Australian Journal of Botany, 41, 355-368.
11. Ashworth, S. (2002) Seed to Seed. 2nd Edition. Seed Savers Exchange, Decorah.
12. Tamura, K., Peterson, D., Peterson, N., Stecher, G., Nei, M. and Kumar, S. (2011) MEGA5: Molecular Evolutionary Genetics Analysis Using Maximum Likelihood, Evolutionary Distance, and Maximum Parsimony Methods. Molecular Biology and Evolution, 28, 2731-2739.
13. Thompson, J.D., Gibson, T.J., Plewniak, F., Jeanmougin, F. and Higgins, D.G. (1997) The Clustal X Windows Interface: Flexible Strategies for Multiple Sequence Alignment Aided by Quality Analysis Tools. Nucleic Acids Research, 24, 4876-4882.
14. Nabli, M.A. (1989) Essai de Synthèse sur la Végétation et la Phyto-écologie Tunisienne. Faculté des Sciences, Tunis.
15. Nanda, R.M., Nayak, S. and Rout, G.R. (2004) Studies on Genetic Relatedness of Acacia Tree Species Using RAPD Markers. Biologia, Bratislava, 59, 115-120.
16. Widyatmoko, A.Y.P.B.C., Watanabe, A. and Shiraishi, S. (2010) Study on Genetic Variation and Relationships among Four Acacia Species Using RAPD and SSCP Marker. Journal of Forestry Research, 7, 125-143.
17. Josiah, C.C., George, D.O., Eleazar, O.M. and Nyamu, W.F. (2008) Genetic Diversity in Kenyan Populations of Acacia senegal (L.) Willd revealed by Combined RAPD and ISSR Markers. African Journal of Biotechnology, 7, 2333-2340.
18. Sirelkhatem, R. and Gaali, E.E.L. (2009) Phylogenic Analysis in Acacia senegal Using AFLP Molecular Markers across the Gum Arabic Belt in Sudan. African Journal of Biotechnology, 8, 4817-4823.
19. Li, J.-H., Liu, Z.-J., Salazar, G.A., Bernhardt, P., Perner, H., Tomohisa, Y., Jin, X.-H., Chung, S.-W. and Luo, Y.-B. (2011) Molecular Phylogeny of Cypripedium (Orchidaceae: Cypripedioideae) Inferred from Multiple Nuclear and Chloroplast Regions. Molecular Phylogenetics and Evolution, 61, 308-320.
20. Harpke, D., Meng, S., Rutten, T., Kerndorff, H. and Blattner, F.R. (2012) Phylogeny of Crocus (Iridaceae) Based on One Chloroplast and Two Nuclear Loci: Ancient Hybridization and Chromosome Number Evolution. Molecular Phylogenetics and Evolution, 66, 617-627.
21. Zhao, G.J., Yanga, Z.Q., Chen, X.P. and Guo, Y.H. (2011) Genetic Relationships among Loquat Cultivars and Some Wild Species of the Genus Eriobotrya Based on the Internal Transcribed Spacer (ITS) Sequences. Scientia Horticulturae, 130, 913-918. | https://file.scirp.org/Html/14-2602419_61851.htm |
Intending to be more proactive in advocating for historic places in Bronzeville, approximately 35 volunteers helped survey 400 properties along and around the 43rd Street corridor on May 4, 2019.
Preservation Chicago collaborated with Bronzeville residents who formed a group called Preservation Bronzeville after the historic Boston Store Stables building was demolished in November 2018.
Associated Bank sponsored the event, which included a moving keynote address by Chicago historian Timuel Black. Black recalled a time “when there was no need to go outside of the community because 43rd Street offered all that you need.” There were jobs, great music venues and movie theaters. He stressed the importance of keeping what is left of the community’s history intact to honor what the area was and the vibrant commercial corridor it can become again. “The closed door is going to open. Be prepared to walk in,” Black said about opportunities in the area.
Volunteers surveyed buildings to determine if they have historic character, what condition the building is in and whether it may be vulnerable to demolition threats.
Data from this survey will be used to target additional research needs, share with the City of Chicago, and advocate for current and future tools that will help keep this history intact. It is a long-term goal to encourage the City to begin a new Chicago Historic Resources Survey. Data collected here can be imported by the City.
Preservation Chicago is organizing outreach in other communities. Roseland community leaders are initiating a similar process to recognize, celebrate and protect its cultural and historic assets. The survey date in Roseland is currently set for Saturday, June 22.
“It was powerful to connect the volunteers with Timuel Black’s historic perspective to take action toward saving Bronzeville’s built environment,” said Mary Lu Seidel, Director of Community Engagement at Preservation Chicago. | https://www.preservationchicago.org/successful-community-engagement-event-over-400-properties-surveyed-in-bronzeville/ |
Contribution $5 (one door prize entry)
Contribute $5 to the "For the Cause of Love Fund" and you'll receive one chance for a door prize at the celebration reception. The fund is to help pay for an immigration attorney so Riyaz can visit. We've tried several approaches, but alas the State Dept is not cooperative. This would allow us to be a part of each other's lives and split our time between the US and India rather than me only going to India. He is the kindest man I've ever met, and we just passed 11 years of knowing one another and 3 years in a relationship. Thanks for thinking we are worth it.
Thank-you drawings will be held every Friday in April and May (or until the goal is met).
Prize choices include: | https://sojournsfairtrade.com/products/contribution-2-one-door-prize-entry |
1,1
Generally, the 3-class ranks s of the real quadratic field R=Q(sqrt(d)) and r of the complex quadratic field C=Q(sqrt(-3d)) are related by the inequalities s <= r <= s+1. This reflexion theorem was proved by Scholz and independently by Reichardt using a combination of class field theory and Kummer theory over the bicyclic biquadratic compositum K=R*E of R with Eisenstein's cyclotomic field E=Q(sqrt(-3)) of third roots of unity.
In particular, the biquadratic field K=Q(sqrt(-3),sqrt(d)) has a 3-class group of type (3,3) if and only if s=r and R and C both have 3-class groups of type (3).
Therefore, the discriminants in the sequence A250236 uniquely characterize all complex biquadratic fields containing the third roots of unity which have an elementary 3-class group of rank two.
The discriminant of K=R*E is given by d(K)=3^2*d^2 if gcd(3,d)=1 and simply by d(K)=d^2 if 3 divides d.
G. Eisenstein, Beweis des Reciprocitätssatzes für die cubischen Reste in der Theorie der aus den dritten Wurzeln der Einheit zusammengesetzten Zahlen, J. Reine Angew. Math. 27 (1844), 289-310.
Table of n, a(n) for n=1..40.
H. Reichardt, Arithmetische Theorie der kubischen Körper als Radikalkörper, Monatsh. Math. Phys. 40 (1933), 323-350.
A. Scholz, Über die Beziehung der Klassenzahlen quadratischer Körper zueinander, J. Reine Angew. Math. 166 (1932), 201-203.
A250236 is a proper subsequence of A250235. For instance, it does not contain the discriminant d=733, resp. 1373, although the corresponding real quadratic field R=Q(sqrt(d)) has 3-class group (3). The reason is that the 3-dual complex quadratic field C=Q(sqrt(-3d)) of R has 3-class group (9), resp. (27).
(MAGMA)
for d := 2 to 3000 do
  a := false;
  if (1 eq d mod 4) and IsSquarefree(d) then
    a := true;
  end if;
  if (0 eq d mod 4) then
    r := d div 4;
    if IsSquarefree(r) and ((2 eq r mod 4) or (3 eq r mod 4)) then
      a := true;
    end if;
  end if;
  if (true eq a) then
    R := QuadraticField(d);
    E := QuadraticField(-3);
    K := Compositum(R, E);
    C := ClassGroup(K);
    if ([3, 3] eq pPrimaryInvariants(C, 3)) then
      d, ", ";
    end if;
  end if;
end for;
A250235 and A094612 are supersequences, A250237, A250238, A250239, A250240, A250241, A250242 are pairwise disjoint subsequences. | http://oeis.org/A250236 |
A state task force to study reparations for Black people in California will kick off Tuesday with a virtual inaugural convening and a pre-recorded message from Gov. Gavin Newsom.
California's Task Force to Study and Develop Reparation Proposals for African Americans was created by the passage last year of Assembly Bill 3121, legislation authored by then-Assemblywoman Shirley Weber, D-San Diego, who became the state's first Black Secretary of State in January.
The nine-person task force will make recommendations to the state Legislature regarding reparations, with special consideration for those who are the descendants of enslaved people in the U.S., and will look at how they should be awarded and who would be eligible.
Newsom's office called the task force "a first-in-the-nation effort by a state government to take an expansive look at the institution of slavery and its present-day effects on the lives of Black Americans."
The summer of 2020 was marked by significant acts of police violence and widespread protest. But as the dust settles on an unprecedented attack on the nation's Capitol, many observers who took to the streets this summer can't help but compare the police response Wednesday to what they have experienced.
"White bodies who engage in terror are handled gently. Black and brown bodies who engage in resistance are terrorized," Faith in Action Director of Urban Strategies Michael McBride said. Mike McBride is an activist and reverend in the Bay Area. MCBRIDE: "It's terrifying to have people with weapons standing in front of you. And you know that they are aware that if they harm you, there likely will be no recourse to them. I mean, it's terrifying."
Activists and protesters across the country have complained and in some cases filed suit alleging police brutality during peaceful protest. In Chicago, 60 protesters filed a federal lawsuit against the Chicago Police Department alleging false arrests and unprovoked, violent attacks. In Buffalo, a 75-year-old man was shoved to the ground during a protest against police brutality, causing nationwide backlash. He was hospitalized for two weeks.
"I have put my body on the line to expand people's rights, for liberation, for freedom," said Jennifer Epps-Addison. Jennifer Epps-Addison is an activist and the network president for the Center for Popular Democracy. EPPS-ADDISON: "Those are the things that have gotten me arrested. And so what these folks were doing, they were literally trying to preserve a crumbling altar of White supremacy. And that is not the same thing as protest."
During the insurrection at the Capitol, United States Capitol Police arrested just 14 people from beginning to end. D.C. police made more than 50 arrests after they were brought in as backup, but the vast majority of those arrests were for breaking curfew, not breaching the Capitol building. Compare that with one Black Lives Matter protest in Dallas, where more than 600 people were arrested for blocking a highway during a peaceful protest.
EPPS-ADDISON: "I could give you a long list of things, egregious crimes I was arrested for, like singing and chanting and dancing. And yet these folks not only scaled the Capitol building, not only entered the building with firearms, but then began to desecrate this building, you know, members of Congress' offices? I think we heard something like maybe 15 arrests. A friend of mine joked, there have been birthday parties with more arrests than that."
Several law enforcement agencies, including the FBI, say they will be going back over footage from the incident and plan to make subsequent arrests and file charges. But that doesn't change the message this incident sent to many activists and protesters across the country. Cori Bush protested in Ferguson, Missouri, after the police killing of Michael Brown. Now, she's a U.S. congresswoman.
"Had we, as Black people, did the same things that happened today with the police, had we fought with fists police officers, the reaction would have been different. We would have been laid out on the ground. There would have been shootings. There would have been people in jail. There would have been people beaten with batons. And I know because I've been there, and we didn't even have to put our hands on police officers to get that," Rep. Cori Bush said. Jamal Andress, Newsy.
Photos: Black Lives Matter protests in Napa County
Calistoga peaceful protest
Peaceful demonstrators waved signs and chanted in support of Black Lives Matter on the Lincoln Avenue Bridge in Calistoga on June 20, as drivers honked in support.
Tim Carl Photography
Calistoga peaceful protest
Calistoga protesters marched from Pioneer Park to Lincoln Avenue June 20 in support of Black Lives Matter.
Tim Carl Photography
Calistoga peaceful protest
Calistoga Mayor Chris Canning addressed the crowd of BLM protesters in Pioneer Park on June 20, saying the community is fortunate to be "collectively supportive" of one another.
Tim Carl Photography
Calistoga peaceful protest
Peaceful protesters gathered at Calistoga's Pioneer Park June 20 and 21 to support Black Lives Matter.
Tim Carl Photography
Calistoga peaceful protest
On June 20-21, Calistogans participated in peaceful protests carrying signs and chanting from Pioneer Park to the Lincoln Avenue Bridge.
Tim Carl Photography
Calistoga peaceful protest
Supporting the national and international Black Lives Matter movement, peaceful protesters waved signs in downtown Calistoga June 20-21.
Tim Carl Photography
Calistoga peaceful protest
Police Chief Mitch Celaya told BLM protesters on Friday that the Calistoga Police Department has created a culture that is "very community-centered."
Tim Carl Photography
Calistoga peaceful protest
Friday's peaceful protest in Calistoga for the Black Lives Matter movement drew demonstrators of all ages.
Tim Carl Photography
Calistoga peaceful protest
About three dozen peaceful protesters gathered in Pioneer Park on Friday, calling attention to the national and global-wide Black Lives Matter movement.
Tim Carl Photography
Calistoga peaceful protest
Protesters waved signs in support of Black Lives matter June 20, as passing cars on Lincoln Avenue honked in support.
Tim Carl Photography
Lorie Johns, Diego Mariano
Lorie Johns and Diego Mariano take part in Monday's Black Lives Matter demonstration in Angwin.
Julie Lee photo
Angwin demonstration
Dozens of Angwinites pose near Howell Mountain Road during Friday's demonstration in support of Black Lives Matter.
Bryan Ness photo
Juneteenth in Angwin
Demonstrators march down Howell Mountain Road in Angwin on Friday, which was Juneteenth. Leading the march is organizer Michael Andrianarijaona.
Black Lives Matter demonstration, Angwin
Angwin demonstration
Angwin residents march down Howell Mountain Road near College Avenue supporting the Black Lives Matter movement.
Jesse Duarte, Star
Napa protest against police violence
Some 100 people gathered at Napa's Veterans Memorial Park in the opening minutes of a protest against police brutality and racial discrimination, one of numerous such protests to take place nationwide since the death of George Floyd during an arrest in Minneapolis was captured on video.
Howard Yune, Register
Napa protest against police violence
Gabriela Fernandez (foreground) leads demonstrators in chants of "No justice, no peace!" at Veterans Memorial Park during the third weekend of protests in Napa inspired by the death of George Floyd, a black man, during an arrest by Minneapolis police last month.
Howard Yune, Register
Napa protest against police violence
Veterans Memorial Park in Napa has become the local hub of protests against police brutality against people of color, including a demonstration Sunday afternoon.
Protesters against racism take to Napa streets
Between 250 and 300 people took part in Sunday's protest march in downtown Napa against racism and police brutality. The procession included 8 minutes and 46 seconds of silence at First and School streets, in remembrance of the death of George Floyd during an arrest by Minneapolis police May 25.
Howard Yune, Register
Napa protest against police brutality
Hundreds of demonstrators against racism and police violence formed a ring at First and School streets Sunday in downtown Napa, maintaining silence for nearly nine minutes in remembrance of George Floyd, an unarmed black man who died during an arrest in Minneapolis May 25. The length of the silent period was inspired by the length of the video showing a police officer leaning a knee to Floyd's neck as he struggled to breathe.
Howard Yune, Register
Kids' march against racism in Napa
Some 200 people took part in a "kids' march" in downtown Napa for children and families protesting the death in Minneapolis police custody of George Floyd. The walk preceded Napa's second scheduled all-comers march against police brutality, following an earlier demonstration May 31.
Howard Yune, Register
Vigil for racial equality in Napa
A spectator at Sunday's interfaith vigil for racial equality in Napa's Veterans Memorial Park held a sign marking the length of a widely shared video that showed the death of George Floyd as a Minneapolis police officer leaned a knee onto his neck during an arrest May 25. Local clergy members led some 60 people in 84.6 seconds of silence in Floyd's memory. The 84.6 seconds was in recognition of the length of a video showing an officer leaning a knee onto Floyd's neck as he struggled to breathe.
Howard Yune, Register
Kids' march against racism in Napa
Children and parents massed at Veterans Memorial Park in downtown Napa on Sunday for a kids' march against racism and police violence. Organizers of the event, which included a walk through the Oxbow Commons, put together the observance in solidarity with similar family-oriented demonstrations held in San Francisco and Oakland.
Howard Yune, Register
Vigil for racial equality in Napa
The Rev. Robin Denney of St. Mary's Episcopal Church held a "Black Lives Matter" placard Sunday while speaking to about 60 people during an interfaith vigil supporting people of color and opposing racism. The vigil and walk was part of a second Sunday of observances following the death of George Floyd during an arrest by Minneapolis police May 25.
Howard Yune, Register
Marching through downtown St. Helena
After gathering at Jacob Meily Park and listening to several speaches, a group of mostly young people marched through downtown St. Helena on Tuesday night.
David Stoneberg, Star
Marching through downtown St. Helena
A large group of mostly young people, all with masks, marched through downtown St. Helena on Tuesday night.
David Stoneberg, Star
Marching through downtown St. Helena
A group of people gathered at Meily Park Tuesday evening and after listening to several speakers marched through downtown St. Helena to Lyman Park. Translated the sign says, "Your Struggle is My Struggle."
David Stoneberg, Star
Marching through downtown St. Helena
On Tuesday night, a large group of mostly young people walked on the sidewalks from Jacob Meily Park to Lyman Park in downtown St. Helena. The sign is translated as "Your Struggle is My Struggle."
David Stoneberg, Star
Black Lives Matter protest in Napa
Sunday's protest in downtown Napa against police brutality and racial bias began with hundreds of demonstrators massing in front of the county courthouse on Third Street before moving the rally a block north to Veterans Memorial Park. Later, marchers peacefully walked through the central city before disbanding at 11:30 p.m. at Veterans Park.
Howard Yune, Register
Black Lives Matter protest in Napa
Many of the placards carried by protesters at a downtown Napa demonstration Sunday displayed solidarity for Latinos with African Americans in the battle against racism and brutality by law enforcement. The rally follows a long line of protests that have taken place nationwide since George Floyd, a 46-year-old black man, died while being arrested by Minneapolis police May 25.
Howard Yune, Register
Black Lives Matter protest in Napa
A marcher at Sunday's demonstration against police brutality carried a sign with the words "I can't breathe" - echoing the plea of George Floyd captured on video minutes before the 46-year-old African American man died during an arrest by Minneapolis police on May 25.
Howard Yune, Register
Black Lives Matter protest in Napa
A demonstrator leads hundreds of other protesters in chants of "Black Lives Matter" and "No justice, no peace" at Veterans Memorial Park in downtown Napa during a Sunday afternoon demonstration against racism and police brutality. The protest took place six days after George Floyd, an African-American man, died during an arrest by Minneapolis police after an officer leaned a knee on his neck for more than eight minutes.
TULSA, Okla. — In a century-old family story about a teenage aunt who liked to drive her luxury car down the trolley tracks of Tulsa, Kristi Williams still savors a tiny, lingering taste of how different life could have been for all Black Americans after slavery. | |
Q:
What features are good for Sentence Classification apart from using vector representation like Bag-of-words?
I am trying to find whether a given sentence is a "question request", "call for action", etc. I am using supervised multilabel classification for that.
What will be a good set of features to use? I am currently using Bag-of-words with trigrams, modal verbs, question words, etc. but the result is not that good.
Input example: "Can you get this today? I need following items."
A:
https://code.google.com/p/word2vec/ is probably a good feature.
Illinois Wikifier can also be very helpful: http://cogcomp.cs.illinois.edu/demo/wikify/?id=25
Also take a look at features used for Dataless classification: http://cogcomp.cs.illinois.edu/page/project_view/6
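Beyond those pointers, a common baseline for this kind of multilabel sentence classification is simply word n-gram features (as in the question) fed into independent linear classifiers. The scikit-learn sketch below is a minimal illustration with made-up training sentences and the labels from the question ("question_request", "call_for_action"); it is not taken from any of the linked resources.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

# Toy training data (assumed labels and sentences, for illustration only).
sentences = [
    "Can you get this today?",
    "I need the following items.",
    "Please send the report by Friday.",
    "What time does the store open?",
]
labels = [["question_request"], ["call_for_action"], ["call_for_action"], ["question_request"]]

binarizer = MultiLabelBinarizer()
y = binarizer.fit_transform(labels)

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 3)),            # word uni-, bi- and trigrams
    OneVsRestClassifier(LogisticRegression(max_iter=1000)),
)
model.fit(sentences, y)

prediction = model.predict(["Could you order these items now?"])
print(binarizer.inverse_transform(prediction))

Richer features such as word2vec vectors or Wikifier concepts can be concatenated to the n-gram features in the same pipeline.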
| |
Q:
Upper semi-continuity and lower semi-continuity of particular functions
Let $f:[a,b]\to \mathbb{R}$ be a bounded function. Let
$$\bar{f}(x) = \inf_{\delta>0}\sup_{|x-y|<\delta} f(y)$$
$$\underline{f}(x) = \sup_{\delta>0}\inf_{|x-y|<\delta} f(y)$$
(small side question, does anyone know the latex code for a small bar underneath an item?)
The problem (well it's got 4 parts, but I'll just put one and hopefully figure things out on the other 3 with a little nudge)
Show that $\bar{f}$ is upper semi-continuous on $[a,b]$
The definition of upper semi-continuous we are using is: A function $f$ is upper semi-continuous at $x$ if whenever $f(x)<\alpha$ there is an open set $O$, containing $x$, such that $f(y)<\alpha$ for $y\in O$.
I've tried showing this directly and by contradiction, but in both cases I'm simply finding I don't fully understand how to deal with these functions. For example, my direct argument:
We must show that when $\bar{f}(x) < \alpha$, there is an open set $O\subset [a,b]$, containing $x$, such that $\bar{f}(z) < \alpha$ for all $z\in O$. But this means we must show if
$$\inf_{\delta>0}\sup_{|x-y| < \delta} f(y) < \alpha,$$
then there is an open set $O\subset[a,b]$ containing $x$ such that
$$\inf_{\delta>0}\sup_{|z-y| < \delta} f(y) < \alpha,$$
for all $z\in O$. Fix some $\delta'>0$. For all $\delta > \delta'$,
$$\sup_{|z-y| < \delta'} f(y) \leq \sup_{|z-y| < \delta} f(y).$$
so...?
Some other thoughts:
We note if we have
$$ f(y) > \alpha$$
for all $y\neq x$ on some open neighbourhood of $x$, with $f(x) < \alpha$, then $\bar{f}(x) \geq \alpha$, as $\bar{f}(x)$ does not actually use the value of $f$ at $x$. Hence if $f$ has a removable discontinuity or singularity, $\bar{f}$ will not notice. Hence we may assume our function $f$ does not have a removable singularity. But then $f$ can only have jump discontinuity or an essential discontinuity. If $f$ has a jump discontinuity, perhaps we can find some open neighbourhood $U$ of $f$ containing $x$ such that $f(U)$ is homeomorphic to something connected apart from the jump, and well behaved. If $f$ has an essential discontinuity at $x$, the limit must not exist (as our function is bounded), thought $\bar{f}$ still exists. Again, we could `zoom in' to some connected component (apart from the discontinuity) and things should be nicer there? I'm not sure how to pick such an open neighbourhood of our point though...
Even the slightest nudge would be appreciated :) Thanks!
A:
Suppose that $\bar{f}$ is not upper semicontinuous at $x$; then there is an $\alpha>\bar{f}(x)$ such that for each $\epsilon>0$ there is a $y_\epsilon \in (x-\epsilon,x+\epsilon) \cap [a,b]$ such that $\bar{f}(y_\epsilon)\ge \alpha$. This implies that for every $\epsilon>0$, $$\sup_{|y-x|<\epsilon}\bar{f}(y)\ge \alpha\;.\tag{1}$$ On the other hand, you know that $$\bar{f}(x) = \inf_{\epsilon>0}\;\;\;\sup_{|y-x|<\epsilon}f(y)<\alpha\;,$$ where $\sup_{|y-x|<\epsilon}f(y)$ cannot increase as $\epsilon$ gets smaller. Pick any $\beta\in (\bar{f}(x),\alpha)$; then there must be some $\epsilon_0>0$ such that $f(y)<\beta$ whenever $|y-x|<\epsilon_0$. What does this tell you about $$\bar{f}(y) = \inf_{\epsilon>0}\;\;\;\sup_{|z-y|<\epsilon}f(z)$$ when $|y-x|<\epsilon_0$? Now compare with $(1)$.
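If it helps, here is one way to finish from that point: take any $y$ with $|y-x|<\epsilon_0$ and put $\delta=\epsilon_0-|y-x|>0$. For $|z-y|<\delta$ we get $|z-x|\le|z-y|+|y-x|<\epsilon_0$, hence $f(z)<\beta$, so $$\bar{f}(y)\le\sup_{|z-y|<\delta}f(z)\le\beta.$$ Therefore $\sup_{|y-x|<\epsilon_0}\bar{f}(y)\le\beta<\alpha$, which contradicts $(1)$ with $\epsilon=\epsilon_0$, and the upper semi-continuity of $\bar{f}$ at $x$ follows.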
| |
What is your time worth when you volunteer at Sus Manos Gleaners for a 2-3 hour session?
During our first complete year of operation, with volunteers coming on Saturday mornings, here is a summary of the impact:
3 blue barrels - Mexico orphanage
5 blue barrels - Caribbean Mercy Ship Ministry
42 blue barrels - Northern Haiti Christian Mission
TOTAL: 50 BLUE BARRELS!!!
How much does a blue barrel hold? We conservatively estimate that each holds approximately 6,500 servings of nutrient-rich fruits and veggies, and each barrel only weighs 100 pounds!
50 blue barrels x 6,500 servings = 325,000 servings provided overseas to starving people who are being reached with the good news of the Gospel after filling their bellies with food. | https://www.smgleaners.org/events-1 |
As some of you might be aware, my husband and I have different first languages and therefore we very much want our children to be fluent in both languages. My husband’s first language is English and mine Portuguese.
Before our first child was born, I had no idea how to raise a bilingual child but I definitely didn’t want to leave it to chance. It was too important to be left to chance! So I did some reading on the subject and together with my husband we came up with a plan (sort of!).
WE WILL EXPOSE HIM TO BOTH LANGUAGES FROM THE MOMENT HE IS BORN!
Now I have 2 amazing kids at very different language development stages (one is 3 and a half and the other 18 months) both learning and speaking in English and Portuguese.
In a nutshell, these are the things we consciously do to develop both languages.
- When we lived in England we mostly used the ‘one parent one language’ approach. I would always speak to the kids in Portuguese and my husband in English. Since we moved to Portugal, this changed. My husband and I both speak English with them because they are exposed to Portuguese everywhere else.
- We read them books in both Portuguese and English every day. They love books and don’t have a preference for the language they are written in.
- Sometimes they mix up both languages in the same sentence. When this happens, I repeat the sentence back to them correctly in both languages so that they get used to the correct way of speaking. Every time they learn a new word, I make sure I refer to whatever it is in both languages.
- When I speak to them in Portuguese I expect an answer in Portuguese and vice versa. This doesn’t always happen, so instead of telling them to answer in the correct language, I repeat their answer back to them in the correct language. This seems to work a treat!
- We listen to music in both languages, we play in both languages, we watch cartoons in both languages and we sing in both languages.
Are you or do you know someone that is raising bilingual children? How are you/they getting on? Drop me a line to let me know. Would love to hear other people’s experiences. | https://whitecamellias.com/2016/02/18/how-im-raising-my-children-to-be-bilingual/ |
Latest statistics see EconStor among the ten most heavily used archives in RePEc
Most economists know that RePEc publishes rankings of publications, authors and research institutions. Less known is the fact that RePEc also provides statistics on its contributing archives and how they perform. These archives can come from university faculties (e.g. for their working papers), from publishers (for their journals) or from repositories (either institutional or subject based). A ranking of the largest archives is provided on the RePEc homepage (see also the screenshot on the right).
As a subject repository with a focus on the German Economics community, EconStor provides its RePEc input services for over 100 institutions, which places us among the top 20 archives with respect to size.
But even more pleasant (and also more important to our customers) is the fact that papers on EconStor are also heavily used. Looking at the detailed LogEc statistics for all contributing archives in RePEc, we find that EconStor (acronym “zbw”) is at number 10 in terms of usage of our archives (number of downloads).
803 new items have been added and the total number of full-text documents was 76,707 by the end of May. In addition we have counted 297,941 downloads in that month. The Top 5 most downloaded papers were:
1,214 new items have been added and the total number of full-text documents reached 75,904 by the end of April. The total number of downloads in April climbed to 305,357. The Top 5 most downloaded papers were:
2,108 new items have been included and the total number of full-text documents reached 74,690 by the end of March. The total number of downloads in March climbed to 278,135. The Top 5 most downloaded papers were:
With 1,576 new items we have increased our total number of full-text documents up to 72,482 by the end of February. The total number of downloads in February climbed to 226,147. Our Top 5 most downloaded papers were:
With 2,828 new items we have increased our total number of full-text documents up to 70,906 by the end of January. The total number of downloads in January reached 203,259. Our Top 5 most downloaded papers were:
New edition of webometrics ranking now also includes Altmetrics indicators. EconStor ranks at number 17 concerning full-text documents.
The “Ranking Web of Repositories” is a service provided by the Spanish research organization CSIC. Besides comparing repositories, it also ranks universities, business schools and hospitals. The repository ranking is published every 6 months, now in its 14th edition.
The latest edition, which was published this week, introduced a new element among the relevant criteria for the ranking: the “Altmetrics” element (which accounts for 25% of the ranking) monitors services like Twitter, Wikipedia and Google+ and looks up how often repository documents are mentioned in these services.
The new Webometrics repository ranking sees EconStor at number 45 (out of 1,750 repositories worldwide) in the overall ranking. When it comes to the number of PDF documents within a repository (category “rich files”), EconStor comes out even better, at number 17. Concerning the coverage in Google Scholar, we rank at number 21.
With 464 new items we have increased our total number of full-text documents up to 68,078 by the end of December. Over the year 2013 we have added more than 20,000 new titles to EconStor. The total number of downloads in December climbed to 226,441. Our Top 5 most downloaded papers were:
| |
It never gets old for the Neshoba Central Lady Rockets who swept East Central last week to claim their ninth straight state fast-pitch softball championship and the 11th in school history.
All of the Mississippi High School Activities Association state championship series were held at the University of Southern Mississippi softball complex in Hattiesburg. Neshoba Central won game one 7-0 and took game two 14-4. Neshoba Central finished the season with a 31-3 record.
“I couldn’t be more proud of the girls we put on the field this year,” said Coach Zach Sanders. “We came down here last year with six seniors, two juniors and an eighth grader on the field. We were undefeated.
“This year, we came through the season with a 31-3 record. We had an eighth grader, three ninth graders, three sophomores, a junior and two seniors starting. We were really a young group. We worked from August until today, building and maturing and getting better and doing what we needed to do to build a championship team.”
This is the ninth straight fast-pitch championship. The program also has eight straight slow-pitch championships to its credit during the same time period, giving the Lady Rockets a total of 17 gold trophies. Sanders said no one wants to let this winning streak come to an end on their watch.
“It drives you,” Sanders said. “It keeps you going. I’m pretty sure nobody wants to be that team that ends the streak.”
Freshman Lanayah Henry was the winning pitcher of both Lady Rockets victories. She was named the MVP for the championship series.
Game one
Lady Rockets 7
East Central 0
With Lanayah Henry’s four-hitter backed up by a strong defense, the Lady Rockets kept East Central off the scoreboard in game one while scoring seven runs.
Neshoba Central scored one in the first, five in the third and one in the sixth to go up 1-0 in the championship series.
Henry was the winning pitcher while Lauren Brooks took the loss for East Central.
Tenly Grisham had two hits and scored two runs. Henry and Miley Thomas both had two hits. Hama’ya Fielder, Mya Willis and Mauhree Jones all had one hit.
Game two
Lady Rockets 14
East Central 4
Mauhree Jones knocked in two runs in the bottom of the sixth as Neshoba Central beat East Central with the 10-run rule in game two and captured the state championship.
Neshoba Central scored five runs in the first inning. They added two in the third inning, three in the fourth and four in the fifth. East Central scored one in the second and three in the fourth.
Henry was the winning pitcher. She allowed six hits and struck out two batters. Breyona Tanner took the loss for East Central.
The Lady Rockets banged out 13 hits. Hama’ya Fielder and Mya Willis both hit a double. Miley Thomas and Charmayne Morris both hit a triple. Tenly Grisham and Sa’Naya Jackson both had two hits.
This is the second year in a row the Lady Rockets have beaten East Central in the state finals. | https://neshobademocrat.com/stories/lady-rockets-claim-ninth-straight-state-championship,54909? |
Press and hold buttons 1 and 3 for 15 seconds on HomeLink. Homelink buttons are on the rearview mirror or just above it.
Next, press and hold button 1 on the HomeLink and at the same time press the button on your handheld garage door opener. Keep both pressed for about 8 seconds. The HomeLink LED light should start flashing rapidly.
Once you see the flashing light go to your garage door motor, you will find the garage door motor mounted on the ceiling of your garage. Locate the LEARN button on the back of the garage door motor and press it once. Do not keep it pressed.
Quickly go back to the car and press button 1 on HomeLink a few times until the garage door starts to operate. This last step cannot take more than 30 seconds. Each press needs to be about two seconds long.
For more help, follow the detailed step-by-step procedure below.
What you will need
- Garage door remote
- Must have a working handheld remote.
- Access to your garage.
- Small ladder or step stool
Procedure
- Locate the 1 and 3 buttons on the rearview mirror
- Press both buttons (1 and 3) simultaneously and hold down for a while until the LED light begins to blink. This means that memory is cleared.
- Hold your handheld garage remote as close as possible to the HomeLink buttons.
- Pick up the handheld garage remote. Press and hold down the garage button while you press button “1” on Homelink. Hold for a while until the orange light appears and starts blinking green—about 8 seconds.
- Go to the garage and remove the casing on the motor to reveal the learn button. Press and release the learn button. Do not hold the learn button down too long, or you will erase all programmed remotes.
- Go back to the car quickly and finish the connection by pressing the 1 button on the rearview mirror a few times. Press the one button anytime after that to unlock the garage door.
If you are not able to program your Lexus garage door opener, repeat all of the steps. Steps 5 and 6 should be completed within 30 seconds. In step 6, keep button 1 on HomeLink pressed instead of pressing the button several times.
How to program Lexus garage door opener without a remote?
You must have a remote that is programmed to open your garage door. You cannot program a Lexus garage door opener without a preprogrammed handheld garage door opener. First, program your handheld garage door remote. Next, program your HomeLink using the instructions on this page.
As an alternative, you can use the garage keypad if it is wireless, not hard-wired to the garage motor. You may have to remove the keypad or have the vehicle very close to it when you complete step 4.
Notes
- When using your Lexus mirror button to open or close the garage, you have to hold it down for about 2 seconds for it to work. One touch does not work.
- You only have 30 seconds to press one of the buttons on the rearview mirror after you press the LEARN button on the back of the garage door motor. You must do this step quickly.
- Park vehicle right outside the garage. Close to the garage door but not under the garage door.
- Holding the 1st and 3rd HomeLink buttons for 20-30 seconds will clear the memory.
- The mirror LED will start flashing or turn on solid. Flashing LED means you have a rolling code garage door opener. Solid green means it's not a rolling code, and your buttons can now be programmed.
Programming fixed code garage opener
The majority of garage door openers built before 1996 use fixed code. This is how to program these units with your Lexus.
- Hold the remote up to the HomeLink button you want to program.
- Push both the HomeLink button you want to program and the remote button at the same time.
- Hold them pressed until you see the LED on the Lexus HomeLink begin to flash rapidly.
- Release both buttons. Your Lexus HomeLink button should be programmed as a duplicate of the remote.
You must have a working handheld garage remote. | https://www.youcanic.com/vehicle/lexus-garage-door-opener-programing-instructions |
We talk about probability when one is not certain of the result of a calculation, event or experiment. This is a theory that you use in life every day without even knowing it. Probability in itself is not difficult, provided you master its fundamentals and the reasoning behind each event. If you can understand its fundamentals, you will manage to make it your own and assimilate it easily. In this article, Franck Peltier will help you discover the foundations that will enable you to better understand this discipline called probability!
What are the foundations for probability?
First, probability can be defined as the ratio of the number of outcomes in a given event, over the total number of possible outcomes.
You cannot measure it, but you can analyze a specific case, to determine the number of times you can get the same result.
Take the example of a game where the possible events have the same probability of occurring: the dice game with six faces! The probability of rolling an even number is 3/6, or ½, the same as the probability of rolling an odd number.
In this case, we speak of equal probability, because chances are equal or 0.5 for the occurrence of each event. Otherwise, we talk about unequal probability.
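To make the dice example above concrete, here is a rough Python sketch (purely illustrative, not from the original article) that counts favourable outcomes against the sample space and double-checks the ½ with a quick simulation:

from fractions import Fraction
import random

outcomes = [1, 2, 3, 4, 5, 6]                # sample space of a fair die
even = [n for n in outcomes if n % 2 == 0]   # the event "roll an even number"

# classical definition: favourable outcomes divided by total possible outcomes
probability = Fraction(len(even), len(outcomes))
print(probability)                           # prints 1/2

# rough empirical check with 100,000 simulated rolls
rolls = [random.choice(outcomes) for _ in range(100000)]
print(sum(1 for r in rolls if r % 2 == 0) / len(rolls))   # close to 0.5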
You should also know that for every random experiment, all the possible outcomes are contained in a universal set called the sample space.
This is the set of all probable and possible outcomes that can result from a random experiment. It is denoted by “Ω”.
The odds of a given event depend on random occurrences and thus on chance. The ability to predict how many times a given event can occur is the main foundation of this discipline! Let’s thank Franck Peltier for guiding us through this.
What do Mozart and the Beatles have in common besides music? The answer is simple: lots and lots of practice. Yes, we all know that Mozart was gifted at a very young age, but precisely because of that, by the time he reached adulthood, he had spent thousands of hours playing music. The same goes for the Beatles: by the time they hit America and rocketed to fame, they had been playing together continuously for seven years.
Where’s all this leading? To that mysterious place where mastery happens. And it turns out that it isn’t so mysterious after all. There’s a raft of new research that confirms the saying that “the only place where success comes before hard work is in the dictionary.” When it comes to mastering music or writing or jump shots, innate talent is only part of the story. Achievement is talent plus preparation. And the more the careers of gifted achievers in all fields are studied, the smaller the role of talent and the bigger the role of preparation turn out to be.
The key to excellence: 10,000 hours of concentrated practice. At least, according to Malcolm Gladwell’s book, Outliers: The Story of Success, that’s the number of hours you have to devote to practicing your craft or your game to achieve full mastery.
Wow! At first glance, 10,000 hours seems like a lot, doesn’t it? But let’s not forget, it’s the journey that counts: forward motion is everything. And to put it into perspective, remember that most people watch 20-to-30 hours of TV a week; over a year, that adds up to a lot of time. To me, what really counts is that mastery is more a matter of discipline and desire than DNA. That means the path to mastery is open to all of us. The only must-haves are passion and perseverance. Now that’s a sweet melody, indeed! | https://karinwritesdangerously.com/2010/08/19/rock-on/ |
Q:
Prove that any map from a surface to another one preserves at least one angle
Tissot proved that for any kind of representation of a surface onto another, there exist, at every point of the first surface, two perpendicular tangents, which are unique unless the angles are preserved at that point (i.e. the map is conformal), such that the images of these two tangents are perpendicular on the second surface.
I can't find the paper; and don't really know if this is easy to prove.
The only idea I had so far is picking two perpendicular tangents of the first surface, rotating them, and trying something like Bolzano, but it hasn't worked so far and I don't think it will.
Does anyone have a reference/knows the general idea of how to prove that? I would really appreciate that. Thanks!
A:
I'll do it for a linear transformation $T : V \rightarrow W$. Consider the matrix $R_\theta$ that rotates a vector by a counterclockwise angle $\theta$. Then, if $\{e_1,e_2\}$ is an orthonormal basis, $A_\theta = \{R_\theta\ e_1, R_\theta\ e_2\}$ is too, $\forall\ \theta$.
Let's define $F: [0,2\pi] \rightarrow \mathbb{R}$ by $F(\theta)=\langle T(R_\theta e_1),\,T(R_\theta e_2)\rangle$.
If $F(\theta) = 0$ for some $\theta$, I won. Suppose $F(0) > 0$; then, by linearity, $F(\pi/2) = - F(0) < 0$. Then, by Bolzano, I won.
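(To spell out the linearity step: since $R_{\pi/2}e_1 = e_2$ and $R_{\pi/2}e_2 = -e_1$, we have $$F(\pi/2)=\langle T e_2,\,T(-e_1)\rangle=-\langle T e_1,\,T e_2\rangle=-F(0).$$)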
[Edit] For the uniqueness part:
Suppose $\{v,w\}, \{x,y\}$ (all of norm 1) both satisfy what's said in the statement. Let's prove the transformation is conformal, then.
Let $a,b$ be some vectors. We can write $a=a_1 v + a_2 w$, and $b = b_1 v + b_2 w$.
$\langle a,b\rangle = a_1 b_1 + a_2 b_2$
$\langle Ta,Tb\rangle = a_1 b_1 ||Tv||^2+ a_2 b_2 ||T(w)||^2$
Let's see, then, that $||Tv||^2 = ||Tw||^2$.
We can write $x,y$ in terms of $v,w$:
$x=x_1 v + x_2 w$, and $y = y_1 v + y_2 w$
As $\langle T(x),T(y)\rangle = 0$ (and $\langle T(v),T(w)\rangle = 0$), we get that $x_1 y_1 = - \frac{x_2 y_2 ||T(w)||^2}{||T(v)||^2}$ (1)
Then, $||Tv||^2 = ||Tw||^2$ iff $x_1 y_1 = -x_2 y_2$, which happens because $x,y$ are orthonormal. (I'm using here that $\{v,w\}, \{x,y\}$ are not the same pair of vectors and hence (1) is not the equation 0 = 0).
The end!
| |
Q:
Passed argument/parameter in function is still being changed after removing the reference/alias
I've spent the last 2 hours on this and I've probably read every question on here relating to variables being passed to functions. My issue is the common one of the parameter/argument being affected by changes made inside the function, even though I have removed the reference/alias by using variable_cloned = variable[:] in the function to copy the contents across without the reference.
Here is the code:
def add_column(m):
#this should "clone" m without passing any reference on
m_cloned = m[:]
for index, element in enumerate(m_cloned):
# parameter m can be seen changing along with m_cloned even
# though 'm' is not touched during this function except to
# pass it's contents onto 'm_cloned'
print "This is parameter 'm' during the for loop...", m
m_cloned[index] += [0]
print "This is parameter 'm' at end of for loop...", m
print "This is variable 'm_cloned' at end of for loop...", m_cloned
print "m_cloned is m =", m_cloned is m, "implies there is no reference"
return m_cloned
matrix = [[3, 2], [5, 1], [4, 7]]
print "\n"
print "Variable 'matrix' before function:", matrix
print "\n"
add_column(matrix)
print "\n"
print "Variable 'matrix' after function:", matrix
What I'm noticing is that the parameter 'm' in the function is changing as if it is an alias of m_cloned - but as far as I can tell I have removed the alias with the first line of the function. Everywhere else I have looked online seems to suggest that this line will make sure there is no reference to the parameter - but it's not working.
I'm sure I must have made a simple mistake but after 2 hours I don't think I'm going to find it.
A:
It looks like you need a deepcopy, instead of a shallow copy, which is what [:] gives you:
from copy import deepcopy
list2 = deepcopy(list1)
Here's a longer example comparing the two types of copy:
from copy import deepcopy
list1 = [[1], [1]]
list2 = list1[:] # while id(list1) != id(list2), its items have the same id()s
list3 = deepcopy(list1)
list1[0] += [3]
print list1
print list2
print list3
Outputs:
[[1, 3], [1]] # list1
[[1, 3], [1]] # list2
[[1], [1]] # list3 - unaffected by reference-madness
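As an extra illustration (not part of the original answer), checking the identities of the inner lists shows exactly where the shallow copy still shares objects:

from copy import deepcopy

list1 = [[1], [1]]
shallow = list1[:]      # new outer list, but the inner lists are shared
deep = deepcopy(list1)  # new outer list and new inner lists

print(shallow[0] is list1[0])   # True  - same inner list object
print(deep[0] is list1[0])      # False - deepcopy created a fresh inner list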
| |
The challenges facing the High Street have been much in the news over the last few years, with the failure of many shops and restaurant chains. This is the latest manifestation of a process that has been changing the way we shop for many years. It was not so long ago that the corner shop was the main source for day to day provisions. A corner shop could be found on the majority of London streets and the outline of many of these remain to this day, and for this blog post I went in search of one we photographed in 1986 to see what remained. This is the Walcot Stores in Walcot Square, Lambeth:
The same view today:
The Walcot Stores has long closed, but parts of the painted sign and the shop front remain. The conservation appraisal for the area records about the shop front that “Despite some surviving elements, unfortunately the door and much of the timber has been replaced and inappropriately stained.”
Looking into the original shop reveals a mix of goods. Tins of soup, boxes of fruit juice, packets of biscuits and in the left hand window, household cleaning products, toothpaste and soap.
Walcot Square is in Lambeth. Follow the Kennington Road, and just after the grounds of the Imperial War Museum are a couple of streets that lead into an early 19th century development centered on Walcot Square.
In the following map, Kennington Road is the vertical orange road in the centre. The Imperial War Museum is at the centre top, and just below this is the Walcot Estate centred around two triangular greens, just to the right of Kennington Road.
Map © OpenStreetMap contributors.
The Walcot Estate is one of those places that can be found across London which have a very distinctive character and are different to their surroundings. I have written about similar estates before, such as the Lloyd Baker estate.
The land in this part of Lambeth was once owned by the Earls of Arundel, then the Dukes of Norfolk. In 1559, Thomas, Duke of Norfolk sold a parcel of land, and in 1657 this land was sold to Edmund Walcot.
There must have been a family connection to the area as Edmund’s uncle, Richard Walcot also owned some local land which Edmund inherited.
Edmund left the land in trust to St. Mary Lambeth and St. Olave, Southwark, for the benefit of the poor. At this time the majority of the land was undeveloped, apart from some limited building, it was mainly used for agricultural purposes.
The land left to St. Mary and St. Olave went through a number of partitions, enabling the two parishes to own their specific block of land, with the line of the present day Kennington Road roughly forming the border between the partitioned land.
Building commenced in the early years of the 19th century. Walcot Square was constructed between 1837 and 1839.
The name Walcot Square does not fit the area. Walcot recalls Edmund Walcot who left the land in trust to the parishes, however the area is not really a square.
The grassed section in the middle is more a triangle than a square, whilst the name also extends along the road that leads down to Kennington Road and up towards Brook Drive.
The Walcot Stores were at the eastern end of Walcot Square, towards Brook Drive. After photographing the old store front, I walked along Walcot Square towards the central grassed area.
In the following photo, the old shop is in the building on the left. In the front of the single storey building is a very old looking stone.
A closer examination reveals a stone with a number of inscriptions. The date appears to be 1779.
The stone pre-dates the construction of Walcot Square so was probably a boundary stone, possibly to show the land had been partitioned between the parishes of St. Mary and St. Olave.
I checked John Rocque’s 1746 map to see if I could find any obvious boundary (assuming that the boundary was in place 33 years before the date on the stone).
The following map extract shows the area in 1746.
Some of the main roads that exist today could also be found in 1746. Part of Kennington Road along with Hercules Road, Lambeth Road and Lambeth Walk.
I have marked roughly where Kennington Road extends today and also the area occupied by the Walcot Estate, which in 1746 mainly consisted of fields.
There is a very hard boundary running diagonally down from Lambeth Road with cultivated agricultural land on the left and open fields on the right. This may have been the partition between St. Mary and St. Olave’s land, however the boundary does not look to be where the stone is to be found today – this is assuming that the stone is in the original location, Rocque’s map is correctly drawn and scaled, and that my interpretation of where the future extension to Kennington Road would run, and the future location of the Walcot Estate is correct.
Again, one of the problems with my blog where I worry I do not have enough time to research the detail – however I found it fascinating to find that a boundary stone that pre-dates the building of the entire estate can still be found.
Walking from the location of the shop, a short distance along Walcot Square brings us to the main area that could be considered a square. Here, the view is looking in the direction of Kennington Road with one of the corners of the triangle of grass, and the houses of Walcot Square on either side.
The square and housing were built between 1837 and 1839. The Kennington Road extension had already been built, along with the large houses that faced onto Kennington Road, so Walcot Square was the typical expansion of building back from the main roads into the fields.
The square consists of two and three storey houses, along with a small basement. The ground floor is raised so a small set of steps leads up from the street to the front door.
The north western corner of the square ends in a short stub of a street. The large gardens of the houses facing onto Kennington Road block the street. On the left is a rather attractive single storey building with a part basement below. The unusual design is probably because of the limited space behind, as this building could not intrude into the gardens of the house on the left.
I wonder if it was the original intention to purchase the houses and land that block the extension on to Kennington Road and extend the above street and Walcot Square housing directly onto Kennington Road?
Leaving Walcot Square, walk along Bishop’s Terrace and you will find St. Mary’s Gardens (which must have been named after St. Mary, Lambeth, one of the parishes that had received the land in trust from Edmund Walcot). The layout is almost a mirror image of Walcot Square, with a triangular central garden.
St. Mary’s Gardens was built at around the same time as Walcot Square so is of 1830s design and construction.
Compared to the continuous stream of traffic along Kennington Road, the streets of the Walcot Estate are quiet, and apart from the street parking, the general appearance of Walcot Square and St. Mary’s Gardens are much the same as when the estate was completed in the 1830s.
Walking back to Kennington Road, there is another of the typical 19th century standards for estate building. As with shops, pubs were also a common feature at the end of a terrace, or corner of a street, here at the junction of Bishop’s Terrace and Kennington Road:
The pub was until very recently the Ship, but appears to have had a name change to The Walcot 1830 – a clear reference to the construction decade of the adjacent estate.
Lost Prefabs
Nothing to do with the Walcot Estate, but these must be in the local area.
The photo adjacent to the photo of the Walcot Stores on the strip of negatives from 1986 shows some prefab houses:
Whilst the prefabs have almost certainly long gone, I was hoping that the distinctive building in the background could still be found, however after a lengthy walk through the streets of this part of Lambeth, I could not find the building in the background, so the street in which these prefabs were to be found in the 1980s remains a mystery.
It would be great to know if any reader recognises the location. | https://alondoninheritance.com/london-streets/walcot-square-lost-prefabs/ |
Hmm… I think home econ cooking has to be at least basic, balance nutrients, rather simple to cook and creative. Or, am I thinking too complicated (laughing)?!
Well, be it cooking for home dinner, or for your home econ lesson, try this mushroom recipe. It’s simply flavoursome! This dish has most of the above criteria, not just that, it is very delicious and compatible to eat with just a bowl of hot steamed rice.
Anyway, most of the edible mushroom species are high in fiber and provide vitamins such as Vitamin Bs, Vitamin C, and minerals such as iron, selenium, phosphorous and potassium. It is good to have mushroom in our diet.
This mushroom dish is good as a vegetarian dish, too! Simply replace the oyster sauce with vegetarian mushroom sauce, and also forgo the chopped garlic if you are on a strict vegetarian diet, or for religious purposes.
The preparation and cooking time is relatively short, too! You may have it ready within 15 minutes altogether!
See below:
Ingredients
1 bowl of Shimeji mushrooms i.e. a group of fresh Asian mushrooms which consist of:
enokitake aka golden needle mushrooms, trimmed and root edge cut away
buna-shimeji mushrooms,
bunapi-shimeji mushrooms,
king oyster mushrooms, thick sliced, and
shiitake mushrooms, thick sliced.
3 – 4 white button mushroom, thick sliced
½ cup of canned button mushroom, whole
2 cloves of garlic, chopped
1 slice of ginger, sliced into strips
1 tablespoon of oyster sauce (or vegetarian oyster sauce)
1 tablespoon of Shao Hsing Hua Diao Rice Wine (花雕酒)
Sprinkles of sesame oil
1 tablespoon of water
1 teaspoon of vegetable oil
Method
1. Heat oil in wok over high heat. Place garlic and ginger, stir-fry for 30 seconds, or until fragrant.
2. Reduce heat to medium fire. Add mushrooms, sprinkle some water, and stir fry for 1 min.
3. Add in oyster sauce, rice wine and sprinkle sesame oil. Stir well, for another 1 min, or until ingredients are tender. Heat off. Serve with hot steamed rice.
A low-calorie mushroom dish: it counts less than 100 kcal for a regular portion.
Tips: Sprinkle 1 more tablespoon of water before adding the seasoning if the mushrooms are too dry, or if more sauce is preferred.
*Get a packet of pre-packed Shimeji mushrooms (an assorted fresh Asian mushroom pack... see picture on right), which are available at most large supermarkets and hypermarkets. Look around the Vegetables or Organic Food section.
*Increase the amount of oyster sauce if a saltier taste is preferred.
*Use vegetarian mushroom oyster sauce instead, if cooking this as a vegetarian dish.
A. Five sets of:
5 Bench Press @ moderate-heavy
Rest 2-3 minutes
B. Three sets, not for time, of:
60 seconds of Kip Progression Practice
10-20 Tuck-Ups (or V-Ups)
5-15 Ring Rows
C. Four sets, for times, of:
200m Row
10 Push Press
Fitness Level 2
A. Muscle-Up Progressions
3 x 20 Kip Swings
3 x 10 Box Extensions
3 x 10 Seated MU with Band
B. Five sets of:
3-4 Bench Press
Rest 60 seconds
Max Strict Pull-Ups
Rest 2-3 minutes
C. Four sets for times of:
200m Row
10 Shoulder-to-Overhead (135/95lbs)
Rest 2-4 minutes
*Focus on high effort each set with lots of recovery in between. Goal is to work on the efficiency of Shoulder-to-Overhead. All sets should be unbroken. | https://www.crossfitqueenstreet.com/wod/2015/1/26/tuesday-january-27th-2015 |
Tacoma Yacht Club is the home of Fleet #8 of the Cal 20 Class Association. The California 20 sailboat is a sturdy and fun sailboat that is easy to sail, but challenging to master. Although originally built as racer/cruisers, our Cal 20s are used primarily for one design racing. The Cal 20 is Tacoma Yacht Club's primary keelboat one design class. Racing with other Cal 20s offers great tactical and strategic competition and, we feel, is a lot more fun than handicap racing.
We meet monthly, on the third Friday of each month at the Tacoma Yacht Club Bar, to talk about our next race, social event, boat maintenance, or just hang out and shoot the breeze.
For more information please contact Fleet Captain, Lance Lindsley at 253-212-6979. | https://www.tacomayachtclub.org/Mobile/About_Us/Cal_20_Fleet_8 |
The ubiquity of the Internet means that users’ demands regarding Internet-enabled services are also on the rise. In response to these demands, companies are coming up with newer technologies, not only to deliver these services, but also to enhance user experiences. In every industry and sector, there is an increasing push towards providing better customer experiences (CX), combined with the realisation that this can be made possible by engaging users in a personalised manner. In fact, the personalisation of user experiences for an “audience of one” has become a major theme in the development and growth of Internet-based services.
Earlier engagement formats such as web and app were structured formats consisting of screens, tabs and buttons – they require humans to behave like computers; conversational interfaces are natural and intuitive – forcing computers to behave like humans. In that sense, conversational interfaces enable the most advanced form of customer engagement.
Natural Language Processing
Natural Language Processing (NLP) is a branch of Machine Learning that deals with understanding and processing human languages by machines. In essence, this means that a machine can understand languages such as spoken and written English (and other languages), and carry out relevant and appropriate actions based on this literal and contextual understanding.
Types of conversational interfaces
There are two main types of conversational interfaces.
- Rules-based conversational Interfaces: A rules-based interface is a user interface in which the user and the software follow a predetermined set of rules. There are specified inputs, to which there are specified outputs. There is no processing of language, or interpretation of the user input, involved. This can also be thought of as a "multiple-choice interface" where the user is given a choice among multiple inputs that can be entered, and based on these choices, further choices are provided until the user's request is resolved (see the short sketch after this list). The amount of user-specific personalisation involved in this type of conversational interface is low.
- AI-based conversational Interfaces: In this type of interface, a large amount of user-specific personalisation is involved. AI is used to make the program understand the language of the user. Theoretically, the user can provide any input, and the machine will provide an output based on this understanding, so there are no pre-set rules or a limited set of choices. In other words, such an interface is highly dynamic, and vastly capable of mimicking human language and understanding capability.
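To make the contrast concrete, here is a deliberately minimal, hypothetical Python sketch of a rules-based flow; the menu options and wording are invented for illustration only. An AI-based interface would replace the fixed lookup with a language-understanding model rather than exact matches.

# A toy rules-based "multiple-choice" flow: only exact, predefined inputs are recognised.
MENU = "Reply 1 for opening hours, 2 for order status, 3 to talk to an agent."
RULES = {
    "1": "We are open 9am to 6pm, Monday to Saturday.",
    "2": "Please share your order number and we will look it up.",
    "3": "A support agent will call you back within 24 hours.",
}

def rules_based_reply(user_input):
    # no language understanding: anything outside the menu falls back to the menu itself
    return RULES.get(user_input.strip(), MENU)

print(rules_based_reply("2"))
print(rules_based_reply("when are you open?"))   # falls back to the menu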
The Applications of a conversational interface
Conversational interfaces are considered the future of Internet-based services, capable of supporting unlimited applications that can span across virtually every field and domain of the human experience.
Currently, conversational interfaces are most commonly used in virtual customer care. Conversational chatbots using WhatsApp for customer support can help users gain more knowledge about a product, and clarify their doubts quickly without waiting for a human agent. Such chatbots were initially rules-based, but have lately begun to have AI capabilities to support greater customisation and better CX.
Another application of conversational interfaces is in virtual assistants. As the Internet of Things (IoT) network expands, and more and more devices are added to it to create a hyper-connected global network, so has the number of companies making apps and tools compatible with these devices. IoT devices give companies yet another avenue to engage with customers and prospective customers, and establish a memorable and meaningful position in their lives.
The Advantages of a conversational interface
Implementing a conversational interface has many advantages. Let’s take a look!
Provide personalised CX
Every consumer interacts with multiple brands every single day – whether they realise it or not. Each brand has a different identity, and a different position in that consumer’s life. However, most of these brands go unnoticed in the noisy and crowded “buyer’s market”. Only brands that personalise their offerings according to the customer’s needs, demands and habits are noticed and then supported.
This is because personalisation is a form of convenience. A personalised product does not need to be configured again and again – which most customers appreciate. So, if a website or WhatsApp chatbot is able to quickly and correctly answer your customers’ questions, you can not only give them the convenience they demand, but also significantly boost the trust quotient between them and your brand.
Stay connected with the customer across channels
Perhaps the greatest advantage of a conversational interface is that it allows you to stay in touch with a current or prospective customer in more ways than one. This is a great way to stay top-of-mind, which is what every brand aspires to achieve. A rules-based chatbot on your website is likely to find few takers, with customers preferring to call a helpline and speak to a real human instead. Additionally, it’s also difficult for customers to chat for extended time periods with a chatbot, since they might not have their browser open all the time, or their browser notifications enabled. WhatsApp and other messaging chatbots solve this issue with contextual and meaningful conversations, powered by conversational messaging platforms. People are much more likely to be available on WhatsApp and messaging channels throughout the day, which ensures that they can converse with your company’s virtual representative at pretty much any time. This is a great way to both resolve their queries and win their hearts and minds.
Easy usage
A disadvantage of traditional consumer interaction methods such as IVR and customer care helplines is that they’re often tedious to use. You had to be on the phone for a long time, and listen to a long menu of options before providing an input. Often, the menu options would simply not match what you wanted to ask, and you had to resort to email instead, which took even longer to yield a response. While rules-based chatbots did solve the problem of long wait times and inconvenient methods of conversation, the problem of limited options didn’t go away. AI-based chatbots using WhatsApp for customer support solve this problem. These chatbots can understand most of what you ask, and provide a reasonable reply that usually matches the question. These chatbots are also continuously “learning” and improving thanks to Machine Learning. As Machine Learning capabilities grow, so will these chatbots’ abilities.
Conclusion
Conversational interfaces are opening a number of new avenues for customer interactions that improve service delivery and enhance their brand experiences. As customer demands for personalised services increases, so will the capabilities of these interfaces. Brands that can successfully implement these interfaces in their customer support ecosystem will benefit and establish a stronger competitive position over brands that don’t.
Blogs you will want to share. Delivered to your inbox. | https://www.gupshup.io/resources/blog/how-a-conversational-interface-can-improve-your-customer-engagement-outcomes |
Showing results for tags 'numbering'.
Found 6 results
-
I have some paragraphs that I want to give a number to but I want the number to be to the left of the text, rather than 'within' the text. It might be easier to explain what I want using an image, so please see the attached where I have used five different techniques (denoted by the numbers in the black circles on the right). The frames are shown to give an idea of what's there. Technique 1 was to use a simple text frame for the text and a separate artistic text layer for the number. Easy to create but awkward having to make sure that the number layer is kept aligned with the text. Technique 2 was to manipulate various spacings of the paragraph. Reasonable, if a little awkward to do, but the baseline of the number is always the same as the text so the top of the number isn't aligned with the top of the text (which is what I want). Technique 3 was to use a two-column text frame. Some manipulation of column widths and gutters was needed which wasn't too bad but having more than one paragraph per text frame is awkward. Technique 4 was to use the Numbering feature and manipulate the spacings. More awkward than the other techniques as the x-position of the text frame needed to be placed manually (no way - that I could find - to line-up the left-hand side of the body text with the margin), and it has the same problem as technique 2 with the number having the same baseline as the text. Technique 5 was to use a Drop Cap and manipulate some spacings. Not ideal at all, and I can't have a dot after the number, and the size of the number has to be a multiple of the line height. Is there a better way of doing this? Plus, does anyone know what this sort of thing is properly called? (If I knew the right search term I might have more luck finding something.)
-
Windows 10 Home 1903, Publisher 1.7.1.404 Sometimes – on at least one occasion where I definitely witnessed it, but quite possibly other occasions where things didn’t seem to be working right – setting the numbering formatting of paragraphs requires that the document be closed and re-opened before the changes applied can be seen on-screen. See here for a bit of background: https://forum.affinity.serif.com/index.php?/topic/90091-consecutive-numbering-in-text-styles/&tab=comments#comment-486709 Sorry I can’t supply a definite replicable workflow but the number of steps required to get to the position where the problem can be seen is high.
-
Bullets and Numbering > Outline Styles
Petar Petrenko posted a topic in Feedback for Affinity Publisher on Desktop: Hi, Publisher is missing Outline styles inside "Text > List". Yes, something similar can be done with "Increase/Decrease Level", but I can't find the schemes shown in the attached pictures, which I created in Quark. BTW, finding the exact forum/topic could be much easier IMO if it is reorganized like in the pictures.
-
1.7 Beta (latest) number field not working
cramosm posted a topic in Designer Beta on Mac: I am using a MAC desktop, OS Mojave. After downloading the latest 1.7 Beta version I opened a document started with former versions and the page numbers were off. On the side panel the numbers under the pages are different from the numbers on the main view of the pages. This is important, since pages on the left should be even numbers and on the right, odd numbers. On the side panel they are OK. I think this was not a problem before.
-
Is varied automatic numbering possible?
nitro912gr posted a topic in Feedback for Affinity Publisher on Desktop: Hello and thank you for your effort to give us Publisher! From time to time I print carbonless paper to make invoice books or numbered coupons. While simple automatic page numbering is enough for paper sizes from A5 and above, there are times when I have to fit multiple smaller pages on one big page and need each to start and stop at different numbers. After a long time, and thanks to the help of many people, I managed to do this with InDesign, and I wonder if it is possible to do it with Publisher too. I will describe how I did this with InDesign because it is not exactly an InDesign feature but more of a creative way of using lists in InDesign. So maybe after someone recreates the result they can tell me if it can be done in Publisher. It is not simple, so a big thanks to anyone who may even try to see if this is possible in Publisher.
-
Page Menu - Numbering Chaos
BalsallHeathen posted a topic in Publisher beta on Windows: There appears to be a serious bug in the page numbering as shown in the image below. My displayed page numbers start at 'page 1' followed by 'page 2' as would be expected, but then rise in counts of two, with a 'page 54' appearing seemingly at random throughout. Weirdly, if I toggle single pages to facing pages and then back to single pages, the page numbers shown double each time, first rising by a count of four, then by eight, then sixteen and so on. And each time more pages shed their original number and become yet another 'page 54'. Is this a bug or am I doing something idiotic?
Rivers are natural streams of water. They flow freely. Even strong mountains cannot stop their persistence. A river flows by a force called a current. The path of a river is known as its course.
Near the river source, the current is usually strong. The amazing fact is that rivers can flow through valleys, down mountains and can create gorges or canyons. Some flow seasonally while others flow throughout the year.
Read through this article and learn the amazing facts about world rivers.
The Longest River Is In Africa
The longest river is the river Nile. Eleven countries share it. It stretches from Kenya through Ethiopia, to Egypt. The river Nile is 6,853 kilometers long.
The Most Polluted River Is In New York
The Hudson River is a 315-mile river flowing through eastern New York from the Adirondack Mountains. The river flows southward to the Hudson Valley. It is the most polluted river in the world. The organisms in the stream are learning how to adapt to the dirty water.
Amazon River Discharges The Highest Amount Of Water
The Amazon River originates in the Andes in Peru and runs through to Brazil. It is the second longest river on earth. The river has the highest volume of water in the world.
The river is also the widest river. Without flooding, it can stretch to a width of 11 km at its widest point. It carries water from the tropical Amazon forest.
Mekong River Erupts With Fireballs
The river erupts with hundreds of fireballs in Thailand. The locals believe that the river has a serpent that spits fire. The snake is called Naga. The natural phenomenon is called Naga fireballs.
Sac Actun Is The Longest Underground River
The longest underground river flows beneath the Yucatán Peninsula. It is also called the ‘White Cave.’ The river flows for 153 km through an underground maze of limestone caves.
The Black Sea Is The Longest Undersea River
The river is 60 km (37 miles) long. It has a depth of 35 meters and a width of 970 meters. The river is shorter than the Amazon river. However, it carries ten times the water carried by the Amazon.
Deepest River In The World Is In Congo
Congo River was formerly called Zaire River. Its depth is over 720 feet (220 meters). It is the second largest river in the world by the level of water discharge. The river is also the second longest in Africa. Congo river stretches through the great Congo forest.
Onyx River Is The Longest Meltwater Stream
Onyx River is in the Antarctic, the frozen continent. The river flows westward from the Wright Lower Glacier through the Wright Valley. It only flows during a few months of the Antarctic summer.
Onyx Is 32 Kilometers And The Longest In Antarctica
Conclusions
Rivers are amazing physical features. They also serve as life support systems in different parts of the world. They not only support animals and plants, but also the people around them. For example, Nile river water is used by 11 countries for irrigation. The river also supplies people with household water.
[Exclusive] Asynchronous Compute Investigated On Nvidia And AMD in Fable Legends DX12 Benchmark, Not Working on Maxwell
What Does This All Mean?
Conclusions
It’s quite possible that the underlying game itself was optimized with the Xbox One in mind, which has significantly fewer async compute units than current high-end PC hardware. Thus only a few in-game graphical assets are actually being pushed to the async compute queue, owing to the much smaller amount of resources available. And we’re not entirely sure what kind of requests from the engine are being given to the async compute queue, only that there’s clearly some measure of rendering happening there. These tasks could be very efficient in the normal pipeline for NVIDIA, owing to a similar performance to AMD’s hardware. Any number of things could be going on under the hood.
The official statement is that compute is a large part of the workload, and that they’re able to offload more than just lighting to that particular queue. This could potentially help alleviate bottlenecks and shortcomings. What we actually see, however, is that async compute is barely used at this point. What’s actually being rendered down there is unknown, but whatever it is, it’s not very much of the total work output.
Similarly, CPU usage seems to be very mixed. Skylake provides a great showing and certainly is not a limiting factor for any high-end GPU with this benchmark. Utilization, though, is far higher across the board than with Haswell-E, which has significantly more threads at its disposal. It’s curious how the performance is actually largely the same. What’s going on underneath is a mystery, though both are handling their work without issue, with similar performance results. So at least there’s that.
Now, this is just an analysis of a closed benchmark that isn’t indicative of the behavior of the actual final build of the game. With more action on screen and more assets to light, render and make pretty, we could see an infusion of work in that compute queue, making good use of the async compute capabilities in DX12. At the moment, however, it’s not actually there.
I’m excited for the final game, though, because the final product will indeed likely take advantage of the different DX12 components that we’re all looking forward to. Async compute is one such technology that has the ability, if implemented fully and properly, to provide more complex scenes. Just imagine a resurgence of a texture compression algorithm similar to S3TC that can be done faster and more efficiently in the compute queue instead of in memory. We could have larger, and better looking textures without much of a performance hit. Not to mention all the other pretties we could have on screen. | https://wccftech.com/asynchronous-compute-investigated-in-fable-legends-dx12-benchmark/4/ |
Number of critical patients rises at local hospitals
Wednesday
Local hospitals are seeing a spike in patients with the coronavirus, one factor that led to all critical care beds in Gadsden being full at one point earlier this week.
During an update on COVID-19 at the Gadsden City Council meeting, EMA director Deborah Gaither said that as of Tuesday morning, there were no critical care beds available either at Gadsden Regional Medical Center or Riverview Regional Medical Center.
"I don’t know what kind of patients are in them," Gaither, who also said that the status of those beds changes by the minute as patients’ conditions improve and they are stepped down, explained.
During a weekly briefing with elected officials Wednesday morning, Josh Hester, chief operating officer for Gadsden Regional, said they had four critical care beds available.
He said the hospital has 38 ICU patients, including 10 with COVID-19 and five on ventilators.
Overall, the hospital has 21 patients in-house with the virus, a number that has nearly doubled from last week.
Last Thursday, Gadsden Regional CEO Corey Ewing spoke at a City of Gadsden press conference and said the hospital had 12 COVID-19 patients.
Susan Moore, director of marketing and business development at Riverview said Wednesday that the hospital has a "very fluid situation" with regards to its critical care capacity, but they do have beds available.
She said the hospital could be maxed out but then discharge or step down critical care patients shortly after that.
Moore said Riverview has 16 dedicated critical care beds, but they have the ability to create beds for critical care patients in other areas if needed.
"This second round, it appears that patients are not as critical as the first round, but we do have some critical patients on ventilators," Moore said, and added that the hospital has one floor dedicated to COVID-19 patients.
Statewide, the number of patients hospitalized with COVID-19 has been surging in July.
According to the Alabama Department of Public Health, Alabama had around 700 confirmed hospitalized patients during the last week of June, and that number reached 1,353 Tuesday.
But there is another factor besides the coronavirus when it comes to increasing numbers of people in critical care units.
"We’re seeing an influx in not only COVID-19 patients, but in others who have put off care and are seeing higher acuity," Moore said. Acuity is the medical term for the severity of a patient’s illness.
Ewing made that point last week, saying that it seems like many people are afraid to visit the hospital for fear of catching the virus.
"Please do not delay care if you have a health need," he said.
Hospitals are taking precautions to deal with the virus and make sure everyone, including visitors, patients and staff, are safe, Ewing said.
Moore also emphasized the need for people to wear masks.
"It’s definitely not something that’s just going away, and people need to wear their masks when they’re out in public," she said.
The local briefing with officials came before Gov. Kay Ivey announced a statewide mandatory mask order, but Hester also advocated wearing masks.
At the hospital, he said, they talk about "universal" mask wearing. "That means mandatory," he said. Hester said the hospital staff is encouraged to be very aware of that when they are out in the community.
At the hospital, Hester said, they have upped screening processes, and locked down some doors to facilitate that. He said the hospital is "marking" people who’ve been screened.
He noted the changing criteria in screening. "I think we’ve hung our hat on fever as the symptom," Hester said, but only 10% of people who later test positive have shown fever as a symptom. Respiratory issues and even diarrhea are being found to be telling symptoms of COVID-19.
GRMC is seeing more patients testing positive on the outpatient side, he said.
Moore said the staff of Riverview are taking the situation minute-by-minute, and it’s been a challenge.
"It’s definitely been a trying time," she said.
Moore said travel nurses are being brought in to help out with shifts, and the entire staff has been hard at work throughout the pandemic.
"Certainly keep us in your thoughts and prayers," Moore said.
Times staff writer Donna Thornton contributed to this story.
Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.
Introduction
Software engineering productivity is notorious for being difficult to improve. Domain Engineering (DE) is a promising new approach for improving productivity by simplifying coding tasks that are performed over and over again and by eliminating some tasks, such as design [16,5]. DE practitioners believe that it can improve productivity by a factor of between two and ten, although there has been no quantitative empirical support for such claims.
Quantifying the impact of a technology on software development is particularly important in making a case for transferring new technology to the mainstream development process. Rogers cites observability of impact as a key factor in successful technology transfer. Observability usually implies that the impact of the new technology can be measured in some way. Most of the time, the usefulness of a new technology is demonstrated through best subjective judgment. This may not be persuasive enough to convince managers and developers to try the new technology.
In this paper we describe a simple-to-use methodology to measure the effects of one DE project on software change effort. The methodology is based on modeling measures of small software changes or Modification Requests (MRs). By modeling such small changes we can take into account primary factors affecting effort such as individual developer productivity and the purpose of a change. The measures of software changes come from existing change management systems and do not require additional effort to collect.
We apply this methodology to a DE project at Lucent Technologies and show that using DE increased productivity by a factor of about four.
Sections 2 and 3 describe DE in general and the specifics of the particular case study. Sections 4 and 5 describe a general methodology for estimating the effects of DE and the analysis we performed on one DE project. Finally, we conclude with a related work section and a summary.
Domain Engineering
Traditional software engineering deals with the design and development of individual software products. In practice, an organization often develops a set of similar products, called a product line. Traditional methods of design and development do not provide formalisms or methods for taking advantage of these similarities. As a result, developers resort to informal means of reusing designs, code and other artifacts, massaging the reused artifact to fit new requirements. This can lead to software that is fragile and hard to maintain because the reused components were not meant for reuse.
There are many approaches to implementing systematic reuse, among them Domain Engineering. Domain Engineering approaches the problem by defining and facilitating the development of software product lines (or software families) rather than individual software products. This is accomplished by considering all of the products together as one set, analyzing their characteristics, and building an environment to support their production. In doing so, development of individual products (henceforth called Application Engineering) can be done rapidly at the cost of some up-front investment in analyzing the domain and creating the environment. At Lucent Technologies, Domain Engineering researchers have created a process around Domain Engineering called FAST (Family-oriented Abstraction, Specification and Translation). FAST is an iterative process of conducting Domain Engineering and Application Engineering, as shown in Figure 1.
In FAST, Domain Engineering consists of the following steps:
Application Engineering is the process of producing members of the product line using the application engineering environment created during Domain Engineering. Feedback is then sent to the Domain Engineering team, which makes necessary adjustments to the domain analysis and environment.
The AIM Project
Lucent Technologies' 5ESS™ switch is used to connect local and long distance calls involving voice, data and video communications. To maintain subscriber and office information at a particular site, a database is maintained within the switch itself. Users of this database are telecommunications service providers, such as AT&T, which need to keep track of information, such as specific features phone subscribers buy. Access to the database is provided through a set of screen forms. This study focuses on a domain engineering effort conducted to reengineer the software and the process for developing these screen forms.
A set of customized screen forms, corresponding to the 5ESS features purchased, is created for each service provider who buys a 5ESS switch. When a service provider purchases new features, its screen forms have to be updated. Occasionally, a service provider may request a new database view to be added, resulting in a new screen form. Each of these tasks requires significant development effort.
In the old process, screen forms were customized at compile time. This often meant inserting #ifdef-like compiler directives into existing screen specification files. Forms have had as many as 30 variants. The resulting specification file is hard to maintain and modify because of the high density of compiler directives. In addition, several auxiliary files specifying entities such as screen items need to be updated.
The Asset Implementation Manager (AIM) project is an effort to automate much of this tedious and error-prone process. The FAST process was used to factor out the customer-specific code and create a new environment that uses a data-driven approach to customization. In the new process, screen customization is done at run-time, using a feature tag table showing which features are turned on for a particular service provider. A GUI system was implemented in place of hand-programming the screen specification files. In place of screen specification files, a small specification file using a domain specific language is stored for each individual screen item such as a data entry field. This new system also automatically updates any relevant auxiliary files.
Methodology
We undertook to evaluate the impact of the AIM project on software change effort. Collecting effort data has traditionally been very problematic. Often, it is stored in financial databases that track project budget allocations. These data do not always accurately reflect the effort spent for several reasons. Budget allocations are based on estimates of the amount of work to be done. Some projects exceed their budgets while others underuse them. Rather than changing budget allocations, the tendency of management is to charge work on projects that have exceeded budgets to those that haven't.
We chose not to consider financial data, but rather to infer the key effort drivers based on the number and type of source code changes each developer makes. This has the advantage of being finer grain than project-level effort data. Analyzing change-level effort could yield trends that would be washed out at the project level because of aggregation. In addition, our approach requires no data to be collected from developers. We use existing information from the change management system such as change size, the developer who made the change, time of the change, and purpose of the change, to infer the effort to make a change.
Change Data
The 5ESS source code is organized into subsystems with each subsystem further subdivided into a set of modules. Each module contains a number of source code files. The change history of the files are maintained using the Extended Change Management System (ECMS) , for initiating and tracking changes, and the Source Code Control System (SCCS) , for managing different versions of the files.
Each logically distinct change request is recorded as a Modification Request (MR) by the ECMS. Each MR is owned by a developer, who makes changes to the necessary files to implement the MR. The lines in each file that were added, deleted and changed are recorded as one or more ``deltas'' in SCCS. While it is possible to implement all MR changes restricted to one file with a single delta, in practice developers often perform multiple deltas on a single file, especially for larger changes. For each delta, the time of the change, the login of the developer who made it, the number of lines added and deleted, the associated MR, and several other pieces of information are recorded in the ECMS database. This delta information is then aggregated for each MR. A more detailed description of how to construct change measures is provided in prior work.
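As a rough illustration of this roll-up, the sketch below aggregates delta-level records into per-MR measures. The field names and example values are assumptions for illustration, not the actual ECMS/SCCS schema.

```python
from collections import defaultdict
from datetime import date

# Hypothetical delta records; in the study these come from ECMS/SCCS.
deltas = [
    {"mr_id": "MR-1", "developer": "dev_a", "file": "screens/form1.spec",
     "date": date(1998, 3, 2), "lines_added": 12, "lines_deleted": 3},
    {"mr_id": "MR-1", "developer": "dev_a", "file": "screens/form2.spec",
     "date": date(1998, 3, 5), "lines_added": 4, "lines_deleted": 0},
    {"mr_id": "MR-2", "developer": "dev_b", "file": "db/view.c",
     "date": date(1998, 4, 1), "lines_added": 30, "lines_deleted": 7},
]

def aggregate_mrs(deltas):
    """Roll delta-level records up to MR-level measures."""
    mrs = defaultdict(lambda: {"developer": None, "files": set(), "n_deltas": 0,
                               "lines_added": 0, "start": None, "end": None})
    for d in deltas:
        mr = mrs[d["mr_id"]]
        mr["developer"] = d["developer"]
        mr["files"].add(d["file"])
        mr["n_deltas"] += 1
        mr["lines_added"] += d["lines_added"]
        mr["start"] = min(filter(None, [mr["start"], d["date"]]))  # earliest delta
        mr["end"] = max(filter(None, [mr["end"], d["date"]]))      # latest delta
    return dict(mrs)

for mr_id, m in aggregate_mrs(deltas).items():
    print(mr_id, m["developer"], len(m["files"]), m["n_deltas"],
          m["lines_added"], m["start"], m["end"])
```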
We inferred each MR's purpose from the textual description that the developer recorded for the MR. In addition to the three primary reasons for changes (repairing faults, adding new functionality, and improving the structure of the code), we used a class for changes that implement code inspection suggestions, since this class was easy to separate from the others and had distinct size and interval properties.
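A purpose classification of this kind can be approximated by keyword matching on the MR description. The keyword lists below are illustrative guesses, not the rules used in the original study.

```python
import re

# Illustrative keyword lists; the cited technique uses its own rules.
KEYWORDS = {
    "inspection": ["inspection", "code review"],
    "fault":      ["fix", "bug", "fail", "error", "crash"],
    "perfective": ["cleanup", "clean up", "restructur", "rework"],
    "new":        ["add", "new feature", "implement", "support"],
}

def classify_purpose(description: str) -> str:
    """Guess an MR's purpose from its free-text description."""
    text = description.lower()
    for purpose, words in KEYWORDS.items():
        if any(re.search(re.escape(w), text) for w in words):
            return purpose
    return "new"  # default when nothing matches

print(classify_purpose("Fix crash when screen form has 30 variants"))    # fault
print(classify_purpose("Implement inspection suggestions for module X"))  # inspection
```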
We also obtained a complete list of identifiers of MRs that were done using AIM technology, by taking advantage of the way AIM was implemented. In the 5ESS source, a special directory path was created to store all the new screen specification files created by AIM. We refer to that path as the AIM path. The source code to the previously used screen specification files also had a specific set of directory paths. We refer to those paths as pre-AIM paths. Based on those sets of paths we classified all MRs into three classes:
Thus, for each MR, we were able to obtain the following information,
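The path-based, three-way classification just described is simple to express in code, and each MR can then be summarized as one record carrying its owner, dates, size measures, purpose, and AIM class. The directory prefixes below are placeholders, not the actual 5ESS paths.

```python
# Hypothetical directory prefixes; the real AIM and pre-AIM paths differ.
AIM_PREFIX = "aim/screens/"
PRE_AIM_PREFIXES = ("screens/spec/", "screens/aux/")

def classify_mr(files_touched):
    """Label an MR as 'aim', 'pre_aim', or 'other' from the paths it touches."""
    if any(f.startswith(AIM_PREFIX) for f in files_touched):
        return "aim"
    if any(f.startswith(p) for p in PRE_AIM_PREFIXES for f in files_touched):
        return "pre_aim"
    return "other"

print(classify_mr(["aim/screens/field_0031.spec"]))            # aim
print(classify_mr(["screens/spec/form7.spec", "doc/notes"]))   # pre_aim
print(classify_mr(["db/views/subscriber.c"]))                  # other
```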
Modeling Change Effort
To understand the effects of AIM better, we need to know the effort associated with each change. Since direct measurement of such effort would be impractical, we developed an algorithm to estimate it from widely available change management data. We briefly describe the algorithm here; more detail is given in earlier work. Let us consider one developer. From the change data, we obtained the start and ending month of each of the developer's MRs. This lets us partially fill in a table like Table 1 (a typical MR takes a few days to complete, but to simplify the illustration we show only three MRs over five months).
|       | Jan | Feb | Mar | Apr | May | Total |
| MR 1  | 0   | ?   | ?   | 0   | 0   | ?     |
| MR 2  | ?   | ?   | 0   | 0   | 0   | ?     |
| MR 3  | 0   | 0   | ?   | ?   | ?   | ?     |
| Total | 1   | 1   | 1   | 1   | 1   | 5     |
We assume monthly developer effort to be one technical headcount month, so we have 1's as the column sums in the bottom row. The only exceptions are months with no MR activity, which have zero effort. We then proceed to iteratively fit values into the blank cells, initially dividing the column sums evenly over all blank cells. The row sums are then calculated from these initial cell values. A regression model of effort is fitted on the row sums. The cell values in each row are rescaled to sum to the fitted values of the regression model. Then the cell values in each column are rescaled to sum to the column sums. A new set of row sums is calculated from these adjusted cell values, and the process of model fitting and cell value adjustment is repeated until convergence. The code to perform the analysis has been published.
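A minimal sketch of this alternating row/column rescaling for a single developer is shown below. It hard-codes the small example table and uses a trivial stand-in (the mean of the current row sums) in place of the regression model described above, so it only illustrates the mechanics, not the actual estimator.

```python
import numpy as np

# 1 = the MR was open in that month, 0 = it was not (rows: MRs, columns: months).
active = np.array([
    [0, 1, 1, 0, 0],
    [1, 1, 0, 0, 0],
    [0, 0, 1, 1, 1],
], dtype=float)
# One technical headcount month of effort for every month with any activity.
col_totals = active.any(axis=0).astype(float)

def impute_effort(active, col_totals, iters=200):
    """Alternately rescale rows and columns of the effort table."""
    cells = active.copy()
    # Start by spreading each month's effort evenly over its open MRs.
    cells *= col_totals / np.maximum(active.sum(axis=0), 1)
    for _ in range(iters):
        row_sums = cells.sum(axis=1)
        # Stand-in for the regression fit: pull row sums toward their mean.
        fitted = np.full_like(row_sums, row_sums.mean())
        cells *= (fitted / np.maximum(row_sums, 1e-12))[:, None]
        col_sums = cells.sum(axis=0)
        cells *= np.where(col_sums > 0, col_totals / np.maximum(col_sums, 1e-12), 0)
    return cells, cells.sum(axis=1)

cells, mr_effort = impute_effort(active, col_totals)
print(np.round(cells, 2))
print("estimated effort per MR:", np.round(mr_effort, 2))
```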
Analysis Framework
We outline here a general framework for analyzing domain engineering projects and describe how we applied this to the AIM project. The analysis framework consists of five main steps.
Obtaining and inspecting change measures
The basic characteristic measures of software changes include: the identity of the person performing the change; the files, modules, and actual lines of code involved in the change; when the change was made; the size of the change measured both by the number of lines added or changed and the number of deltas; the complexity of the change measured by the number of files touched; and the purpose of the change, including whether the change fixes a bug or adds new functionality. Many change management systems record data from which such measures can be collected using previously described software.
In real-life situations developers work on several projects over the course of a year, and it is important to identify which changes they made using the environment provided by DE. There are several ways to identify these changes. In our example the domain-engineered features were implemented in a new set of code modules. In other examples we know of, a new domain-specific language is introduced; in such a case the DE changes may be identified by looking at the language used in the changed file. In yet another DE example, a new library was created to facilitate code reuse. To identify DE changes in such a case we need to look at the function calls used in the modified code to determine whether those calls involve the API of the new library.
Finally, we need to identify changes that were done prior to DE. In our example the entire subsystem contained pre-DE code. Since the older style of development continued in parallel with post-DE development, we could not use time alone to determine whether a change was post-DE. In practice it often happens that initially only some projects use a new methodology until it has been proven to work.
To understand the differences between DE and pre-DE changes that might affect the effort models, we first inspect the change measures to compare interval, complexity, and size between DE and pre-DE changes. The part of the product the AIM project was targeting has a change history starting from 1986. We did not consider very old MRs (before 1993) in our analysis, to avoid other possible changes to the process or to the tools that might have happened more than six years ago.
The following table gives average measures for AIM and pre-AIM MRs. Most measures are log-transformed to make the use of a t-test more appropriate. Although the differences appear small, they are all significant at the 0.05 level (using a two-sample t-test) because of the very large sample size (19450 pre-AIM MRs and 1677 AIM MRs).
As shown in Table 2, AIM MRs do not take any longer to complete than pre-AIM MRs. The change complexity, as measured by the number of files touched by the MR, appears to have increased, but we note that instead of modifying specification files for screens, the developers are modifying smaller specification files for individual screen attributes. In addition, all changes are made through the GUI, which handles the updating of individual files and hence the changes. The table also shows that more deltas are needed to implement an MR; the increased number of deltas might be a result of the MRs touching more files. The numbers of lines are an artifact of the language used, and since the language in the new system is different, they are not directly comparable here. The differences are probably due to the fact that the AIM measures reflect the output of a tool, whereas the pre-AIM measures reflect manual output. We use the number of deltas as the MR size measure in the final model, partly because the differences among groups are smallest in this dimension. It is possible to adjust the size measure for the AIM (or pre-AIM) MRs when fitting the model; however, we have not done so in the subsequent analysis.
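For reference, a two-sample t-test on log-transformed measures of this kind can be run with standard tools, as sketched below. The numbers are fabricated for illustration and are not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Fabricated per-MR delta counts, log-normally distributed for illustration.
pre_aim_deltas = rng.lognormal(mean=1.2, sigma=0.8, size=500)
aim_deltas = rng.lognormal(mean=1.4, sigma=0.8, size=200)

# Welch's two-sample t-test on the log-transformed measures.
t, p = stats.ttest_ind(np.log(pre_aim_deltas), np.log(aim_deltas), equal_var=False)
print(f"t = {t:.2f}, p = {p:.4f}")
```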
Variable Selection
First among the variables that we recommend always including in the model is a developer effect. Other studies have found substantial variation in developer productivity. Even when we have not found significant differences, and even though we do not believe that estimated developer coefficients constitute a reliable method of rating developers, we have left developer coefficients in the model. The interpretation of estimated developer effects is problematic: not only could differences appear because of differing developer abilities, but a seemingly less productive developer could have more extensive duties outside of writing code.
Naturally, the size and complexity of a change have a strong effect on the effort required to implement it. We have chosen the number of lines added, the number of files touched, and the number of deltas that were part of the MR as measures of the size and complexity of an MR.
We found that the purpose of the change (estimated using previously published techniques) also has a strong effect on the effort required to make a change. In most of our studies, changes that fix bugs are more difficult than comparably sized additions of new code. The difficulty of changes classified as ``perfective'' varies across different parts of the code, while implementing suggestions from code inspections is easy.
Eliminating collinearity with predictors that might affect effort
Since developer identity is the largest source of variability in software development (see, for example, [2,7]), we first select a subset of developers that had a substantial number of MRs both in AIM and elsewhere, so that the results would not be biased by developer effects. If we had some developers who made only AIM (or only pre-AIM) changes, the model could not distinguish between their productivity and the AIM effect.
To select developers of similar experience and ability, we chose developers that had completed between 150 and 1000 MRs on the considered product in their careers. We chose only developers who completed at least 15 AIM MRs. The resulting subset contained ten developers. Itemization of their MRs is given in Table 3.
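The developer filter described here is straightforward to express in code. The thresholds come from the text; the data structure and counts are assumptions for illustration.

```python
# developer -> (total MRs on this product, AIM MRs); shape is assumed.
mr_counts = {
    "dev_a": (420, 60),
    "dev_b": (1200, 40),   # too many total MRs, excluded
    "dev_c": (300, 5),     # too few AIM MRs, excluded
    "dev_d": (180, 22),
}

selected = [dev for dev, (total, aim) in mr_counts.items()
            if 150 <= total <= 1000 and aim >= 15]
print(selected)  # ['dev_a', 'dev_d']
```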
Models and Interpretation
In the fourth step we are ready to fit the set of candidate models and interpret the results. Our candidate models are derived from the full model (specified next) by omitting sets of predictors. The full model, which included all measures that we expect to affect the change effort, expresses the estimated effort of an MR as a product of factors:

effort ≈ (developer factor) × (deltas)^b1 × (lines added)^b2 × (files touched)^b3 × (bug-fix factor)^I[bug] × (perfective factor)^I[perf] × (inspection factor)^I[insp] × (AIM factor)^I[AIM] × (pre-AIM factor)^I[pre-AIM]
| predictor                    | estimate | p-val | 95% CI        |
| number of deltas (exponent)  | 0.69     | 0.000 | [0.43, 0.95]  |
| lines added (exponent)       | -0.04    | 0.75  | [-0.3, 0.22]  |
| files touched (exponent)     | -0.10    | 0.09  | [-0.21, 0.01] |
| bug fix (factor)             | 1.9      | 0.003 | [1.3, 2.8]    |
| perfective (factor)          | 0.7      | 0.57  | [0.2, 2.4]    |
| inspection (factor)          | 0.6      | 0.13  | [0.34, 1.2]   |
| AIM (factor)                 | 0.25     | 0.000 | [0.16, 0.37]  |
| pre-AIM (factor)             | 1.03     | 0.85  | [0.7, 1.5]    |
The following MR measures were used as predictors: the number of deltas, the number of lines added, the number of files touched, indicators for whether the change is a bug fix, a perfective change, or an inspection change, an indicator for whether it was an AIM or pre-AIM MR, and an indicator for each developer. Only the coefficients for the number of deltas, the bug-fix indicator, and the AIM indicator were significant at the 0.05 level.
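One plausible way to fit such a multiplicative model is ordinary least squares on the logarithm of the imputed effort, with log-transformed size measures and indicator variables; exponentiating an indicator's coefficient then gives the multiplicative factor reported above. The sketch below uses fabricated data and is not the study's actual fitting procedure.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 400
deltas = rng.integers(1, 40, size=n)
is_bug = rng.integers(0, 2, size=n)
is_aim = rng.integers(0, 2, size=n)
# Fabricated effort following the multiplicative form described in the text.
effort = deltas ** 0.5 * 2.0 ** is_bug * 0.25 ** is_aim * np.exp(rng.normal(0, 0.3, size=n))

# Log-linear regression: log(effort) on log(deltas) and the indicators.
X = sm.add_constant(np.column_stack([np.log(deltas), is_bug, is_aim]))
fit = sm.OLS(np.log(effort), X).fit()

print("deltas exponent:", fit.params[1])
print("bug-fix factor:", np.exp(fit.params[2]))
print("AIM factor:", np.exp(fit.params[3]))
```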
The coefficients reflect how many times a particular factor affects the change effort in comparison with the base factors, which are set to 1 for reference purposes.
Inspecting the table, we see that the only measure of size that was important in predicting change effort was the number of deltas (p-value of 0.000). Although we refer to it as a size measure, it captures both size and complexity. For example, it is impossible to change two files with a single delta, so the number of deltas has to be no less than the number of files. It also measures the size of the change: a large number of new lines is rarely incorporated without preliminary testing, which would lead to additional deltas.
From the type measures we see that bug fixes are almost twice as hard as new code (a factor of 1.9). Other types of MRs (perfective and inspection) are not significantly less difficult than new code changes (p-values of 0.57 and 0.13, respectively). This result is consistent with past work.
Finally, the model indicates that pre-AIM MRs are indistinguishable from other MRs (p-value of 0.85). This result was expected, since there was no reason why pre-AIM MRs should differ from other MRs. More importantly, AIM MRs are significantly easier than other MRs (a factor of 0.25, with a p-value of 0.000). Consequently, AIM reduced effort per change by a factor of four.
Next we report the results of a reduced model containing only the predictors found significant in the full model. The simple model was:

effort ≈ (deltas)^b1 × (bug-fix factor)^I[bug] × (AIM factor)^I[AIM]
and the estimated coefficients were:
| predictor                   | estimate | p-val | 95% CI       |
| number of deltas (exponent) | 0.51     | 0.000 | [0.34, 0.67] |
| bug fix (factor)            | 2.1      | 0.002 | [1.3, 3.2]   |
| AIM (factor)                | 0.27     | 0.000 | [0.17, 0.43] |
Those results are almost identical to the results from the full model indicating the robustness of the particular set of predictors.
Calculating total cost savings
The models above provide us with the amount of effort spent at the fine change level. To estimate the effectiveness of the DE we must integrate those effort savings over all changes and convert effort savings to cost savings. Also, we need an assessment of the cost involved in creating a new language, training developers and other overhead associated with the implementation of the AIM project.
To obtain the cost savings, first the total effort spent doing DE MRs is estimated. Then we use the cost-saving coefficient from the fitted model to predict the hypothetical total effort for the same set of features as if the domain engineering had not taken place.
The effort savings would then be the difference between the latter and the former. Finally, the effort savings are converted to cost and compared with additional expenses incurred by DE. The calculations that follow are intended to provide approximate bounds for the cost savings based on the assumption that the same functionality would have been implemented without the AIM.
Although we know that AIM MRs are four times easier, we need to ascertain whether they implement functionality comparable to that of pre-AIM MRs. In this particular product, new functionality was implemented as software features, which were the best definition of functionality available in the considered product. We assume that, on average, all features implement a similar amount of functionality. This is a reasonable assumption since we have a large number of features under both conditions and we have no reason to believe that the definition of a feature changed over the considered period. Consequently, even a substantial variation of functionality among features should not bias the results.
We determined the software features for each MR implementing new functionality using an in-house database. We had 1677 AIM MRs involved in the implementation of 156 distinct software features and 21127 pre-AIM MRs involved in the implementation of 1195 software features, giving roughly 11 and 17 MRs per feature, respectively. Based on this analysis, AIM MRs appear to implement about 60 percent more functionality per MR than pre-AIM MRs.
Consequently the functionality in 1677 AIM MRs would approximately equal the functionality implemented by 2650 pre-AIM MRs. The effort spent on 1677 AIM MRs would approximately equal the effort spent on 420 hypothetical pre-AIM MRs if we use the estimated 75% savings in change cost obtained from the models above. This leaves total cost savings expressed as 2230 pre-AIM MRs.
To convert the effort savings from pre-AIM MRs to technical headcount years (THCY), we obtained the average productivity of all developers in terms of MRs per THCY. To obtain this measure we took a sample of relevant developers, i.e., those who performed AIM MRs. Then we obtained the total number of MRs each developer started in the period between January 1993 and June 1998. No AIM MRs were started in this period. To obtain the number of MRs per THCY, the total number of MRs obtained for each developer was divided by the interval (expressed in years) that the developer worked on the product. This interval was approximated by the interval between the first and the last delta each developer made in the period between January 1993 and June 1998. The average of the resulting ratios was 36.5 pre-AIM MRs per THCY.
Using the MRs-per-THCY ratio, the effort savings expressed as 2230 pre-AIM MRs equal approximately 61 technical headcount years. Hence the total savings in change effort would be between $6,000,000 and $9,000,000 in 1999 US dollars, assuming technical headcount costs between $100K and $150K per year in the Information Technology industry.
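The savings arithmetic in the last few paragraphs can be reproduced directly, as sketched below. All constants are taken from the text; small differences from the paper's figures are expected because the paper rounds its intermediate values.

```python
# Figures taken from the text of the study.
aim_mrs, aim_features = 1677, 156
pre_aim_mrs, pre_aim_features = 21127, 1195
aim_effort_factor = 0.25      # AIM changes take about 25% of the pre-AIM effort
mrs_per_thcy = 36.5           # average pre-AIM MRs per technical headcount year

# Functionality per MR, expressed via MRs per feature in each group.
functionality_ratio = (pre_aim_mrs / pre_aim_features) / (aim_mrs / aim_features)
equiv_pre_aim_mrs = aim_mrs * functionality_ratio      # paper rounds this to ~2650
effort_in_pre_aim_mrs = aim_mrs * aim_effort_factor    # ~420
saved_mrs = equiv_pre_aim_mrs - effort_in_pre_aim_mrs  # paper reports ~2230
saved_thcy = saved_mrs / mrs_per_thcy                  # paper reports ~61

for yearly_cost in (100_000, 150_000):
    print(f"THCY saved: {saved_thcy:.0f}, "
          f"savings at ${yearly_cost:,}/year: ${saved_thcy * yearly_cost:,.0f}")
```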
To obtain expenses associated with the AIM domain engineering effort we used internal company memoranda summarizing expenses and benefits of the AIM project. The total expenses related to the AIM project were estimated to be 21 THCY. This shows that the first nine months of applying AIM saved around three times (61/21) more effort than was spent on implementing AIM itself.
We also compared our results with internal memoranda predicting effort savings for AIM and several other DE projects performed on the 5ESS™ software. Our results were in line with the approximate predictions given in the internal documents, which indicated that DE reduces the interval to between one third and one fourth of pre-DE levels.
Related Work
This work is an application of the change effort estimation technique originally developed by Graves and Mockus, which has been refined further in their more recent work. The method was previously applied to evaluate the impact of a version editing tool. In this paper we focus on the more general problem of domain engineering impact, where the software changes before and after the impact often involve completely different languages and different programming environments.
This technique is very different in approach and purpose from traditional cost estimation techniques (such as COCOMO and Delphi), which use algorithmic or experiential models to estimate project effort for budgeting and staffing purposes. Our approach is to estimate effort after actual development work has been done, using data primarily from change management systems. We are able to estimate actual effort spent on a project, at least for those phases of development that leave records in the change management system. This is useful for calibrating traditional cost models for future project estimation. In addition, our approach is well suited to quantifying the impact of introducing technology and process changes to existing development processes.
Summary
We present a methodology for estimating cost savings from Domain Engineering, exemplified by a case study of one project. We find that change effort is reduced three to four times in the considered example.
The methodology is based on measures of software changes and is easily applicable to other software projects. We described in detail all steps of the methodology so that anyone interested can try it. The key steps are,
We expect that this methodology will lead to more widespread quantitative assessment of Domain Engineering and other software productivity improvement techniques. We believe that most software practitioners will be spared substantial effort spent trialing and using ineffective technology once they have the ability to screen new technologies based on a quantitative evaluation of their use on other projects. Tool developers and other proponents of new (and existing) technology should be responsible for performing such quantitative evaluations. This will ultimately benefit software practitioners, who will be able to choose appropriate productivity improvement techniques based on quantitative information.
We would like to thank Mark Ardis, Todd Graves, and David Weiss for their valuable comments on earlier drafts of this paper. We also thank Nelson Arnold, Xinchu Huang and Doug Stoneman for their patience in explaining the AIM project.
The formula was suggested by the revenue minister and number two in the council of ministers, Chandrakant Patil. Shiv Sena hinted that the 144:144 seat-sharing arrangement for the 288-member Maharashtra Legislative Assembly was sealed between party president Uddhav Thackeray, BJP president Amit Shah, and Chief Minister Devendra Fadnavis. The saffron party has made it clear that it does not attach importance to statements by BJP ministers on seat sharing, as Fadnavis has the sole authority to finalise the seat-sharing pact in consultation with the BJP's national leadership.
Shiv Sena on Friday kicked off planning and preparations for the ensuing Assembly election in all 288 constituencies. Sena president Uddhav Thackeray met the party's coordinators from all 288 seats to further strengthen the party organisation by focusing on stepping up interactions with voters and cross-sections of people.
Geography
Lily Dale is located on the east side of Upper Cassadaga Lake at an elevation of approximately 1325 ft. Its coordinates are 42°21'06" North, 79°19'27" West (42.351725, -79.324211). Its main route of access is New York State Route 60, which is located about .5 miles east of the hamlet and runs north and south to the cities of Dunkirk and Jamestown. The Cassadaga Job Corps Center can be seen across the lake from Lily Dale.[4] Another very small lake, Mud Lake, is located between Lily Dale and Route 60.
Spiritualism and the Lily Dale Community
Lily Dale became the center of the Spiritualist movement when the childhood home of Kate and Margaret Fox was moved from Hydesville, New York, to Lily Dale in 1916. This established Lily Dale as the home of Spiritualism in the United States, although other communities, such as Summerville, California, were founded on the same principles. On September 21, 1955, the Fox sisters' cottage was destroyed by a fire.
Lily Dale is a place of pilgrimage for many Spiritualists and others interested in the paranormal. A large population of mediums and Spiritualist healers reside in Lily Dale.
The Lily Dale Spiritualist Assembly
The Lily Dale Spiritualist Assembly holds year round meetings and provides seminars on topics such as Mediumship, spiritualist studies and the paranormal. Well-known speakers such as Deepak Chopra, Dr. Wayne Dyer and John Edward have been frequent speakers at Lily Dale. This private community is home to The Marion Skidmore Library, Lily Dale Museum and Historical Society, the headquarters of The National Spiritualist Association of Churches, numerous residences, hotels, private bed and breakfast inns, a spiritualist bookstore and related shops. Printed programs announce events such as Mediumship Demonstrations, religious services, thought exchange meetings, Healing Services, among others. The town of Lily Dale has a Volunteer Fire Department, a private water supply, a cafe, camp grounds, recreational vehicle accommodations, picnic grounds, and a lake-front beach for sunbathing. An admission fee to the grounds of The Assembly from mid-June through Labor Day is aimed at managing numbers of visitors. Admission is waived on Sundays to allow for church attendance.
In fiction
Lily Dale is the backdrop for a series of young adult paranormal novels by New York Times bestselling author Wendy Corsi Staub, who grew up a few miles from Lily Dale, New York. To-date, the series includes four titles published by Walker Books for Young Readers: Lily Dale: Awakening, Lily Dale: Believing, Lily Dale: Connecting and Lily Dale: Discovering. They have been optioned for television by Freemantle Entertainment. Staub has also written an adult bestselling thriller set in Lily Dale entitled In The Blink Of An Eye, published by Kensington Books in 2002.
Lily Dale was the setting for "The Mentalists," a seventh-season episode of the TV series Supernatural, starring the Fox Sisters as Good and Evil Spirits.[5]
Lily Dale also appears in the series Grave Academy: in the 14th chapter the monsters visit the town and have their fun scaring people on Halloween, under a deal between the town's fictional mayor and the monsters' principal that they would do so every year.
Image Credit: JefferyW (CC License)
Coming from the other corner of the world, a rather far-flung one, I didn’t know what mac and cheese was until my late 20s. Now, this might reflect poorly on the credibility of this page – but oh well, it is what it is.
While at it, let me confess one more thing. I didn’t exactly know right off the bat if you could broil mac and cheese. I’m sure the blog’s credence is down the drain.
Fret not – I’m going all out for redemption – yes, I know that sounded a lil dramatic. This blog won’t just shed light on if you can broil mac and cheese or not but will also include a couple of bells and whistles that’ll make this an interesting read. Let’s begin.
Can I Broil Mac And Cheese?
Mac and cheese is traditionally made in the oven or in a saucepan. Once it’s cooked, you can broil it for that scrumptious crust. If you don’t want to go through the hassle of baking but miss the crunchy top, broiling is what you’re looking for.
Broiling is hands down the best method to add a crispy, cheesy top to your mac and cheese. Just make sure that you’re careful while broiling it. It can go from crispy brown to burnt in just a few seconds.
If you like your mac and cheese creamy and with a soft texture, I’d recommend baking all the way. Simply bake the dish in a preheated oven at 375 degrees F for 20-25 minutes – until thoroughly heated through and cheese is melted.
After a lot of trial and error involving several batches, I finally cracked how to achieve that crunchy top when you don’t have an hour on a busy weeknight to bake macaroni casserole the traditional way.
Let’s find out how.
How To Broil Mac And Cheese?
After cooking mac and cheese on a stovetop, you should top the dish off with buttered breadcrumbs and put the whole thing under the broiler to brown for two minutes.
Voila – you will end up with a casserole that’s cheesy and creamy on the inside and crunchy on the top – all ready in just a few minutes.
While at it, let’s get a few things out of the way.
What Is Broiling? When Should You Broil Food?
To put it simply, broiling uses top-down heat at high and extra-high temperatures to crisp or brown the top of the food. Broiling involves using only the upper heating element of the oven, applying high temperatures between 500 to 550 degrees Fahrenheit to the top of the dishes for fast flavor.
This method is best used to crisp delicate foods or brown top of already-cooked dishes. Since broiling involves working with high temperatures, you ought to keep a close eye on the cooking process.
Other than cheesy mac and cheese that can be elevated with a brown, crunchy layer on the top, other food like thin-cut meats, quick-cooking veggies like zucchini and peppers, and white bread that need a quick toast can be broiled.
There was this recipe I found online, which I then improvised to incorporate broiling at the end, and it tasted heavenly! Unfortunately, I cannot find the recipe now to give proper credit, but I think I know the steps by heart.
Broiled Mac And Cheese Recipe
Total Time Taken: 1 hour 30 minutes
Yield: 8 to 10
Ingredients:
- ¾ pound elbow macaroni
- 2 eggs
- 1 cup whole milk
- 1 cup half-and-half
- 1 ½ cups heavy cream
- ½ tablespoon kosher salt
- ½ teaspoon freshly grated nutmeg
- Extra virgin olive oil for drizzling
- 4 ounces cold cream cheese (cut into small cubes)
- 1 ½ cups shredded Monterey Jack cheese
- 1 ½ cups shredded extra-sharp cheddar cheese
- 1 ½ cups heavy cream
- ½ teaspoon freshly grated nutmeg
- ¼ teaspoon freshly ground white pepper
Instructions
First, preheat the oven to 350 degrees F. Next, take a large pot of boiling, salted water, and cook the macaroni for 3 minutes. It will still be quite chewy but don’t worry. Drain the macaroni and place it back into the pot. Drizzle lightly with olive oil and toss well.
Now, generously butter a 10-by-15-inch baking dish. Whisk half-and-half, heavy cream, eggs, nutmeg, milk, salt, and ground pepper in a large bowl. Then, stir in the cheddar and Monterey Jack cheeses and the macaroni. Evenly spread the mac and cheese in the prepared baking dish and scatter cream cheese cubes on top.
Bake the macaroni for 6 minutes. Use the back of a big spoon to spread the melted cream cheese cubes evenly over the surface. Bake for the next 40 minutes. It should be bubbling by the end.
Now, remove the baking dish from the oven and preheat the broiler. Broil mac and cheese around 3 inches from the heat source until it turns a rich brown shade. This should take about 2-3 minutes. Finally, let it stand for at least 10 minutes and up to 20 minutes before serving.
So, that’s all for today’s blog – you can broil mac and cheese. Do so for a couple of minutes for a crispy top. Don’t forget to tune in for the next article.
Recommended Readings!
What Is Calabash Chicken? Why’s It So Confusing?
White Spots On Shrimp | Harmless Or Disease-Ridden?
Mars is the outermost of the four inner terrestrial planets in the solar system, at an average distance of 141 million miles from our Sun. It revolves around the Sun every 687 days and rotates every 24.6 hours (nearly the same as Earth). Mars has two tiny satellites, named Deimos and Phobos. They are most likely small asteroids drawn into Mars' gravitational pull. Deimos and Phobos have diameters of just 7 miles and 14 miles, respectively. An interesting side note: the inner moon, Phobos, makes a revolution around Mars in slightly more than seven hours. This means that, since it orbits Mars faster than the planet rotates, the satellite rises in the west and sets in the east if observed from the Martian surface.
Atmosphere and Weather: The Martian atmosphere is composed primarily of carbon dioxide. However unlike Venus, the Mars atmosphere is very thin, subjecting the planet to a bombardment of cosmic rays and producing very little greenhouse effect. Mariner 4, which flew by Mars on July 14, 1965, found that Mars has an atmospheric pressure of only 1 to 2 percent of the Earth's. Temperatures on Mars average about -81 degrees F. However, temperatures range from around -220 degrees F. in the wintertime at the poles, to +70 degrees F. over the lower latitudes in the summer.
Various probes over the past few decades have found the surface of Mars to be rather desert like. A fascinating panoramic view of the martian surface was taken (picture below) in 1997 by the Pathfinder mission. The surface is cratered, but not as much as our Moon or Mercury. The craters have probably been weather worn over the years by fierce windstorms, some of which can cover the entire planet. These windstorms are common on the red planet, lifting rust-colored dust well up into the atmosphere encircling the entire globe. Mars' red color comes from its reddish rock, sand and soil which encompasses about 5/8 of the surface. The rest of Mars has patches of green. But it is not clear what is producing this green color as it certainly is not vegetation. Evidence does exist in the terrain that water has eroded some of the soil. No flowing water is present today, but NASA announced on March 2, 2004 that the two rovers, Spirit and Opportunity confirmed liquid water once flowed on Mars. In addition, a NASA research team in 1984 found a meteorite in Antarctica which may have come from Mars. The meteorite was dated back 4.5 billion years and some evidence of microscopic life was left in the rock. At present, Mars' water appears to be trapped in its polar ice caps and possibly below the surface. Because of Mars' very low atmospheric pressure, any water that tried to exist on the surface would quickly boil away.
Although water in Mars' atmosphere is only about 1/1000th of the Earth's, enough water vapor exists that thin, wispy clouds are formed in the upper layers of the Martian atmosphere as well as around mountain peaks. No precipitation falls however. At the Viking II Lander site, frost covered the ground each winter.
Seasons do exist on Mars, as the planet tilts on its axis about 25 degrees. White caps of water ice and carbon dioxide ice shrink and grow with the progression of winter and summer at the poles. Evidence of climatic cycles exists, as water ice is formed in layers with dust between them. In addition, features near the south pole may have been produced by glaciers which are no longer present.
Mars does have many terrain features similar to Earth's, such as channels, canyons, mountains and volcanoes. Mars has a prominent volcano named Olympus Mons, which stands 69,800 feet above the Martian surface. This makes it the tallest mountain known in our solar system.
In general, Mars has highly variable weather and is often cloudy. The planet swings from being warm and dusty to cloudy and cold. Mars long ago was likely a warmer, wetter planet with a thicker atmosphere, able to sustain oceans or seas.
| Average distance from Sun | 141,000,000 miles |
| Perihelion | 128,100,000 miles |
| Aphelion | 154,500,000 miles |
| Sidereal Rotation | 24.62 Earth hours |
| Length of Day | 24.66 Earth hours |
| Sidereal Revolution | 687 Earth days |
| Diameter at Equator | 4,222 miles |
| Tilt of axis | 25.2 degrees |
| Moons | 2 |
| Atmosphere | Carbon Dioxide 95.3%, Nitrogen 2.7%, Argon 1.6% |
| Discoverer | Unknown |
| Discovery Date | Prehistoric |
DEFINITIONS:
Average distance from Sun: Average distance from the center of a planet to the center of the Sun.
Perihelion: The point in a planet's orbit closest to the Sun.
Aphelion: The point in a planet's orbit furthest from the Sun.
Sidereal Rotation: The time for a body to complete one rotation on its axis relative to the fixed stars such as our Sun. Earth's sidereal rotation is approximately 23 hours, 56 minutes.
Length of Day: The average time for the Sun to move from the Noon position in the sky at a point on the equator back to the same position. Earth's length of day = 24 hours
Sidereal Revolution: The time it takes to make one complete revolution around the Sun.
Axis tilt: Imagining that a body's orbital plane is perfectly horizontal, the axis tilt is the amount of tilt of the body's equator relative to the body's orbital plane. Earth is tilted an average of 23.45 degrees on its axis.
This matrix detail illustrates the same PCOS Attack Voting Equipment example shown in the threat trees. The entries in the matrix describe the characteristics of each threat action identified in the threat trees.
TIRA is an automated tool that allows the user to quantify his/her professional intuition/judgment about the likelihood of a given threat happening and its potential impact on election operations. The analyst can rerun the analysis multiple times using different assumptions to see how much the result changes.
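TIRA itself is a proprietary tool, so the sketch below is only a loose illustration of the kind of calculation involved: scoring a threat scenario as likelihood times impact and rerunning it under different assumption sets. The scenario name and numbers are invented.

```python
def risk_score(likelihood: float, impact: float) -> float:
    """Expected-impact style score: probability of the threat times its impact."""
    return likelihood * impact

# Two assumption sets for the same hypothetical PCOS attack scenario.
assumptions = {
    "pessimistic": {"likelihood": 0.30, "impact": 9.0},
    "optimistic":  {"likelihood": 0.05, "impact": 6.0},
}
for name, a in assumptions.items():
    print(name, risk_score(a["likelihood"], a["impact"]))
```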
Begin with a technology you are familiar with.
Compare what you know with what you see.
Identify any missing threat actions or other elements.
Remember that TIRA uses these trees as inputs for creating scenarios. If something is missing from a tree it can’t be considered in the TIRA analysis.
Do you understand the terms used in the column headings?
Are there other threat actions you would add?
Are there other mitigations / controls you would add?
Would you change any entries in the cells of the matrix?
If your review time is limited, feedback on the overall matrix structure is more important than comments on the individual cells.
There can be many reasons for Xiaomi Mi 11X Pro overheating: high screen brightness, persistent background apps, playing games or watching YouTube videos for long stretches, constant use of the smartphone, and so on. It is normal for any electronic device to warm up a little, but if it overheats, follow the methods below.
Why is My Xiaomi Mi 11X Pro Overheating
Check out the below-given methods to fix overheating issue on your Xiaomi Mi 11X Pro/Xiaomi Mi 11X Nite LE/Xiaomi Mi 11/Xiaomi Mi 10i device.
Method 1: Avoid Overcharging Time
Avoid overcharging your Xiaomi Mi 11X Pro. This is because overcharging increases the likelihood of overheating. So, try to keep your device on a cool and smooth surface while charging.
Method 2: Lower the Screen Brightness
Step 1: Open your phone and go to the app tray and tap on Settings.
Step 2: Turn off Auto-Brightness or adaptive brightness. After that use the slider to keep the brightness as per your likeness.
Method 3: Close Unused Background Apps
Always close background apps; running them in the background makes your device warmer and consumes more battery. In addition, such applications take up more memory, which slows down your device.
Method 4: Take a Break While you Play Game
Playing games for too long is not good for the temperature of your Xiaomi Mi 11X Pro. Prolonged play increases battery consumption and requires more memory.
Method 5: Turn off Bluetooth and Wi-Fi
Step 1: Pull down the Notification panel and tap on Settings.
Step 2: Among the options, you will see Wi-Fi and Bluetooth. If you don't need them, turn both off.
Check Also: How to Fix App Crashing and Freezing Issue on Xiaomi Mi 11X Pro
Method 6: Turn off Unwanted Notification
Notifications for the applications you have downloaded are on by default. Turn off any notifications that are not needed.
Method 7: Install the Latest Update
Step 1: Slide down to open the Control panel and press on the settings icon.
Step 2: Press on System apps updater.
Step 3: Press on the MIUI version.
Step 4: Wait a few seconds and, if new updates are available, install them.
Method 8: Change the Old Battery
If your phone's battery is old, it may be the main cause of overheating, so replace it.
Method 9: Uninstall Unused Apps
Delete applications that you do not use so that they do not run in the background and the phone does not get hot.
Method 10: Don’t use your Device When Charging to Avoid Heating
Do not watch videos, use mobile data, or play games while your phone is charging. Doing so warms up the phone quickly.
Which of the above methods did you use to fix the overheating issue on your Xiaomi Mi 11X Pro? Please let us know in the comment box.
Russia is an immense country that is home to numerous natural and man-made wonders. On October 1, 2007, a competition was launched which happened in 3 phases to determine what are the 7 wonders of Russia.
This is very similar to the many competitions held all over the world in response to the main competition to uncover the New 7 Wonders of the World, a new list that replaced the 7 Ancient Wonders of the World.
The results were announced officially on the famous Red Square in Moscow on June 12, 2008, and below are the results!
1. Mount Elbrus
Mount Elbrus is the highest and one of the most famous mountains in Europe and stands 5,642 meters (18,510 feet) tall. This dormant volcano forms the natural boundary between Europe and Asia in the south of Russia. It’s also the 10th most prominent peak in the world and it has 2 summits which are its volcanic domes.
2. Saint Basil’s Cathedral
Saint Basil’s Cathedral is one of the best-recognizable and most famous churches in the world. Its remarkable architectural style with its magnificent domes has become one of the ultimate symbols of Russia. It used to be a Christian church that was built between 1555 and 1561 by Ivan the Terrible and is one of the 12 buildings located in the Red Square in Moscow.
3. Lake Baikal
Lake Baikal is a magnificent lake and one of the underwater wonders of the world. Even though it's only the 7th largest lake in the world by surface area, it is the largest lake in the world by volume. To make this even more astounding, it contains about 22% of all the fresh water in the world and has about the same volume as all 5 of the Great Lakes of North America combined! It's also considered the clearest and oldest lake in the world.
4. Peterhof Palace
Peterhof Palace is one of the most famous palaces in the country and is often referred to as the "Versailles of Russia," in reference to the grandiose Palace of Versailles in France. It was constructed by Peter the Great just after he founded the city of Saint Petersburg, which was to be the new capital of Russia in the early 18th century. He was so impressed with the Palace of Versailles that he wanted to replicate a lot of its features, which he did in style!
5. Valley of Geysers
The Valley of Geysers is what the name suggests, a geyser field located on the Kamchatka Peninsula in the utmost eastern part of Russia. This valley is about 6 kilometers (3.7 miles) long and contains about 90 geysers and numerous hot springs, making it the second-largest collection of geysers on the planet. There's just one big problem: the area can only be reached by helicopter, making it difficult for people to visit this natural miracle.
6. Manpupuner rock formations
The Manpupuner rock formations are also referred to as the “Seven Strong Men Rock Formations” and the “Poles of the Komi Republic,” referring to their location just west of the Ural mountains in the Komi Republic of Russia. These rocks were formed through an erosion process that has lasted for over 200 million years and the 7 rocks stand between 30 and 42 meters (98 and 138 feet) tall, an absolute miracle of nature!
7. Mamayev Kurgan
Mamayev Kurgan is a height overlooking the city of Volgograd, which used to be known as Stalingrad, in southern Russia. The area contains several memorials commemorating one of the bloodiest battles in human history, the Battle of Stalingrad, which lasted from August 1942 to February 1943. A huge sculpture was erected on the site called "The Motherland Calls" and represents "Mother Russia," a perfect fit to become one of the 7 wonders of Russia. At the time of its installation, it was the highest sculpture in the world with a height of 85 meters (279 feet).