Feds List Approved Cleaning Products To Fight Coronavirus Germs
While saying the risk of exposure to the coronavirus is still small, the government's Environmental Protection Agency has released a list of products that can be used to clean and disinfect while the virus is a risk to the general public. Most can easily be found at a store near you in the Kalamazoo/Battle Creek area.
Probably the two most well-known names on the list are Clorox and Lysol. (The entire list is available here.) The EPA says "one of the proactive steps everyone can take to prevent any respiratory illness this time of year is to clean and sanitize your environment often," according to an ABC News report.
An EPA spokesperson said the companies had to demonstrate their products are effective against viruses that are even "harder-to-kill" than the novel coronavirus. They also noted that any products without an EPA registration number haven't been reviewed by the agency. - ABC News
The ABC News story also notes the EPA does not review other household products, such as vinegar, or whether they're effective against viruses and bacteria. | https://wkfr.com/feds-list-approved-cleaning-products-to-fight-coronavirus-germs/ |
Classical Rayleigh Ritz Method
Introduction:
Classical Rayleigh Ritz Method is named after Walther Ritz and Lord Rayleigh and is widely used.
Classical Rayleigh Ritz Method is a method of finding displacements at various nodes based on the theorem of minimum potential energy.
Departure from classical Rayleigh Ritz Method leads to FEM. The two departures are:
- Trial functions are defined for sub-domains.
- Generally, the values of the space variable at the nodes are used as the unknowns.
Summary:
Firstly, for the sub-domains, there will be algebraic equations with some known and unknown coefficients. A DA matrix is formed from these equations, where D is the nodal displacement and A is an arbitrary constant.
With the help of the DA matrix and the nodal displacements, the displacement at any arbitrary point can be found, and the matrix so formed is called the shape function matrix.
Further, strain and stress can also be found once the displacements are known.
This method also gives the variation of the displacement, with different slopes in the different domains, in the form of a line diagram.
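To make the idea concrete, here is a minimal sketch of the classical Rayleigh-Ritz procedure for a uniform axial bar fixed at one end and carrying a uniformly distributed load. The bar, its properties (EA, L, q) and the quadratic trial function are illustrative assumptions, not taken from the PDF; the point is only to show how minimizing the total potential energy with respect to the unknown coefficients yields the displacement field.

```python
import sympy as sp

# Illustrative assumptions: axial bar of length L and stiffness EA, fixed at
# x = 0, carrying a uniformly distributed axial load q (not from the source).
x, L, EA, q = sp.symbols("x L EA q", positive=True)
a1, a2 = sp.symbols("a1 a2")           # unknown (arbitrary) coefficients

u = a1 * x + a2 * x**2                 # trial displacement; satisfies u(0) = 0

# Total potential energy = strain energy - work done by the distributed load
Pi = sp.integrate(sp.Rational(1, 2) * EA * sp.diff(u, x)**2 - q * u, (x, 0, L))

# Theorem of minimum potential energy: dPi/da_i = 0 gives equations in a1, a2
sol = sp.solve([sp.diff(Pi, a1), sp.diff(Pi, a2)], [a1, a2])
u_approx = sp.simplify(u.subs(sol))
print(u_approx)   # q*x*(2*L - x)/(2*EA); the trial space happens to contain
                  # the exact solution, so Rayleigh-Ritz recovers it exactly
```

In the finite-element departure described above, the same minimization is carried out sub-domain by sub-domain, with the nodal values of the displacement taking the place of the coefficients a1 and a2.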
A detailed explanation of the topic, with solved examples, is given in the PDF embedded below. | https://civildigital.com/classical-rayleigh-ritz-method/ |
FIELD OF THE INVENTION
The present invention relates generally to a clip connector for use in an optical communication coupling system, and more specifically, to a clip connector for creating an access point in an optical fiber, when the optical fiber is received by the clip connector.
BACKGROUND OF THE INVENTION
There has been an increased use of, and greater complexity of, active elements in a communication device, which need to be physically linked and/or communicatively coupled to other elements of the communication device. Examples of such a communication device include, but are not limited to, a radio telephone, a music playback device (i.e. an MP3 player), a pager, a laptop computer, a desktop computer and a Personal Digital Assistant (PDA). Examples of the active elements include, but are not limited to, a camera, a display, and a fingerprint sensor. In at least one common configuration, the communication device can include one or more housings, where a greater number of the active elements are increasingly being placed on the one or more housings of the communication device. This has tended to result in an increasing amount of data, such as video content and audio content, needing to be transmitted either within each of the one or more housings and/or between multiple housings of the communication device. The increased data can be accommodated either by increasing the number of data lines and/or by increasing the data rate on at least some of the data lines.
In one of the known methods for transmitting data, the data is typically routed via a multi-layer electric flex circuit. The multi-layer electric flex circuit generally includes multiple layers of high density conductive traces interleaved with an insulating material. The multi-layer electric flex circuit is then passed through a restricted space between the one or more housings. However, routing a large number of signals through the restricted space can result in a multi-layer electric flex circuit that is less reliable mechanically and has greater radio-frequency interference. In yet another known method for physically linking and/or communicatively coupling active elements to other corresponding elements, the use of an optical fiber is required. This method also requires the use of ferrules and plugs to interconnect the active elements with the other corresponding elements. However, the method requires that an optical fiber is always perpendicular to the active elements. Further, in this method, plural segments of the optical fiber are required to couple the various pairs of active elements and/or corresponding elements. The use of multiple (i.e. plural) segments of the optical fiber can make the process of coupling the various pairs of elements more complex.
In light of the above-mentioned discussion, there is a need for a system for inter- and/or intra-housing data transmission in the one or more housings of the communication device which limits the amount of radio-frequency interference. The system should enable coupling of each pair of the active elements and/or the corresponding elements by using a reduced number of communicative elements. Further, the system should be cost-effective and easy to assemble.
SUMMARY OF THE INVENTION
The present invention provides an optical communication coupling system for use in a device. In the present invention, a signal in the form of light is used for data transmission between a first optical communication element and a second optical communication element in the device. In at least one embodiment of the present invention, the optical communication coupling system includes an optical fiber and a clip connector. The optical fiber is capable of conveying light between the first optical communication element and the second optical communication element. The clip connector is capable of receiving the optical fiber. The clip connector is also capable of altering the optical fiber to create an access point, which enables transfer of the light between the optical fiber and at least one of the first optical communication element and the second optical communication element.
In a further embodiment of the present invention, a device is provided, that can include a first optical communication element and a second optical communication element. The device can also include an optical communication coupling system. Further, the optical communication coupling system includes an optical fiber and a clip connector. The optical fiber is capable of conveying light between the first optical communication element and the second optical communication element. The clip connector is capable of receiving the optical fiber. The clip connector is also capable of altering the optical fiber to create an access point, which enables transfer of light between the optical fiber and at least one of the first optical communication element and the second optical communication element.
In a yet further embodiment of the present invention, a clip connector that enables optical communication between an optical fiber and an optical communication element is provided. The clip connector has an abrasive surface. The abrasive surface is capable of abrading a first surface of the optical fiber when the optical fiber is inserted into the clip connector. The clip connector also includes an aperture that is capable of conveying light through the first surface from/to the optical communication element to/from the optical fiber. The resulting abrasion occurs when the optical fiber is inserted into the clip connector. When the optical fiber is completely inserted into the clip connector, the resulting abrasion of the optical fiber is aligned with the aperture and with at least one of a light emitting and a light receiving optical communication element.
These and other features, as well as the advantages of this invention, are evident from the following description of one or more embodiments of this invention, with reference to the accompanying figures.
DETAILED DESCRIPTION
Skilled artisans will appreciate that elements in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, the dimensions of some of the elements in the figures may be exaggerated, relative to other elements, to help in improving an understanding of various embodiments of the present invention.
Before describing in detail the particular system for communication, in accordance with the present invention, it should be observed that the present invention resides primarily as apparatus components related to an optical communication coupling system. Accordingly, the apparatus components have been represented where appropriate by conventional symbols in the drawings, showing only those specific details that are pertinent for an understanding of the present invention, so as not to obscure the disclosure with details that will be readily apparent to those with ordinary skill in the art, having the benefit of the description herein.
In this document, the terms ‘comprises,’ ‘comprising,’ ‘includes,’ or any other variation thereof are intended to cover a non-exclusive inclusion, such that an article or apparatus that comprises a list of elements does not include only those elements, but may include other elements that are not expressly listed or inherent in such an article or apparatus. An element preceded by ‘comprises . . . a’ does not, without more constraints, preclude the existence of additional identical elements in the article or apparatus that comprises the element. The term ‘another,’ as used in this document, is defined as at least a second or more. The terms ‘includes’ and/or ‘having’, as used herein, are defined as comprising.
FIG. 1 illustrates an exemplary device 100, where various embodiments of the present invention can be applicable. Examples of the device 100 include, but are not limited to, a wireless communication device, a radio telephone, a pager, a laptop computer, a music playback device (i.e. an MP3 player), and a Personal Digital Assistant (PDA). The particular device illustrated has a two-part housing, the parts of which are adapted to move relative to one another. While the particular exemplary device illustrated includes a two-part housing, one skilled in the art will readily appreciate that the present invention can be implemented in other types of devices having multiple housings, as well as devices of the type having a single housing. As illustrated, the device 100 includes a first housing 102 and a second housing 104. In other words, it will be apparent to a person ordinarily skilled in the art that, though the device 100 is shown to include the first housing 102 and the second housing 104, the present invention is applicable for the same and other types of devices with a greater, the same, or a fewer number of housings. In the illustrated embodiment, the first housing 102 and the second housing 104 can move relative to one another along an axis 106. Doubly pointed arrow 108 in FIG. 1 illustrates a potential movement of the first housing 102 and the second housing 104 relative to one another, which results in a closed position and an open position, as well as any number of positions in between.
In at least some embodiments of the present invention, the first housing 102 and the second housing 104 of the device 100 can include one or more active elements that need to be communicatively coupled to one or more corresponding elements present on either the same housing and/or the other one of the first housing 102 and the second housing 104. Examples of active elements include, but are not limited to, a camera, a display and a fingerprint sensor. For example, a camera present on the first housing 102 may need to be communicatively coupled to a microprocessor present on the second housing 104. Similarly, a fingerprint sensor present on the second housing 104 may need to be communicatively coupled to another microprocessor present on the second housing 104.
FIG. 2 illustrates the first housing 102 of the device 100, where various embodiments of the present invention are applicable. As illustrated, the device 100 includes a first optical communication element 202 on the first housing 102. The first housing 102 also includes a second optical communication element 204. The first optical communication element 202, the second optical communication element 204, together with other optical communication elements form a plurality of communication elements, between which an optical communication coupling system 206 can convey one or more optical signals. The optical communication coupling system includes an optical fiber 208. The optical fiber 208 is capable of conveying light between at least some of the plurality of optical communication elements including the first optical communication element 202 and the second optical communication element 204. While in the embodiment illustrated the optical fiber 208 is used to couple optical communication elements associated with a common housing, it will be apparent to a person ordinarily skilled in the art that the optical communication elements being coupled together by the optical fiber can be present on one or more different housings of the device 100. In at least one embodiment of the present invention, when the first optical communication element 202 is present on the first housing, and when the second optical communication element 204 is present on the second housing 104, the optical fiber 208 can be passed through the hinge space between the first housing 102 and the second housing 104.
The optical communication coupling system 206 also includes a clip connector, which will be explained in detail in conjunction with FIGS. 3-5. The clip connector enables optical communication between the optical fiber 208 and at least one of the first optical communication element 202 and the second optical communication element 204. In some instances the optical fiber 208 can be coupled to an optical communication element at one of the two endpoints. In other instances, the optical fiber can be coupled to an optical communication element at a point along the length of the optical fiber between the two endpoints. Generally, at least a pair of optical communication elements will be associated with a particular length of fiber, where one of the optical communication elements will function as a transmitter and be a source of the optical signal being carried or conveyed by the optical fiber, and one or more of the optical elements will function as a receiver and be the intended destination(s) of the optical signal. Further, it will be apparent to a person ordinarily skilled in the art that other optical communication elements, for example, an optical communication element 210, can also operate as a source of an optical signal to be carried or conveyed in the optical fiber 208, along with the first optical communication element 202. In some instances, multiple optical communication elements operating as transmitters can share an optical fiber 208 by implementing some form of multiplexing, such as time-division multiplexing, in which each transmitter has an assigned time slot during which the optical communication element can transmit.
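As an illustrative aside that is not part of the patent text, the time-slot idea can be sketched in a few lines of Python: two transmitters share one channel by taking turns in fixed slots, and the receiver recovers each stream from its assigned slots. The stream contents and the idle fill value are made-up example data.

```python
from itertools import chain, zip_longest

def tdm_multiplex(stream_a, stream_b, idle=0):
    """Interleave two symbol streams slot-by-slot onto one shared channel."""
    return list(chain.from_iterable(zip_longest(stream_a, stream_b, fillvalue=idle)))

def tdm_demultiplex(channel):
    """Recover the two original streams from their assigned time slots."""
    return channel[0::2], channel[1::2]

channel = tdm_multiplex([1, 2, 3], [9, 8, 7])
print(channel)                    # [1, 9, 2, 8, 3, 7]
print(tdm_demultiplex(channel))   # ([1, 2, 3], [9, 8, 7])
```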
In at least one embodiment of the present invention, wavelength-division multiplexing can additionally and/or alternatively be used and potentially allow for multiple sources of an optical signal to operate simultaneously. In wavelength-division multiplexing, multiple data carrying signals are multiplexed on a single optical-fiber by using different wavelengths of light. The multiple wavelengths of light each carry a different data signal. At the receiving end, the second optical communication element 204 can de-multiplex the associated signal intended for the optical communication element from the combined light signals being conveyed by the optical fiber 208. In at least some instances a color filter can be used to effectively isolate and/or demultiplex the intended signal.
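Again purely as an illustration (the wavelengths and data below are invented, not taken from the patent), wavelength-division multiplexing can be sketched as tagging each sample with its carrier wavelength and filtering on that tag at the receiver, which is roughly the role the colour filter plays above.

```python
def wdm_multiplex(signals_by_wavelength):
    """Combine {wavelength_nm: samples} onto one fiber as (wavelength, sample) pairs."""
    return [(wl, s) for wl, samples in signals_by_wavelength.items() for s in samples]

def wdm_filter(fiber, wavelength_nm):
    """Act like a colour filter at the receiver: keep only one wavelength."""
    return [s for wl, s in fiber if wl == wavelength_nm]

fiber = wdm_multiplex({850: [1, 0, 1], 1310: [0, 0, 1]})   # two example channels
print(wdm_filter(fiber, 1310))                             # [0, 0, 1]
```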
The one or more optical communication elements operating as a transmitter produce a light with one or more characteristics which can be varied so as to encode and/or superimpose a stream of data on the light produced. Examples of the characteristics of the light which can be varied for the purpose of encoding and correspondingly decoding the data can include frequency, wavelength and phase. Examples of the one or more optical communication elements, which can be used to produce an optical signal, can include a light-emitting diode, a vertical-cavity surface emitting laser, an edge-emitting diode, a PIN (p-type, intrinsic, n-type diode) diode and a photo-diode. Examples of the optical fiber 208 can include an acrylic fiber, a plastic optical fiber and a glass optical fiber.
In at least one embodiment of the present invention, the optical fiber 208 is provided with a cladding. The cladding has one or more layers of material that is in contact with a core of the optical fiber 208. The material of the cladding typically has a refractive index that is less than a refractive index of the core of the optical fiber 208. The lower refractive index of the cladding largely results in the total internal reflection of the light in the optical fiber 208. In total internal reflection, the light is largely reflected inside the optical fiber 208 when the light attempts to transition between the optical core and the cladding. In another embodiment of the present invention the optical fiber 208 can be covered with paint and/or a reflective material. Examples of the reflective material can include, but are not limited to, silver, gold and copper. In a further embodiment of the present invention, the light can be totally reflected internally without the use of the cladding, the paint and/or the reflective material. In this embodiment, the light emitted by the first optical communication element 202 can be trapped and reflected inside the optical fiber 208 when the angle of the incidence of the light is below a critical angle of the optical fiber 208. The critical angle is the minimum angle of incidence at which the total internal reflection occurs.
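For readers who want a number to attach to the critical angle, the following sketch computes it from Snell's law for example refractive indices; the values chosen are typical of a plastic optical fiber and are assumptions for illustration only, not figures from the patent.

```python
import math

n_core = 1.49   # example core index (roughly a PMMA plastic fiber); assumed value
n_clad = 1.40   # example lower-index cladding; assumed value

# Measured from the normal to the core/cladding boundary, total internal
# reflection occurs at or above the critical angle (i.e. for rays travelling
# sufficiently close to the fiber axis).
theta_c = math.degrees(math.asin(n_clad / n_core))
print(f"critical angle ~= {theta_c:.1f} degrees")   # ~70.0 for these values
```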
The optical fiber 208 is generally capable of conveying at least a portion of the light introduced by the first optical communication element 202 operating as a transmitter between the first optical communication element and the second optical communication element 204. The second optical communication element 204 is capable of receiving and/or detecting the light including the changing characteristic of the light emitted by the first optical communication element 202.
FIG. 3 illustrates a cross-sectional view of a clip connector for use in the optical communication coupling system 206, in accordance with at least one embodiment of the present invention. The optical communication coupling system 206 also includes a clip connector 302. The clip connector 302 is adapted for receiving the optical fiber 208. When the clip connector 302 receives the optical fiber, the clip connector is further adapted to alter the optical fiber 208 to create an access point, which allows the transfer of light between the optical fiber 208 and at least one of the first optical communication element 202 and the second optical communication element 204. The clip connector 302 has an abrasive surface 304. The abrasive surface 304 is capable of abrading a first surface of the received optical fiber 208. An exemplary abraded surface 402 is illustrated in FIG. 4. The abrasion of the optical fiber 208, when the optical fiber 208 is inserted in the clip connector 302, creates an access point for light to enter or exit the optical fiber 208. The abrasive surface 304 can abrade the first surface by scratching the cladding, the paint and/or the reflective material when the optical fiber 208 is inserted into the clip connector 302. After insertion, the first surface of the optical fiber 208 is in communicative contact with an optical communication element between which an optical signal can be exchanged. In the embodiment of the present invention, when the reflective material and/or the paint are used, the reflective material and/or the paint can be deposited in segments on the optical fiber 208. The segments of reflective material and/or paint are positioned along the length of the optical fiber 208, where the optical fiber is most likely to be inserted into the clip connector 302. In at least one embodiment of the present invention, the clip connector 302 includes an aperture. When the optical fiber is inserted in the clip connector, the abraded surface is intended to be aligned with the aperture 404, which is illustrated in FIG. 4. The aperture is capable of conveying the light through the first surface of the optical fiber from either the first optical communication element 202 to the optical fiber 208 or from the optical fiber 208 to the second optical communication element 204, depending upon which one of the optical communication elements the clip connector is associated with.
In at least one embodiment of the present invention, a second surface 306 that is opposite to the first surface can be intruded and/or deflected inward. The second surface 306 can be intruded by introducing a kink in the optical fiber at the second surface. The kink can be provided via an intruding surface or a protrusion on the portion of the clip connector 302, which comes into contact with the second surface of the optical fiber 208. The angle of the intruded surface is varied so as to deflect the light between a direction that would allow the light to escape from the optical fiber and a direction that enables the light to travel along the length of the optical fiber, when the light located in the optical fiber intersects the intruded surface.
In another embodiment of the present invention, the second surface of the optical fiber 208 is provided with a notch-cut, through which a portion of the clip connector 302 can enter the fiber and interact with the light traveling therein. In at least some instances, the surface of the clip connector in contact with and/or present within the notch-cut has a reflective surface. The reflective surface can deflect some of the light, which intersects the notch-cut between a direction that would allow the light to escape from the optical fiber 208 and a direction that enables the light to travel along the length of the optical fiber. In accordance with another embodiment of the present invention, the first optical communication element 202 can emit the light at an angle inside the optical fiber 208 instead of emitting the light perpendicular to the first surface of the optical fiber 208.
FIG. 4 illustrates the clip connector 302 of the optical communication coupling system 206 for use in conjunction with the device 100, in accordance with at least one embodiment of the present invention. In at least one embodiment of the present invention, the clip connector 302 is placed on a substrate, such as a printed circuit board, which can be present in at least one housing of the device 100. The clip connector 302 includes the abrasive surface 304 which is capable of abrading a first surface 402 of the optical fiber 208 when the optical fiber 208 is inserted into the clip connector 302. Abrasion occurs when the optical fiber 208 is inserted into the clip connector 302. The clip connector 302 also includes an aperture 404. The aperture 404 is capable of conveying light through the first surface 402 from/to the optical fiber 208 to/from an optical communication element, for example, from the first optical communication element 202 to the optical fiber 208 as illustrated in FIG. 3. Further, the clip connector 302 can enable the optical fiber 208 to be aligned with the aperture 404 and an optical communication element when the optical fiber 208 is inserted into the clip connector 302. As noted previously, in at least one embodiment of the present invention, a second surface that is opposite to the first surface 402 can be provided with a notch-cut. A surface of the clip connector 302 in contact with the notch-cut is provided with a reflective surface 406. When the escaping light through the notch-cut strikes the clip connector 302, the light is deflected by the reflective surface 406 into the optical fiber 208. The reflective surface could be deposited on the fiber side opposite to the entry/exit notch. This is done by creating a disturbance/roughed surface on the fiber opposite to the entry/exit point created through an abrasion, such as through the introduction of an intrusion of the type noted above, and depositing a reflective material at the point of the disturbance. In some embodiments, the reflective surface could be a portion of the clip connector itself, that is positioned opposite the entry/exit point for reflecting light between a direction that would allow the light to escape from the optical fiber and a direction that enables the light to travel along the length of the optical fiber 208.
When the clip connector 302 is associated with an optical communication element that is functioning as a receiver, the portion of the clip connector is intended to poke partially into the fiber body opposite of the abraded exit point through a notch-cut or a kink and intercept and redirect the optical signals traveling along the length of the fiber and reflect them back toward the exit point. When the clip connector 302 is associated with an optical communication element that is functioning as a transmitter, the portion of the clip connector is intended to poke partially into the fiber body opposite of the abraded entry point through a notch-cut or a kink and intercept the optical signals being received via the entry point and redirect them so that they travel along the length of the fiber.
FIG. 5 illustrates a cross-sectional view of an optical communication coupling system 206, in accordance with another embodiment of the present invention. In the present embodiment, the optical fiber 208 is provided with an increased diameter 502 proximate one or both of the end points. The end point of the optical fiber 208 is bulged during manufacturing to provide the increased diameter 502. Examples of potential techniques which can be employed to produce the increased diameter 502 include, but are not limited to, a hot knife cutting technique, a polishing technique and a hot plate flattening technique. The increased diameter enables the end point of the optical fiber 208 to be captivated by the clip connector 504 positioned proximate to an optical communication element, for example the optical communication element 506. The clip connector 504 includes a retention element 508 that at least partially grips the optical fiber 208 so as to resist removal of the optical fiber 208. The bulged end of the optical fiber 208 can also act as a lens by concentrating the light emitted by the optical communication element 506 inside the optical fiber 208. The lens focuses the light associated with an optical signal into the fiber core from an outside source or, alternatively, the lens focuses the light exiting from the fiber core onto an outside detector. This is especially useful if the source or detector is positioned any meaningful distance away from the fiber ends, which otherwise might allow some of the light to escape as it traverses the distance, thereby resulting in lost rays. The use of a lens will help to capture most of the rays, thereby resulting in better efficiency and less signal loss. The clip connector 504 helps to align the optical fiber 208 relative to the corresponding optical communication element 506, thereby enabling the conveyance of an optical signal between the optical communication element and the optical fiber, which in turn can be conveyed between the optical fiber and other optical communication elements located at the opposite end of the optical fiber or along the length of the same.
Various embodiments of the present invention, as described above, provide an optical communication coupling system, which supports the conveyance of an optical signal between multiple optical communication elements. The present invention involves the use of an optical fiber and clip connectors that provide a cost-effective and reliable connection between the optical fiber and an optical communication element through the insertion of an optical fiber into a clip connector positioned and aligned with the optical communication element.
In the foregoing specification, the invention, as well as its benefits and advantages, have been described with reference to specific embodiments. However, one with ordinary skill in the art would appreciate that various modifications and changes can be made, without departing from the scope of the present invention, as set forth in the following claims. Accordingly, the specification and figures are to be regarded in an illustrative rather than a restrictive sense. All such modifications are intended to be included within the scope of the present invention. The benefits, advantages, solutions to problems, and any element(s) that may cause any benefit, advantage or solution to occur or become more pronounced are not to be construed as critical, required or essential features or elements of any or all the claims. The invention is defined solely by the appended claims, including any amendments made during the pendency of this application and all equivalents of these claims, as issued.
BRIEF DESCRIPTION OF FIGURES
The accompanying figures, where like reference numerals refer to identical or functionally similar elements throughout the separate views, and which, together with the detailed description below, are incorporated in and form part of the specification, serve to further illustrate various embodiments and explain various principles and advantages, all in accordance with the present invention:
FIG. 1 illustrates an exemplary device, where various embodiments of the present invention can be applicable;
FIG. 2 illustrates a housing of a device incorporating a plurality of optical communication elements and an optical communication coupling system for facilitating a communication coupling between the optical communication elements, where various embodiments of the present invention can be applicable;
FIG. 3 illustrates a cross-sectional view of a clip connector for use in an optical communication coupling system for facilitating a conveyance of an optical communication signal between an optical fiber and an optical communication element at a point between the end points of the optical fiber, in accordance with at least one embodiment of the present invention;
FIG. 4 illustrates a perspective view of a clip connector and a portion of an optical fiber being coupled thereto of an optical communication coupling system, in accordance with at least one embodiment of the present invention; and
FIG. 5 illustrates a cross-sectional view of a clip connector for use in an optical communication coupling system for facilitating a conveyance of an optical communication signal between an optical fiber and an optical communication element at an end point of the optical fiber, in accordance with another embodiment of the present invention.
SUMMARY
The Earth, a feather, and a bowling ball are all light projections on spinning vortexes, as a whole and in each part within, no matter how small. When a feather and a bowling ball are dropped from a great height in an evacuated chamber, both are pulled down at the same rate by the vortexes that project the Earth under them. When the chamber is filled with air, the feather falls slower than the bowling ball because the very small vortexes that project it are equalizing with the vortexes that project what's in the air. If it were possible for the feather to be turned on and off at a certain rate, it would not fall; it would sit suspended in the air.
While rotating at high speed, helicopter rotors interchange with areas where there are no rotors, which are called expanded light bodies. These expanded light bodies mainly consist of what's in the air, mainly nitrogen and oxygen. The atoms and subatomic particles that make up the air are lights projected onto the bodies of rotating vortexes that are very small. The rotors themselves are compressed light bodies that go through changes when in motion. While the rotors are spinning at high speed, they are geometrically divided into sections small enough for the vortexes that project them to equalize with those that make up the air, causing an effect similar to magnets repelling each other.
The Disk:
The Air Equalizing Expanded Light Body Disk works as an electrical rotor spinning rapidly and equalizing or repelling the vortexes that make up what is in the air. The body of the Disk represents the geometrically divided air filled space between the rotor blades. The spinning rotor blades are represented by an electric current that is introduced at different intervals to the individual geometrically divided body sections in the Disc body.
The body of the Disk is made from a non-space compressing material that does not react to the compressing vortexes that encompass all matter. One characteristic of non-space compressing materials would be products that go back to their original shapes after being compressed. This is due to the structure of the vortexes from within that make up the material and the space compressing vortexes extending from it.
Vortexes of Space:
Extending from and surrounding all matter are the vortexes of space, rotating and compressing light from space to the zero planes of positive and negative black bodies throughout space. We can think of these black bodies as males and females, or the navels on our bodies. For instance, a black hole is a negative female (innie), where the vortexes of space compress light to its center. The opposite, a black protrusion, is a positive male (outie) that compresses light onto its surface, producing white light. Stars are black protrusions. Stars and black holes are the “electrical circuits” of the Universe, and they prove the vital significance of the space compressing vortexes that extend from all objects.
As a glass of liquid water freezes, its expansion is a result of two vortexes acting on it in different ways, causing it to expand and break the glass. One type of vortex that surrounds the glass of water causes it to compress at each narrow part of each vortex. As the water freezes and compresses at each narrow point, it is pulled outward by the wide rotating side of another vortex, causing the water to expand and break the glass.
FIGURES
FIG. 1: Embodiment of Disc top view
FIG. 2: Embodiment of Disc standard view
FIG. 3: Profile view of Disc
FIG. 4: Embodiment of Disc body with geometrical divisions in the material
FIG. 5: Embodiment of Disc body with geometrical divisions in the material
FIG. 6: Embodiment of Disc top view with geometrical divisions in the body
FIG. 7: Embodiment of Disc body with geometrical divisions in the material
FIG. 7a: Embodiment of Disc body with geometrical divisions in the material with space on top and bottom
FIG. 7b: Animation: Interaction between Disc body and space as the sections are energized
FIG. 7c: Animation: Energized section of Disc interacting with space
FIG. 25: Invisible white light
FIG. 12: Invisible white light (space)
FIG. 10: Light Wave field
FIG. 15: Invisible white light and the Light Wave field
FIG. 16: Light Wave field and Radio waves
FIG. 18: Light Wave field and Microwaves
FIG. 19: Light Wave field and Microwaves
FIG. 20: Light Wave field and Ultraviolet Light waves
FIG. 21: Light Wave field and Ultraviolet Light waves
FIG. 22: Invisible White Light (space) intersection
FIG. 23: Light Wave System Frame
FIGS. 25, 26: Invisible white light
FIG. 27: Invisible white light and rotating Scanning Light Orbitals
FIG. 28: Invisible white light, Scanning Light Orbitals (RGB)
FIG. 29: Invisible white light and orbital planes of the Scanning Light Orbitals
FIG. 30: Vortex created by the revolving Scanning Light Orbitals
FIG. 31: Simulated Light projection of Matter/Apple
FIG. 32: Simulated Light projection of the Light Spectrum
FIG. 33: Simulated Light projection of Matter/Bar Magnet
FIG. 34: Simulated Light projection of Matter/Bar Magnet interactions
FIG. 45: Simulated Light projection of Air
FIG. 81: Simulated Light projection of Matter/Wing and the Air equalizing/repelling each other while a Bee is in flight
FIG. 36: Simulated Light projection of the ground from the Organic Layer to the Bedrock
FIG. 35: Simulated Light projected onto the body of the orbitals to simulate the ground from the Organic Layer to the Bedrock
FIGS. 37, 38, 40: Matter/Apple falling to the ground
FIG. 41: Showing the vortexes extending from objects in space and how they interact to create Suns and Black Holes
FIG. 42: Properties of single and double vortexes
FIG. 43: Properties of single and double vortexes showing why liquid water expands when freezing
FIG. 44: Double vortexes of space keeping the Disc off the ground
FIG. 45: Embodiment of The Disc resting on many vortexes
The Disc:
Referring to FIG. 1 is the embodiment of the top view of the Disc, wherein the center of the Disc is the housing for the power and electrical connections. The top side and underside of the Disc (FIGS. 2 and 3) may be connected by an embodiment of a ridge. FIGS. 4 and 5 embody a closer view of the non-space compressing material of the Disc body and the patterned geometric layout therein, with said pattern located throughout the whole of the Disc body from a central point.
FIG. 6 embodies the top view of the Disc, with the patterned geometric layout of the Disc body representing the empty space between helicopter rotor blades and the surface areas representing the rotor blades. The compressed surface areas are exposed to an electrical current received from the power and electrical connections unit at the center of the Disc. The patterned geometric layout of the Disc body representing the empty space marks the areas where there is NO electrical current present. Referring to FIG. 2, the four compressed surface areas rotate in the same direction at short intervals, interchanging between electric current ON, or (1), and electric current OFF, or (0), within the non-space compressing material of the Disc body and the patterned geometric layout of the Disc. This rapid interchange in the body of the Disc has an effect on the space around it. Referring to FIGS. 7a and 7c, this effect is the equalization of the vortexes of space and the vortexes of the compressed surface areas of the Disc body.
Gravitation is the interaction between large and small vortexes that occurs in a Light Wave System.
The Light Wave System/Gravitation:
Anti gravitation is the equalization of large and small vortexes in a Light Wave System. The following describes matter, gravitation, and what occurs during vortical equalization.
Referring to FIG. 25, the general basis of the Light Wave System is a background of magnetic invisible white light. As shown in FIG. 12, the spheres of invisible white light are configured in such a way that the edges form zero curvature planes. Within the magnetic memory of the invisible white light the Light Wavefield is stored. Referring to FIGS. 10-21 and 26, the Light Wavefield is the stored information of all light, from Expanding Radio waves to Compressing cosmic rays, with the visible light spectrum in the center.
Referring to FIGS. 22 and 23 is the embodiment of intersecting spheres of invisible white light forming a System Frame, in which spheres are enclosed in cubes wherein the straight lines of the cubes are formed from the zero curvature planes of other spheres.
Referring to FIGS. 31 and 32, Matter (simulated, compressed light) and the Light Spectrum are projected on the orbital body while it rotates and revolves around the invisible Light Wave System Frames (FIG. 23). The fast motion and position of the orbital planes (FIGS. 29 and 30) form a vortex wherein the wide top pulls down smaller vortexes.
Referring to FIGS. 33 and 34, the bar magnet embodies the theory of gravitation, wherein the positive north side of the magnet always pulls in the negative south side of the magnet, showing how they repel or equalize by the motion of the spinning vortexes caused by the orbital bodies.
Gravitation is the interaction between small and large vortexes.
Referring to FIG. 81, when a bee flies, the vortexes of its wings equalize or repel the vortexes that make up particles in the air. Equalization occurs in a very short period of time; therefore the wings need to move very fast to keep resetting the position of the vortexes. The vortexes of the Disc body electrically mimic how the bee repels the vortexes of the particles in the air.
Referring to FIG. 36, the earth is simulated matter. The expanded or positive side of the spinning vortexes is the Organic layer, while the Bedrock is the compressed, negative side.
Referring to FIG. 35, the layers are projected onto the orbital bodies as they revolve and spin rapidly, causing the ground to pull matter back toward it.
FIGS. 37, 38 and 40 depict everything smaller than the earth as negative, while the spinning vortexes of the Earth are always positive, or larger than the matter on it. Energy strengthens the power of the spinning vortexes that make up matter.
As shown in FIG. 41, vortexes make up matter and extend from it through space, and they have a very big impact on how the Universe works. Light wave compression through the vortexes extending from the black bodies of space is the reason the Sun shines and how black holes work.
We can think of the Sun and Black holes in terms of belly buttons.
The Sun is an outie black body protruding from the zero plane of space, where the vortexes extending from it compress light waves in space onto its black surface, resulting in “sparks” or Sun light.
Black Holes are innie black bodies sinking into the zero plane of space, compressing light through their extending vortexes to their centers.
As shown in FIGS. 42 and 43, water expands as it freezes because, as it solidifies or compresses on its surface, the positive spinning vortexes from space pull the water, making it expand.
FIG. 44 embodies anti gravitation, wherein the empty space between any two objects is occupied by two large rotating vortexes that have many other vortexes within them.
As the body of the Disc, through an electrical current, rapidly changes from the non-space compressing material of the Disc body to the compressing of the patterned geometric layout therein, the Disc body is caused to sit on or hover on the positive side of the interchanging vortexes.
Waters is currently seeking a motivated and experienced Senior Software Engineer to provide leadership in developing CI/CD systems at Waters.
We are looking for a passionate Senior Software Engineer with a talent for building quality automation solutions. You will work in a fast-paced, agile environment and engage in technical discussions, participate in technical designs, demonstrate problem-solving abilities, and present and share ideas through global collaboration.
A self-starter attitude, excellent communication, and dedication to innovative technologies are critical to this role.
Responsibilities
Qualifications
This role can be based at either our Milford, MA, USA or Wilmslow, UK office.
Required skills:
| https://www.techgroups.com/opportunities/opportunity/38769/ |
Story Contact(s):
Nathan Hurst, [email protected], 573-882-6217
The views and opinions expressed in this “for expert comment” release are based on research and/or opinions of the researcher(s) and/or faculty member(s) and do not reflect the University’s official stance.
COLUMBIA, Mo. –Nov. 26, 2012, better known as “Cyber Monday,” was the biggest online sales date ever, according to analysts from comScore. Despite these high numbers, many states, including Missouri, have no effective means of collecting taxes on those sales. Researchers at the University of Missouri Truman School of Public Affairs found that the state lost approximately $468 million annually in sales tax revenue in the last decade and say that number will rise in the future.
Federal law and U.S. Supreme Court rulings only allow states to levy sales taxes on a business with a physical presence in the state. For example, Amazon.com does not charge sales tax in Missouri because it is physically located in California. However, Wal-Mart charges sales tax, since it has stores in Missouri. In the study, researchers analyzed historical data on e-commerce activity and estimated that the state could earn $1.4 billion in potential revenue from 2011 to 2014 if it had some form of e-commerce sales tax.
To attempt to collect some e-commerce sales tax revenue, 24 states have joined the Streamlined Sales and Use Tax Agreement. Missouri is not a member state. Member states encourage companies that sell over the Internet and by mail order to collect taxes on sales made to member states. However, online retailers participate voluntarily. On average, member states collected $30.7 million in e-commerce tax revenue from 2005 to 2010.
“The Streamlined Sales and Use Tax Agreement is a short-term fix,” said Andrew Wesemann, a doctoral student in the Truman School of Public Affairs Institute of Public Policy at MU. “Since the agreement is voluntary, the amount of revenue collected is much less than the amount of tax we would expect the state to collect if all e-commerce retailers remitted sales taxes.”
In the long term, Wesemann recommends that Missouri legislators lobby Congress to pass new federal legislation permitting sales tax on Internet transactions across state lines. Currently under consideration, the Marketplace Fairness Act would allow states to enter the Streamlined Sales and Use Tax Agreement or create their own systems. The act, co-sponsored by Sen. Roy Blunt (R-Mo.) and Sen. Richard Durbin (D-Il.), would give states additional enforcement power to increase compliance with tax laws.
In addition to increasing tax revenue, MU researchers think that the state economy could benefit from e-commerce sales taxes as well. By taxing out-of-state online retailers, the state would level the playing field for retailers located inside state lines, incentivizing consumers to buy locally. | https://munews.missouri.edu/expert-comment/2012/1217-expert-available-uncollected-internet-sales-taxes-costing-missouri-millions-in-potential-revenue-during-holidays-2/ |
The terms ‘carbon neutral’ and ‘net zero’ are often used interchangeably in the language of sustainability and climate but their differences are not necessarily well understood. In this article, we explore the difference between the two and why it matters for companies looking to reduce emissions and join the battle against climate change.
Global temperatures have risen by 1.1ᵒC from pre-industrial levels, and with each incremental rise there are increasingly harmful impacts on the environment. According to the Intergovernmental Panel on Climate Change (IPCC), if the world is to avert the worst impacts of climate change – widely recognised to be beyond 1.5ᵒC of warming – we must reach net zero carbon by 2050.
However, what would you think if I said the world must reach carbon neutrality by 2050 to avert the worst impacts of climate change? You would be forgiven for not thinking there is a difference. ‘Carbon neutral’ and ‘net zero’ are two terms that are often used interchangeably with each other but represent two fundamentally different approaches to tackling climate change. This article explores these two terms, why it is important to understand the differences (particularly in the context of setting climate targets), and how net zero commitments more clearly demonstrate alignment to global emission reduction ambitions.
Understanding the difference between carbon neutrality and net zero
Carbon neutrality is defined by an internationally recognised standard, PAS 2060, and is where the sum of greenhouse gas (GHG) emissions produced are balanced or ‘offset’ by projects that either result in carbon reductions, efficiencies or sinks. This can be achieved by buying carbon avoidance/reduction credits, which support the funding of projects that reduce the amount of CO2 released into the atmosphere, such as renewable energy generation.
A commitment to carbon neutrality does not require a reduction in overall GHG emissions. However, for a business to be carbon-neutral, it must offset the GHG emissions it produces, even if those emissions are increasing.
In contrast, a commitment to net zero requires an organisation to reduce its GHG emissions in line with the latest climate science and 1.5ᵒC trajectory, with the remaining residual emissions balanced through carbon removal credits.
Note that a net zero commitment requires that credits are removal credits, whereas a carbon-neutral commitment permits avoidance/reduction credits. Removal credits support the funding of projects that remove CO2 from the atmosphere – for instance, through CO2 removal technologies or afforestation.
An organisation may go beyond a net zero commitment to be classed as ‘carbon positive’ if it removes more GHG emissions from the atmosphere than it produces.
Click here for more on carbon credits and carbon markets.
Lastly, there are also differences in the applicable scopes of emissions. Carbon neutrality has a minimum requirement of covering Scope 1 and 2 emissions, with Scope 3 encouraged. Net zero must cover Scopes 1, 2 and 3. The requirement to include Scope 3, which includes supply and value chain emissions, adds an additional layer of complexity with respect to measurement of emissions, which will be explored elsewhere in NatWest’s Carbonomics 101 series.
Summary of differences between carbon neutral and net zero
For investors and regulators, net zero is becoming the benchmark for climate action
With increased recognition of the need to act in the face of the climate crisis, governments and companies alike have taken to announcing ambitions to reduce their environmental impact. Some 70 countries, accounting for two-thirds of global carbon emissions, have now set net zero targets to be met by 2050.
While business commitments are moving in the right direction, companies are at different stages in their decarbonisation journeys. This contributes to the variety of terminology used when stating environmental ambitions, with some choosing to pursue carbon neutrality first, followed by a longer-term net zero commitment. Sky, for example, achieved carbon neutrality as early as 2006 and has since committed to achieve net zero by 2030.
Achieving net zero is more challenging than carbon neutrality given its holistic scope (Scope 1-3 emissions), level of ambition (emissions reduced in line with climate science), and approach to residual emissions (removal vs. avoidance/reduction). A commitment to carbon neutrality is a positive step; however, setting targets in line with climate science provides stakeholders with a clearer view of the environmental impact that the company is aiming to achieve. To support companies with setting science-based net zero targets, the Science Based Targets initiative (SBTi) recently released the Corporate Net Zero Standard, which sets out guidelines, criteria and recommendations for alignment.
New initiatives are supporting corporate net zero ambitions
In the UK, the direction of travel indicates a future focus on net zero commitments, with the UK Government recently announcing its commitment to become the world’s first ‘Net Zero-aligned Financial Centre’. As part of this, asset managers, regulated asset owners and listed companies will be required to publish transition plans that consider the UK Government’s net zero commitment. Although this doesn’t make organisation-level net zero commitments mandatory, there may be increasing stakeholder expectations for companies to do so.
Investor-led initiatives are beginning to establish benchmarks to evaluate the corporate ambition and action of the world’s largest GHG emitters. Climate Action 100+ defines key indicators of success for business alignment with net zero emissions at the 1.5ᵒC trajectory, while the United Nations Principles for Responsible Investment (PRI) has released the Investor Climate Action Plans (ICAP) Expectations Ladder, which tiers companies according to the stage of their decarbonisation journey. The benchmark includes four tiers, with those beginning to consider climate aligned strategies falling into Tier 4 and those setting decarbonisation targets in line with the Paris Agreement (1.5ᵒC trajectory) falling into Tier 1. There are several other initiatives, including the United Nation’s Race to Zero, supporting the accelerated adoption of net zero targets.
The time to act is now
According to a recent report by Moody’s, examining 4,400 of the largest companies globally, 42% were found to have some form of emissions targets, but only 17% reference net zero. This serves to highlight that, whilst commitments to reducing environmental impact are becoming more prevalent, current corporate targets fall far short of alignment to 1.5ᵒC, with only 3% of companies with targets aligned to this benchmark.
Whether it’s carbon neutrality or net zero, it’s imperative that commitments are as clear and transparent as possible so that stakeholders can understand the level of ambition and impact. The ultimate aim of emissions reduction commitments should be to support our collective ambition to limit global temperature increases to 1.5ᵒC of warming. Net zero targets play a key role in ensuring transitions are in line with this goal.
Follow NatWest’s Carbonomics 101 series to stay informed on the development of the carbon markets and learn about the role they could play in your sustainability strategy. Access forthcoming articles in this series the moment they’re published by following us on social media, and visit the bank’s Sustainability Hub for essential tools and insights to help you on your sustainability journey.
Useful resources: | https://treasury-management.com/blog/carbonomics-101-carbon-neutral-vs-net-zero-why-the-difference-matters-when-setting-climate-targets/ |
The primary goal of the Equity in Community Investments (ECI) program is to support campaigns led by community-based organizations for more equitable policies and investments in high-need communities of color. We accomplish this through policy and budget analysis, advocacy, narrative development, and capacity-building efforts for our partners.
Examples of our budget and policy work include:
The Manager will report to the Director of Equity in Community Investments and lead partnership efforts with community-based organizations to support local advocacy campaigns that aim to provide equitable policies and investments of public resources to support community priorities. Specifically, the Manager will staff and manage the program’s criminal justice reinvestment efforts by 1) managing our ongoing research and policy development agenda and 2) supporting various partners’ policy and systems change campaigns.
This work will primarily focus on:
The successful applicant must have a strong, demonstrated commitment to social and racial justice, documented project management skills and experience overseeing multiple streams of work at the same time, a record of involvement in racial justice/systems change campaigns, ability to engage with local government decision-making processes, and be self-motivated, flexible, and skilled at fostering creative and collaborative spaces.
Specific Responsibilities include, but are not limited to:
Qualifications:
To perform this job successfully, an individual must be able to perform each essential job function assigned satisfactorily. The requirements listed are representative of the knowledge, skill, and/or ability required. Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions.
Physical Demands:
Occasionally must be able to move office supplies and equipment weighing up to 10 pounds across the office or during events.
Salary:
Competitive compensation depending on experience. Includes full health, dental and retirement benefits.
Please send an email with the subject "ECI Application" and a cover letter, resume, and work sample in one PDF attachment titled last_name_first_name_eci.pdf (e.g., Jimenez_Jorge_eci.pdf) to:
Jorge Jimenez
Director of Human Resources, Finance and Administration
[email protected]
Advancement Project is an equal opportunity employer and does not discriminate on the basis of race, sex, religion, national origin, gender identity or expression, sexual orientation, disability, age, or any other category protected by local, state, or federal laws. We are committed to building a diverse, equitable, and inclusive staff team. We strongly encourage applicants who are people of color, LGBTQ, women, people with disabilities, and/or formerly incarcerated people. | https://la2050.org/jobs/3545 |
Show total hours worked per day in Timesheet view

Our team works on a lot of different tasks per day. In the Timesheet view it is not easy to tell how much time they have logged each day without having to add up the times against each individual task.

Can you provide a total at the top (or bottom) of each day that shows the total time worked? Users can then quickly see if they have logged their 8 hours for each day.

You have the totals at the top. Are you perhaps referring to billable time only?

Ok, now I feel like a complete fool. I don't know how I missed that. :(
For some reason I was looking at the bottom and completely missed the hours at the top. | https://success.clarizen.com/hc/en-us/community/posts/230968488-Show-total-hours-worked-per-day-in-Timesheet-view |
IN THE SUPREME COURT OF MISSISSIPPI
No. 2008-CT-01933-SCT
PAULA DENHAM AND PAMELA CALDWELL
v.
ADAM HOLMES, A MINOR BY AND THROUGH
DONNIE HOLMES, HIS FATHER & NATURAL
GUARDIAN
ON WRIT OF CERTIORARI
DATE OF JUDGMENT: 07/28/2008
TRIAL JUDGE: HON. ANDREW K. HOWORTH
COURT FROM WHICH APPEALED: LAFAYETTE COUNTY CIRCUIT COURT
ATTORNEYS FOR APPELLANTS: TOMMY WAYNE DEFER
BOBBY T. VANCE
ATTORNEY FOR APPELLEE: JOHN BRIAN HYNEMAN
NATURE OF THE CASE: CIVIL - PERSONAL INJURY
DISPOSITION: THE JUDGMENT OF THE COURT OF
APPEALS IS AFFIRMED. THE JUDGMENT
OF THE CIRCUIT COURT OF LAFAYETTE
COUNTY IS REVERSED AND THE CASE IS
REMANDED - 04/07/2011
MOTION FOR REHEARING FILED:
MANDATE ISSUED:
EN BANC.
CARLSON, PRESIDING JUSTICE, FOR THE COURT:
¶1. Paula Denham and Pamela Caldwell filed a complaint against Adam Holmes in the
Lafayette County Circuit Court. Denham and Caldwell alleged that Holmes had negligently
operated his motor vehicle, resulting in an accident in which they were injured. A Lafayette
County jury returned a verdict in favor of Holmes, and the circuit court entered a judgment
consistent with the jury verdict. Aggrieved, Denham and Caldwell appealed, and we assigned
this case to the Court of Appeals. After the Court of Appeals reversed and remanded for a
new trial, Holmes filed a petition for writ of certiorari, which we granted.
¶2. While our disposition of today’s case is the same as the Court of Appeals, our reasons
differ from the Court of Appeals as to why a reversal of the trial-court judgment is required.
We agree with the Court of Appeals that at least one of the jury instructions granted by the
trial judge was erroneous. We find, however, that the trial court did not abuse its discretion
in failing to admonish the jury to disregard defense counsel’s comments during closing
arguments regarding the plaintiffs’ failure to provide expert testimony to support their claims
of Holmes’s liability. For purposes of the new trial to be conducted on remand, we likewise
address the extent to which the trial court abused its discretion by excluding the plaintiffs’
expert witness under Daubert and Mississippi Rule of Evidence 702. See Daubert v. Merrell
Dow Pharm., Inc., 509 U.S. 579, 113 S. Ct. 2786, 125 L. Ed. 2d 469 (1993).
FACTS AND PROCEEDINGS IN THE TRIAL COURT
¶3. For the most part, we present here the Court of Appeals’ recitation of the facts.
Denham v. Holmes, 2010 WL 1037494, at *1-2 (¶¶4-8) (Miss. Ct. App. Sept. 23, 2010).
However, we add certain facts found in the record for the sake of today’s discussion. On the
day of the accident, Paula Denham was driving her vehicle within or near the corporate limits
of Oxford. Denham’s vehicle also was occupied by Denham’s sister, Pamela Caldwell, who
was seated on the front seat-passenger side. Denham was traveling east on the portion of
University Avenue which is east of the Highway 7-University Avenue intersection and west
of the intersection of University Avenue with Highway 6/278. In this area, University
Avenue is a two-lane, paved street, with one lane for east-bound traffic and one lane for
west-bound traffic. Denham and Caldwell were traveling to Ken Ash Construction Company
(Ash), which is located on the north side of University Avenue, for the purpose of soliciting
business for their residential/commercial cleaning service. According to Denham, as she
prepared to make a left-hand turn off University Avenue into the Ash parking lot, she
engaged her left-hand-turn blinker and came to a complete stop in the east-bound lane to
allow three or four west-bound vehicles to pass. Denham stated that, although there was a
hill a short distance east of the Ash parking lot, the hill was a sufficient distance away from
her location such that she observed all of the vehicles (three or four in number)
simultaneously as they approached and traveled past her. As Denham attempted to negotiate
the left-hand turn into the Ash parking lot, the front left side of Adam Holmes's truck struck
the front passenger side of Denham's car. Denham stated she never saw Holmes’s truck
coming toward her. Lee Durham was a passenger in Holmes's truck. Before trial, the parties
stipulated that certain physical injuries and medical expenses had resulted from the accident.
¶4. Denham and Caldwell testified that no oncoming traffic was visible when Denham
commenced her left-hand turn. Denham testified that the front wheels of her car were in the
Ash parking lot when Holmes's truck hit her car, pushing it across both traffic lanes on
University Avenue. The car stopped on the roadside opposite the collision with the car’s
front end completely off the road. Holmes's truck stopped seventy-five feet inside Ash's
parking lot. The wreck totaled both vehicles.
¶5. Caldwell testified that an “instant” prior to impact, she saw Holmes’s truck. She stated
that Holmes was driving “crazy fast” and that she did not have time to warn Denham of the
approaching vehicle. Holmes testified that he estimated his speed to be between forty and
forty-five miles per hour when Denham turned in front of him. The accident report revealed
that Holmes had told officers that he was traveling forty-five miles per hour. Durham
testified that Denham's vehicle had abruptly turned left when it was “almost right on them.”
Holmes testified that he had applied his brakes and had steered to the right to avoid hitting
Denham's car. Deputy Shane Theobald with the Lafayette County Sheriff's Department was
the investigating officer at the scene. He testified that the speed limit on University Avenue
in the area of the accident was forty miles per hour.
¶6. Before trial, Denham and Caldwell designated Donald Rawson, a traffic-collision
reconstructionist, as their expert witness. The parties stipulated that Rawson was properly
and timely designated and that he would testify by deposition. The parties also stipulated that
Rawson was qualified to give certain expert opinions on the traffic accident.
¶7. However, at trial, Holmes made an ore tenus motion to exclude Rawson's deposition
testimony on the basis that it would not aid the jury. Holmes also questioned the reliability
of the testimony, noting that Rawson did not view the actual wrecked vehicles or speak with
Holmes, Durham, or Deputy Theobald. Holmes argued that Rawson's testimony only
reiterated what the police report stated and was unnecessary. Holmes, 2010 WL 1037494,
at *1-2 (¶¶4-8).1
¶8. Rawson’s expert opinion, as provided in his deposition, concluded that Holmes had
been speeding and could have avoided the accident. Rawson had viewed the accident report,
the accident site, the deposition testimony, and several photographs taken some time after the
accident. The accident report stated that Holmes had estimated his speed to be forty-five
miles per hour. Rawson calculated that Holmes’s vehicle had been 206 feet from Denham’s
car when her car had begun to turn, based on the “time it took [Denham] to turn before” the
vehicles collided – 3.12 seconds 2 – and Holmes’s estimated speed – forty-five miles per hour.
Using basic mathematics, Rawson concluded that Holmes’s truck, at a constant speed of
forty-five miles per hour, traveled the 206 feet in 3.12 seconds.
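As a rough arithmetic check of this figure (using only the constant 45-mile-per-hour estimate and the 3.12-second turn time described above), converting the speed to feet per second and multiplying by the turn time reproduces the stated distance:

$$
45\ \text{mph} \times \frac{5280\ \text{ft/mi}}{3600\ \text{s/hr}} = 66\ \text{ft/s}, \qquad 66\ \text{ft/s} \times 3.12\ \text{s} \approx 206\ \text{ft}.
$$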
¶9. Timing vehicles at the accident scene, Rawson found that a stopped vehicle could
cross the opposite lane and completely exit the roadway in 3.62 seconds. Rawson opined that
if Holmes had been driving forty miles per hour and had braked properly, then “he would
have slowed down to 31 miles per hour and gone behind [the plaintiff] as she cleared.” This
1 We emphasize that some of the facts we have added were not included in the Court
of Appeals’ opinion.
2 Specifically, according to Rawson, Denham’s turn took 3.12 seconds such that the
front wheels of her car were off the road and in the gravel drive when the collision occurred.
In his post-accident visit to the scene, Rawson determined this exact time by timing cars as
they turned at the accident site.
opinion also assumed that Holmes had been 206 feet from the point of impact when Denham
had commenced her left-hand turn.
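Rawson’s deposition excerpt does not spell out the braking values behind the 31-mile-per-hour figure. For illustration only, assuming a perception-reaction time of about 1.5 seconds and a comfortable braking deceleration of roughly 5.8 feet per second squared (both assumed values, not taken from the record), a truck starting 206 feet away at 40 miles per hour (about 58.7 feet per second) would cover roughly 88 feet before braking began and would slow over the remaining 118 feet to

$$
v \approx \sqrt{(58.7\ \text{ft/s})^2 - 2\,(5.8\ \text{ft/s}^2)(118\ \text{ft})} \approx 45.6\ \text{ft/s} \approx 31\ \text{mph},
$$

arriving at the point of impact roughly 3.7 to 3.8 seconds after perception, slightly later than the 3.62 seconds needed for the turning car to clear the lane.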
¶10. Importantly, Rawson opined that the lack of skid marks proved that Holmes had not
braked, and if he had braked or taken evasive action, then Denham’s car could have cleared
Holmes’s truck in the necessary 3.62 seconds. Moreover, Rawson concluded that, based on
the absence of skid marks, Denham had not been negligent in the operation of her vehicle.
He opined that if Denham had created an immediate hazard 3 when turning and, therefore, did
not have the right of way, then Holmes would have reacted by braking his truck, thus causing
skid marks on the pavement, in an attempt to avoid the collision. Rawson concluded that
since no skid marks existed, Denham had not created an immediate hazard and that she had
the right of way. Accordingly, Rawson surmised that Holmes had failed to avoid the
accident, which was his duty.
3 In Rawson’s deposition, he testified that, according to Mississippi’s traffic
regulations and rules of the road, “if someone is turning and they don’t create an immediate
hazard, they have the right-of-way.” See Miss. Code Ann. § 63-3-803 (Rev. 2004) (“The
driver of a vehicle within an intersection intending to turn to the left shall yield the right-of-
way to any vehicle approaching from the opposite direction, which is within the intersection
or so close thereto as to constitute an immediate hazard. However, said driver, having so
yielded . . . may make such left turn and the drivers of all other vehicles approaching the
intersection from said opposite direction shall yield the right-of-way to the vehicle making
the left turn.”). See also Baxter v. Rounsaville, 193 So. 2d 735, 739 (Miss. 1967) (“[T]he
law does not require the operator of an automobile traveling at a lawful rate of speed to stop
or even to reduce the speed of his vehicle merely because he sees another vehicle on the
highway, unless he sees, or by the use of ordinary care should have seen, that the other
automobile is in a place of peril . . . .”).
¶11. The trial judge read Rawson’s deposition and determined that Rawson had not
performed a true accident reconstruction; that Rawson’s information was unreliable; and that
Rawson’s conclusions were unnecessary:
There is no accident reconstruction done here. There’s a man who has
qualifications as an accident reconstructionist – that’s stipulated and accepted
– who comes to the scene approximately two years after the accident to
determine lines of sight.
Now, is that relevant or can that be relevant and assist the trier of fact for him
to offer an opinion on that; and the answer to that is yes.
But an accident reconstructionist’s specialized expertise revolves around the
reconstruction of the accident; and he didn’t reconstruct the accident. He
extrapolated some information from a police report and photographs, which
are, absent a photographic expert, not suitable evidentiary matters for the
reconstruction of an accident.
I don’t know that you could do it with a layer of experts, but to calculate speed
you have to understand and the reconstructionist has to factor in crush damage
to the vehicles. Skid marks are not determined from photographs. I know that
much about accident reconstruction, and that’s what he purports to do, and his
information is not based upon sufficient facts or data, and the testimony is not
a product of reliable principles and methods, because none of any significance,
other than a couple of mathematical calculations that could have been done by
a high school student, were employed in this case, none, no coefficients, no
reaction time, none of the things that a true accident reconstruction could use
to benefit the jury.
Any conclusions that he has that are accurate and valid, based upon the limited
information he had, are certainly subject to be overridden by the confusion
associated with the conclusions that he draws based upon insufficient facts and
data to arrive at those conclusions, some of which are legal conclusions that
he doesn’t necessarily need to render because they certainly invade the
province of the jury.
I’m of the opinion that preDaubert, whatever, let it come in. PostDaubert this
Court is the gatekeeper of evidentiary matters, particularly expert testimony.
I am of the decided opinion that under the Daubert standard that the Court
would be committing error to allow this testimony to come in as proffered to
the Court, that being the deposition that I have read in full numbered 31 pages.
¶12. The plaintiffs then tried to admit portions of Rawson’s deposition pertaining to
measurements and distances. The trial court refused this request but allowed plaintiffs’
counsel to proffer Rawson's deposition, report, and curriculum vitae as exhibits for
identification purposes only. Denham and Caldwell objected to this ruling, and the trial court
overruled their objections.
¶13. During closing arguments, defense counsel commented on the plaintiffs’ failure to
provide expert testimony after plaintiffs’ counsel had informed the jury in opening statements
that such evidence would be offered. The reason the plaintiffs had not offered expert
testimony was due to the trial court’s exclusion of such testimony. Plaintiffs’ counsel
objected and moved for a mistrial. The trial court overruled the objection, denied the motion
for a mistrial, and did not admonish the jury to disregard defense counsel’s statements. The
trial judge also granted jury instructions D-4 and D-9, over the plaintiffs’ objections.
PROCEEDINGS IN THE COURT OF APPEALS
¶14. Before the Court of Appeals, Denham and Caldwell asserted that the trial court had
erred by: “(1) excluding the testimony of their expert witness; (2) allowing Holmes’s attorney
to make reference during closing arguments to the lack of expert testimony; (3) granting jury
instructions D-4 and D-9; (4) denying their motion for a JNOV; and (5) denying their motion
for a new trial.”
¶15. The Court of Appeals reversed the judgment of the Lafayette County Circuit Court,
finding that the trial court had abused its discretion by failing to permit the plaintiffs’
accident reconstructionist to testify; by failing to instruct the jury to disregard defense
counsel’s comments during closing argument; and by granting Holmes’s proffered jury
instructions D-4 and D-9.
¶16. We restate and reorder the assignments of error for the sake of today’s discussion.
DISCUSSION
I. WHETHER THE TRIAL COURT ERRED BY
GRANTING JURY INSTRUCTIONS D-4 AND D-9.
¶17. Jury instructions are to be read as a whole. Bickham v. Grant, 861 So. 2d 299, 301
(Miss. 2003) (citing Southland Enters., Inc. v. Newton County, 838 So. 2d 286, 289 (Miss.
2003)). The trial judge has considerable discretion in instructing the jury. Id. A defendant
generally is entitled to an instruction which presents his side of the case; however, such
instruction must correctly state the law. Id. (citing Southland Enters., Inc., 838 So. 2d at
289) (citations omitted)). Furthermore, “[i]t would be error to grant an instruction which is
likely to mislead or confuse the jury as to the principles of law applicable to the facts in
evidence.” Id. (citing Southland Enters., Inc., 838 So. 2d at 289); see also McCary v.
Caperton, 601 So. 2d 866, 869 (Miss. 1992), overruled on other grounds by Robinson
Property Group, L.P. v. Mitchell, 7 So. 3d 240, 244-45 (Miss. 2009).
¶18. The Court of Appeals found that the trial judge erred when, over objection, he granted
jury instructions D-4 and D-9. Holmes, 2010 WL 1037494, at *5 (¶¶23-25).
¶19. Instruction D-4 states:
The violation of any posted speed limit or allegations of driving at an
excessive speed are only relevant if the plaintiff has shown, from a
preponderance of the evidence, that the speed of Adam Holmes was the sole
proximate cause or proximate contributing cause to the accident. Unlawful
speed is not a proximate cause of an accident caused by the intervening
negligence of another person.
Therefore, should you find from a preponderance of the evidence that the
motor vehicle accident of July 15, 2004 was the sole proximate cause of the
actions of Paula Denham, then any violation of the speed limit or allegations
of driving at an excessive speed are irrelevant to your decision.
¶20. Instruction D-9 states:
When considering who is at fault for an accident, and/or injuries, you may take
into account the conduct of those who are not parties to this lawsuit. Although,
not a party to this lawsuit, you may consider the actions or omissions of Paula
Denham in reaching your verdict.
¶21. The Court of Appeals held that Instruction D-4 misstated the law of comparative
negligence4 and conflicted with a second jury instruction, D-10, regarding assigning
negligence/fault percentages. Id. In addition, the Court of Appeals reasoned that Instruction
D-9 clearly misstated the facts, because, contrary to the language of the instruction, Denham
was a party to the lawsuit. Id.
¶22. “Defects in specific instructions do not require reversal ‘where all instructions taken
as a whole fairly – although not perfectly – announce the applicable primary rules of law.’”
Richardson v. Norfolk S. Ry. Co., 923 So. 2d 1002, 1011 (Miss. 2006) (quoting Bradford
4 “Mississippi is a pure comparative[-]negligence state.” Coho Res., Inc. v. Chapman,
913 So. 2d 899, 911 (Miss. 2005) (citing Miss. Code Ann. § 11-7-15 (Rev. 2004)).
v. Barnett, 615 So. 2d 580, 583 (Miss. 1993)). Viewing all the jury instructions and
considering these instructions as a whole, this Court cannot say that Instruction D-4 “fairly”
announced the law. Not only does Instruction D-4 misstate the law, but it conflicts
irreconcilably with Instruction D-10.5
¶23. This Court agrees with the Court of Appeals that Instruction D-4 improperly stated
that “[u]nlawful speed is not a proximate cause of an accident caused by the intervening
negligence of another person.” But again, our reasoning in finding fault with this instruction
is somewhat different than that of the Court of Appeals.6 Our problem with this instruction
is that the jury was informed that, even if it found that Holmes’s speed at the time of the
accident was in excess of the posted speed limit, and even if the jury found that this unlawful
speed was a proximate cause of the accident, Holmes still could avoid culpability due to the
“intervening negligence of another person,” for example, Denham. Likewise, contrary to the
assertions of Holmes that any confusion caused by the language of the first paragraph of the
jury instruction was remedied by the second paragraph of the instruction, we find that, in
reading the two paragraphs of Instruction D-4 together as a whole, the second paragraph of
the jury instruction only added to the confusion. In the end, we find that Instruction D-4
5 Instruction D-10 was the familiar form-of-the-verdict instruction based on
comparative negligence.
6 The Court of Appeals stated, in part, that “[a]s it implied contributory negligence,
we find that jury instruction D-4 was a misstatement of the law and was potentially
misleading to the jury.” Holmes, 2010 WL 1037494, at *5 (¶23).
misstated the law and likely confused and misled the jury, especially in light of Instruction
D-10.
¶24. Furthermore, we do agree that, while Instruction D-9 was obviously incorrect in
stating that Denham was not a party to the lawsuit, on the other hand, Denham was the driver
of the other vehicle in the accident, and the record reveals that counsel for both the plaintiffs
and the defendant referred to Denham as the driver during opening statements and closing
arguments. Moreover, the trial court provided a form-of-the-verdict instruction, allowing the
jury to consider the negligence, if any, of both Holmes and Denham when rendering the
verdict. This form-of-the-verdict instruction was consistent with Instruction D-9, which
instructed the jury that it could consider the negligence of Denham in reaching a verdict.
While permitting the defendant to offer Instruction D-9 was error, the trial court did not
commit reversible error in granting this instruction, since it was unlikely to have confused
the jury’s understanding of the facts in evidence. Grant, 861 So. 2d at 301 (citing Southland
Enters., Inc., 838 So. 2d at 289) (citations omitted).
¶25. For these reasons, while we agree with the Court of Appeals that the trial court’s grant
of Instruction D-4 was reversible error, we disagree with the Court of Appeals in its finding
that the trial court’s grant of Instruction D-9 was likewise reversible error.
II. WHETHER THE TRIAL COURT ERRED BY NOT
INSTRUCTING THE JURY TO DISREGARD DEFENSE
COUNSEL’S STATEMENTS DURING CLOSING
ARGUMENTS.
¶26. In opening statements, plaintiffs’ counsel had informed the jury that expert testimony
would support a finding that Holmes had caused the accident. During closing arguments,
after plaintiffs’ counsel did not offer any expert testimony because the trial judge had
excluded such testimony, defense counsel stated:
Now they want to talk about property damage and where the vehicle ended up.
There’s no evidence. There’s no evidence here of how fast what causes what
property damage, what speeds cause what property damage. The plaintiff,
[through counsel], got up and told you that you would hear from the witnesses
and you would hear from an experts [sic] get up and testify.
Plaintiffs’ counsel objected and moved for a mistrial. The trial court overruled both the
objection to the remarks and the motion for a mistrial.
¶27. The Court of Appeals found that the trial judge had committed reversible error in not
sustaining the objection and instructing the jury to disregard these comments made by
defense counsel. Holmes, 2010 WL 1037494, at *4 (¶18). The Court of Appeals reasoned
that “Holmes’s counsel’s comments . . . were not made in furtherance of evaluating the
evidence or understanding the law; rather the comments were made to arouse prejudice in
the eyes of the jury.” Id.
¶28. This Court has held that “[t]he test in determining whether a lawyer has made an
improper argument which requires reversal is ‘whether the natural and probable effect of the
improper argument . . . creates an unjust prejudice against the opposing party resulting in a
decision influenced by the prejudice so created.’” Eckman v. Moore, 876 So. 2d 975, 986
(¶38) (Miss. 2004) (quoting Davis v. State, 530 So. 2d 694, 701-02 (Miss. 1988)). Moreover,
“[t]he only legitimate purpose of the [closing] argument of counsel in a jury case is to assist
the jurors in evaluating the evidence and in understanding the law and in applying it to the
facts.” Shell Oil Co. v. Pou, 204 So. 2d 155, 157 (Miss. 1967).
¶29. In Clemons v. State, 320 So. 2d 368, 369 (Miss. 1975), the prosecutors described the
evidence – marihuana – as being “the stuff that cause[s] people to drown little babies” and
to go crazy. Later, the prosecutor argued that the defense had failed to call certain witnesses
because they all were in criminal cahoots and were trying to protect each other. Id. at 370.
In the same argument, the prosecutor pointed to other criminal defendants – in unrelated
cases – as “[b]irds of a feather” waiting to be tried for crimes. Id. Finding these statements
obviously prejudiced the defendant’s right to fair trial, this Court reversed and remanded. Id.
at 373.
¶30. In Davis v. State, 530 So. 2d 694, 701 (Miss. 1988), the prosecutor – in an armed-
robbery case – referred to the incident as being “a heartbeat away from a massacre. . . .” The
judge admonished the jury to disregard this statement, and this Court found the prosecutor’s
statement was harmless. Id. at 702. In Roundtree v. State, 568 So. 2d 1173, 1177 (Miss.
1990), the prosecutor remarked that the people of the county were fed up with waking up in
the mornings to news of another body being discovered. The judge admonished the jury to
disregard this statement. Id. at 1178. This Court found the statements improper but found
that the judge’s admonishment had cured any prejudicial effect. Id.
¶31. Recently, in Eckman, the plaintiff stated that the physician defendant thought he was
“above the law.” Eckman, 876 So. 2d at 987. We reiterated that “[t]he only legitimate
purpose of the [closing] argument of counsel in a jury case is to assist the jurors in evaluating
the evidence and in understanding the law and in applying it to the facts. Appeals to passion
or prejudice are always improper and should never be allowed.” Id. (quoting Shell Oil Co.,
204 So. 2d at 157). This Court found that the trial court had erred “in finding that this
improper argument did not exceed the bounds of the evidence.” Id. The plaintiffs’ statement
had nothing to do with the standard of care at issue. Id.
¶32. In today’s case, we agree that defense counsel made the objectionable statements with
the intent of being “prejudicial” to the other side’s case. The reality of our advocacy system
is that the purpose of a party’s presentation of evidence and the comments of that party’s
counsel, throughout the trial, is to aid that party’s case, and to “prejudice” (be detrimental to)
the other party’s case. As to the objectionable comments made by defense counsel during
closing arguments in today’s case, these statements, while perhaps prejudicial (detrimental)
to the plaintiffs’ case, were not “unjust[ly]” prejudicial. Davis, 530 So. 2d at 701-02. The
plaintiffs, through counsel, had remarked in opening statements that they would introduce
expert testimony into evidence; however, based on the trial court’s ruling during trial as to
the plaintiffs’ expert, the plaintiffs failed (were unable) to present this expert testimony.
Defense counsel’s bringing to the jury’s attention this “missing link” in the plaintiffs’
evidence most likely harmed the plaintiffs’ case. But, acting at their own peril, the plaintiffs
invited this comment by informing the jury during opening statements that they would
provide expert testimony during the trial but failing to do so. See Taylor v. State, 672 So. 2d
1246, 1269 (Miss. 1996) (holding that the State may respond in its closing arguments to
defense counsel’s failure to follow through on promises of proof to come made during
opening statements); Herring v. Poirrier, 797 So. 2d 797, 804 (Miss. 2000) (holding that
plaintiff may not make comments during opening statements and then complain when the
opposing party reasonably attempts to respond to them); Echols v. State, 759 So. 2d 495,
497-98 (Miss. Ct. App. 2000) (reasoning that defense counsel’s remarks during the opening
statement and a subsequent lack of supporting evidence “invited the State’s remarks in the
closing argument”); Hinton v. Waste Techniques Corp., 364 A.2d 724, 728 (Penn. 1976)
(“An attorney [in a civil case] cannot promise to prove certain facts to the jury, fail to prove
them, and then expect to escape with impunity.”).
¶33. Thus, the Court of Appeals should not have found that defense counsel committed
error by identifying a legitimate evidentiary weakness in the plaintiffs’ case and rebutting the
plaintiffs’ opening statements. The facts of this case are distinguishable from cases in which
we have found unjust prejudice. For the foregoing reasons, we find the trial judge did not
commit error in failing to admonish the jury to disregard these comments made by defense
counsel during closing arguments.
III. WHETHER THE TRIAL COURT PROPERLY EXCLUDED
THE PLAINTIFFS’ EXPERT TESTIMONY.
¶34. “The standard of review for the admission or exclusion of evidence, such as expert
testimony, is an abuse of discretion.” Investor Res. Servs., Inc., v. Cato, 15 So. 3d 412, 416
(Miss. 2010) (citing Adcock v. Miss. Transp. Comm’n, 981 So. 2d 942, 946 (Miss. 2008)).
This Court will not overturn a trial court’s decision on an evidentiary issue unless the trial
court abused its discretion, meaning it acted arbitrarily and clearly erroneously. Hubbard v.
McDonald’s Corp., 41 So. 3d 670, 674 (Miss. 2010) (citing Kilhullen v. Kansas City So.
Ry., 8 So. 3d 168, 172 (Miss. 2009)). “A trial judge’s determination as to whether a witness
is qualified to testify as an expert is given the widest possible discretion and that decision
will only be disturbed when there has been a clear abuse of discretion.” Worthy v. McNair,
37 So. 3d 609, 614 (Miss. 2010) (quoting Sheffield v. Goodwin, 740 So. 2d 854, 856 (Miss.
1999)).
¶35. The admissibility of expert testimony is evaluated in light of Mississippi Rule of
Evidence 702, which states:
If scientific, technical, or other specialized knowledge will assist the trier of
fact to understand the evidence or to determine a fact in issue, a witness
qualified as an expert by knowledge, skill, experience, training, or education,
may testify thereto in the form of an opinion or otherwise, if (1) the testimony
is based upon sufficient facts or data, (2) the testimony is the product of
reliable principles and methods, and (3) the witness has applied the principles
and methods reliably to the facts of the case.
This rule emphasizes that it is “the gate[-]keeping responsibility of the trial court to
determine whether the expert testimony is relevant and reliable.” M.R.E. 702 cmt.
¶36. For expert testimony to be admissible, it must be both relevant and reliable. Daubert
v. Merrell Dow Pharm., Inc., 509 U.S. 579, 592-94, 113 S. Ct. 2786, 125 L. Ed. 2d 469
(1993). “Relevance is established when the expert testimony is sufficiently tied to the facts
of the case that it will ‘assist the trier of fact to understand the evidence or to determine a fact
in issue.’” Hubbard, 41 So. 3d at 675 (quoting Daubert, 509 U.S. at 591). To be reliable,
“the testimony must be grounded in the methods and procedures of science, not merely a
subjective belief or unsupported speculation.” Worthy, 37 So. 3d at 615 (citing Miss. Transp.
Comm’n v. McLemore, 863 So. 2d 31, 36 (Miss. 2003)) (citations omitted).
¶37. We have stated that “an expert’s testimony is presumptively admissible when relevant
and reliable.” Hubbard, 41 So. 3d at 675 (quoting McLemore, 863 So. 2d at 39). “The
weight and credibility of expert testimony are matters for determination by the trier of fact.”
Id. (quoting Univ. Med. Ctr. v. Martin, 994 So. 2d 740, 747 (Miss. 2008)) (citations
omitted). “Vigorous cross-examination, presentation of contrary evidence, and careful
instruction on the burden of proof are the traditional and appropriate means of attacking
shaky but admissible evidence.” Id. (quoting McLemore, 863 So. 2d at 36 (quoting Daubert,
509 U.S. at 596)).
¶38. Ultimately, if an expert’s testimony survives the threshold scrutiny under Rule 702,
it is subject to further review under Rule 403. See Worthy, 37 So. 3d at 614 (citing Daubert,
509 U.S. at 595) (“The [trial] judge in weighing possible prejudice against probative force
under Rule 403 of the present rules exercises more control over experts than lay witnesses.”).
“[E]xpert evidence can be both powerful and quite misleading . . . .” Daubert, 509 U.S. at
595. Accordingly, “lack of reliable support may render [expert testimony] more prejudicial
than probative” under Rule 403. Viterbo v. Dow Chem. Co., 826 F.2d 420, 422 (5th Cir.
1987).
¶39. Rawson’s deposition testimony concluded that, based on basic mathematics and the
lack of skid marks, Holmes should have avoided the accident. Specifically, Rawson’s
deposition testimony concluded that Denham was not negligent and did not create an
immediate hazard when turning, because there were no skid marks. Denham and Caldwell
argued that the trial court should have admitted this deposition testimony because “several
of the jury instructions hinged on the speed of Holmes’s vehicle and distance between the
vehicles.” Denham, 2010 WL 1037494, at *3 (¶11).
¶40. The Court of Appeals agreed, finding that Rawson’s deposition testimony “was
technical in nature and would have assisted the jury in understanding the evidence and
determining the facts in issue.” Holmes, 2010 WL 1037494, at *3 (¶14). The Court of
Appeals further found that, because the “testimony was based on the facts available from the
accident scene . . . then this testimony should have been admitted.” Id. And the Court of
Appeals reasoned that, because the jury had no expert testimony from which to determine the
negligence of the parties, the jury “did not have sufficient information from which to reach
a verdict.” Id.
¶41. In Hollingsworth v. Bovaird Supply Co., 465 So. 2d 311, 315 (Miss. 1985), this Court
first held that a properly qualified and examined expert witness could provide testimony on
issues of ultimate fact regarding the cause of car wrecks without invading the jury’s province
– essentially placing accident reconstructionist experts on equal footing with other experts.
Under Daubert, however, Rawson’s testimony must still be both relevant and reliable.
Daubert, 509 U.S. at 591-2.
1. Relevance
¶42. This Court agrees that Rawson’s basic mathematics and timing estimates, based on
Holmes’s professed speed, were relevant. See Cato, 15 So. 3d at 418 (discussing the low
threshold for relevant evidence). Both the Court of Appeals and the trial court agreed that
Rawson’s timing and distance estimates would have assisted the trier of fact.
2. Reliability
¶43. However, contrary to the language in the Court of Appeals’ opinion, the mere
existence of expert testimony – based on the facts available from an accident scene – does not
mean that this testimony, although technical in nature, should be admitted as a matter of
course. See M.R.E. 702 (“If . . . technical . . . knowledge will assist the trier of fact to
understand the evidence or to determine a fact in issue, a witness . . . may testify thereto in
the form of an opinion or otherwise, if (1) the testimony is based upon sufficient facts or data
. . . .”).
¶44. Rawson’s deposition testimony reveals that Rawson concluded Holmes had been
speeding. This opinion is based on Holmes’s admitted estimated speed. Based on this
estimated speed, Rawson performed some basic mathematical calculations to determine the
distance between Holmes’s car and Denham’s car.7 He also viewed several photographs and
determined no skid marks existed. He then made a second conclusion: that Denham had not
been negligent and that Holmes had failed to avoid the accident. Rawson reasoned that,
based on the lack of skid marks, Holmes had not attempted to avoid the accident:
[Defense Attorney]: If I understand it right, traveling at -- in excess of the
speed limit was -- the basis of your opinion was simply the 45 miles per hour
written on the police report.
7 Rawson indicated in his report that no physical evidence permitted him to test
Holmes’s speed.
[Rawson]: That’s correct.
[Defense Attorney]: Nothing else?
[Rawson]: Nothing else.
[Defense Attorney]: The failure to take appropriate action – and when you say
that, are you saying to avoid the accident?
[Rawson]: Yes, sir.
[Defense Attorney]: Okay. Failure to take appropriate action to avoid the
accident, it is simply by no evidence of skidding, no skidmarks on the
pavement?
[Rawson]: That’s correct.
Rawson could not perform any tests to confirm or disprove Holmes’s estimated constant
speed of forty-five miles per hour.8 As the Court of Appeals acknowledged, little physical
evidence in this case lent itself to expert analysis in the field of accident reconstruction to aid
the jury.
¶45. Reviewing Rawson’s conclusions based on the limited evidence in this case, we
cannot say the trial judge abused his discretion in finding this deposition testimony to be
unreliable as expert testimony with regard to Rawson’s ultimate conclusion concerning
causation and avoidance. But we agree with the Court of Appeals to the extent that the trial
court should have permitted Rawson to offer his timing and distance estimates to aid the jury.
A. Timing & Distance Estimates
¶46. Rawson’s ultimate conclusion – that Holmes should have avoided the accident –
assumed that Holmes’s estimated speed was correct and further assumed that Holmes had
constantly maintained his estimated rate of speed until impact.
8 At trial, Holmes testified that he had told police that he estimated his speed to have
been between forty and forty-five miles per hour. He also testified that he had slowed to
thirty miles per hour when Denham’s car had turned into his lane.
¶47. Rawson’s testimony hinges on precise timing – the difference between 3.12 seconds
and 3.62 seconds. If these assumptions were incorrect, then Holmes would not have been
206 feet away from the plaintiff’s car when it began to turn. Consequently, his conclusions
regarding accident avoidance and the fact that Denham did not create an immediate hazard
most likely would have been altered.
¶48. Evidentiary weaknesses stemming from a lack of physical evidence in the plaintiffs’
case should not induce the introduction of unreliable expert testimony. See M.R.E. 702.
Generally, however, when expert opinion is based on reliable methodology, the facts as
applied in the methodology are a credibility determination for the jury. See Treasure Bay
Corp. v. Ricard, 967 So. 2d 1235, 1240 (Miss. 2007). Here, Rawson relied on basic
mathematics – an obviously reliable methodology – to create his timing and distance
estimates. In reaching his conclusions, Rawson applied facts in the record, including
Holmes’s estimated speed of forty-five miles per hour.
¶49. While Rawson’s timing and distance estimates arguably were shaky, the credibility
of this portion of Rawson’s deposition was an issue for the trier of fact. See id. (“[E]xperts
in many fields, including medicine, accident reconstruction and forensic pathology,
frequently rely on histories provided by patients and witnesses.”); see also Hubbard, 41 So.
3d at 675 (quoting McLemore, 863 So. 2d at 36 (quoting Daubert, 509 U.S. at 596))
(“Vigorous cross-examination, presentation of contrary evidence, and careful instruction on
the burden of proof are the traditional and appropriate means of attacking shaky but
admissible evidence.”).
¶50. Moreover, although the trial judge expressed concern over whether Rawson had
performed a true accident reconstruction, this Court finds that Rawson’s testimony regarding
his timing and distance estimates, based on common mathematics, did constitute expert
testimony in the field of accident reconstruction. Although jurors could have performed
Rawson’s common calculations, Rawson collected data from the accident using his
specialized knowledge. He measured sight distances, timed cars, and determined the location
of the accident from the available evidence. He interpreted this evidence and, ultimately,
based on this limited evidence, he reached conclusions about causation and avoidance. As
applied mathematically and at the accident site, Rawson’s expert analysis and methods
regarding timing and distance estimates, were beyond the average juror’s “common
knowledge” and should have been presented to the jury. Palmer v. Biloxi Reg’l Med. Ctr.,
Inc., 564 So. 2d 1346, 1355 (Miss. 1990); Smith v. Ameristar Casino Vicksburg, Inc., 991
So. 2d 1228, 1230 (Miss. Ct. App. 2008); see also 9 Am. Jur. 3d Proof of Facts § 115, 4
(2010) (Trial court should admit expert testimony if relying on the “‘knowledge and
application of principles of physics, engineering, and other sciences [is] beyond the ken of
the average juror.’”).
B. Lack of Skid Marks
¶51. Under Daubert, all of Rawson’s relevant conclusions, however, must have been
sufficiently reliable. We find that Rawson’s ultimate conclusion regarding causation and
avoidance was not sufficiently reliable. Based on the lack of skid marks, Rawson concluded
that Denham had not negligently caused an immediate hazard when she had turned in front
of Holmes. Rawson reasoned that if Denham had caused an immediate hazard, then Holmes
would have applied his brakes and left skid marks or swerve marks. And since no skid marks
existed, according to Rawson, Denham could not have been negligent.
¶52. But other than speaking generally about creating immediate hazards and pointing to
the lack of skid marks, Rawson’s deposition testimony never clearly explained under the
limited physical evidence why skid marks were required for a finding that Holmes had
attempted to avoid the accident:
[Defense Attorney]: Now, we can agree that in the general course of traffic,
disregarding the events of this accident, in the general course of traffic,
vehicles not turning left and in clear sight, they would have the right-of-way
to vehicles turning left on the eastbound lane?
[Rawson]: Unless they created an immediate hazard?
....
[Defense Attorney]: Okay. So in this case, Ms. Denham created an immediate
hazard?
[Rawson]: No, she didn’t.
[Defense Attorney]: And so in this case, if she didn’t create an immediate
hazard, then how does Mr. Holmes not have the right-of-way?
[Rawson]: He didn’t attempt to avoid the accident.
[Defense Attorney]: Okay. And if I’m clear – I’m not clear. Can you state that
again?
[Rawson]: Had she created an immediate hazard, she would have caused him
to respond by locking his brakes or swerving left or right to take
immediate evasive action. She did not. She made her turn in sufficient time
for him to have slowed by normal brake and still avoid the accident.
[Defense Attorney]: Okay. And how does that situation – how does that
eliminate his right-of-way?
[Rawson]: Well, the law says if someone is turning and they don’t create an
immediate hazard, they have the right-of-way.
....
[Defense Attorney]: And is that the basis of your opinion that Ms. Denham
held no negligence, is the fact that the rules of the road state that when a
vehicle is turning left and turns – I’m sorry. I just – if you can state that for me
again so I’m clear.
[Rawson]: Okay. If the vehicle that makes a left turn across traffic does not
create an immediate hazard, the oncoming traffic has to yield to him or her.
[Defense Attorney]: So if you make a left turn across oncoming traffic, the
oncoming traffic then has the duty to yield to you?
[Rawson]: Yes, sir.
[Defense Attorney]: Which, I guess, would be the basis for your opinion that
he should have avoided the accident?
[Rawson]: Yes, sir.
(Emphasis added.) Based simply on the absence of skid marks, Rawson concluded that
Denham had not been negligent and that Denham had the right of way. Rawson also
concluded that Holmes had not attempted to avoid the accident.9
¶53. Even though “[v]igorous cross-examination, presentation of contrary evidence, and
careful instruction on the burden of proof are the traditional and appropriate means of
attacking shaky but admissible evidence[,]” this Court cannot say that the trial judge acted
arbitrarily by finding that Rawson’s ultimate conclusion regarding causation and avoidance
did not satisfy Daubert’s reliability standard. Hubbard, 41 So. 3d at 675 (quoting
McLemore, 863 So. 2d at 36 (quoting Daubert, 509 U.S. at 596)). “[N]othing in . . . Daubert
. . . requires a . . . court to admit opinion evidence which is connected to existing data only
by the ipse dixit of the expert. A court may conclude that there is simply too great an
analytical gap between the data and the opinion proffered.” Watts v. Radiator Specialty Co.,
990 So. 2d 143, 149 (Miss. 2008).
9 On the other hand, Holmes testified that, once Denham had pulled in front of him,
he slowed his truck to a speed of thirty miles per hour in an effort to avoid the accident.
¶54. While this Court has allowed, when appropriate, an accident reconstructionist to opine
as to ultimate conclusions regarding causation, Rawson’s ultimate conclusion as articulated
in his deposition testimony, based on the lack of skid marks, contained an obvious “analytical
gap.” Therefore, we cannot say that the trial judge abused his discretion in excluding this
testimony. Simply stated, Rawson failed to connect the dots between the skid marks and the
existing physical evidence; thus, as found by the trial judge, his conclusion regarding
causation was unreliable.
¶55. Moreover, Rawson’s conclusion regarding whether Denham had created an immediate
hazard was not based on specialized, technical, or scientific knowledge. See M.R.E. 702
(requiring “scientific, technical, [or] specialized knowledge”). As previously mentioned,
Rawson opined that, because no skid marks were present in the black-and-white photographs,
Holmes did not attempt to avoid the accident, although he had a duty to do so. However,
Holmes testified that he did brake to avoid the accident, probably slowed to thirty miles per
hour, and sought to circumvent Denham’s car in the gravel lot off the highway where the
collision occurred. Durham, Holmes’s passenger, stated that Denham’s vehicle had pulled
in front of Holmes’s truck at the last second. On the other hand, Denham and Caldwell
testified that they never saw Holmes’s truck approaching, and that he must have been driving
“crazy fast.”
¶56. Without clearly tying the physical evidence to the lack of skid marks, Rawson’s
speculation “on the implication of the lack of skid marks would not have been superior to a
conclusion a jury could [have drawn] for themselves” and, therefore, was not necessary. See
Garnett v. Gov’t Employees Ins. Co., 186 P.3d 935, 946 (Okla. 2008); Palmer, 564 So. 2d
at 1355 (Miss. 1990) (“Expert testimony is required unless the matter in issue is within the
common knowledge of laymen.”); Scott v. Sears, Roebuck & Co., 789 F.2d 1052, 1055 (4th
Cir. 1986) (“[E]xpert testimony is unnecessary . . . when ordinary experience would render
the jury competent to decide the issue.”).
¶57. “A trial judge’s determination as to whether a witness is qualified to testify as an
expert is given the widest possible discretion . . . .” Worthy, 37 So. 3d at 614 (quoting Univ.
of Miss. Med. Ctr. v. Pounders, 970 So. 2d 141, 146 (Miss. 2007)). Viewing Rawson’s
deposition testimony in its entirety, we find that the trial judge did not abuse his discretion
by not allowing into evidence expert testimony that was clearly speculative and based on
insufficient data. But the trial court did abuse its discretion by not permitting the jury to
weigh the credibility of Rawson’s distance and timing estimates, which largely were based
on facts in the record and would have aided the jury.
CONCLUSION
¶58. We agree with the Court of Appeals’ finding that the trial judge erred by granting
Instruction D-4, but we disagree with the Court of Appeals’ finding that the trial judge erred
in granting Instruction D-9. We disagree with the Court of Appeals’ finding that the trial
judge committed error by not instructing the jury to ignore defense counsel’s comments
during closing arguments concerning the plaintiffs’ inability to provide expert testimony.
Defense counsel’s statements did not unjustly prejudice the plaintiffs. Finally, we agree in
part and disagree in part with the Court of Appeals’ finding that the trial judge erroneously
excluded the testimony of the plaintiffs’ expert. In viewing the entirety of Rawson’s
deposition testimony, we find that the learned trial judge did not err in performing his role
as gatekeeper under Mississippi Rule of Evidence 702 and Daubert regarding Rawson’s
ultimate conclusions. But we likewise are constrained to find that the trial judge abused his
discretion in disallowing Rawson’s testimony concerning timing and distance estimates.
¶59. In sum, based on today’s discussion, we affirm the judgment of the Court of Appeals,
although for different reasons, and we reverse the trial-court judgment and remand this case
to the Circuit Court of Lafayette County for further proceedings consistent with this opinion.
¶60. THE JUDGMENT OF THE COURT OF APPEALS IS AFFIRMED. THE
JUDGMENT OF THE CIRCUIT COURT OF LAFAYETTE COUNTY IS
REVERSED AND THE CASE IS REMANDED.
WALLER, C.J., LAMAR, CHANDLER AND PIERCE, JJ., CONCUR.
KITCHENS, J., CONCURS IN PART AND DISSENTS IN PART WITH SEPARATE
WRITTEN OPINION JOINED BY DICKINSON, P.J., AND RANDOLPH, J. KING,
J., NOT PARTICIPATING.
KITCHENS, JUSTICE, CONCURRING IN PART AND DISSENTING IN
PART:
¶61. I disagree with the majority’s finding that the trial court did not abuse its discretion
in excluding Donald Rawson’s expert testimony regarding the lack of skid marks, as they
related to causation and avoidance of the accident.10 In all other respects, I concur with the
majority.
10 Rawson’s qualifications as an expert in accident reconstruction have not been
challenged.
¶62. I agree with the majority’s summary of the Daubert standard; however, I disagree
with its conclusion that Rawson’s opinion, as it related to the lack of skid marks, was
unreliable and inadmissible. See Daubert v. Merrell Dow Pharm., Inc., 509 U.S. 579, 592-
94, 113 S. Ct. 2786, 125 L. Ed. 2d 469 (1993).
¶63. Mississippi Rule of Evidence 702 provides:

If scientific, technical, or other specialized knowledge will assist the trier of
fact to understand the evidence or to determine a fact in issue, a witness
qualified as an expert by knowledge, skill, experience, training, or education,
may testify thereto in the form of an opinion or otherwise, if (1) the testimony
is based upon sufficient facts or data, (2) the testimony is the product of
reliable principles and methods, and (3) the witness has applied the principles
and methods reliably to the facts of the case.
“Rule 702 seeks to encourage the use of expert testimony in non-opinion form when counsel
believes the trier [of fact] can draw the requisite inference.” Miss. R. Evid. 702 cmt.
Additionally, it is possible for an expert to suggest the inference which should be drawn from
applying his specialized knowledge to the facts. Id. Moreover, “[t]his Court has permitted
the testimony of qualified accident reconstruction experts to give opinions on how an
accident happened, the point of impact, the angle of travel, the responsibility of the parties
involved, and the interpretation of photographs.” Fielder v. Magnolia Beverage Co., 757
So. 2d 925, 937-38 (Miss. 1999) (citing Miller v. Stiglet, Inc., 523 So. 2d 55 (Miss. 1988);
Hollingsworth v. Bovaird Supply Co., 465 So. 2d 311 (Miss. 1985)).
¶64. “It is not necessary that one offering to testify as an expert be infallible or possess the
highest degree of skill; it is sufficient if that person possesses peculiar knowledge or
information regarding the relevant subject matter which is not likely to be possessed by a
layman.” Hooten v. State, 492 So. 2d 948, 948 (Miss. 1986) (citing Henry v. State, 484 So.
2d 1012 (Miss. 1986)). In this case, Rawson had been employed with the Mississippi
Highway Safety Patrol since 1968, and, over the course of his employment and training, he
had become an expert in the field of accident reconstruction, which had led to his teaching
and publishing literature in that field. Clearly, Rawson possessed particular knowledge and
information regarding accident reconstruction that laypersons usually do not have.
¶65. In his deposition, Rawson concluded that Holmes had been traveling in excess of the
posted speed limit and that he had failed to take appropriate evasive action to avoid the
accident. Rawson’s report provided an explanation for each of his findings. Specifically,
Rawson’s report said that:
1. The Pontiac stopped eastbound on University Ave[.] to initiate a left
turn into a private drive. This opinion is based upon my review of the
Crash Report filed by the Sheriff’s office.
2. As the Pontiac was making a left turn it was struck in the front right
side by the front left of the westbound Chevrolet PU. This opinion is
based on my observation of the scene and vehicle photos as well as
plaintiff’s statements.
3. The Area of Impact was located at the north edge of University Ave[.]
and the middle of the private drive. This opinion is a result of my
observation of a photo depicting a tire scuff in the gravel area.
4. The impact rotated the Pontiac counterclockwise causing a secondary
collision of the Pontiac rear right side with the rear left side of the
Chevrolet PU. Opinion based on the photos showing the secondary
damage.
5. There was no skid marks prior to impact. Opinion is based on the
absence of skid marks in the photos.
6. The Pontiac was able to accelerate to the impact in 3.13 seconds before
being hit and could have cleared the westbound lane in 3.62 seconds.
This opinion was based on timed cars making a left turn at the collision
site.
7. The PU was approximately 206 feet from impact with a clear view of
the scene when the Pontiac entered his lane. This opinion is based on
the Crash Report in which the Chevrolet PU driver estimated his speed
at 45 mph, opinion #6 and my observations at the scene.
8. The speed of the PU was 45 miles per hour on impact. Opinion based
on the estimated speed of the PU driver and the absence of skid marks.
9. There is not a method to determine speed of the vehicle’s speed using
physical evidence. This opinion is based on the use of conservation of
linear momentum normally used in the case to determine speeds, but
necessary data is not available because:
a. The site has been altered due to construction in the post impact
areas.
b. The secondary collision has some input in speed and direction.
10. Using the posted speed limit of 40 mph, the PU could have reacted to
the car turning and applied normal braking and the Pontiac would not
create an immediate hazard. This would have allowed the PU to slow
to 31 mph and would have allowed the car to clear the lane in the 3.62
seconds required because he would arrive at the area of impact 3.72
seconds after perception. This opinion is a calculated value using the
speed limit of 40 mph, the normal braking value and the standard
perception-reaction time.
11. From the speed of 40 mph (posted speed limit). The driver could have
stopped the PU in 194 feet by reacting and locking his brakes. This is
12 feet prior to impact. This opinion is a calculated value based on the
road surface’s coefficient of friction.
(Emphasis in original.)
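The “calculated value[s]” referenced in items 10 and 11 are of the standard perception-reaction-plus-braking form; stated generally (the particular parameter values Rawson used are not reproduced in the report), the stopping distance from an initial speed $v_0$ is

$$
d_{\text{stop}} = v_0\, t_r + \frac{v_0^{2}}{2\,\mu g},
$$

where $t_r$ is the perception-reaction time, $\mu$ is the road surface’s coefficient of friction, and $g$ is the gravitational acceleration.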
¶66. The majority asserted that “Rawson failed to connect the dots between the skid marks
and the existing physical evidence; thus, as found by the trial judge, his conclusion regarding
causation was unreliable.” Maj. Op. at ¶ 54. However, Rawson’s report provided a basis for
each of his findings, including specific findings based on the accident report, accident photos,
the posted speed limit, Holmes’s admitted speed, the normal braking value, the standard
perception-reaction time, the road surface’s coefficient of friction, and his personal
observations of the scene.
¶67. I agree with the majority’s assertion that “Nothing in Daubert requires a court to
admit opinion evidence which is connected to existing data only by the ipse dixit of the
expert. A court may conclude that there is simply too great an analytical gap between the
data and the opinion proffered,” Maj. Op. at ¶ 53 (quoting Watts v. Radiator Specialty Co.,
990 So. 2d 143, 149 (Miss. 2008)). However, as evidenced by Rawson’s report, there was
sufficient evidence in the record to “connect the dots” of Rawson’s opinion regarding the
lack of skid marks, making his expert opinion reliable and admissible.
¶68. Moreover, Rawson’s report was dated June 11, 2006, and his trial deposition was
taken on July 3, 2006. Prior to the deposition, Holmes was on notice of Rawson’s opinions
and their bases. As such, Holmes could have challenged Rawson’s opinions by conducting
a vigorous examination of Rawson, and he could have presented contrary evidence through
an expert witness of his own. See Miss. Transp. Comm’n v. McLemore, 863 So. 2d 31, 36
(Miss. 2003) (quoting Daubert, 509 U.S. at 595-96) (“Vigorous cross examination,
presentations of contrary evidence, and careful instruction on the burden of proof are the
traditional and appropriate means of attacking shaky but admissible evidence.”).
¶69. However, Holmes did not attempt to challenge the bases of Rawson’s opinions until
his ore tenus Daubert motion on the first day of trial, June 25, 2008, nearly two years after
Rawson’s deposition had been taken. While the trial judge acted within his discretion in
allowing the motion to be heard, one has to wonder why Holmes waited until trial to object
to expert testimony of which he had known the bases for more than two years. See Hyundai
Motor America v. Applewhite, So. 3d , 2011 WL 448032, *5 (Miss. 2011) (“[T]he trial
judge has discretion with regard to when and how to decide whether an expert’s testimony
is sufficiently reliable to be heard by a jury . . . .”).
¶70. Because Rawson is an educated and experienced accident reconstructionist who
possessed peculiar knowledge within his field not likely to be possessed by a layman, and
his opinions were not merely subjective belief or unsupported speculation but were based on
methods and procedures of science, the trial court abused its discretion in excluding
Rawson’s deposition in its entirety. Hooten v. State, 492 So. 2d at 948; McLemore, 863 So.
2d at 36. Accordingly, I would affirm the Court of Appeals’s reversal and remand of this
issue for a new trial with the inclusion of all of Rawson’s expert testimony.
DICKINSON, P.J., AND RANDOLPH, J., JOIN THIS OPINION.
Transcript
Zhen Ni: So, my name is Zhen Ni. I am a Senior Research Fellow working in NINDS. So, today I will talk about the Cortical Anatomy and the Clinical Neurophysiology for the TMS Introduction. As we know, TMS, the Transcranial Magnetic Stimulation, is a powerful but non-invasive technique to stimulate the human brain. And the -- it's a very good technique to study the human brain function.
So, as we can see from this slide that if we give a stimulation to the human brain, it produces a descending volley in the spinal cord. And then, you can activate the spinal motor neuron pool. In transmission, you can activate to the target muscle then we can record the response in the target muscle. So, in this talk, I want to focus on four different topics. The first one, how can we record the response from the -- from the stimulation of TMS? Second, how can we test the spinal cord activity? The third one, how do we do the motor cortical stimulation with TMS? And what is the descending volley of the TMS? And the fourth one, I want to focus on the stimulation outside of motor cortex.
So, let's go to the first part. How do we record the response to TMS? There are usually two different techniques. One is electroencephalography, EEG, and the other is electromyography, EMG. EEG is a very good technique to record the activity of the brain. It is usually recorded using a system named the 10-20 EEG System. Here, 10-20 refers to the fact that the actual distance between adjacent electrodes is either 10 percent or 20 percent of the total front-to-back, or left-to-right, distance of the skull.
Here, we measure the distance from the left ear to the right ear. Also, we measure the distance from the nasion to the inion as the total distance. We can mount many electrodes on the scalp. Here, we use the abbreviation F for the frontal area, T for the temporal area, P for the parietal area, O for the occipital area, and C for the central area.
So, there are many different techniques to record the EEG. We can use a cap, for example, like showing in this slide. We use a 64-channel cap to do the EEG recording. So, there is a very important step to record the EEG. That's the preparation of the cap or electrode in other techniques. The preparation -- the purpose of the preparation of the cap is to reduce the impedance of the electrodes. And in this slide, the red color means it's a very high impedance. And the green color means low -- lower impedance, which we can use it for the next step for the recordings.
So, like in this slide, it's a new cap; it's named active cap. On the left side, it's a cap before the -- before the preparation. And we can see the technician is using the gel to prepare for the cap. And on the right side, almost all lights turn to green. That means the cap is ready for the recording. And in the middle, that means we are doing the preparation.
There's another technique named electromyography, EMG. This can also be used to record the response to TMS. The mechanism behind EMG is the size principle. The size principle means there are many motor neurons in the spinal cord, and these neurons are recruited in order from the smallest to the largest in size as the muscle contraction increases little by little. So EMG, with different techniques, can record the activity of muscle fibers, and the EMG recording can reflect the excitability of the neurons connecting to the muscle fibers.
In this slide, we show the recording of a single motor unit. Here is a very important concept named the single motor unit. A single motor unit includes an alpha motor neuron and all the corresponding muscle fibers it innervates. That means all the fibers which connect to the single motor unit have the same firing property. When we insert a needle into the muscle and the tip of the needle is close to the muscle fibers we want to record, we can pick up the discharge of those muscle fibers. This firing property of the muscle fibers reflects the firing property of the motor neuron that is connected to all these muscle fibers.
So, on the right side, we can see an example recorded of the single motor unit recorded in the -- in the first dorsal interosseous muscle. So, there is a very clear limitation of single motor unit recording. At first, of course, it's invasive and painful. And another important limitation is that it's almost impossible to record all the muscle fibers in the muscle with the single motor unit recording.
So, to overcome this limitation, we can use surface EMG. Surface EMG is used to monitor the general picture of the muscle activity. The surface EMG superimposes the action potentials of the muscle fibers under the electrode we use. Here we use an electrode attached to the surface of the muscle. And here is the concept named the compound muscle action potential, C-M-A-P, CMAP. It's a very useful measurement, and it can be recorded with surface EMG. The CMAP represents the summation of almost simultaneous action potentials from many muscle fibers in the same area. Usually, the CMAP is evoked by stimulation of the motor nerve.
So, with surface or single motor unit recording, we can test the spinal cord activity. In the next part I will talk about the test of spinal activity. In clinical neurophysiology, stimulation is often used to demonstrate whether the stimulated structure -- for example, the brain or spinal cord -- is involved in a specific movement task. We can analyze the response to the stimulation to look at the effect. So, spinal activity can be tested by stimulation of the afferent pathway of spinal motoneurons.
Here, we should say the basic mechanism underlying this test is that the corticospinal neurons in the motor cortex send their fairly long axons down to the spinal cord and synapse on the spinal alpha motoneurons. The alpha motoneurons send fibers which synapse on the muscle fibers. This is the physiological and anatomical mechanism for testing the spinal cord activity.
In this slide, we introduce the monosynaptic reflex in -- where -- which is related to the spinal cord activity. The very classical monosynaptic reflex is a stretch reflex. So, we can see on -- in the picture when the doctor taps the tendon of the muscle, the muscle spindle can be activated. And this will induce monosynaptic reflex, and we call them tendon reflex.
On the right side is the pathway for the tendon reflex. We can see that the muscle spindle is activated by the stimulation, and the signal travels through the Ia afferent up to the alpha motoneuron; after the transfer at the alpha motoneuron, the impulse is sent to the muscle through the efferent nerve, which is the motor nerve. On the physiological level, we can use a technique named the Hoffmann reflex, the H-reflex, and the H-reflex is the analog of the tendon reflex.
On the left side, I show example recordings of the H-reflex. I should mention here that when we record the H-reflex, we can record the muscle wave, the M-wave, at the same time. This M-wave is similar to the CMAP, which I showed in the previous slides. In the recordings, going from bottom to top means an increase of intensity. At the very bottom, when we use very low intensity, no response can be recorded. When we increase the intensity, the H-reflex appears first. When the intensity is increased again, the H-reflex becomes larger and, at the same time, the M-wave appears.
And when the intensity becomes very high, the H-reflex becomes smaller and smaller while the M-wave becomes bigger and bigger. When the stimulus intensity reached a very high level, H-reflex disappeared while M-wave reaches its maximum. So, the mechanism behind the -- this H-reflex that the stimulus comes -- the stimulation, the electrical stimulation, given to the nerve can go both to the afferent -- go to the afferent and the efferent fibers in both directions.
So, in this slide, I show the H-reflex mechanism. On the left side are the group analysis results for the recordings shown previously. The X-axis is the stimulus intensity, and the Y-axis is the amplitude of the H-reflex and the M-wave. We can see that the H-reflex has a lower threshold. When the stimulus intensity increases, it reaches its maximum and then decreases as the stimulus intensity increases further.
On the other hand, the M-wave has a higher threshold, increases almost linearly with the increment of stimulus intensity, and then finally reaches its maximum. On the right side is the mechanism behind the H-reflex and M-wave. The red arrow means the stimulus given to the motor nerve, and the blue arrow means the stimulus given to the sensory nerve. With a single stimulation, the impulses can go both ways -- in the antidromic and the orthodromic directions -- on both the sensory and the motor nerve.
So, the orthodromic impulse on the motor pathway produces the M-wave, and the impulse conducted along the sensory pathway up to the spinal cord produces the H-reflex. At the same time, the antidromic impulse on the motor nerve can cancel the H-reflex. That's why we get the experimental results which I discussed previously.
So, with all these slides, we know the H-reflex is the analog of the stretch reflex in the physiological lab, and the H-reflex is often used to test spinal activity in motor physiology. That means if we use TMS to stimulate the brain and record a response in the muscle, and we find a different response when we ask the subject to do two different tasks, we must also think about spinal cord activity, as the spinal cord is on the pathway from the motor cortex to the muscle we want to record.
So, there's a limitation for H-reflex. That is, the H-reflex is sometimes very difficult to be recorded from hand muscles. So, as a replacement, we can do an F-wave experiment. The mechanism behind the F-wave is that when we use very, very high stimulus intensity, the H-reflex can completely be blocked by the antidromic current, as we discussed before. At that time, no H-reflex can be recorded.
But, at the same time, the antidromic impulses on the motor fibers at this intensity -- we call it supramaximal intensity -- can activate the motor neurons directly, and they induce a wave. This is named the F-wave. So, there is an implication for the F-wave: we can also use the F-wave to test the spinal cord activity. There's a very clear limitation for the F-wave, that the F-wave only reflects the excitability of motor neurons with a very high firing threshold.
There's an immediate and very important implication for the F-wave recording. This is named the central motor conduction time, CMCT. When we give a stimulation to the cortex, we get what we call a motor evoked potential, MEP, in the muscle. When we measure the MEP, we can get the latency of this MEP. The MEP latency includes two parts; one is the central motor conduction time, the other is the peripheral nerve conduction time.
Let's go back to the previous study using the F-wave. When we give electrical stimulation to the motor nerve, the impulse goes up to the spinal cord and then comes back down to the muscle. This produces the F-wave latency. If we add this latency to the M-wave latency, we get two times the peripheral nerve conduction time. At this point, we should consider the delay at the turnaround from the motor nerve to the spinal cord and back down to the muscle. So, in the formula, we should subtract one synaptic delay, which is about one millisecond.
So, finally, the formula comes out as: central motor conduction time = MEP latency - (F-wave latency + M-wave latency - 1) / 2. The central conduction time can also be measured with TMS alone. In this figure, we can see that when we measure the conduction time in the hand muscle, the APB muscle of the thumb, we can give a stimulation at C3, which is close to the motor cortex, and it produces an MEP.
And if we give the stimulation to the cervical level, it produced MEP with shorter latency. If we get to the difference between two -- between two MEP at a different side we can get -- finally get to the central conduction -- central motor conduction time. A very similar technique can be used to measure the conduction time in the leg muscle, tibialis anterior.
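As a minimal illustration of the CMCT formula above (the latency values below are hypothetical, not measurements from the lecture):

```python
def central_motor_conduction_time(mep_latency_ms, f_latency_ms, m_latency_ms):
    """CMCT = MEP latency - (F + M - 1) / 2, all latencies in milliseconds.
    The (F + M - 1) / 2 term estimates the peripheral conduction time,
    subtracting ~1 ms for the turnaround delay at the spinal motoneuron."""
    peripheral_ms = (f_latency_ms + m_latency_ms - 1.0) / 2.0
    return mep_latency_ms - peripheral_ms

# Illustrative values for a hand muscle (not data from this lecture):
print(central_motor_conduction_time(mep_latency_ms=21.0,
                                    f_latency_ms=28.0,
                                    m_latency_ms=3.5))  # ~5.75 ms
```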
Then, we come to the next question. How can we do motor cortical stimulation with TMS? This slide shows the primary motor cortex. The primary motor cortex is identified as Brodmann area 4. The primary motor cortex contains very large pyramidal cells, which are the corticospinal neurons. These neurons send very long axons down to the spinal cord and synapse on the motoneuron pool. The primary motor cortex is an important target for many TMS studies.
The physiological and anatomical mechanism behind TMS targeting the motor cortex is shown in this slide. It's named the cortical homunculus. The cortical homunculus was described by a Canadian neurosurgeon named Penfield. Dr. Penfield used needle stimulation when he did neurosurgery; he stimulated the motor cortex and created the map shown in this slide.
So, what we can see is that the representations of different muscles in the primary motor cortex are highly disproportionate, and this is named the cortical homunculus. That's why in many TMS studies we use very small hand muscles, as the small hand muscles have a large cortical representation and can be easily recorded with surface EMG.
This slide shows equipment for TMS -- for TMS study. In the TMS lab, there are several different devices that should be included. The first one, of course, is a stimulator of TMS. And this stimulator can be connected to the -- a coil, and the coil delivers the stimulation to the brain. And with this stimulation, we can record the -- from the small hand muscle. For example, the first dorsal interosseous muscle, the FDI muscle.
For many studies, we use different combinations. For example, when we do a single-pulse protocol, we only need one stimulator. Sometimes we need two stimulators to do a paired-pulse protocol; in that case, we need a BiStim unit to connect the two stimulators together. This BiStim is shown at the top of the figure. When we record the muscle response, we should use an amplifier, as the amplitude of the response is very small. The response, shown on the right side, is named the motor evoked potential, MEP.
So, this slide shows the electrical mechanism of the transcranial magnetic stimulation. When we give this stimulation, it -- the stimulation produces a very large but brief current in the wire coil. And this brief but large current was produced by discharging of a bank of capacitors, which can be seen in the previous slide. After -- when we give the stimulation to the brain, there's a secondary induced current in the brain. The current induced in the brain has the opposite direction to the -- to the current in the coil. Here we should mention that in the figure, this -- the magnetic field is produced by a round coil. There are many different coils with different shapes. And the round coil is not now -- not well used in many, many studies.
So, this slide shows magnetic coils with different shapes. The first coil, A, is the same one shown in the previous slide. It produces a weak stimulation, and we can see its electrical field on the right side. There have been ideas to improve the coil shape for different purposes. In the B coil and the C coil, the electrical field is focused at the tip of the coil. The D coil is a large coil with high stimulus intensity. And the E coil is small in size, which produces focal stimulation.
And the many studies use the figure of 8 coil, the F one. The figure of 8 coil used two coils, which produce this current with the same direction at the joint point of two coils. So, that means at the center of the -- of the figure 8 coil, the electrical field is the strongest. The figure of 8 coil is now well used, and the most used in many TMS studies.
This slide shows the example recording with a single stimulation to a single site, but with a recording in different muscles. On the top is an electrical stimulation, and the bottom is a magnetic stimulation. In the early years, electrical stimulation was used to stimulate the brain. So, the disadvantage for the electrical stimulation is that when the current goes through the scalp, it activates the pain receptor in the scalp, and that's why the electrical stimulation is very painful, and that's why electrical stimulation cannot be widely used. And later, people developed the magnetic stimulation.
In this figure, we can focus on the motor evoked potential latency. What we can see is that with a single stimulation, the MEP latency is shortest in the biceps muscle, becomes longer in the thumb, which is recorded in the APB muscle, and is longest in the leg muscle, the tibialis anterior. The latency becoming longer and longer from the upper limb to the lower limb is consistent with the anatomical locations of the different muscles. The second piece of evidence, which can be found in the figure, is that the magnetic stimulation produced MEPs with a slightly longer latency than the electrical stimulation, which we will discuss later.
And this slide shows stimulation at different locations, but recording from the same muscle. The recording was made in the APB muscle. The top panel shows recordings from the right hand, and the bottom panel shows recordings from the left hand. What we can see is that the first trace at the very top is the recording from electrical stimulation at the wrist, which is similar to the M-wave we discussed before.
The second trace is the recording from TMS at the cervical level. It produced a slightly longer latency than the M-wave. The third trace shows the MEP with TMS at the cortical level, which produced the longest MEP latency of the three traces. This is also consistent with the anatomical locations of the different stimulation sites. Here, we discuss the site of stimulation. The bottom diagram is the electrical stimulation of the motor cortex. Here, the cathode electrode is located at the center of the skull, which we call the vertex, and the anode is located over the motor cortex, close to C3 or C4.
So, the current produced by this electrical stimulation is a vertical current. It goes down from the surface of the cortex to the very deep area in layers five and six, where the large corticospinal neurons are located. It is a little different for the magnetic stimulation on the top. The magnetic stimulation, which we discussed previously, induces a current parallel to the surface of the cortex. So, the stimulation only goes to the interneurons in layers two and three. With both the electrical stimulation and the magnetic stimulation, the stimulation is given to the primary motor cortex, as can be seen on the right side of the slide. The primary motor cortex is located in the precentral gyrus, at the anterior bank of the central sulcus.
So, we discussed that the magnetic stimulation and the electrical stimulation are different. The magnetic stimulation is parallel to the motor cortex, which activates the layer two and layer three neurons in the motor cortex, while the electrical stimulation can activate the pyramidal cells located in layers five and six. This leads to a very important concept in TMS, which is the D- and I-waves. The D-wave, the direct wave, reflects direct activation of the pyramidal neurons. The I-wave, the indirect wave, reflects indirect activation of the corticospinal neurons through synaptic mechanisms.
In this figure, we show the corticospinal waves, the descending wave recordings made with implanted spinal cord electrodes. On the right side is an x-ray photo of the electrode. The spinal electrode was implanted at the cervical level. There are four contacts, zero, one, two, and three, on the electrode. When we give stimulation to the motor cortex, the impulse goes down to the spinal cord. Then, we can record a very tiny potential named the descending wave between contacts zero and three.
These recordings are shown on the left side, and in the middle is the MEP recording with EMG electrodes. There is a red line in each recording figure. For the corticospinal wave recording, the red line marks the D-wave latency. For the MEP recording, the red line marks the MEP latency induced by the electrical stimulation.
There are different panels. The top panel is the recording for electrical stimulation. What we can see is that the electrical stimulation produced a D-wave. The second trace is TMS with a latero-medial current. With this current direction, we can also produce a D-wave, but this D-wave is followed by the indirect waves, the I-waves. The bottom traces are the recordings with posterior-anterior TMS stimulation. With this current direction, we can see that no D-wave can be recorded. The first wave appears as the I1-wave. Then, there's a series of different waves, which we name I1, I2, I3.
So, from this recording, we learn that the latero-medial current first generates a D-wave, while the posterior-anterior current generates an I1-wave, and more waves can be generated when the stimulus intensity increases. So, different waves can also be recorded with TMS using different current directions. In this figure, at the top is the recording for TMS with the latero-medial current. As we already discussed, a D-wave can be generated, followed by the later I1, I2, and I3 waves.
And in the middle is the stimulation with the posterior-anterior current, which is used in most TMS studies. With this current direction, no D-wave can be recorded. Only the I1-wave can be recorded at low intensity, and incrementing the stimulus intensity produces more waves, including the I2 and I3 waves. At the bottom, the anterior-posterior current initially produces the I3-wave at low stimulus intensity, and with incrementing stimulus intensity, more waves, including the I1-wave, can be seen.
And this slide shows the recording with a single motor unit. This -- the experiment was also done for recording different waves with different stimulus current. On the recording on the left side is the MEP recording. And there's a dashed line showing the latency with LM current direction. On the right side, is the single motor unit recording with the analysis named the Peristimulus time histogram, PSTH, analysis.
So, on the top of each panel is the recording with the posterior-anterior current. What we can see is that the MEP latency is slightly longer than -- with slightly longer than the dashed line, which shows the MEP latency with the lateral, medial current. And with the single motor unit recording, we can see there are three different waves that can be recorded I1, I2, and I3 waves. On the other side, the anterior-posterior current, which is showing in the bottom, with this current MEP latency is longer than the posterior-anterior current. And with the single motor unit recording, we can see that only later I-waves, including I3-waves and the I4-waves, were recorded.
This slide shows animal studies, which discusses the mechanism of later I-waves. On the left side is the stimulation to the motor cortex. And the recording is also the spinal cord recording in a monkey. With the motor cortical stimulation, we can see a series of I-waves can be recorded. At the bottom, with a stimulation to the pre-motor cortex, no I-waves can be recorded except for the stimulation with very high intensity.
So, the right side shows the two stimulations given together. What we can see is that when we give the pre-motor stimulation along with the motor cortical stimulation, the later I-waves were largely facilitated in both monkeys, CS-14 and CS-17. That means the pre-motor cortex does not directly produce the later I-waves, but is involved in the production of later I-waves.
This figure shows the leading hypothesis in the field, which can explain the production of corticospinal waves. With a single TMS stimulation, many neurons can be activated in the primary motor cortex, including the large corticospinal neurons in layer five, which we call P5. At the same time, the facilitatory interneurons in layers two and three, which we call P2 and P3, can be activated. Also, there are inhibitory interneurons in the motor cortex; some of them are mediated by the GABA transmitter, which is shown as the black square.
So, with a single stimulation, all these neurons can be activated. The activation of the axon of P5 can produce the D-wave. And the activation of the P2 and the P3 neuron can produce the I1-wave. So, when the stimulation activated the GABA neurons, it can eliminate the activation of I1-wave. This is the inhibitory phase of the I1-wave after the activation of I1-wave.
So, when P2 and P3 are activated again and the GABA neurons stop firing, the later I-waves are produced. When we use a different current direction, for example the anterior-posterior current direction, we may activate all these neurons. All these neurons can be influenced by the inputs from the pre-motor cortex, and this is shown as the larger circle around all these areas. So, this figure shows the leading hypothesis in the field. But there are many other models that can explain the mechanism behind the very complex corticospinal descending volley, and all these models and hypotheses should be tested in further experiments.
So, let's move to the fourth part of the talk. TMS can also be applied to different cortical areas outside of the motor cortex. This one is showing an important TMS technique named the TMS Mapping. The coil of -- the TMS coil is moved in a different direction. On the X-axis, it's moved from the medial to the lateral side. And the Y-axis showing the coil moving from the posterior to the anterior locations. At the center of this figure, showing the location which produced the largest MEP amplitude, we call it the center of gravity. And when TMS coil is moving around this -- around this point of the center of gravity, the MEP size becomes smaller.
And finally, we can get an MEP map from this technique. This technique has many different but very important implications. For example, in the patient with amputation, the recording on the intact side was shown on the left, and the amputated side was shown on the right side. And the recording was made from biceps -- and the biceps muscle. And the patient has upper limb amputation, but the biceps muscle has remained. What we can see is that the amputated side has a much larger activation area than the intact side, which reflects the more activated state after the amputation.
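Going back to the center of gravity described above, here is a minimal sketch of how such an amplitude-weighted center of gravity can be computed from a mapping grid (the grid positions and MEP amplitudes below are made up for illustration):

```python
def map_center_of_gravity(positions_mm, mep_amplitudes_uv):
    """Amplitude-weighted centre of gravity of a TMS motor map.
    positions_mm: list of (x, y) coil positions; mep_amplitudes_uv: mean MEP at each site."""
    total = sum(mep_amplitudes_uv)
    x = sum(p[0] * a for p, a in zip(positions_mm, mep_amplitudes_uv)) / total
    y = sum(p[1] * a for p, a in zip(positions_mm, mep_amplitudes_uv)) / total
    return x, y

# Hypothetical 3 x 3 grid of sites (mm relative to the hotspot) and MEP amplitudes (uV)
sites = [(x, y) for y in (-10, 0, 10) for x in (-10, 0, 10)]
amps = [50, 120, 60, 150, 400, 180, 40, 110, 70]
print(map_center_of_gravity(sites, amps))
```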
The MEP mapping technique can also be used for motor learning experiments. In this experiment, the subject does a serial reaction time task. That means the subject uses four fingers, from the index finger, middle finger, and ring finger to the little finger, which we refer to as one, two, three, four. The subject was asked to do the serial reaction time task with different sequences shown on the screen, and this took the subject a very long time to learn. The learning course can be recorded, and from the performance during learning, shown on the right side, we can see that both the reaction time and the performance become better after many, many blocks.
On the left side is the brain mapping, the MEP mapping, for each different block. We can see, before the training, the stimulation didn't cause any activation of the corresponding muscles. But with the stimulation -- with -- when the training course is going on, the activation of the involved muscles become larger and larger. And finally, at the block nine, the TMS map become the largest. So, that means the cortical areas for muscle involved in the task increased after the motor learning process.
TMS can also be given to other cortical areas, as there are very complex cortical networks in the brain, and TMS can be used to test the connectivity between different cortical areas. In this figure, we use electroencephalography, EEG, to record the effect of TMS. If we analyze the different responses, components, and locations of the TMS-induced effect, we can know which areas and which components TMS at one site can activate. TMS can also be given to a different cortical area.
In this figure, we show the recording with electroencephalography after TMS at one site. With analysis of the different components and the different locations, we can know that TMS at one site can activate remote cortical areas. But with EEG recording, there's an important limitation to this experiment. That is, the TMS evoked potential, which we call the TEP, may be technically difficult to record because of the very large artifact produced by TMS. One way to overcome this problem is to select TMS-compatible caps for the experiment. We can also use different software to remove the TMS artifact in such experiments. But with all these efforts, the TMS artifact is still a very big problem in the area of the TMS evoked potential.
TMS can also be given to non-motor cortical areas. Sometimes this can lead to very high-impact studies. For example, in this Nature experiment, TMS was given to different cortical areas. On the left side, we can see the experimental setup. In this experiment, the authors compared the effect of TMS in early blind subjects to that in healthy controls. All the subjects performed a Braille reading task, which is shown on the left side. At the beginning of the task, the subject moved their index finger to start the Braille reading task. When the finger moved to the left side, there is a laser beam there; when the finger covered the laser beam, it triggered a TMS train lasting three seconds at about 10 hertz. This train of stimulation produced a virtual lesion in the stimulated area. The results are on the right side.
For this Braille-reading task, the subject should read out the characters they have read during the stimulation, and what was measured is the error rate of the reading task. The interesting thing is that, in the healthy control subjects, the stimulation over the sensorimotor cortex produced a lot of errors. This is because the sensory inputs were impaired by the train of stimulation. Another very important result is that, in the blind subjects, the sensorimotor stimulation did not impair the performance. But the mid-occipital stimulation, which is targeted on the primary visual cortex, impaired the task and impaired the performance of the subjects a lot. So, the results suggest that the visual cortex is involved in this cognitive task in the early blind subjects.
This is the follow-up study to the previous one. In this experiment, the setup was changed a little bit. The task is that the subject was presented a word through auditory stimulation. For example, the word "apple" was presented to the subject. With this auditory stimulation, the subject should respond with a logically correct word, for example, "eat". The subject cannot respond with a word that does not fit logically, such as "drink".
At the same time, a train of stimulation at 10 hertz lasting three seconds was given to different cortical areas, including the visual cortex, the prefrontal cortex, the somatosensory cortex, and the lateral occipital cortex. The interesting thing is that, in the sighted healthy controls, the prefrontal cortex stimulation impaired the performance of the subject. That means that in healthy controls, the prefrontal cortex is important for this cognitive task.
But the results were different in the early blind subjects: only visual cortical stimulation produced an impairment of the task performance. So, all these experiments show that TMS stimulation of different cortical areas can be important for studying cognitive and motor learning tasks.
Going to the summary of this lecture: first, TMS is a powerful neurophysiological technique to study human brain functions. The effect of TMS can be recorded with electroencephalography and electromyography. Spinal activity should be taken into consideration in a TMS study. Motor cortical stimulation activates the corticospinal neurons and produces descending corticospinal waves. The effects of TMS at other cortical areas are often complex, and they may interfere with motor learning and cognitive processes.
So, again, I am Dr. Zhen Ni. I am working at the National Institute of Neurological Disorders and Stroke. I am a Senior Research Fellow. Thank you very much for watching this video. I hope this video can help you when you do TMS studies. Thank you very much.
PURPOSE: To provide a hologram and its production method which can develop plural kinds of desired hologram images in one photosensitive material.
CONSTITUTION: The photosensitive material 7, in which interference fringes are recorded, has a designed part 71 where a certain design is superposed and a margin part 72 where no design is superposed, and the two parts give different kinds of hologram images from each other. In the production of the hologram, first, the photosensitive material 7 is applied on a glass substrate 9. Then the photosensitive material 7 is dried while a pattern 3 having the design 31 is used to make the drying state of the designed part 71, which faces the design 31 of the pattern 3, differ from that of the margin part 72, which is the remaining area. Thus, the design 31 is superposed onto the photosensitive material 7. Then interference fringes are recorded in the photosensitive material and developed. For example, the pattern 3 is maintained at a temperature different from the drying temperature.
COPYRIGHT: (C)1996,JPO
Introduction
============
Breast cancer is the most frequent malignancy in women and the second leading cause of cancer death among women in the United States ([@b1-ijo-47-01-0262],[@b2-ijo-47-01-0262]). A family history of breast cancer is one of the most important risk factors for the disease ([@b3-ijo-47-01-0262]). In addition to the two major breast cancer susceptibility genes, *BRCA1* and *BRCA2*, several other genes associated with breast cancer predisposition have been identified, including *ATM, CHEK2, PALB2, RAD51C* and *BRIP1*. Many of these genes are associated with *BRCA1* and *BRCA2* in the DNA damage response (DDR) pathway ([@b4-ijo-47-01-0262]).
Germline mutations of *BRCA1* predispose female carriers to breast and ovarian cancers ([@b5-ijo-47-01-0262]). Although germline mutations in *BRCA1* account for only 5% of breast cancer cases, silencing of *BRCA1* by promoter hypermethylation and other mechanisms may contribute to ≤30% of sporadic breast cancers ([@b6-ijo-47-01-0262],[@b7-ijo-47-01-0262]). *BRCA1*-associated breast cancers usually contain *p53* mutations and often exhibit a triple-negative phenotype ([@b8-ijo-47-01-0262],[@b9-ijo-47-01-0262]). *BRCA1* and *BRCA2* have roles in homologous recombination (HR) for DNA repair ([@b10-ijo-47-01-0262],[@b11-ijo-47-01-0262]). When the remaining wild-type allele is lost in a tumor precursor cell, this repair mechanism does not work, resulting in genomic instability that is sufficient to enable tumor development ([@b12-ijo-47-01-0262],[@b13-ijo-47-01-0262]). Most cancers have defects in some part of the DDR pathway. This provides an opportunity for therapeutic intervention as genotoxic therapies cause significant DNA damage, which is repairable in healthy cells but not in DDR-defective cancer cells.
The PARP family of proteins (PARP1 and PARP2) is involved in a number of critical cellular processes, including DNA damage repair and programmed cell death ([@b14-ijo-47-01-0262]). When activated by DNA damage, these proteins recruit other proteins that do the actual work of repairing DNA. Inhibition of PARP is a recently developed strategy for cancer therapy that exploits DDR defects in cancer cells ([@b14-ijo-47-01-0262]). PARP is responsible for the sensing and repair of single-strand DNA breaks via base excision repair ([@b15-ijo-47-01-0262]). When a replication fork encounters a single-strand break, the result is a double-strand break. In wild-type cells, these double-strand breaks are often repaired via homologous recombination ([@b16-ijo-47-01-0262]). Cells deficient in BRCA1 and BRCA2 are unable to repair these double-strand breaks efficiently and therefore undergo cell death ([@b17-ijo-47-01-0262],[@b18-ijo-47-01-0262]). Thus, PARP inhibitors exhibit efficacy in breast cancers with inherited mutations in *BRCA1* or *BRCA2* ([@b19-ijo-47-01-0262]). The PARP inhibitors Olaparib (AZD2281), Veliparib (ABT-888), and Iniparib (BSI-201) have been shown to be promising anti-cancer agents for breast and ovarian cancer and are being tested in clinical trials. Recently, the orally active PARP inhibitor AZD2281 was evaluated as a single-agent therapy in humans and showed clinical antitumor activity in BRCA-associated cancers ([@b19-ijo-47-01-0262],[@b20-ijo-47-01-0262]). However, the mechanism of action of PARP inhibitors alone in cancer cells is not fully understood.
In this study, we investigated the effects of PARP inhibitors in *BRCA1* or *BRCA2* mutant breast cancer cell lines and in wild-type *BRCA* cell lines with and without BRCA1 allelic loss. We provide evidence that the PARP inhibitor AZD2281 inhibits the growth of breast cancer cells with BRCA1 allelic loss that lack a mutation in *BRCA1*. These results might lead the way to new approaches for treating a broad spectrum of breast cancer subtypes. We also demonstrated that the PARP inhibitor AZD2281 induces autophagy in BRCA-mutated breast cancer cells as well as in breast cancer cells with BRCA1 allelic loss that lack a mutation in *BRCA1*. Our results also indicate the importance of selecting patients who would benefit from PARP inhibitor therapy and of molecular subclassification of BRCA-related breast cancers.
Materials and methods
=====================
Cell lines, culture conditions, and reagents
--------------------------------------------
We studied 14 human breast cancer cell lines: 3 *BRCA1* mutant lines with *BRCA1* allelic loss (HCC-1937, MDA-MB-436, and SUM-149PT), 1 *BRCA2* mutant line with *BRCA2* allelic loss (HCC-1428), 9 BRCA wild-type lines with *BRCA1* allelic loss (MCF-7, ZR75, MDA-MB-361, BT-474, SKBR3, MDA-MB-231, BT-549, MDA-MB-468 and BT-20), and 1 BRCA wild-type line without *BRCA1* allelic loss (T47D). T47D, MCF-7, ZR75, MDA-MB-361, BT-474, SKBR3, MDA-MB-231, BT-549, MDA-MB-468, and BT-20 cells were cultured at 37°C in DMEM supplemented with 10% FBS in a humid incubator with 5% CO~2~. SUM-149PT cells were cultured in Ham's F-12 supplemented with 5% FBS, insulin, and hydrocortisone. The PARP inhibitors veliparib (ABT-888), olaparib (AZD2281), and iniparib (BSI-201) were purchased from Selleck Chemicals (Houston, TX, USA).
WST-1 assay
-----------
Cell viability was assayed by applying the cell proliferation reagent WST-1 (Roche Applied Science). First, a suspension of 4,000 cells per 90 μl was seeded into each well of a 96-well plate and cultured overnight. Then, the necessary amount of PARP inhibitor was added to the individual wells. After 3 days of PARP inhibitor treatment, 10 μl of the ready-to-use WST-1 reagent was added directly into the medium, the plates were incubated at 37°C for 30 min, and absorbance was measured on a plate reader at 450 nm. All experiments were done in triplicate. Cell viability was calculated as the percentage of cells killed by the treatment as measured by the difference in absorbance between treated and untreated wells.
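A minimal sketch of this viability calculation (the absorbance values below are hypothetical, not data from this study):

```python
from statistics import mean

def percent_viability(treated_abs, untreated_abs, blank_abs=0.0):
    """Cell viability (%) from WST-1 absorbance at 450 nm, relative to untreated wells."""
    return 100.0 * (treated_abs - blank_abs) / (untreated_abs - blank_abs)

# Hypothetical triplicate readings (illustrative numbers only)
treated_wells = [0.82, 0.79, 0.85]
untreated_wells = [1.40, 1.38, 1.45]
viability = percent_viability(mean(treated_wells), mean(untreated_wells), blank_abs=0.10)
print(f"Viability: {viability:.1f}%  Growth inhibition: {100 - viability:.1f}%")
```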
Cell transfections
------------------
Lentiviral particles expressing BRCA1, BRCA2, ATG5, or control shRNA were purchased from Sigma. MDA-MB-231, BT-20, and HCC-1428 cells were transfected at a multiplicity of infection of 5. Five days after transfection, cells were treated with 5 μg/ml of puromycin concentration to select cells stably expressing shRNA. Lentiviral vector expressing mitochondrial yellow fluorescent protein (mYFP) was purchased from Biogenova. HCC-1428 cells were transfected at a multiplicity of infection of 5.
Western blot analysis
---------------------
After treatment, the cells were trypsinized and collected by centrifugation, and whole-cell lysates were obtained by using a cell lysis buffer. Total protein concentration was determined by using a detergent-compatible protein assay kit (Bio-Rad Laboratories). Aliquots containing 30 μg of total protein from each sample were subjected to SDS-PAGE with a 12% gradient and electrotransferred to nitrocellulose membranes. The membranes were blocked with 5% dry milk in TBS-Tween-20 and probed with primary antibodies against BRCA1 and BRCA2 (Cell Signaling Technology) and LC3 (Sigma). The antibodies were diluted in TBS-Tween-20 containing 2.5% dry milk and incubated at 4°C overnight. After the membranes were washed with TBS-Tween-20, they were incubated with horseradish peroxidase-conjugated anti-rabbit or anti-mouse secondary antibody (Amersham Life Sciences). Mouse anti-β-actin and donkey anti-mouse secondary antibodies (Sigma) were used to monitor β-actin expression to ensure equal loading of proteins. Chemiluminescence was detected with ChemiGlow detection reagents (Alpha Innotech). The blots were visualized with a FluorChem 8900 imager and quantified with densitometer software (Alpha Innotech).
Evaluation of acidic vesicular organelles
-----------------------------------------
To detect and quantify acidic vesicular organelles, cells were stained with acridine orange as described previously ([@b21-ijo-47-01-0262]). The number of acridine orange-positive cells was determined by fluorescence-activated cell sorting (FACS) analysis.
Transmission electron microscopy
--------------------------------
Cells were grown on 6-well plates, treated with AZD2281, ATG5 shRNA, or control shRNA, fixed for 2 h with 2.5% glutaraldehyde in 0.1 mol/l cacodylate buffer (pH 7.4), and postfixed in 1% OsO~4~ in the same buffer and then subjected to the electron microscopic analysis as described previously. Representative areas were chosen for ultrathin sectioning and viewed with a Hitachi 7600 electron microscope (Japan).
Flow cytometry analysis of apoptosis
------------------------------------
Cells were collected and double-stained with Annexin V-fluorescein isothiocyanate (FITC) and propidium iodide using an Annexin V-FITC apoptosis detection kit (BD Pharmingen) and evaluated with a flow cytometer.
Results
=======
AZD2281 inhibits cell survival in BRCA1 or BRCA2 mutant breast cancer cell lines
--------------------------------------------------------------------------------
According to the literature, 5 (12%) of 41 breast cancer cell lines have BRCA mutations and 28 (68%) of the 41 cell lines have *BRCA1* allelic loss. To investigate the effects of PARP inhibitors in BRCA wild-type breast cancer cell lines with BRCA allelic loss we treated BRCA wild-type ER/PR^+^, ER^+^, HER2^+^, and triple-negative cell lines with 3 different PARP inhibitors, ABT-888, BSI-201, and AZD2281, for 4 days. Growth rates were measured with the WST-1 assay. Whereas AZD2281 induced an average growth inhibition of 33% in BRCA wild-type cell lines at 2 μM, ABT-888 and BSI-201 did not induce growth inhibition in the same cell lines at 2 μM concentration. The growth inhibition effect of AZD2281 was significantly higher in the BRCA wild-type cell lines with *BRCA1* allelic loss than in the BRCA wild-type cell line without *BRCA1* allelic loss ([Fig. 1a](#f1-ijo-47-01-0262){ref-type="fig"}). We also used the same PARP inhibitors at the same concentration (2 μM) in the *BRCA1* mutant (HCC-1937, MDA-MB-436, and SUM-149PT) and *BRCA2* mutant (HCC-1428) cell lines. AZD2281 at 2 μM significantly inhibits cell survival in all 4 cell lines, whereas ABT-888 and BSI-201 did not induce cell death at 2 μM ([Fig. 1b](#f1-ijo-47-01-0262){ref-type="fig"}). We also evaluated the effects of AZD2281 at lower concentrations in the BRCA mutant breast cancer cell lines, where it had a significant dose-dependent growth inhibition effect ([Fig. 1c](#f1-ijo-47-01-0262){ref-type="fig"}).
BRCA1 or BRCA2 downregulation in BRCA wild-type breast cancer cell lines induces growth inhibition in response to AZD2281 treatment
-----------------------------------------------------------------------------------------------------------------------------------
To determine the effect of BRCA1 or BRCA2 in response to AZD2281 treatment, the BRCA wild-type MDA-MB-231 and BT-20 cells were stably transfected with BRCA1, BRCA2, or control lentiviral shRNA. BRCA1 or BRCA2 downregulation was demonstrated by western blot analysis ([Fig. 2a](#f2-ijo-47-01-0262){ref-type="fig"}). The 3 different PARP inhibitors used as single-agent treatments and growth rates were measured with the WST-1 assay. AZD2281 induced significantly superior growth inhibition compared with other PARP inhibitors, such as ABT-888 and BSI-201 in BRCA1- or BRCA2-knockdown cells than in control cells, indicating that the growth inhibition effect of AZD2281 is dependent on BRCA deficiency ([Fig. 2b](#f2-ijo-47-01-0262){ref-type="fig"}).
AZD2281 induces autophagy in BRCA1 or BRCA2 mutant breast cancer cell lines
---------------------------------------------------------------------------
Autophagy is a lysosomal degradation pathway characterized by an increase in the number of autophagosomes that surround organelles such as mitochondria, Golgi complexes, polyribosomes, and the endoplasmic reticulum. Subsequently, autophagosomes merge with lysosomes and digest damaged organelles into amino acids to provide a new supply under stressful conditions to protect the cells ([@b22-ijo-47-01-0262]--[@b24-ijo-47-01-0262]). Although activation of autophagy is aimed at overcoming stressful situations, autophagy induction may lead to cell death ([@b25-ijo-47-01-0262]). To determine the effects of the most potent PARP inhibitor, we investigated whether AZD2281 induces autophagy in BRCA mutant breast cancer cell lines. To this end, we treated *BRCA1* mutant (SUM-149PT) and *BRCA2* mutant (HCC-1428) breast cancer cells with 2 μM AZD2281 for 1 day and stained them with acridine orange. Acridine orange-positive cells were counted using flow cytometry. AZD2281 induced significant autophagy (37 and 44%) in *BRCA1* mutant SUM-149PT and *BRCA2* mutant HCC-1428 breast cancer cells, respectively, within 24 h of treatment ([Fig. 3a](#f3-ijo-47-01-0262){ref-type="fig"}). We observed the same phenomenon with AZD2281 in the BRCA wild-type breast cancer cell line MDA-MB-231 with BRCA1 or BRCA2 downregulation. The knockdown of BRCA1 by lenti-based stable shRNA in the BRCA wild-type breast cancer cell line MDA-MB-231 demonstrated induction of autophagy as indicated by the expression of LC3-II, an autophagy marker ([Fig. 3b](#f3-ijo-47-01-0262){ref-type="fig"}). AZD2281 treatment further enhanced LC3-II expression in BRCA1- or BRCA2-knockdown cells ([Fig. 3b](#f3-ijo-47-01-0262){ref-type="fig"}).
To further demonstrate the induction of autophagy we also investigated ultrastructure by transmission electron microscopy (TEM) before and after AZD2281 treatment. TEM images clearly demonstrated that AZD2281 induces autophagy, which results in mitochondrial degradation. AZD2281-treated cells had fewer mitochondria and more autophagosomes compared with untreated cells ([Fig. 3c](#f3-ijo-47-01-0262){ref-type="fig"}).
Inhibition of autophagy results in partial inhibition of AZD2281-induced apoptosis
----------------------------------------------------------------------------------
To investigate the roles of autophagy and mitochondrial degradation under AZD2281 treatment, we stably transfected HCC-1428 cells with mYFP using lentiviral vector. Fluorescence microscope images clearly demonstrated the presence of mYFP in the mitochondrial compartment of HCC-1428-mYFP cells ([Fig. 4a](#f4-ijo-47-01-0262){ref-type="fig"}). HCC-1428-mYFP was treated with AZD2281, and mitochondrial fluorescein was measured by flow cytometry; untreated cells were used as a control. AZD2281 induced significant mitochondrial degradation (\~45%) ([Fig. 4b](#f4-ijo-47-01-0262){ref-type="fig"}), which was also shown in the TEM images. Next, we inhibited autophagy by knocking down the key autophagosome structural protein ATG5 using lentiviral shRNA vector in HCC-1428-mYFP cells. Mitochondrial degradation was markedly rescued in ATG5-knockdown HCC-1428.mYFP-shATG5 cells compared with HCC-1428-mYFP-sh-control cells under AZD2281 treatment ([Fig. 4b](#f4-ijo-47-01-0262){ref-type="fig"}). Inhibition of autophagy by knocking down ATG5 also partially inhibited AZD2281-induced apoptosis ([Fig. 4c](#f4-ijo-47-01-0262){ref-type="fig"}), suggesting that autophagy contributes to AZD2281-induced cell death in BRCA mutated breast cancer cells.
Discussion
==========
In this study, we show for the first time that a PARP inhibitor as a single agent induces significant autophagy/mitophagy in *BRCA* mutant cell lines. In addition, we demonstrated that AZD2281 induces growth inhibition in BRCA wild-type breast cancer cell lines with *BRCA1* allelic loss, indicating that breast cancer patients with *BRCA1* allelic loss may benefit from PARP inhibitors.
Previously, AZD2281 was evaluated in a genetically engineered mouse model of BRCA1 breast cancer ([@b26-ijo-47-01-0262]). Treatment of tumor-bearing mice with AZD2281 inhibited tumor growth and prolonged survival. Combination treatment with AZD2281 plus cisplatin or carboplatin increased recurrence-free survival and overall survival ([@b26-ijo-47-01-0262]). AZD2281 has also been used as a single agent in clinical trials in breast and ovarian cancer patients with BRCA mutations ([@b19-ijo-47-01-0262],[@b20-ijo-47-01-0262]). In this study, we evaluated the effects of 3 different PARP inhibitors, ABT-888, BSI-201 and AZD2281, in BRCA mutant breast cancer cell lines as single agents without DNA damaging agents; such a study has not been performed previously. BRCA mutations in breast cancer cell lines were not well described until 2006, when Elstrodt *et al* reported a detailed *BRCA1* mutation analysis of 41 breast cancer cell lines ([@b5-ijo-47-01-0262]). Before the report was published, only one of the 41 cell lines was known to have a *BRCA1* mutation. Elstrodt *et al* identified *BRCA1* mutations in three cell lines that had not been described as *BRCA1* mutant before. They also found that 28 (68%) of the 41 cell lines had *BRCA1* allelic loss ([@b5-ijo-47-01-0262]). On the basis of these results, we evaluated PARP inhibitors as single-agent therapy in 14 breast cancer cell lines: 4 BRCA mutant lines with *BRCA1* allelic loss, 9 BRCA wild-type lines with *BRCA1* allelic loss, and 1 BRCA wild-type line without *BRCA1* allelic loss. Our data clearly demonstrated that BRCA mutant breast cancer cell lines with BRCA allelic loss were highly sensitive to AZD2281 as monotherapy ([Fig. 5](#f5-ijo-47-01-0262){ref-type="fig"}). Unfortunately, no cell line exists with BRCA mutation and without BRCA allelic loss; such cells may be resistant to PARP inhibitors because of a functional BRCA allele. When we investigated whether BRCA allelic loss results in sensitivity to PARP inhibitors in BRCA wild-type cell lines, we found significant growth inhibition, but not cell death, such as that seen in BRCA mutant cell lines.
Autophagy is a lysosomal degradation pathway that is induced as a protective and prosurvival response against nuclear DNA damage and metabolic and therapeutic stress; if excessive, this process can also lead to cell death in breast and other cancers ([@b22-ijo-47-01-0262]--[@b25-ijo-47-01-0262],[@b29-ijo-47-01-0262],[@b30-ijo-47-01-0262]). To the best of our knowledge, our study is the first to show that AZD2281 induces complete cell death (95--99%) and autophagy, which targets mitochondria. Our findings indicate that autophagy is involved in the cell death mechanism, as AZD2281-induced apoptosis was reversed by genetic inhibition of autophagy. Here, we speculate that AZD2281 not only induces nuclear DNA damage but may also induce elimination of mitochondria by autophagy, a process called mitophagy, which may contribute to the cell death process ([@b28-ijo-47-01-0262]). Although the clinical implications of this finding are not yet known, we speculate that autophagy could serve as a predictive marker for PARP inhibition therapy. Furthermore, our study points out that BRCA wild-type cells with BRCA allelic loss may be more sensitive to PARP inhibitors than are those without BRCA allelic loss. This observation may potentially explain why differential response rates are being observed in clinical trials, even in homogeneous cohorts of germline BRCA mutation carriers. For example, the reported response rate is \~40% for AZD2281 and \~37.5% for ABT-888 (in combination with temozolomide), indicating that almost half of the patients with germline BRCA mutations are not responsive to these agents ([@b20-ijo-47-01-0262],[@b27-ijo-47-01-0262]). Therefore, the results of our current study might shed further light on the molecular subclassifications of BRCA-related breast cancers and ultimately lead to a better characterization of the molecular tumor type that would benefit from PARP inhibitors.
{#f1-ijo-47-01-0262}
{#f2-ijo-47-01-0262}
{#f3-ijo-47-01-0262}
{#f4-ijo-47-01-0262}
{#f5-ijo-47-01-0262}
When the Great Recession hit with full force in 2008, many countries experienced a sharp decline in their economic output. However, the accompanying decline in international trade volumes was even sharper, and almost twice as big. Globally, industrial production fell 12%, and trade volumes fell 20% in the twelve months from April 2008 – shocks of a magnitude not witnessed since the 1930s (Eichengreen and O’Rourke 2010). In addition, the decline was remarkably synchronised across countries.
Standard models of international trade fail to account for the severity of the event now known as the Great Trade Collapse. The workhorse model of international trade – the gravity equation – typically cannot explain the disproportionate decline in trade. It can only match the trade collapse if it incorporates increases in bilateral trade frictions such as tariff hikes (Eaton et al. 2011). However, most evidence indicates that trade policy barriers moved little during the recession (Evenett 2010, Bown 2011, Kee et al. 2013), while freight rates actually declined for most modes of shipping, given the slackening of trade flows and surplus capacity.
Trade and uncertainty shocks
In a recent paper we offer a new explanation as to why international trade is so volatile in response to economic shocks, in the recent crisis as well as in prior episodes (Novy and Taylor 2014). We combine the ‘uncertainty shock’ concept due to Bloom (2009) with a model of international trade. Bloom’s approach is motivated by high-profile events that trigger an increase in uncertainty about the future path of the economy, for example the 9/11 terrorist attacks or the collapse of Lehman Brothers. Figure 1 plots an uncertainty index based on stock market volatility (see Bloom 2009). The major spikes are labelled in blue text and classified as ‘uncertainty shocks.’
Figure 1. The Bloom (2009) uncertainty index: Monthly US stock market volatility, 1962-2012.
In the wake of such events, firms adopt a ‘wait-and-see’ approach, slowing down their hiring and investment activities. Bloom shows that bouts of heightened uncertainty can be modelled as second-moment shocks to demand or productivity, and that these events typically lead to sharp recessions. Once the degree of uncertainty subsides, firms revert to their normal hiring and investment patterns, and the economy recovers.
We extend the uncertainty shock approach to the open economy. In contrast to Bloom’s (2009) closed-economy set-up, we develop a framework in which firms import intermediate inputs from foreign or domestic suppliers. This structure is motivated by the observation that a large fraction of international trade now consists of capital-intensive intermediate goods such as car parts and electronic components or capital investment goods – a feature of the global production system which has taken on increasing importance in recent decades (Campa and Goldberg 1997, Feenstra and Hanson 1999, Engel and Wang 2011).
In our model, firms hold an inventory of intermediate inputs because of fixed ordering costs associated with transportation, and these fixed costs are larger for foreign inputs than for domestic ones. We show that in response to a large uncertainty shock to business conditions, whether to productivity or to the demand for final products, firms optimally adjust their inventory policy by cutting their orders of foreign intermediates more strongly than their orders of domestic intermediates.
In the aggregate, this differential response leads to a bigger contraction and, subsequently, a stronger recovery in international trade flows than in domestic trade. Thus, international trade exhibits more volatility than domestic economic activity.
- In a nutshell, uncertainty shocks magnify the response of international trade, given the differential cost structure.
The differential impact of uncertainty on trade and the domestic economy
To motivate our approach, we first showcase the simplest possible evidence on the importance of uncertainty shocks. Using data for the US, we let uncertainty shocks hit two key data series: imports on the one hand, and industrial production (which is more representative of the domestic economy) on the other. We compute the impulse response of these two series based on a simple vector autoregression (VAR) with monthly data from 1962 through 2012. Figure 2 presents the results.
Figure 2. The impulse response for uncertainty shocks at the aggregate level (real imports in the left panel, industrial production in the right panel).
The bottom line from Figure 2 is clear. In response to the uncertainty shock, both industrial production and imports decline. But the response of imports is considerably stronger, about 5 to 10 times as strong in its period of peak impact during year one. The response of imports is also highly statistically significant.
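For readers who want to see the mechanics, the sketch below shows how an impulse response of this kind could be computed with a standard VAR library. It is a minimal illustration, not the authors' exact specification: the file name, column names, lag order, and the assumption that the uncertainty index is ordered first are all placeholders.

```python
# Minimal VAR impulse-response sketch (not the authors' exact specification).
# Assumes a monthly DataFrame with hypothetical columns:
#   'uncertainty' (e.g. a stock-market volatility index),
#   'log_imports' and 'log_ip' (log real imports, log industrial production).
import pandas as pd
from statsmodels.tsa.api import VAR

df = pd.read_csv("monthly_us_data.csv", index_col=0, parse_dates=True)  # placeholder file

model = VAR(df[["uncertainty", "log_imports", "log_ip"]])
results = model.fit(maxlags=12, ic="aic")   # let AIC choose the lag length

irf = results.irf(periods=36)               # three years of monthly responses
# Orthogonalised responses to an uncertainty shock, with uncertainty ordered
# first (a common, but not innocuous, identification choice).
irf.plot(orth=True, impulse="uncertainty", response="log_imports")
irf.plot(orth=True, impulse="uncertainty", response="log_ip")
```

The Cholesky ordering assumed here matters for orthogonalised responses, so it should be read as one illustrative identification choice rather than the only possible one.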
Some industries react more to uncertainty shocks
Our model generates some additional predictions that we confirm in the data. For instance, we find that the magnified effect of uncertainty shocks on trade should be more muted for goods characterized by higher depreciation rates. Perishable goods are a case in point: the fact that such goods have to be ordered frequently means that importers have little choice but to keep ordering them frequently, even if uncertainty rises. Conversely, durable goods can be considered as the opposite case: they have very low depreciation rates, which allows less frequent ordering and a wait-and-see response to shocks. We find strong evidence of this pattern in the data when we examine the cross-industry response of imports to elevated uncertainty.
Can uncertainty shocks explain the Great Trade Collapse?
Could our model, which takes second-moment uncertainty shocks as its main driver, provide a plausible account of the Great Trade Collapse of 2008/09? We use a simulation exercise to argue that it could.
The four months following the collapse of Lehman Brothers – from September to December 2008 – were characterized by strong increases in uncertainty as measured by the index in Figure 1, with elevated volatility persisting into the first quarter of 2009. To simulate this shock we feed the model with a series of uncertainty shocks that generate a path of volatility similar to that actually observed.
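As a purely illustrative sketch of what feeding in "a series of uncertainty shocks" can mean, the snippet below draws disturbances whose standard deviation is temporarily scaled up for a few months. The spike size, timing and duration are placeholders, not the calibration used in the paper; the point is only that the mean of the shocks is unchanged while their dispersion rises.

```python
import numpy as np

rng = np.random.default_rng(0)

months = 48
sigma = np.ones(months)      # baseline volatility, normalised to 1
sigma[12:18] = 2.5           # hypothetical uncertainty spike lasting six months

# Second-moment shock: the mean is unchanged, only the dispersion rises during
# the spike. These disturbances would then be fed into the firms' ordering problem.
shocks = rng.normal(loc=0.0, scale=sigma)
```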
Figure 3. Actual and simulated real imports (blue) and industrial production (orange) in the crisis
Figure 3 presents the model-implied and the actual observed responses of industrial production and real imports. The model is capable of explaining a large fraction of the actual observed industrial production response, especially up to six months out (compare the dashed orange line to the solid orange line). The model is also capable of explaining most of the real import response over a similar horizon (compare the dashed blue line to the solid blue line). The sharp difference between the two actual series, which reflects the amplified response of international trade flows compared to domestic flows, is well captured by the simulated paths.
Conclusion
Our results offer an explanation for the Great Trade Collapse of 2008/09 and previous trade slowdowns in a way that differs from the conventional static trade models. The simulations show that our model can, on average, explain over three-quarters of the imports collapse. But of course, there might have been other factors at work, for instance financial frictions and the drying up of trade credit (Amiti and Weinstein 2011). We see these approaches and ours as complementary.
References
Amiti, M and D Weinstein (2011), “Exports and Financial Shocks”, Quarterly Journal of Economics 126(4): 1841–1877.
Bloom, N (2009), “The Impact of Uncertainty Shocks”, Econometrica 77(3): 623–685.
Bown, C (ed.) (2011), The Great Recession and Import Protection: The Role of Temporary Trade Barriers, London: Centre for Economic Policy Research and World Bank.
Campa, J and L Goldberg (1997), “The Evolving External Orientation of Manufacturing Industries: Evidence from Four Countries”, NBER Working Paper 5919.
Eaton, J, S Kortum, B Neiman, and J Romalis (2011), “Trade and the Global Recession”, NBER Working Paper 16666.
Eichengreen, B and K H O’Rourke (2010), “A Tale of Two Depressions”, VoxEU.org, 8 March.
Engel, C and J Wang (2011), “International Trade in Durable Goods: Understanding Volatility, Cyclicality, and Elasticities”, Journal of International Economics 83(1): 37–52.
Evenett, S (ed.) (2010), Tensions Contained ... For Now: The 8th GTA Report, London: Centre for Economic Policy Research.
Feenstra, R C and G H Hanson (1999), “Productivity Measurement and the Impact of Trade and Technology on Wages: Estimates for the US, 1972–1990”, Quarterly Journal of Economics 114(3): 907–940.
Kee, H, C Neagu, and A Nicita (2013), “Is Protectionism on the Rise? Assessing National Trade Policies During the Crisis of 2008”, Review of Economics and Statistics 95(1): 342–346.
Novy, D and A M Taylor (2014), “Trade and Uncertainty”, NBER Working Paper 19941. | https://voxeu.org/article/uncertainty-and-great-trade-collapse-new-evidence |
The First Commandment: Get Out Of Your Comfort Zone
Ah, comfort zones. How we need our comfort zones. How we are perturbed when our comfort zones are disturbed.
Try to get somebody out of his or her comfort zone. Do you think that you can be remotely successful?
Abraham’s story in this week’s Torah portion gives us much to think about when it comes to comfort zones, freedom, conformity, subjectivity, religious dogma and other such “trivial” issues.
I just finished reading an interesting new book by Stephanie Levine, Mystics, Mavericks and Merrymakers: An Intimate Journey Among Hasidic Girls (NYU Press, 2003). It is essentially a study of Lubavitch teenage girls in Crown Heights, with the presenting question: What happens to a girl’s individuality and independent voice growing up in an ultra-orthodox Jewish community characterized by its rigid regulations?
The common stereotype is that any personal voice is squelched in an inflexible religious society. It was with this attitude that author Levine approached her subject as she began the research for her book. You may be surprised at her conclusions after she spent over a year as a “participant observer” living in Crown Heights, hanging out with and interviewing these girls. She basically concludes that the exact opposite is true: The girls in this community were freer, more self-actualized, more expressive and more in touch with the voice of their souls than their peers in the secular world.
By no means does the book portray a perfectly rosy picture. Yet, I found it fascinating that the author was able to perceive and appreciate the free spiritedness of the girls (a rare feat today indeed, with all the negative perception usually associated with anything orthodox). Yet, even more interesting, is her analysis for the reasons behind this apparent paradox.
Jonathan Mahler, author of the New York Times Magazine article Waiting for the Messiah of Eastern Parkway (NYT Magazine, Sept. 21, 2003), would do well to read this book. Mahler’s linear and surprisingly simplistic piece misses the entire complexity and spiritually diverse nature that Levine captures in her book. (If you would like to receive my detailed critique of Mahler’s NYT article, please e-mail [email protected] and I’ll have my office send it to you.)
Here’s a conundrum, if I may: If someone chooses, without external pressure, to follow a path that has already been tread, is this person a conformist?
What about someone who subjugates himself to the peer pressure of a free-spirited society – is he a conformist or not?
The answer obviously lies in understanding the meaning of conformity and freedom.
May I submit that despite the popular notion regarding religious obedience, conformity has nothing to do with the choices we make; it is all about the reasons that compel us to make these choices. In other words, it is not about the activity we have chosen to be involved in (the “cheftza” in Talmudic jargon) but about the person (the “gavra”).
It is like freedom. What is freedom? Many people would say that freedom means doing whatever you like. But that is a very simplistic definition. There are quite a few people who are indulging in whatever they wish and don’t necessarily feel free. There are others, who don’t do whatever it is that pleases them, and they feel entirely free.
Freedom is not about what you are doing, but why you are doing it. Freedom means that whatever it is that you do is not imposed upon you from without but is your choice from within. Running around all your life experimenting every which way, does not necessarily mean that you are free. You may be running out of fear, even panic, terrified of not being stuck in one (“dangerous”) place for too long. Who was it that sang “freedom is another word for nothing left to lose” and then tragically overdosed?
On the other hand, you may choose to sit and meditate in one corner for an entire day, and be completely free – because you made this choice without any external or internal imposition.
Now, returning to the conundrum. Choosing a certain path in itself does not determine whether the person making the choice is free, because freedom is not about the path you choose, but why you choose it. If the path you choose is not due to imposition, then it becomes your path. Similarly, it would be absurd to say that a musician who “plays by the book” of musical notes is conforming to an existing structure. Indeed, a musician who, in the name of non-conformity, refused to use the musical notes that those before him have used would be considered insane.
And this, inevitably, will also lead to an even more important point. The free person will not suffice with just walking on the same path that others have trod before him, but he will add his particular gait, his unique contribution. Not unlike a true musician who will play the same notes, even the same piece of music (composition), with his/her unique voice.
Ok, I know that some of you may argue that every choice we make is ultimately a result of many factors that have subjectively shaped our lives. Even free will itself can be debated. As one cynic writes: We must believe in free will; we have no choice.
Nevertheless, a very strong distinction exists between behavior driven by imposition and one that comes from an inner struggle that leads to an individual choice and commitment to follow a certain path. A conformist is someone who behaves a certain way because that is the way others behave. Often it’s someone who doesn’t want to “rock the boat” and likes the comfort zone of the conventional road (the road more traveled). Sometimes it may come out of pressure, fear of being different, acceptance and the like. A free person is not driven by fear, peer (or other such) pressure, but by the sincere search for truth. Epitomized by Abraham, as Maimonides defines him: “committed to truth because it is true.”
This is what Lech Lecho is all about. Abraham is told to leave his past behind – to get out of his comfort zone – all the subjective influences of his “land,” “place of birth” and “parents’ home.” Free yourself from the pressures and influences of your own subjective self-love, of your society and of your parents – and you will begin to find yourself, your true self.
Take Abraham, the first and ultimate revolutionary. He grew up in a privileged home and society, and yet chose to reject it all in search of truth. Abraham is even called “Ivri,” from the expression “m’aiver ha’nahar,” the other side of the river, because Abraham defied the entire world in which he lived. While everyone stood on one side of the river, Abraham crossed over and stood on the other side.
Of course you can chalk up his rebellion to some genetic drive within, and perhaps the need to make his mark on the universe. But the indisputable, underlying point is this: Abraham did not make his choices due to outside forces – familial or social – imposing themselves on him. Abraham independently chose to begin a new journey, never before embarked upon, and the world has never been the same since.
Recognizing Abraham’s non-conformity was relatively easy. He simply was not the product of any community – not even of rebels. He created his own community. Today, however, it is not as easy to discern a true independent voice amidst all the existing cultures. If someone, for instance, were to choose to follow Abraham’s path, join his community and live by Abraham’s standards, the argument can be made that this person is conforming to a time-treaded path.
But, in truth conformity is not as much about the choices you make as it is about what drives you to make those choices. Abraham gives us each the power to be non-conformists – to play the same musical notes that have been played before, but in completely new ways.
I once shared a billing with the author Chaim Potok. In his Friday night lecture he shared his life story. He grew up in a traditional Jewish home, and his parents expected him to become a Talmud teacher. Instead, to their chagrin, he became a writer. During his service in Korea he began to question his faith. Potok’s personal struggles became the theme of his books, beginning with The Chosen. In the early 70’s, Potok continued, he was invited to go see the Lubavitcher Rebbe, but he refused. “I didn’t want to lose my objectivity,” Potok explained. “Had I met with the Rebbe in a personal, face to face encounter, I was afraid that his formidable presence would have slanted my views.” Instead, he compromised and came to one of the Rebbe’s public Farbrengens.
Sitting in the audience, I was taken by Potok’s comments. As Potok took questions following his talk, I stood up and asked him: “Dr. Potok, if you were invited by G-d to Mt. Sinai, would you refuse the invitation in fear that you may lose your objectivity?”
Potok and his wife, for that matter, were understandably quite offended by my question. After they blurted some words I couldn’t understand, Potok said that had the Rebbe commanded him to come see him, he would have gone. “Clearly, the Rebbe did not want to impose himself upon me,” Potok speculated. “Lame answer,” I thought, but left it at that. (For the record, I later apologized to Potok in case I had said something inappropriate.)
The next day, Shabbat day, was my turn to lecture. I decided to address the issue of objectivity that Potok had initiated the night before. In brief here is what I said.
“Dr. Potok, you were afraid to meet the Rebbe in fear that you may lose your objectivity. I must admit, that I did not have this fear, and I did meet the Rebbe and perhaps did lose my objectivity. I, however, must have a much stronger “yetzer hora” than Dr. Potok’s. Because even after meeting the Rebbe I still retained my free will and G-d knows how I have not been free of iniquity. So perhaps I didn’t lose my objectivity after all. I therefore commend Dr. Potok for feeling that had he met the Rebbe he would have lost his freedom and objectivity, and perhaps never transgressed again.
“But I will say this: Is Dr. Potok more objective than I am because he did not allow himself to be open to certain strong influences? Isn’t that just another form of prejudice? By not choosing to read certain books or listen to music in fear that they may affect or influence us do we become less or more objective? We all have our subjective experiences and reasons for making the choices we make, and everything in life can and does influence us.
“Objectivity is not determined by whom you meet and what you experience, it is not about what influences have affected you or which places you have traveled to. It is about what you do with those influences. How you allow them to inform and educate you. How you use them to transcend your subjective nature and generate objective energy.”
Not every revolutionary is a free spirit and not everyone living by defined rules is a conformist. Of course there are conformists in the religious world and there are free spirits in the secular world. But the converse is equally true.
Indeed, Abraham challenges us all to ask the question: Wouldn’t it make sense to say, that you are at your freest and can best express your truest self when you align yourself with the Divine inner parameters (what some may call “rules”) of existence?
Case in point: Exercising each day takes effort and discipline to follow certain rigid guidelines. Yet, by doing so we align our bodies to its natural rhythms and therefore allow the body to work at its best. To perfect his art an artist requires hours of training and discipline, and must follow a defined musical structure. Yet it is precisely this rigid discipline that allows him/her to perform with the highest standard of excellence.
So too in our personal, psycho/spiritual lives: True freedom is attained by discovering your inner self and allowing its rhythms to express themselves, without imposition from any force outside of your own true essence.
To achieve this self-discovery and freedom, the first and foremost thing we must do is Lech Lecho: Get out of your comfort zones!
Comfort zones may be more comfortable. But they are never more growthful. Yes, there is a time for nurturing, for being in a place, a home, where we can feel comfortable to explore, to just be. But the real challenge – and true growth – begins when we leave our comfort zones, when we go out and need to initiate and create on our own.
Think back in your own life: When did you accomplish most? While you were still at home, provided for by your parents, or when you went away from home for the first time?
The first commandment to Abraham rings throughout history, its voice speaking to each one of us: You want to find your true self, you want to reach your greatest potential, to be the best you can be – first you must leave your comfort zones, your biased attitudes, your previous contexts, your old patterns. Open yourself up to a new perspective, travel on new roads, lift your eyes and see new vistas.
Wherever you are in life, whether you have no absolute guidelines that direct your life, or whether you live by fixed laws that regulate every aspect of your day, each of us has the obligation of Lech Lecho: To get out of our comfortable habits, to cease conforming to the past.
Lech Lecho is not just about leaving a negative past or a bad habit; the trap of conformity includes conforming to old standards, even healthy ones! Even someone who follows every iota of Torah and mitzvot is warned not to fall into the trap of mechanical behavior, and stale mitzvot by rote. “Bechol yom yi’hiyu bi’aynehcho ka’chdosim,” every day you must see and experience a mitzvah anew, with fresh vitality. Every relationship, especially one with G-d, must be dynamic and alive. The Talmud tells us, even if one reviews his studies 100 times out of habit, he is considered as if he did not serve G-d because that is his conventional routine. When he reviews his studies 101 times, he becomes a true Divine servant (“oved elokim”); the one additional time demonstrates that he has grown beyond his own previous comfort zone.
The call of Lech Lecho – leave your past – resonates perhaps today more than ever. How often do we feel stuck in our lives? With the dizzying pace of modern life, accelerated technology continuously raising our standard of living, our comfort zones continue to widen, bringing with it a profound complacency.
If you want to change your life – and who does not? – Lech Lecho is the answer. You must shake up your life. Ok, shake up may sound too harsh. Let’s call it “shift.” You want change, you want growth, you want movement, you want freedom – you must shift your life into new arenas.
So in this week of Lech Lecho, let us shake ourselves up, shake each other up, shake the world out of its reverie.
During this week we have special power to stop being conformists and become revolutionaries.
Afternote:
I just received the following e-mail:
“At 8:13 PM (New York Time) on November 8, 2003 – this coming Shabbat eve, when we read Lech Lecho – a geometrically perfect six sided (Star of David) configuration will appear in the sky, linking and balancing the energies of six astrological bodies; the Sun, Jupiter, Mars, Saturn, Chiron and the Moon. In addition, there will be an eclipse of the full moon at this time. The interaction of this significant planetary alignment at the moment of the eclipse combines to produce a powerful alchemical transformation offering the opportunity for both personal and planetary shifts in consciousness. The name that has been given to this particular energetic window of time is the Harmonic Concordance. It goes from November 5th through the 11th with the peak at 8:13 PM on November 8th!
“This Grand Sextile astrological configuration, accompanying a total lunar eclipse, has never before occurred in recorded history. This is an immensely powerful vibrational activation that many see as a major interdimensional gateway fulfilling ancient prophecies and ushering in a new activation of energy upon the Earth.”
This year the energy of Lech Lecho has a unique power to help us make our move and align ourselves to our higher calling.
Use it well. | https://www.meaningfullife.com/lech-lecha-conformist/?tva_skin_id=2632/feed/feed/ |
We compared 12 pediatric T cell acute lymphoblastic leukemias (T-ALL) collected at initial diagnosis and relapse with their corresponding PDX models. The analysis was performed on the genomic level (whole genome sequencing (WGS), whole exome sequencing (WES), multiplex ligation-dependent probe amplification (MLPA), targeted sequencing) and the epigenetic level (DNA methylation, Assay for Transposase-Accessible Chromatin sequencing (ATAC-seq)). In sum, this study underlines the remarkable genomic stability of these models and, for the first time, documents the preservation of the epigenomic landscape in T-ALL-derived PDX models.
Study Datasets: 1 dataset.
Click on a Dataset ID in the table below to learn more, and to find out who to contact about access to these data
| Dataset ID | Description | Technology | Samples |
| --- | --- | --- | --- |
| EGAD00001004459 | Each dataset consists of WES data from 5 samples (1 patient): original leukemia at initial diagnosis (T-ALL), original leukemia at relapse (T-ALL), PDX derived from the initial-diagnosis T-ALL, PDX derived from the relapse T-ALL, and remission (normal control) | Illumina HiSeq 2000, Illumina HiSeq 2500, NextSeq 500 | 164 |
| https://ega-archive.org/studies/EGAS00001003248 |
Today's webinar is titled "A post-Corona, field-driven system" and I would like to propose an approach to production management after the corona epidemic that successfully combines the pull-type kanban system, which is good at adjusting production onsite, and the push-type scheduler, which issues production instructions based on demand.
-
-
Why is it said that the Kanban system does not require production planning?
2020/6/19
The pull-type kanban system, which operates autonomously on the shop floor by withdrawing only the required quantity of items from the preceding (upstream) process and producing only the quantity that is in short supply, is said to be in direct contrast to the push-type MRP production plan that is created by the management organization.
-
-
Division of roles between the Kanban system and the scheduler
2020/5/28
The Kanban system, which is the core of the Toyota Production System, is a way of operating the manufacturing site so that it produces only what has been ordered. Since one's own process produces only the kanban quantity (number of kanban cards × units per container) required by the downstream process, the flow of kanban can be fine-tuned to prevent overproduction in the event of an order cancellation.
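For orientation, the snippet below shows a common textbook formula for sizing the number of kanban in circulation. It is a generic illustration, not taken from the post itself, and the parameter names and example numbers are placeholders.

```python
import math

def kanban_count(demand_per_day, lead_time_days, safety_factor, container_size):
    """Common rule of thumb: cover demand during the replenishment lead time,
    plus a safety margin, divided into container-sized lots."""
    demand_during_lead_time = demand_per_day * lead_time_days
    return math.ceil(demand_during_lead_time * (1 + safety_factor) / container_size)

# Example: 400 pcs/day demand, 2-day lead time, 10% safety margin, 50 pcs per container.
print(kanban_count(400, 2, 0.10, 50))  # -> 18 kanban in circulation
```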
-
-
Paperlessness and IoT accelerated by corona pandemic
2020/5/24
We have argued in our blog and at regular seminars in industrial parks that paperlessness is the first thing Japanese companies in Indonesia need to do to improve their business.
-
-
Installation of a suitable system for the Indonesian plant
2020/4/30
In Japan, the introduction of a manufacturing system tends to involve high cost customization to add advanced functions to meet the demands of management, but in Indonesia, within a limited budget, customization is required to make it easier for on-site workers to enter information and to save labor.
-
-
Implementation of Manufacturing IoT in Indonesia
2020/2/11
When I visited customers in industrial parks in Indonesia as part of my system sales activities, one of the lines I often heard from the people in charge was, "Our president (the chairman) invests readily in machinery, but is reluctant to invest in systems."
-
-
Manufacturing system to be reorganized with a focus on the field
2019/12/15
We had a booth introducing Asprova, a production scheduler, at Manufacturing Indonesia 2019, Indonesia's largest manufacturing trade show, which was held at JIExpo Kemayoran, as it is every year, from December 4 to 7.
-
-
Seminar on operational efficiency and visualization in factories for the manufacturing industry
2019/10/7
Total optimization is achieved when production efficiency is maximized, costs such as purchase prices and the interest burden of holding inventory are minimized, and profit is maximized.
-
-
Think of systemization as a process of business improvement, not an investment
2019/9/6
In general, the implementation of a business system using packaged software requires large initial investment costs for software, DB licenses, new servers, etc.
-
-
Indonesia Customs Strengthens Online Monitoring of Bonded Factories
2019/5/24
"Bonded" means a temporary deferral of the collection of customs duties. In Indonesia, a bonded area is called Kawasan Berikat (KB or Kaber), and a non-bonded area is called Daerah Pabean Indonesia Lainnya (DPIL).
-
-
Proposal for Japanese companies in Indonesia who are having trouble with their business systems
2019/3/30
A company's business is determined by what it provides to meet the needs of the market; how it interacts with customers and suppliers for that business, and how its internal staff relate to it, determine how it works internally. We recommend implementing the management functions with the HanaFirst business template and the planning part with the Asprova scheduler, and we believe this is the best practice we can propose to Japanese companies in Indonesia.
-
-
"Making Indonesia 4.0" era business system
2019/3/17
In the manufacturing industry, we purchase raw materials, add value by processing them, and sell them to customers; in the service industry, we provide services as added value in response to customers' requests.
-
-
System Implementation Methodology for Indonesia
2018/12/24
"Why don't they explain it to us until we are satisfied?" is a phrase I have heard many times in the field of system implementation in Indonesia. Although systemization is undertaken as a means of improving the current business, a system implementation project in Indonesia will not proceed well unless it is properly shown that the results from the system can be seen through the process.
-
-
Difference between production order and KANBAN of planned production
2017/8/20
When we visit Japanese trading companies and manufacturers in Indonesia and ask them about their business requirements for system implementation, most of them tell us that they receive a "kanban" from the customer every morning, which corresponds to the delivery instructions for partial (split) deliveries against the firm orders received at the beginning of the month.
-
-
Understanding the entire business system from an accounting perspective
2017/8/6
When inventory, based on actual inputs recorded in the production control system, sits in a production-control location, outflows are treated as incurred expenses and inflows are recorded in the asset account as work-in-process inventory; when inventory sits in a sales-control location, inflows are recorded at the cost of production and outflows are recognized as cost of sales.
-
-
Until the production control system specifications were finalized in Indonesia and the system was put into operation in the field
2017/7/22
The production management system covers production activities from the time a material arrives at the material warehouse to the time it is processed in the manufacturing process, and sales activities after the goods reach the product warehouse. The cost management system manages the corresponding flow of costs: material costs and processing costs are incurred when material is put into the manufacturing process, become manufacturing costs when it becomes a product, and become cost of sales when it is shipped.
-
-
Time based utilization and stroke based load factor
2017/4/8
The load factor is the ratio of demand to the supply capacity of the machine; in press work, it is based on the ratio of the number of strokes required to complete the order to the number of strokes the machine can deliver per hour (gross strokes per hour, GSPH). The utilization rate, on the other hand, is the ratio of the operating hours needed to process the order to the machine's available operating hours in the day.
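As a rough illustration of the distinction drawn above, the snippet below computes both ratios for a press. It is a sketch under the common reading of load factor as required capacity over available capacity; the variable names and the example numbers are made up.

```python
def press_load_factor(strokes_required, gross_strokes_per_hour, available_hours):
    """Demand vs. supply capacity: hours of work the order requires
    relative to the hours the press can offer."""
    hours_needed = strokes_required / gross_strokes_per_hour
    return hours_needed / available_hours

def utilization_rate(hours_run, available_hours):
    """Actual (or planned) running time relative to the working day."""
    return hours_run / available_hours

# Example: an order needing 12,000 strokes on a press rated at 1,500 GSPH,
# with one 10-hour shift available.
print(press_load_factor(12_000, 1_500, 10))  # -> 0.8 (80% loaded)
print(utilization_rate(8, 10))               # -> 0.8 (80% utilized)
```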
-
-
Handling of multiple pieces and left and right sets
2017/3/4
In the case of R/L (right/left) products, two pieces can be taken in a single shot. If an order is received for only one side, the relationship between ordered pieces and shots is 1:1 (and the other side is produced as surplus), but if the same quantity is ordered for both the left and the right side, the relationship is 2:1, the same as for an ordinary multi-cavity item that yields multiple pieces per shot.
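To make the arithmetic concrete, here is one way to read the relationship described above; the interpretation (one shot yields one left and one right piece) and the numbers are assumptions, not taken from the post.

```python
import math

def shots_for_multi_cavity(order_qty, pieces_per_shot):
    # Ordinary multi-cavity tool: every shot yields `pieces_per_shot` identical pieces.
    return math.ceil(order_qty / pieces_per_shot)

def shots_for_left_right(left_qty, right_qty):
    # R/L die: each shot yields one left and one right piece,
    # so the side with the larger order drives the number of shots.
    return max(left_qty, right_qty)

print(shots_for_multi_cavity(1000, 2))   # -> 500 shots (2:1 pieces to shots)
print(shots_for_left_right(1000, 1000))  # -> 1000 shots, 2000 pieces in total
print(shots_for_left_right(1000, 0))     # -> 1000 shots, with 1000 surplus right pieces
```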
-
-
Paid and free supply to subcontractors
2016/10/3
However, the P/O price issued to the subcontractor includes both material cost and processing cost in the case of paid supply, and processing cost only in the case of free supply.
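A minimal sketch of that pricing rule, with hypothetical names and amounts, and ignoring tax and overhead:

```python
def subcontract_po_price(material_cost, processing_cost, supply_type):
    """Paid supply: materials are sold to the subcontractor, so the P/O price
    covers material + processing. Free supply: materials are provided at no
    charge, so the P/O price covers processing only."""
    if supply_type == "paid":
        return material_cost + processing_cost
    if supply_type == "free":
        return processing_cost
    raise ValueError("supply_type must be 'paid' or 'free'")

print(subcontract_po_price(70_000, 30_000, "paid"))  # -> 100000
print(subcontract_po_price(70_000, 30_000, "free"))  # -> 30000
```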
-
-
What is the difference between the month of VAT tax treatment and the month of purchase recording?
2016/7/27
In the case of Indonesian domestic transactions, the month of the Faktur Pajak (tax invoice) should be the same as the month of the invoice date, which means that VAT should be recognized in the month of the invoice date. | https://bahtera.jp/en/category/erp/indonesia-production-control-system/ |
"Ladyfest Deep South aims to provide a forum for a diverse group of artists and activists to demonstrate their talents and passions and bring together a wide array of people to celebrate local art and activism and participate in community-building and skill-building workshops. The organizers of Ladyfest Deep South are committed to establishing an environment that embraces diversity based on gender, race, class, sex, sexuality, and other identity categories and welcomes those who further our commitment to enriching the breadth of contributions made by organizers, volunteers, performers, participants, and attendees. The Executive Committee, Advisory Board, and Organizers are committed to safe and best practices and expect that volunteers, participants, and attendees engage in the same, not only in the form of inclusive language, but that they contribute to all practices necessary for an atmosphere of respect and productive coalition-building. | https://grassrootsfeminism.net/cms/node/788 |
2020 Residents & Fellows Research Conference
REDUCTION OF INTRAPERITONEAL ADHESIOGENESIS BY PROTEASE INHIBITORS IN A CECAL LIGATION AND PUNCTURE MODEL OF SEPSIS AND PERITONITIS.
Philip Plaeke, University of Antwerp, Wilrijk, Belgium
Introduction
Intraperitoneal adhesions following surgery or peritonitis are responsible for a wide array of complications, including bowel obstruction, abdominal pain, and even infertility. Additionally, these adhesions tend to make subsequent abdominal procedures more challenging. Proteases involved in coagulation and fibrinolysis have been presumed essential in the etiopathogenesis of adhesions.
Aims
Our experiments aimed to modulate adhesiogenesis by administering protease inhibitors that act on proteases involved in the coagulation and fibrinolytic pathways.
Methods
Intraperitoneal adhesions were induced in OF1 mice (Charles River, France) by cecal ligation and puncture (CLP; 50% ligation, single 21G puncture) under ketamine-xylazine anesthesia. Sham mice underwent a midline laparotomy without ligation or puncture of the cecum. Analgesia (buprenorphine) and fluid resuscitation were provided throughout the experiments. Mice were euthanized 48 hours later, and adhesions were scored based on the number of abdominal tissues involved (extent) and their tenacity. The time between the abdominal skin incision and ligation of the terminal ileum was quantified as an objective, timed marker of the ease of surgical access. An overview of the different protease inhibitors and experimental protocols is provided in Table 1. Statistical analysis was performed with SPSS v26, using one-way ANOVA with Dunnett's post-hoc test.
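For readers who prefer an open-source equivalent of that analysis, the sketch below shows how a one-way ANOVA with Dunnett's post-hoc comparison against the vehicle group could be run. The group names and values are placeholders, not study data, and scipy.stats.dunnett requires SciPy 1.11 or newer.

```python
from scipy import stats

# Hypothetical adhesion-extent scores per treatment group (placeholder values).
vehicle = [6, 5, 7, 6, 5]
nfm_low = [4, 5, 4, 3, 4]
nfm_high = [2, 1, 2, 3, 1]

# One-way ANOVA across all groups.
f_stat, p_anova = stats.f_oneway(vehicle, nfm_low, nfm_high)

# Dunnett's test: each treatment compared against the vehicle control.
dunnett_res = stats.dunnett(nfm_low, nfm_high, control=vehicle)
print(p_anova, dunnett_res.pvalue)
```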
Results
Only one adhesion was encountered after sham surgery, while the CLP procedure resulted in adhesions in all vehicle-treated mice (Table 1). The broad-spectrum protease inhibitor nafamostat mesylate (NFM) significantly and dose-dependently reduced the extent (Figure 1A) and tenacity of the adhesions, which resulted in less time being required to gain access to the ileum (p<0.001, Table 1). A preventive dose was needed to observe the beneficial effect. Another broad-spectrum protease inhibitor, UAMC-00050, which has less factor Xa inhibitory activity, failed to reduce adhesions. Similarly, GM6001, a broad-spectrum matrix metalloproteinase inhibitor, had no effect on adhesion formation and increased mortality (Table 1). Finally, specific inhibition of factor Xa with enoxaparin significantly and markedly reduced the extent and tenacity of the adhesions (Table 1 and Figure 1B). As a result, the time required to gain access to and ligate the ileum was no longer different from that in sham-operated mice.
Conclusion
Protease inhibitors significantly reduced the extent and severity of intraperitoneal adhesions, provided that they were administered preventively and specifically targeted coagulation pathways, as demonstrated by our experiments with enoxaparin and NFM. Since these protease inhibitors need to target the coagulation system, accurate titration and more precise identification of the proteases involved require further study.
Table 1 – Overview of the different experimental regimens and their effects on the adhesion scores.
Figure 1 - Overview of the effects of Nafamostat Mesylate (A) and Enoxaparin (B) on the severity of adhesions in the CLP-model for sepsis. | http://meetings.ssat.com/abstracts/2020-Virtual/895.cgi |
Temporal lobe epilepsy (TLE) is a general term for a set of conditions which give rise to recurrent seizures originating from the temporal lobe of the brain. The term is used generally to refer to any epilepsy originating in the temporal lobe but can include several different underlying pathologies that cause the seizures.
One subclassification separates the temporal lobe epilepsies into mesial (or medial) and lateral types. Mesial refers to seizures that originate from the medial part of the temporal lobe, particularly structures known as the hippocampus, amygdala and parahippocampal gyrus, whereas lateral refers to seizures which originate from the more lateral, superficial parts of the lobe.
There are numerous pathologies that can affect the temporal lobe and cause seizures, such as vascular malformations, brain tumors, infection and trauma. However, the most common pathologic finding in patients who present with typical temporal lobe epilepsy is known as hippocampal sclerosis or mesial temporal lobe sclerosis. The term sclerosis refers to a hardening of the hippocampal structures, manifested as a smaller hippocampus on neuroimaging studies such as MRI. This sclerosis is evidence of some damage to the cells of the hippocampus in the temporal lobe, but in most cases the underlying cause of the damage is unknown.
The cause of hippocampal sclerosis and the resulting temporal lobe seizures is generally unknown. A link with febrile seizures in childhood has been suggested: most children who present with febrile seizures do not develop a chronic epilepsy condition in later life, but a subset of these patients do go on to develop temporal lobe epilepsy in adulthood.
Patients with temporal lobe epilepsy present most frequently with recurrent partial seizures. These seizures, which affect only part of the brain, can occur without an effect on the patient's level of consciousness (termed a simple partial seizure) or with a concomitant loss or alteration of consciousness (termed a complex partial seizure). In a smaller set of patients with temporal lobe epilepsy, the seizure activity can spread to involve most of the brain on both sides, known as a partial seizure with secondary generalization.
Because of the involvement of the temporal lobe, there are some common manifestations of temporal lobe seizures. For example, an aura of some type precedes the seizure in many cases. This can include olfactory and/or gustatory illusions or hallucinations. Visual illusions or hallucinations can also occur. Psychological symptoms, such as fear or anxiety, can also occur, presumably due to involvement of the amygdala and other temporal structures involved in these emotions. Each patient's set of specific symptoms they experience with a seizure vary and can include these as well as other manifestations.
In a patient with recurrent seizures, the work up will generally involve a complete neurological examination followed by imaging studies such as CT scan and MRI scan. These studies help to determine if there is a lesion which can be seen to explain the epilepsy. Hippocampal sclerosis, the most common cause of TLE, although subtle, can often be seen on a good quality MRI.
Electroencephalography (EEG) is also generally performed to identify the seizure activity and attempt to localize the seizures in the brain. Typical partial seizures arising from the temporal lobe can often be localized to the general area. However, in some patients additional testing, such as MEG, PET scanning or implanted electrodes, may be required to confirm the origin and home in on its precise location.
As with other forms of epilepsy, management of temporal lobe epilepsy can include both medical and surgical treatments which attempt to decrease or eliminate the occurrence of seizures. In general, most patients are first started on various anti-epileptic medications. If this medical treatment alone is not sufficient to adequately control seizures, surgical treatments are also considered. If the source of the seizures is positively identified on testing to be the temporal lobe, the most common procedure used to treat this condition is a temporal lobectomy. A temporal lobectomy surgically removes part of the temporal lobe. The specific structures removed and how much of the lobe is removed depends on the specific procedure and the patient's underlying pathology.
For more information about treatment for epilepsy in general, see the Epilepsy Treatment page.
| http://www.nervous-system-diseases.com/temporal-lobe-epilepsy.html |
What is epilepsy?
This condition affects men and women of all ages and races. Epilepsy has several causes including head trauma, brain disorders, and infection, although 50% of the time the cause is unknown. In some cases, there may be warning signs before a seizure, such as changes in vision or taste however for others it may occur with no prior indication.
During a seizure, the brain transmits sudden, short bursts of electrical activity which can cause muscle spasms, loss of motor control, and a variety of other symptoms. What symptoms you will experience as a result of a seizure will depend on the type of seizure you are afflicted by.
Types of Seizure
Prior to understanding the types of epilepsy, it is important to understand the types of seizures it may cause. There are several types of seizures, which are categorized based on the type of brain activity involved and the changes in behavior they cause. Seizures are divided into two main groups: generalized and partial (also called focal). During generalized seizures there is activity in both hemispheres of the brain, whereas during a partial seizure the activity is restricted to a localized area of the brain.
Clonic Seizures
Clonic seizures cause repetitive and uncontrollable jerking of body parts including the arms and legs. They can occur in one side of the brain (focal) or both sides of the brain (generalized). During generalized clonic seizures the sufferer is usually unconscious while in focal clonic seizures they may retain some level of consciousness.
If you observe someone having a clonic seizure the best way to help is by preventing them from falling or hitting objects while jerking. Restricting or restraining their movement will not help and is not advised. These seizures are rare and typically start in babies and early childhood although they can affect people of any age.
Tonic Seizures
Tonic seizures cause stiffening, tension, or flexion of the body and extremities which can cause the sufferer to fall backward. They typically affect both sides of the brain, although they may begin in a localized area.
Often, tonic seizures happen during sleep and usually last no longer than 20 seconds. Tonic seizures are rare. When they do occur, they are often associated with Lennox-Gastaut syndrome.
Lennox-Gastaut Syndrome is a severe form of infant-onset epilepsy, which causes several different types of seizures including tonic-clonic as well as atonic.
Atonic Seizures
Also called drop attacks: Atonic seizures result in the sudden loss of muscle control causing the body and extremities to go limp. The head may drop suddenly, eyes may droop, and the sufferer may slump or fall forward.
They may affect one or both sides of the brain. This is not a common type of seizure though it may occur with Lennox-Gastaut Syndrome.
Tonic-clonic or Convulsive Seizures
Formerly called grand-mal seizures: These involve electrical activity throughout the brain, and the person loses consciousness immediately. Tonic-clonic seizures generally start in both sides of the brain (generalized) but can also begin on one side and progress to both (focal to bilateral). These seizures combine the symptoms seen during tonic and clonic seizures: first the tonic phase causes the body to stiffen and the person to fall to the ground, then the clonic phase causes rapid jerking body movements.
A convulsive seizure typically lasts from 1-3 minutes. Sometimes, prior to a tonic-clonic seizure, a person may experience what is known as an aura. An aura is also called a focal aware seizure (FAS). Auras serve as a warning sign of an impending seizure, giving patients time to prepare. However, an aura does not always lead to a tonic-clonic seizure and can occur independently. Auras can be experienced in different ways, including:
- A feeling of deja vu
- Flashing lights or dark spots in the peripheral vision
- Ringing or buzzing in the ears
- A salty or metallic taste in the mouth
- Heightened sensitivity to sounds
- A sensation that a limb is larger or smaller than it is
Myoclonic Seizures
Myoclonic seizures usually start between the ages of 3 and 12 years. This type of seizure is very brief, often lasting only a few seconds. The sufferer will not lose awareness during myoclonic seizures.
He or she will experience muscle spasms, jolts, or twitches that are localized or may affect the whole body. A person who suffers from epilepsy may experience both myoclonic and atonic seizures.
Absence Seizures
Formerly called petit-mal seizures: Activity occurs through the entire brain (generalized) and causes unconsciousness but no convulsions. There are two types of absence seizures: typical and atypical absence seizures. In both forms the sufferer will have a blank stare and seem disconnected. They may also blink rapidly, roll their eyes upward, smack their lips, or gently pull or rub their clothing.
One difference between these types of seizures is that typical absence seizures are shorter, lasting 10 seconds or less, while atypical absence seizures can last between 10 and 30 seconds. In addition, atypical absence seizures build up slower than typical seizures. Typical absence seizures begin and end very suddenly. When the person regains consciousness they may not realize they had an absence seizure. In the case of children, a parent may not always notice them as they can easily be confused with daydreaming.
Status Epilepticus
This term is used to describe seizures that last more than 5 minutes, or seizures that occur back-to-back without the person regaining consciousness in between. Status epilepticus can present as convulsive (tonic-clonic) or nonconvulsive seizures. It is considered a medical emergency requiring immediate intervention, including benzodiazepines or anesthetics to calm and sedate the body.
Focal Seizures
Focal (partial) seizures occur in specific parts of the brain. They are divided into two main groups.
- Simple partial seizures (now preferably called focal aware seizures): A simple partial seizure will begin in a local region of the brain (such as the temporal, frontal, parietal, or occipital lobe) but may extend to other areas. During this type of seizure a person remains conscious. Most notably, they experience what is referred to as an “aura”. Auras are described as strange feeling of altered emotion and sensation including perception, vision, smell, and taste. The patient may also experience twitching, jerking, or stiffening of the body. These seizures often last less than two minutes and can be a warning for more intense seizures to come like tonic-clonic seizures.
- Complex partial seizures (now preferably called focal impaired awareness seizures): Typically, this type of seizure begins in the temporal or frontal lobe. In the temporal lobe they may begin in the hippocampus or amygdala (the area of the brain which controls memory and emotion). A person is not fully conscious during this type of seizure. They may stare blankly and perform automatisms (involuntary actions like facial twitching, mouth movements, and rubbing clothing).
Secondary Generalized Seizure: This term is used to describe focal seizures that develop into generalized seizures (on both sides of the brain).
Types of Epilepsy
Important definitions:
- Focal (partial): occurring in one area or hemisphere of the brain
- Generalized: occurring in both hemispheres of the brain
Brain function areas (from the accompanying diagram): touch perception, movement control, manipulation of objects; voluntary movement, planning, intellect, problem solving, abstract reasoning; long-term memory, speech comprehension, object perception, face recognition, hearing; visual reception, local orientation, shape perception; coordination, balance, reflex motor acts; conduction, tracts for pain, temperature and pressure sensations.
Temporal lobe epilepsy (TLE)
Implicit in the name, TLE is epilepsy that originates in the temporal lobe of the brain. TLE accounts for 60 percent of all focal epilepsy. TLE causes focal (partial) seizures that may either impair awareness (called complex partial seizures) or slightly alter perception (called simple partial seizures, also referred to as "auras"). It often begins in children around 10 years old but can start at any age. Brain functions of the temporal lobe include emotions, memory, speech, and hearing. Becoming seizure-free will generally require surgery; it is unlikely that medications alone will eliminate seizures, although medications can help control seizure frequency, duration, and intensity.
There are two forms of TLE:
- Mesial temporal lobe epilepsy (MTLE): Around 80 percent of TLE take this form. It begins in the inner area of the temporal lobe such as the hippocampus. It can cause tonic-clonic seizures.
- Lateral temporal lobe epilepsy (LTLE): This form begins in the outer region of the temporal lobe. It can cause simple and complex partial seizures.
Frontal lobe epilepsy (FLE)
This type of epilepsy can begin at any age. It causes electrical activity in the frontal lobes. FLE causes simple or complex focal seizures or a combination of the two. Usually anti-epileptic drugs can manage FLE, however, surgery or neurostimulation may be necessary.
Benign Rolandic Epilepsy (BRE)
This form of epilepsy is one of the most common. BRE is also called benign epilepsy with centrotemporal spikes because the seizures are caused by electrical activity that begins in the rolandic (centrotemporal) area of the brain. It is the most common epilepsy in children, affecting children between the ages of 3 and 12 years. BRE causes motor and sensory symptoms in the face, including twitching, drooling, numbness, tingling, and speech impairment. BRE can develop into tonic-clonic seizures affecting both hemispheres of the brain.
Photosensitive Epilepsy
For an estimated 3 percent of epilepsy sufferers, their condition can be triggered by lights that flash at certain speeds or in certain patterns. This is generally more common in young children and tends to lessen as they age. It causes tonic-clonic seizures.
Catamenial epilepsy
Also called menstrual seizures, this form of epilepsy is specific to menstruating women. Women who suffer from catamenial epilepsy experience more frequent seizures at certain times during their menstrual cycle. Menstrual seizures are caused by hormone changes before or during menstruation, including declining progesterone and rising estrogen levels. Studies such as EEG, MRI, and CT can help in the workup, but a menstrual/seizure journal is especially helpful for diagnosis: if seizures are found to increase during menstrual periods relative to non-menstrual periods, this is an indication of catamenial epilepsy. Catamenial epilepsy is generally treated with anti-seizure medications and drugs that can regulate hormone levels.
Nocturnal Epilepsy
This form of epilepsy only occurs when a person is sleeping, generally in stages of lighter sleep. Between 7.5 and 45 percent of epilepsy sufferers only experience seizures while they are sleeping. For some people, establishing a consistent sleep cycle (circadian rhythm) can help decrease the frequency of nocturnal seizures. Some anticonvulsants may also help but only those that do not disrupt sleep stages. Seizures during the night may go unnoticed. If you awaken with injury, weakness, or headaches this may be a sign of nocturnal epilepsy. Other signs include loss of bladder control during the night, waking in positions you did not fall asleep in, or a disheveled area around you. Sleeping with a partner may also help identify some of the symptoms of seizures during the night.
Refractory Epilepsy
This is also called uncontrolled, drug-resistant, or intractable epilepsy. It refers to epilepsy which is resistant to medication. Refractory epilepsy is common, affecting about 33 percent of epileptics. It is important to choose an experienced epilepsy specialist who prescribes the correct medications for your seizure type. If two types of anti-seizure medications are used to no avail, dietary therapies, lifestyle improvements, surgery, and neurostimulation are the next line of treatment for refractory epilepsy.
Sudden Unexpected Death in Epilepsy (SUDEP)
This refers to the sudden death of a person suffering from epilepsy, who is otherwise healthy, that upon post-mortem evaluation does not have a cause of death. If death is caused by drowning, trauma, or status epilepticus (prolonged seizures) this is not considered SUDEP. The exact cause of SUDEP is unknown and may differ from case to case. Some studies indicate cardiac, respiratory, and neurological factors may contribute to SUDEP.
Lennox-Gastaut syndrome (LGS)
A form of severe epilepsy affecting infants and children (typically between 3 and 5 years old). LGS causes multiple types of seizures, including tonic, atonic, and absence seizures. It can also cause cognitive and behavioral impairments, such as mental retardation (i.e., learning problems) or psychomotor regression (losing recently attained abilities). LGS may delay children from attaining developmental milestones like crawling and sitting.
Dravet Syndrome
A severe and rare form of epilepsy affecting infants. The onset of Dravet Syndrome usually occurs during fever or illness. The most common seizures are myoclonic (muscle twitching or jerking) and tonic-clonic seizures (full body stiffening followed by jerking seizures on the ground). Seizures can be triggered by body or environmental changes in temperature, flashing lights, or strong emotions. Children generally develop disabilities as they age. Treatment can include seizure medications, diet, and neurostimulation. Surgery is not commonly used.
If you or a loved one suffers from recurrent seizures, now is the time to take back control.
Our epilepsy treatment center offers diagnostic, nonsurgical, and surgical treatment options for epilepsy. Our epilepsy specialists, neurologists, and neurosurgeons provide comprehensive treatment options from epilepsy medication to epilepsy surgery. Together we hope to overcome epilepsy and cultivate a life of independence and peace of mind for all those affected by recurrent seizures. If you are searching for a seizure doctor in Miami, contact us today to schedule an appointment. | https://miamineurosciencecenter.com/en/conditions/epilepsy/ |
Please use this identifier to cite or link to this item:
http://hdl.handle.net/123456789/495
|Title:||Determination of Water Resource Suitability for Grazing at Makoholi Research Station, Masvingo, Zimbabwe.|
|Authors:||Moyo, Ziso|
|Keywords:||water resource suitability|
grazing
water points
water quality
|Issue Date:||May-2018|
|Publisher:||Lupane State University|
|Abstract:||The study was carried out to determine water resource suitability at Makoholi Research Institute, Zimbabwe. Parameters like water quality and distance from water points were used to determine the water resource suitability using the FAO land suitability classification method of 1991. Vegetation structure and composition assessment was done thereafter to assess its change as one moves away from water points. The water points were identified through the use of a scanned map in a Geographic Information System (GIS) environment and ground verification after consulting the local community. The geographic coordinates of the water points were taken using a Global Positioning System (GPS). Water from these sources was sampled and sent for laboratory analysis to establish its physio-chemical characteristics. Water suitability classes were established by determining distance from water points using GIS software. The General Linear Model (GLM) was used to establish differences in water physio-chemical parameters due to sampling site and season. Analysis of variance was used to test significant differences in vegetation structure in different suitability classes. The data were tested for normality using the Shapiro Wilk test. The homogeneity of variance was assessed by Levene's test for equality of error variances. Results obtained from this research showed that the range studied had no shortcomings in terms of water quality; all sites fell in the S1 category with regard to water. However, the results showed that water physio-chemical parameters were significantly affected by season (p<0.05). Total dissolved solids were significantly affected by season (p=0.003), turbidity and pH were significantly affected by season (p=0.000), and electric conductivity was also significantly affected by season (p=0.003). However, nitrates were not significantly affected by season (p=0.062). The results also showed that the rangeland was limited to some extent by distance from water points, and revealed a significant difference (p<0.05) in vegetation structure between suitability classes. The canopy cover, litter cover, soil compaction and top hamper differed significantly within suitability classes or buffer distances. On assessing the species composition, the Increaser II grass species, which included mainly the Eragrostis species, dominated the area. The grass height also differed significantly within buffer distances. It can be concluded using the limitation approach that the Makoholi rangeland falls under suitability class S2 as far as the water resources are concerned and that vegetation near the water points is heavily utilized, hence proving that water resource distribution affects rangeland utilization.|
|URI:||http://hdl.handle.net/123456789/495|
|Appears in Collections:||Department of Animal and Rangeland Management|
Files in This Item:
|File||Description||Size||Format|
|Moyo_Sizo.pdf||167 kB||Adobe PDF||View/Open|
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated. | http://ir.lsu.ac.zw/handle/123456789/495 |
Flashcards in Intro to Lymphomas Deck (43):
1
reactive disorders
benign. cells responding appropriately to stimuli via a mixed polyclonal expansion.
2
neoplasm
malignant clonal expansion of a single cell. autonomous to a stimulus, can't be controlled and eventually leads to organ replacement, functional compromise, and death.
3
leukemia
neoplasm that extensively involves the bone marrow and spills into the peripheral blood. Often cells native to the marrow and immature lymphocytes (lymphoblasts)
4
lymphoma
tumors that form solid masses, typically involving lymph nodes or related sites (spleen, GI, skin). Composed of native cells, i.e. mature lymphocytes.
5
CLL/SLL (chronic lymphocytic leukemia/small lymphocytic lymphoma)
low grade/indolent B cell neoplasm that always involves lymph nodes but usually has circulating peripheral blood component.
6
acute
immature stage of differentiation. early undifferentiated state (blast cells)
7
chronic
mature stage of differentiation; "-cyte" cells.
8
acute term for lymphoid lineage neoplasms
precursor
9
chronic term for lymphoid lineage neoplasms
peripheral
10
components for modern classification system for non hodgkin lymphomas
morphology, immunophenotype, genotype, putative cell of origin, clinical features
11
where are lymphocytes derived from?
immature pluripotent cells in the bone marrow
12
where does maturation of lymphoblasts take place?
bone marrow (b cells) and thymus (T cells)
13
lymphatics in the bone marrow
none! clinically important because if lymphoma cells are found in the marrow, it is indicative of stage 4, widely disseminated disease
14
difference between primary and secondary follicles
primary: aggregate of naive, unstimulated, mature B cells
secondary: stimulated B cells (develop following antigen exposure). consists of germinal center and mantle. antibodies are refined
15
lymph node cortex
houses B and T lymphocytes. divided into perifollicular T cell rich zone and the B cell rich follicles
16
lymph node medulla
location of plasma cells
17
multiple myeloma
malignant plasma cells home in on bone marrow instead of staying confined to medulla
18
what occurs within germinal centers?
antibody refinement via class switching and somatic hypermutation. expansion of CD10+BCL6+BCL2- B cells
19
mantle of secondary follicles
rim of naive b cells surrounding germinal center. not antigen stimulated
20
tangible body macrophages
histiocytes gobbling up dying cells within the germinal center and laden with cellular debris
21
why don't germinal center cells express BCL2?
need apoptosis to occur since so much growth and mitosis occurs within the germinal center
22
centrocytes
smaller quiescent activated B cells that have modified their Ig loci. migrate from dark to light zone of germinal center to make contact with follicular dendritic cells.
23
margin in secondary follicles
composed of B cells that have traversed the germinal center, refined their Ig's and have become quiescent as memory or plasma cells. prominent in the spleen.
24
vast majority of non hodgkin lymphomas are derived from what type of cell?
mature B cells. They undergo multiple rounds of DNA damage as they refine their antibodies
25
why is Hodgkin lymphoma a classification of its own?
neoplastic cells don't look like lymphocytes under the microscope and don't resemble them immunophenotypically either. but is still of lymphoid origin (mature B cells). | https://www.brainscape.com/flashcards/intro-to-lymphomas-1813721/packs/3200276 |
What is a Somatoform Disorder?
A somatoform disorder is a mental disorder characterized by physical symptoms that suggest a physical illness or injury but cannot be explained by another medical condition. The symptoms are also not the result of substance use, nor attributable to another mental illness.
Somatoform disorders are not the result of someone faking or trying to get attention. Those who have somatoform disorders believe that they are actually sick.
Read more about somatoform disorders.
Read more about Munchausen's.
Read more about Munchausen-by-proxy.
Somatoform disorders include the following (click the links to learn more about each disorder):
Somatization Disorder (read more below)
What is Somatization Disorder?
Somatization Disorder (also known as hysteria or Briquet's Syndrome) is a somatoform disorder in which an individual under the age of thirty experiences a number of long-term, recurring physical symptoms that cannot be attributed to a specific medical condition or illness. These physical symptoms are often gastrointestinal, pain-related, sexual, or pseudoneurological, and may last for several years.
It is important to note that these physical symptoms are real and are not imagined. Often, individuals with this disorder are accused of faking symptoms that are, in fact, legitimate medical complaints. They may often visit many doctors before a proper diagnosis is made, which may lead those with somatization disorder to be labeled as "treatment seekers," or "doctor shopping." In reality, someone with somatization disorder has legitimate symptoms that may not be properly diagnosed.
What Are The Causes of Somatization Disorder?
The precise cause for somatization disorder is, as yet, unknown. Some researchers believe that, due to a strong connection between the brain and the body, people with somatization disorder experience pain differently than others.
For example, an affected individual's brain may interpret physical pain in different, more extreme ways, leading to higher perceived pain levels. There also appears to be a link between emotional pain or stress and the exacerbation of physical symptoms, whether as a trigger for the onset of symptoms or as a factor that worsens symptoms already present.
Prevalence of Somatization Disorder:
Somatization disorder is a relatively uncommon mental illness. It's suspected to occur in up to 2% of women and 0.2% of males.
Somatization disorder often occurs with other mental illnesses, especially mood disorders or anxiety disorders.
Read more about mood disorders.
Read more about anxiety disorders.
Risks for Developing Somatization Disorder:
While there are no specific indicators of who may or may not develop somatization disorder, it tends to occur more commonly in people who already suffer from irritable bowel syndrome or who have chronic pain issues. The onset of somatization disorder is before the age of 30. Somatization disorder affects women more frequently than men.
Individuals who have suffered from physical or sexual abuse in the past may have a higher risk of developing this disorder, although not everyone who is affected by somatization disorder has suffered from abuse. It is not known why these conditions are correlated.
Having a family member with somatization disorder can increase the risk of a female developing the disorder as well; men with an affected family member are not necessarily more prone to the disorder, but have a higher likelihood of developing personality disorders or substance abuse issues.
Cultural factors may also affect who develops somatization disorder (i.e., men or women) and what sort of symptoms they experience.
Symptoms of Somatization Disorder:
Symptoms tend to occur over the course of several weeks, months, and years. The symptoms of somatization disorder may not be explained by a medical condition or can simply be too excessive or extreme to be attributed to a medical condition.
While symptoms most commonly involve complaints of pain or issues with the nervous, reproductive, or digestive system, other possible symptoms associated with somatization disorder are as follows:
- Chronic pain (such as headaches, backaches; abdominal, chest, or joint pains)
- Amnesia
- Bloating
- Diarrhea
- Swallowing problems
- Loss of voice
- Dizziness
- Impotence
- Nausea and/or vomiting
- Shortness of breath
- Vision problems or blindness
- Paralysis or weakness of muscles
- Heart palpitations
- Food intolerance
- Pain during intercourse or menstruation
- Pain or burning sensation in sexual organs without intercourse
- Heavy, irregular, or painful menstruation
- Vomiting throughout the course of pregnancy
Some individuals with this disorder will tend to describe their symptoms in an emotional or noticeably dramatic manner. However, just because a person describes their ailments this way does not mean that they have somatoform disorder.
Stress can exacerbate symptoms. The cumulative effects of the physical symptoms can lead to the individual having problems functioning and maintaining a normal life.
Diagnosing Somatization Disorder:
Due to the varied physical symptoms, as well as a driving need to seek help, the individual may have several consultations with varying health providers and physicians.
However, due to the ongoing and often years-long symptoms, it is important that the individual consult with one primary care physician on a consistent basis. This doctor may diagnose somatization disorder after the patient has had years of observable symptoms that aren’t attributable to a specific medical disorder.
Prior to diagnosing somatization disorder, the doctor may test for the following diseases in order to rule them out as possible diagnoses:
- Lupus
- Multiple sclerosis
- Fibromyalgia
- Chronic fatigue syndrome
- Irritable bowel syndrome
Other tests may vary depending on which symptoms the individual has.
Criteria for Diagnosis of Somatization Disorder:
According to the DSM-IV, the diagnostic criteria for somatization disorder are as follows:
- A history of somatic complaints over several years starting before age 30.
- At least four different sites of pain on the body
- At least two GI symptoms other than pain (nausea, bloating, vomiting, diarrhea, food intolerances)
- One sexual symptom - a history of at least one sexual or reproductive symptom that is not pain (sexual indifference, erectile dysfunction, irregular menses, excessive menstrual bleeding)
- One pseudoneurological symptom - a history of at least one symptom of deficit suggesting that the person has a neurological condition NOT related to pain (paralysis, localized weakness, impaired coordination, difficulty swallowing, urinary retention, hallucinations, loss of senses, seizures, amnesia)
As well as either:
- Symptoms cannot be fully explained by another general medical condition or substance use
- If there is an associated medical condition, the physical complaints or social or occupational impairments are much more severe than generally expected based upon medical history, examination, or laboratory results.
The symptoms do not have to occur all at once, but may occur over the course of the disorder.
Treatment for Somatization Disorder:
There is no specific treatment for somatization disorder, because the illness is not due to any specific medical condition. However, treatment usually focuses on managing the physical symptoms of the disorder while treating possible mental health causes or symptoms or the disorder.
If your doctor suspects that you have somatization disorder, he or she may request evaluations for anxiety or depressive disorders from a mental health professional. Many people find that antidepressants or anti-anxiety medications can alleviate symptoms, but most who suffer from somatization disorder benefit from talk therapy (also known as psychotherapy) or cognitive behavioral therapy (CBT) with a mental health professional.
Therapy can help the individual learn to:
- Gain awareness of pain triggers
- Develop coping methods
- Stay active despite pain
As stated above, it is important to have one doctor consistently observing the individual's symptoms over the years.
Tips for Living With Somatization Disorder:
Individuals with somatization disorder are especially sensitive to biases or stigmas related to mental health disorders. This is largely because it can be difficult to obtain a specific diagnosis, or "validation," of the individual's condition. Some people may accuse the individual of faking their symptoms for different reasons. Some primary care physicians may not understand the cause of the symptoms and thus can be dismissive of the patient's complaints.
It’s important for the individual and their family and friends to understand that someone with this disorder is NOT "faking" it. All medical and mental health professionals readily acknowledge that the physical symptoms are, indeed, very real. If your doctor does not, is dismissive of your symptoms, or says that you're "imagining" it, then it is time to find a new primary care physician that you trust.
Individuals with this disorder can sometimes develop dependencies on pain relievers or sedatives, which ultimately only complicate the diagnosis and treatment of the disorder. If you suspect you may be developing a dependency, consult with your doctor immediately.
Related Resource Pages on Band Back Together:
Additional Resources For Somatization Disorder:
Somatoform Disorder Info - A website with statistics and information on somatoform disorder, conversion disorder, body dysmorphic disorder, and more.
Campaign on Conversion Disorder - An ongoing conversation clarifying Conversion Disorder and its causes, symptoms and treatments.
Disorders.org - This website is dedicated to somatoform disorders, including somatization. Information, resources, diagnosis, and treatment options are contained within. | http://www.bandbacktogether.com/Somatization-Disorder-Resources/ |
Ceramic artist Tim Kowalczyk is drawn to objects of little material value—crushed tin cans, ripped up cardboard, and Polaroids that have been damaged during development. It is in these typical throwaways that he finds beauty, an attraction to the history embedded in their wrinkles and folds. To memorialize these items, Kowalczyk recreates their likeness in clay, producing works that look exactly like mugs haphazardly formed from cardboard with "Please Handle With Care" stickers still stuck to their sides.
“Ceramic’s ability to replicate any form, texture, or surface is what draws me to the material,” says Kowalczyk in his artist statement. “Replicating real objects out of ceramic material and putting them in a tableau is my version of writing a poem. I am able to sculpt, form, design, and construct sculptures with a sense of purpose, priority, and preciousness.”
The Illinois-based artist graduated with an MFA from Illinois State University in 2011, and is the adjunct Ceramics instructor at Illinois Central in East Peoria, IL. You can see more of his work on his website or at Companion Gallery where he is represented.
An Explosive New Mural and Paintings by Collin van der Sluijs
From the smallest details expressed on canvas to the cracked facade of a multi-story building, Dutch artist Collin van der Sluijs is comfortable investigating what he refers to as “personal pleasures and struggles in daily life.” Working without sketches or notes, the artist dives into each artwork with spray paint, acrylics, and ink as ideas take hold and images slowly emerge. He frequently examines themes of the natural world such as the cycle of life, the depictions of various species of birds, and the psychology of beings both human and animalistic.
Van der Sluijs was most recently in Chicago where he completed a tremendous mural in the south loop as part of the Wabash Arts Corridor that depicts two endangered Illinois birds amongst an explosion of blooms. He also opened his first solo show in the U.S. titled “Luctor Et Emergo” at Vertical Gallery, featuring a wide range of paintings and drawings. You can follow more of his work on Flickr.
Quirky New Chalk Characters on the Streets of Ann Arbor by David Zinn
Michigan illustrator David Zinn (previously) has brightened the streets of Ann Arbor with his off-the-wall (or technically on-the-wall) chalk drawings since 1987. The artist works with chalk or charcoal to create site-specific artworks that usually incorporate surrounding features like cracks, street infrastructure, or found objects. Over the years he’s developed a regular cast of recurring characters including a bright green monster named Sluggo and a “phlegmatic flying pig” named Philomena.
Many of Zinn’s artworks are available as archival prints, and he recently published a new book titled Temporary Preserves. You can follow his almost daily street chalk adventures on Instagram and Facebook.
Hyperrealistic Oil Paintings of Haphazardly Wrapped Packages and Gifts by Yrjö Edelmann
The works of Yrjö Edelmann are so precise that they read, without question, as photographs. Even with double, triple, and quadruple takes it is nearly impossible to believe that the pieces have been produced from precisely placed oil paint. The objects Edelmann depicts are not perfectly wrapped packages, but rather haphazardly taped and constructed ones, often on irregularly shaped canvases to heighten the trompe-l'œil effect. Scotch tape and twine hold the wrapping paper in place, with wrinkles covering each bright, often reflective surface.
Edelmann was born in 1941 in Finland, and studied at the University College of Arts in Stockholm, Sweden. Edelmann is represented by Craighead Green Gallery in Dallas, Gallerie GKM in Malmö, and Scott Richards Contemporary Art in San Francisco where he has an upcoming solo exhibition in March of 2016. You can explore more of his work in detail on Artsy. (via This Isn't Happiness)
Trompe L'Oeil Ceramics That Imitate the Natural Appearance of Decaying Wood
Ceramicist Christopher David White (previously) accurately captures the decay of wood through ceramics, portraying the distinct character of the natural material from the fine wood grain to the light ash coloration at the pieces’ edges. By utilizing a trompe l’oeil technique, White forces the viewer to take a closer look at his work while also investigating the truth hidden in the hyperrealistic sculptures.
Through his ceramic pieces White explores the reality of impermanence, often combining man and nature through treelike limbs and faces. “I seek to expose the beauty that often results from decay while, at the same time, making my viewer question their own perception of the world around them,” explains White. He hopes to highlight the fact that we are not separate from nature, but rather intrinsically connected to it.
White has a BFA in Ceramics from Indiana University and MFA in Craft and Material Studies from Virginia Commonwealth University. White’s work will be included in the exhibition Hyper-realism at the Daejeon Museum of Art in South Korea opening this fall. (via Artist a Day)
New Trompe L’oeil Sculptures of Flowing Dresses and Leaves Constructed from Plywood by Ron Isaacs
When looking at these wall-mounted sculptures depicting wrinkled dresses that sprout leaves or butterflies by artist Ron Isaacs (previously), you would be forgiven for thinking they were constructed from anything other than their actual materials: plywood and acrylic paint. Isaacs uses pieces of layered Finnish birch to construct every detail of these architectural clothes which he then covers in trompe l'oeil painting to create the illusion of depth. "I am still fascinated by the old simple idea of resemblance, the very first idea of art after tools and shelter: That an object made of one material can take on the outward appearance and therefore some of the 'reality' of another," says Isaacs. You can see his most recent collection of work as part of his second solo show at Tory Folliard Gallery in Milwaukee through May 23, 2015.
Terms and Conditions
If you use this website you will accept this website’s terms and conditions in full. If you do not think its terms and conditions to be reasonable, then do not use this website. The general terms and conditions of this website are as follows:
The term ‘creators’ whenever used includes without limitation authors, editors, technicians who share and retain all site copyrights.
The term ‘materials’ whenever used includes without limitation, text, images, audio content, video content, and audio-visual content.
General Conditions: The website thecosmiccomic.com is designed by its creators to provide entertainment and online community for readers of the Cosmic Comic books and other materials created by and associated with this Cosmic Comic. thecosmiccomic.com is available for general audience reading and viewing; however, children under thirteen years must secure permission from their parent or guardian before accessing this website.
Values: Whenever possible, all contributions to thecosmiccomic.com must first be approved by its creators before inclusion on the site. All effort is made to ensure that the material on the website is inclusive and non-judgemental, does not contain personal or disrespectful information from or about its contributors, complies with child protection laws and is respectful towards all personal differences, cultural diversities and ecosystems.
Content: Certain links in this site will lead to sites which are not under the control of thecosmiccomic.com. The creators of thecosmiccomic.com are not responsible for the material of external sites.
Ordering: When placing an order you agree that the information given is accurate and complete. Personal information remains confidential and will only be used for the purposes intended.
Returns and Complaints: Please use the contact email facility for issues and concerns regarding purchasing, or issues and concerns regarding website material.
Privacy & Security: thecosmiccomic.com does not disclose buyers’ information to any third parties. In order to prevent unauthorised access or disclosure, suitable physical, electronic and managerial procedures are in place to safeguard and secure personal details with regards to purchasing of goods and material contributions.
Cookies: Cookies are small files that a site or its service provider transfers to your computer's hard drive through your Web browser [if you allow] that enable the site's or service provider's systems to recognise your browser and capture and remember certain information. Cookies are used by thecosmiccomic.com for this purpose.
Acceptable Use: You must not use this website in any way that causes, or may cause, damage to the website or impairment of the availability or accessibility of the website, or in any way which is unlawful, illegal, fraudulent or harmful or in connection with any unlawful, illegal, fraudulent or harmful purpose or activity.
You must not use this website to copy, store, host, transmit, send, use, publish or distribute any material which consists of (or is linked to) any spyware, computer virus, Trojan horse, worm, keystroke logger, rootkit or other malicious computer software.
You must not conduct any systematic or automated data collection activities (including without limitation scraping, data mining, data extraction and data harvesting) on or in relation to this website without the expressed written consent of its creators.
No Warranties: thecosmiccomic.com is provided ‘as is’ without any representations or warranties, expressed or implied. The creators of thecosmiccomic.com make no representations or warranties in relation to this website or the information and materials provided on this website.
Nothing on this website constitutes, or is meant to constitute, advice of any kind. Therefore the creators of thecosmiccomic.com disclaim all liability and responsibility arising from any reliance placed on such material by any visitor to the site, or by anyone who may be informed of any of its materials.
thecosmiccomic.com does not warrant that this website will be available at all times, or that the information on this website is complete, true, accurate or non-misleading.
No Endorsements: The website, thecosmiccomic.com and its creators do not expressly or implicitly endorse or support the content or actions of any third party websites or services, including, but not exclusive to those sites hyperlinked to this website.
Linking to this site: You are permitted to link to the homepage of this website in a way that is fair and legal, provided that the association does not damage the reputations of the site and/or its associated contributors. However, you must not establish a link in such a way as to suggest any form of association, approval or endorsement where none exists.
Contributors: The creators of thecosmiccomic.com will remove material from its site that has been deemed to infringe upon the copyright of others.
All contributors maintain copyright of their own submitted material. However, by submitting his/her material to this website for whatever purpose, a contributor grants the creators of this website a royalty-free, perpetual, irrevocable, non-exclusive, unrestricted, worldwide license to use his/her material, including to reproduce, adapt, translate, prepare derivative works, distribute copies, perform, or publicly display it in any existing or future media. The contributor also grants the creators of thecosmiccomic.com the right to sub-license these rights to use his/her material, and the right to bring an action for infringement of these rights.
When identity is required, people seventeen years or under will require written consent from their parent or guardian before their material can be considered for inclusion on this website.
Intellectual Property Rights: You are not permitted to republish, post, transmit, store, sell, distribute or modify any of the material that appears on this site unless you have first obtained the written permission of the creators of thecosmiccomic.com.
Variation: The creators of thecosmiccomic.com may revise these terms and conditions from time-to-time. Revised terms and conditions will apply to the use of this website from the date of the publication of the revised terms and conditions of this website. Please check this page regularly to ensure you are familiar with the current version.
Severability and Enforcement: If any provision of this user agreement is held invalid or unenforceable, that provision will be modified to the extent necessary to render it enforceable without losing its intent. If no such modification is possible, that provision will be severed from the rest of this agreement. However, that does not deem a waiver the creator’s of thecosmiccomic.com right to do so in the future. | http://thecosmiccomic.com/terms-and-conditions/ |
We partnered with Front, a workplace software company that is reinventing the inbox so people can accomplish more together.
In a world of constant pings and competing priorities, it can be tough to stay on track during the workday.
There’s no doubt that lack of focus keeps us from getting work done efficiently, but its negative impacts are extending even further than that: it’s bringing down our mood and outlook in general.
In our recent study, 84 percent of people said that constant interruptions at work are making them less happy.
Next time you feel stress creeping up, try these techniques for staying focused at work. You might find that adding a little structure to the madness is all you need to reach your maximum productivity — and be a little happier.
1. Eisenhower Decision Matrix – Prioritize work by value.
We’ve all experienced that disappointing moment: You complete a task only to realize that it was, well, useless. The Eisenhower Decision Matrix helps you eliminate those tasks from your workday, so you can focus on initiatives that make a big splash. It’s simple, and you can draw it anywhere when you need to prioritize your work.
As Intercom’s Geoffrey Keating explained, “It places anything you could spend your time doing on two spectrums: one going from the most urgent possible task to the least urgent, the other going from critically important to totally inconsequential—and using these as axes, divides your work into four quadrants.”
The goal of this exercise? To keep your focus on high-impact work, and cut the distracting fluff. Spend the vast majority of your time on tasks that land in Quadrants 1 and 2. For work in 3 or 4, see what you can eliminate. Ask yourself: Why is this task necessary?
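If it helps to see the sorting logic spelled out, here is a small, purely illustrative Python sketch (the task names and the two yes/no judgments are invented for this example, not part of any official template) that drops each item into its quadrant:

```python
from typing import List, Tuple

def quadrant(urgent: bool, important: bool) -> int:
    """Map a task's urgency and importance to its Eisenhower quadrant (1-4)."""
    if urgent and important:
        return 1  # do it now
    if important:
        return 2  # schedule it
    if urgent:
        return 3  # delegate it, or ask why it is necessary
    return 4      # candidate for elimination

# Hypothetical to-do list: (task, urgent?, important?)
tasks: List[Tuple[str, bool, bool]] = [
    ("Fix the production outage", True, True),
    ("Draft next quarter's roadmap", False, True),
    ("Reply to a routine status email", True, False),
    ("Tidy up old browser bookmarks", False, False),
]

for task, urgent, important in sorted(tasks, key=lambda t: quadrant(t[1], t[2])):
    print(f"Quadrant {quadrant(urgent, important)}: {task}")
```

The code isn't the point (a sticky-note grid works just as well), but it makes clear that only two questions, "Is it urgent?" and "Is it important?", decide where any task lands.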
“By ‘batching’ urgent and important work, we minimise the switching costs involved in moving from different types of work,” Keating said. “It allows us to work on the most valuable initiatives and, more importantly, finish them.”
2. Deep Work – Be honest about which tasks require your full attention.
Some tasks can be done while chatting with your coworkers or sitting in a buzzing open office space. Other tasks require you to draw the figurative blinds and block out the external world for a while. And that’s perfectly okay. Labeling your “deep work” helps you be more self-aware about the environment you need to accomplish certain tasks.
“It’s important to make the differentiation between shallow and deep work when we’re talking about distractions,” HelpDocs Founder Matt Bradford-Aunger noted.
“With shallow work, a few distractions aren’t that big a deal,” he said. “It’s the deep work like development or content creation — where being ‘in the zone’ is critical to getting the job done — we find most hindered by distraction and context switching.”
To try out the deep work technique, spend 10 minutes in the morning going through your to-do list. Label activities as “deep” or “shallow”. Then schedule your calendar accordingly. You can put “deep” initiatives in the morning when you’re most focused, or schedule “shallow” tasks intermittently to give yourself a brain break between tough tasks — just pick a schedule that agrees with your working style.
Last, don’t be afraid to tell your team about your deep work time. Whether it’s putting on headphones, working from home, or setting your Slack status, make it known to your team when you’re cracking down, so they can respect your focus time.
3. Batching – Knock out short tasks together.
Being “on a roll” feels great: you’re checking tasks off your to-do list, left and right. That’s where batching helps: you can get in the groove and figure out the most efficient way to do something.
“Batching is my secret weapon for productivity,” explained Teamweek’s Emily McGee. “With batching, I do the same task or type of task for an extended period of time. For example, instead of doing keyword research and coming up with new blog post ideas every day, I spend two full days at the beginning of the month doing keyword research and filling in the content calendar.”
”Batching requires you to be organized and plan ahead, but it saves you tons of time and ensures you aren’t context switching,” she said. “I batch everything from writing emails to attending meetings, and it makes me much more productive.”
4. Pomodoro – Avoid “Now what?” syndrome.
Ever find that you’re able to focus better when you’re on a tight schedule? The Pomodoro Technique gives you strict timing to help you blast through tasks and avoid distractions.
Pomodoro, named after a tomato-shaped timer, helps you train your brain to stay on track for short periods of time. It capitalizes on a sense of urgency to help you keep plowing through your work.
“It enables you to move through your tasks without having to think about what to do next,” said Sophie Worso from Focus Booster, an app that uses the Pomodoro Technique.
It’s pretty simple:
- Set your timer for 25 mins, and start working.
- When the time’s up, take a short 5-minute break.
- Repeat this for 4 intervals of 25 minutes.
- After the fourth working session, take a longer break (25 to 30 minutes)
What’s nice about Pomodoro is the feeling of growth and accomplishment: you get better at focusing for the entire 25 minutes over time, and you feel rewarded with a break at the end of each work session.
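If you would rather not install yet another app, the cycle is simple enough to script yourself. Below is a bare-bones, hypothetical Python version of the four steps above; the interval lengths mirror the ones listed, and the printed countdown simply stands in for a real timer or notification:

```python
import time

WORK_MINUTES = 25
SHORT_BREAK_MINUTES = 5
LONG_BREAK_MINUTES = 25   # the article suggests 25 to 30 minutes
SESSIONS_PER_CYCLE = 4

def countdown(minutes: int, label: str) -> None:
    """Announce an interval, then count it down one minute at a time."""
    print(f"{label}: {minutes} minutes")
    for remaining in range(minutes, 0, -1):
        time.sleep(60)  # wait one minute per tick
        print(f"  {remaining - 1} minutes left")

def pomodoro_cycle() -> None:
    """Four 25-minute work sessions with short breaks, then a long break."""
    for session in range(1, SESSIONS_PER_CYCLE + 1):
        countdown(WORK_MINUTES, f"Work session {session}")
        if session < SESSIONS_PER_CYCLE:
            countdown(SHORT_BREAK_MINUTES, "Short break")
    countdown(LONG_BREAK_MINUTES, "Long break")

if __name__ == "__main__":
    pomodoro_cycle()
```

Run it in a terminal and step away when it announces a break; the value is in the enforced rhythm, not in the code itself.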
5. SMART Framework – Set attainable goals.
It’s a lot easier to stay focused when you have a goal that you actually feel like you can achieve. If your goal is vague, can’t be measured, or you don’t have a goal at all, you’re more likely to fall into the context switching trap.
As Max Benz from Filestage points out, the SMART goals template is an old favorite that can help you plan reasonable daily and long term goals.
“Plan out where you need to be a month from now, a year from now or even longer. Put notes or a calendar on your wall so even when you do occasionally get bogged down in the details, you always remember where you’re trying to get to,” Benz said.
Take back your focus
“The best way to become more productive is not to increase your focus, but to decrease your distractions.” – Dylan Fernandez
On days when you’re feeling extra distracted, just remember, sometimes physically removing yourself and taking a walk outside is the easiest way to clear your head.
Looking for ways to help your whole team stay on track? Check out these 10 best tools to keep your team focused.
This article was originally published by Front: 5 techniques for staying focused. | https://lattice.com/library/5-techniques-for-staying-focused-at-work |
A 'Pink Frost' Remake Heralds The Return Of The Chills
In certain corners of the record-collecting world, few rock songs are revered as much as The Chills' "Pink Frost." The tune turns 30 this year, and to celebrate, the band has re-recorded it and included it as the B-side to its first single since 1995.
Before we get any further, if you're not familiar with The Chills or the original "Pink Frost," please stop what you're doing right now and listen to this.
Led by songwriter Martin Phillipps, the indie-rock band from Dunedin, New Zealand, helped kick off the "kiwi rock" movement — and arguably indie rock as we know it — in the early 1980s. Bands like The Clean, The Verlaines and The Bats took the D.I.Y. punk ethos of the late '70s and applied it to melodies that wouldn't feel out of place on a Kinks record. (A similar musical evolution was happening in the U.S., with R.E.M. leading the way.)
"Pink Frost" was recorded in 1982 but released two years later, after the death of the song's drummer Martyn Bull from leukemia, so Phillipps' distraught lyrics ("She won't move and I'm holding her hand") have always felt especially melancholy. Paired with the song's signature ethereal guitar and motorik drumming, it's a haunting experience that still stops me in my tracks.
Now, 30 years later, The Chills will soon release a new album. The band recorded an updated version of "Pink Frost" while in the studio, to reflect the way the band currently performs it in a live setting.
If you're a Chills fan, what do you think of re-recording a classic like "Pink Frost"? If you're new to the band, which version do you prefer? | |
Submitted by David Emmett on
It has been a strange weekend so far in Barcelona, with changing conditions once again the culprit. First, there was the heavy rain on Wednesday and Thursday, which left the track coated in fine sand and dust blown in from the Sahara. Then there is the rapidly changing weather: temperatures have been rising rapidly every day, with track temperatures 10°C higher on Saturday than they had been on Friday, with a similar increase expected again on Sunday. Track temperatures for the race are expected to be well over 50°C, spelling disaster for grip levels.
Completing the trifecta of problems, the Moto2 race is likely to leave a thick layer of Dunlop rubber on the surface, which will make grip levels even more unpredictable. "After Moto2, it will be worse," Michelin's Two Wheel Motorsports manager Piero Taramasso predicted on Saturday evening. "Many times this problem happens when you have aggressive asphalt, which is the case here, and on a track in very hot conditions, which is also the case. So I think that tomorrow after the Moto2 race, the conditions will be not as good as we would like."
Another day of track action and the running of the Moto2 race may help sweep some of the dust and sand from the track, but the rubber the Moto2 bikes leave behind in the forecast hot and humid conditions will leave the surface greasy and without grip. "The track will be cleaner, but without Michelin rubber on the track," Taramasso said. One step forward, two steps back.
Moving targets
Changing and unexpected conditions have made tire selection exceptionally difficult. Michelin put the tire allocation together to suit a normal Barcelona weekend, covering a range of temperatures and weather conditions. They didn't expect the track to be covered in sand, a freak occurrence that happens very occasionally. So while all six tires – three different compounds on the front, three on the rear – can do race distance, depending on bike and riding style, evaluating the ideal combination has been nearly impossible.
That has made it impossible for the riders to size up the competition, and for the media and other timesheet analysts to figure out what a realistic race pace will be, and what combination of tires they can expect to see their rivals try to race.
"There was a big confusion during the weekend," Andrea Dovizioso commented on Saturday afternoon. "The grip is very low. We are one second slower than last year – everybody is. With this characteristic everybody is struggling a lot to manage the tires. It’s a bit difficult to make the right decision because it was very unusual to see the Free Practice 4 with a lot of new tires and a lot of comparisons. I think until the race nobody will really take the decision."
Starting from the second row, Dovizioso was confident, but with little idea of how the race might actually play out. "I am quite happy with the bike," the factory Ducati rider said. "The speed is there. We are a big group with a similar pace but it’s very difficult to understand the real pace of the race of the competitors because everybody is struggling and not many riders did many laps, a race distance. It’s a bit difficult to understand. The races here are always normally very hard because the grip is very low, especially in the race. If it’s a bit lower than the practice it will be very difficult for everybody. I can’t really know how the race will be."
Topsy turvy
The grip levels have produced some fairly bizarre results along the way. On a low grip surface, it is usually the Hondas and Ducatis which shine, while the Yamahas and Suzukis struggle. Yet all four Yamahas are on the two front rows of the grid, Marc Márquez the only Honda among them. And Alex Rins would have been in the middle of the second row, had he not crashed on his final run. The Suzuki Ecstar rider had the strongest race pace in FP4, showing the strength of the Suzuki around Barcelona.
"I'm a little bit disappointed, because we had a small crash in qualifying, so maybe I was able to start on the first or second row," Rins said." But anyway, the rhythm was not so bad. I think compared to the other guys, with Fabio, maybe Márquez, we are more or less there. We will see what happens."
Rins had felt that the grip was not great, but he had been able to ride anyway, he said. "The grip level was not so bad for us. For sure a big difference compared to yesterday. The grip for us was very good this morning, this afternoon it was OK. As I said, it's a little bit different, the grip level. It looks like the track has grip, but in the moment that you are at the maximum angle and you try to open to get good traction and you start to slide a little bit, then you pick up the bike a little bit and you go. So you need to find something." The Suzuki had a great base package, which meant it could be fast in hot conditions and in cold, with high grip and with low grip.
Grip or not?
Marc Márquez was equally nonplussed by the way the bikes reacted to the low grip levels. "I don’t know," he said. "In Jerez that was a very high grip. We were very, very fast. Here we are struggling more with the grip. We just didn’t find the lap time in a good way, but in our side we know where the problem is. It’s just a matter of time. I’m struggling, but I’m there. So it’s not like I’m struggling and I’m a half second behind them. I’m struggling but in the race pace is the same. Qualifying practice is the same."
The grip was causing problems for everyone, causing Franco Morbidelli to have an off-throttle highside, and Jack Miller to crash in Q1. "I think definitely the track has changed compared to last year," the Pramac Ducati rider said. "There's a lot less grip. The track is super slippery. Scary in some points to be honest. Because, like you saw with Marc and Frankie this morning, we're having highsides on entry to the corner, which is not normal. And all it is, is rear brake. You touch a bit of rear brake with angle and all of a sudden you are losing the rear."
That was why he had crashed during Q1, Miller explained. "When I went out there we did the first three laps, the corner where Frankie highsided, I had my first moment, turn one I had two moments braking in. Because my style is that I brake-brake-brake and then when I start tipping in to the apex that's when I more or less start to use more of the rear brake to settle the bike. I'm not able to ride it like that here because there is absolutely zero edge grip on the rear coming into the corners. So I start having these highside moments. It's not useful, let's say."
Mystery man
The biggest mystery viewed from the outside is Marc Márquez. The Repsol Honda rider was on course to take pole position on his final flying lap, but a massive slide at Turn 4 saw him nearly off the bike. What was truly impressive, however, was the fact that he set his fastest time through the third sector – the section of the track immediately following the corner where he had to save the bike on his elbow and knee – before finally bailing on the lap in the final sector. Had he not given up on the lap time, he still would have lapped the track in under 1'40.
It is hard to tell exactly what Márquez' times mean for his race pace. The Repsol Honda rider has looked like he has struggled all weekend, spending an unusual amount of time following other riders. When asked repeatedly about this in the press conference, he became visibly peeved, pointing out that he did his fast times in FP4 when he was riding on his own. "I was just getting pushing, just getting my rhythm. We just tried to understand how to work with the front tire, but then suddenly in FP4 when I put the hard front, all the problems disappear and then I’m riding like I want. I did good lap times with race distance tires."
Valentino Rossi felt that Márquez following other riders was his way of sizing them up. "Marquez is not only fast but is always very, very clever during the practice because he always studies his rivals and he knows exactly where… is all under calculation, he knows exactly who to follow," Rossi said.
Márquez acknowledged he had learned a thing or two while following the Monster Energy Yamaha rider. "The Yamahas carry the speed in a very good way," he said. "Here in this circuit you need to carry the speed. They carry the speed. They have good traction. Looks like for the conditions of this weekend everybody expect Ducati versus Honda, Honda versus Ducati, and is opposite. Looks like the bikes are working better is Yamaha and Suzuki that have less torque and less power. Is interesting also to understand because we have one very long straight, but then thirteen corners that they are riding in a very good way."
Down at last
It was Fabio Quartararo who carried the speed best of all through the thirteen corners, the French rookie taking his second pole position from just his seventh race in MotoGP. This pole also came after he discounted being at 100% in Barcelona, as he had arm pump surgery just over a week ago. But here he is, beating the rest and taking pole.
It was not as easy as usual for the Petronas Yamaha SRT rider, however. He had his first ever crash in MotoGP during FP3. It is remarkable that it took so long, to be frank, as he has gone for four MotoGP tests, six full MotoGP weekends, and two and a half practice sessions before going down for the first time. Was it a result of pain from the surgery? Unlikely, if we are to believe Quartararo. "FP1 was a quite strange feeling on the arm. I still have the stitches, so I feel not pain but strange feeling on my arm. I think I never really raced with an injury, but this is not also an injury. This is something that we need to do for the next races. It’s just a matter of getting used to it."
If Quartararo wins on Sunday, he will become the youngest ever winner of a MotoGP race, taking that crown from Marc Márquez. It is his last chance for that record, though he seems to have little interest in it. The Frenchman seems far more focused on Assen, a race he believes he can win, rather than Sunday's race in Barcelona.
But Quartararo certainly has the pace, as does Marc Márquez and Alex Rins. The Suzuki and the Yamaha have less of a disadvantage at Barcelona, less than they had at Mugello, at least. It is a shorter straight which is not quite as fast as Mugello, and easier for the Suzuki at least, according to Alex Rins.
Rins to make it two?
It feels like Rins could get away and win from the front, if he can first get past the other riders. The Suzukis need a lead of at least a couple of tenths coming onto the front straight, Joan Mir said, if they are to stay ahead of the Ducatis and the Hondas. Rins can get a good start, but can he get ahead of the Honda of Márquez?
The Yamahas will obviously be a factor, with Fabio Quartararo starting from pole and obviously having strong pace, and Valentino Rossi on the second row, always finding a couple of tenths on Sunday. Maverick Viñales had an excellent qualifying session, but lost his front row start when he celebrated too early, not realizing he had crossed the line before the checkered flag came out. Viñales will now start from sixth, promoting Franco Morbidelli to the front row.
That moves Andrea Dovizioso up to fifth on the grid, a very good starting point. Sunday's race will in all likelihood come down to tire management, and finding a way to go as slow as possible while still staying ahead of your rivals. Dovizioso is a master of that particular dark art, and will be looking to deploy his skill in the race on Sunday.
Above all, the race feels both completely open, and full of potential surprises. Who could win? Any one of three or four riders. Who could podium? Take your pick of the top ten or so. Where will Joan Mir finish? His race pace suggests he could be on for a top six in Barcelona. What about Jorge Lorenzo? The Repsol Honda rider was straight through to Q2 on Saturday, and will start the race from tenth. He still has a lot of work to do, but there are real signs of progress, at last. And watch out for Pol Espargaro and the KTM, a rider who always has something extra at home.
This could be quite a race.
Gathering the background information for detailed articles such as these is an expensive and time-consuming operation. If you enjoyed this article, please consider supporting MotoMatters.com. You can help by either taking out a subscription, by making a donation, or by contributing via our GoFundMe page. You can find out more about subscribing to MotoMatters.com here.
Comments
When rubber isn't rubber
Interesting to know what the difference is between Michelin and Dunlop synthetic rubber, such that one does not provide grip for the other.
Any rubber boffins out there?
Piero Taramasso.
He is the man, read his statement in the article. Doesn't really tell us why but it must be the situation. Seems like it's not the same at every circuit.
Michelin versus Dunlop
It's not that one does not provide grip for the other. It's just that Michelin tyres hate the Dunlop rubber. However, the other way round is a lovely match. Moto2s can do the lap time a lot easier at the beginning of their sessions, when there is still fresh Michelin rubber.
Introduction
============
Mesenchymal stem cells (MSCs), also referred to as multipotent mesenchymal stromal cells, are present in all the organs throughout the body and play a key role in tissue regeneration. Aside from their ability to orchestrate regeneration processes, MSCs are newly proposed as critical players in host defence and inflammation [@b1].
Under physiological conditions, the oral cavity, gastrointestinal tract and skin harbour complex ecosystems of commensal bacteria that live in a mutually beneficial state with the host. In case of tissue injury or inflammatory diseases, stem cells are mobilized towards the site of damage, thus coming into close vicinity of bacteria and bacterial components. Bacterial infection of stem cells could even lead to long-term functional consequences for the host [@b2]. Recent reports suggest that MSCs may be able to actively participate in the control of infectious challenges by direct targeting of bacteria and through indirect effects on the host primary and adaptive immune response [@b3].
Oral microflora
===============
The oral cavity, like the skin, the respiratory tract and the gut, is home to a plethora of microbiota living in symbiosis with the host [@b4]. The oral microflora is known to contain over 700 species of aerobic and anaerobic bacteria [@b5]. These organisms can be isolated from tooth surfaces, periodontal pockets and other oral sites such as the tongue and oral mucous membranes [@b6]. Oral microbiota grow as complex, mixed, interdependent colonies organized in biofilms [@b7]. Reports in the literature suggest that oral bacterial biofilms may contain more than 10^5^ microorganisms [@b8], while the concentrations and compositions of pathogenic bacteria in the subgingival biofilm vary greatly depending on the local microenvironmental conditions [@b9],[@b10]. The bacterial genera most commonly represented in the oral cavity include the following: *Gemella*, *Granulicatella*, *Streptococcus*, *Veillonella*, *Neisseria*, *Haemophilus*, *Rothia*, *Actinomyces*, *Prevotella*, *Capnocytophaga*, *Porphyromonas*, *Fusobacterium*, *Corynebacterium*, *Cardiobacterium*, *Campylobacter*, *Atopobium* and *Bergeyella* [@b11]--[@b13]. It should be noted that almost 60% of the species detected by new molecular methods are not presently cultivable and remain uncharacterized [@b11].
The natural oral microflora is vital for the normal development and physiological integrity of the oral cavity. It also contributes to host defence by excluding exogenous microorganisms [@b14]. It is widely recognized that the maintenance of an ecologically balanced biodiversity of the microflora within the oral cavity is crucial not only to the oral health but also to the general health of the host [@b15]. Microbes have commensal relationships with their co-habitants, while being symbiotic with their host [@b16]. However, ecological shifts may lead to pathological conditions, which alter the relationships between microbes and the host [@b17]. In disease, pathogenic bacteria grow with disregard to their co-habitant bacteria and express their virulence properties, so that the host becomes infected or susceptible to infection [@b16].
Periodontitis
=============
Periodontitis is a bacterially induced inflammatory disease of the supporting tissues of the teeth. It represents one of the major dental diseases that affect human populations worldwide at high prevalence rates and has a huge economic impact on national health care systems [@b18]. In fact, periodontitis is characterized by progressive periodontal tissue destruction that may finally lead to the loosening and subsequent loss of teeth [@b19]. The predominant pathogens involved in periodontitis are *Aggregatibacter actinomycetemcomitans*, *Porphyromonas gingivalis*, *Prevotella intermedia*, *Fusobacterium nucleatum*, *Tannerella forsythia*, *Eikenella corrodens* and *Treponema denticola* [@b15]. In addition, several forms of uncultivable spirochetes are thought to play a major role in the pathogenesis of this disease [@b20].
Periodontal pathogens induce tissue destruction by activating the host defence. The infection of periodontal tissues is accompanied by the release of bacterial leucotoxins, collagenases, fibrinolysins and other proteases that break down host tissues and may result in gingival inflammation [@b21]. Specifically, microbial components, like lipopolysaccharide (LPS), have the capacity to activate macrophages and lymphocytes to synthesize and secrete a wide array of molecules including cytokines, prostaglandins, hydrolytic enzymes and tumour necrosis factor alpha, which in turn stimulate the effectors of periodontal tissue breakdown [@b22]. Cell activation occurs mainly through two members of the Toll-like receptor (TLR) family, TLR2 and TLR4 that are documented as predominant signalling receptors for most bacterial components [@b23],[@b24].
Once a periodontal pocket forms and becomes colonized by bacteria, the pathologic situation becomes irreversible [@b18]. The conventional periodontal treatment involves the mechanical removal of the pathogenic dental biofilm. Successful clinical outcomes such as probing depth reduction and gain of clinical attachment after treatment are well documented in a plethora of studies [@b25]--[@b27]. However, histological analyses of healed periodontal tissues reveal in most of the cases the presence of an epithelial lining along the treated root surfaces of the teeth, instead of true periodontal regeneration [@b28].
Dental stem cells
=================
Stem cells are defined by their capacity to self-renew and differentiate into multiple cell lineages. Among the most studied adult stem cell types are MSCs [@b29]. Friedenstein *et al*. first described bone marrow stem cells (BMSCs) as a heterogeneous population of multipotent cells derived from bone marrow aspirates with the ability to adhere to plastic surfaces and form colonies of fibroblast-like cells within the first days of cultivation [@b30],[@b31]. Although MSCs were originally isolated from the bone marrow, similar populations of mesenchymal precursors have been isolated from other tissues, including adipose tissue [@b32], amniotic fluid [@b33], foetal liver [@b34] and umbilical cord blood (UCB) [@b35].
During the last decades, rapid progress in dental research has shed light on the molecular and cellular biology of periodontal tissue development. Recently, multipotent cells have been successfully isolated from several dental tissues including dental pulp [@b36], dental follicle [@b37], exfoliated deciduous teeth [@b38] and the root apical papilla [@b39]. Several *in vitro* and *in vivo* studies on dental stem cells (DSCs) provide evidence of their multipotent character and their key role in periodontal regeneration [@b40]. It has been demonstrated that DSCs have a fibroblast-like morphology and are plastic-adherent. Similar to other stem cell populations, DSCs express several surface markers such as CD10, CD13, CD29, CD44, CD53, CD59, CD73, CD90 and CD105, and do not express CD34, CD45 or HLA-DR [@b41],[@b42]. Demonstration of self-renewal ability and multilineage differentiation capacity are additional indications of their stem cell phenotype. Indeed, it has been proven that DSCs are able to form single cell-derived colonies and differentiate into several lineages, when induced by special media *in vitro* [@b43].
Clinical relevance
==================
The identification of DSCs has stimulated interest in the potential use of cell-based therapies as prospective alternatives to existing therapeutic approaches for the repair and regeneration of the periodontium [@b44]. One of the critical requirements for the success of such therapeutic interventions would be the repopulation of the periodontal wound by *ex vivo* expanded progenitor populations or the mobilization of endogenous progenitor cells capable of promoting regeneration [@b45]. Specifically, DSCs grafts may support the restoration of the complex ultrastructure of the periodontal ligament and the dynamic functional relationships of its components. Numerous animal studies have already proved the regenerative potency of these cell populations *in vivo* [@b46].
However, one of the growing concerns in dental research is the exposure of DSCs to the inflamed microenvironment of periodontal pockets [@b47]. This may affect many cell properties, such as self-renewal, differentiation potential, cytokine production and the secretion of extracellular matrix compounds. Sorrell and Caplan demonstrated that multipotent cell grafts might trigger regenerative processes not only through direct commitment, but also through effects on infiltrating inflammatory or antigen-presenting cells [@b48]. Such a regenerative microenvironment may impel self-regulated regenerative cascades and limit the area of damage in the inflamed adult tissues [@b49]. Hence, a better understanding of cell behaviour at sites of bacterial infection appears to be a key strategy for the development of new approaches for periodontal regeneration.
*In vitro* experimental models
==============================
The microenvironment of a periodontal pocket is characterized by the constant presence of bacterial biofilms. This condition results in a continuous cross-talk of periodontal tissue cells with a wide variety of oral microorganisms. Further, in periodontitis, several types of host immune cells (*e.g*. neutrophils and macrophages) migrate to the site of inflammation [@b50]. A better understanding of the complex cell--bacteria interactions is essential for the development of successful periodontal therapies. While several *in vivo* models have already been used, an *in vitro* model that could sufficiently mimic the *in vivo* situation of inflamed periodontal tissues remains to be developed [@b51].
To date, most *in vitro* experimental settings have been based on the analysis of the effects of LPS on cells. Lipopolysaccharide is a major membrane component of Gram-negative bacteria and can be derived from several bacterial species, *e.g. Escherichia coli* or *P. gingivalis* [@b47],[@b52],[@b53]. The easy isolation method and the fact that LPS is responsible for many of the inflammatory responses and pathogenic effects of Gram-negative bacteria are the main arguments for the use of LPS in numerous *in vitro* experiments. Experimental settings using heat-inactivated or sonicated bacteria have also been proposed as models that may correspond to the *in vivo* condition of bacterial infection [@b54],[@b55]. Further methods used for the analysis of cell--bacteria interactions are based on the fact that periopathogenic bacteria produce a broad array of potential virulence factors apart from LPS that are released into the gingival crevicular fluid [@b56]. Thus, the culture of cells with bacterial pre-conditioned medium or the co-cultivation of cells and bacteria in transwell systems have been used to evaluate the secretion of soluble factors and the activation of cellular downstream cascades by bacteria [@b57],[@b58]. Although many biological effects can be elicited by non-viable bacteria, it is known that some cell responses require the presence of live bacteria [@b59].
Experimental models utilizing microorganisms in a planktonic state have been used to imitate periodontal infection [@b60]. Nevertheless, such systems may not adequately portray the bacterial challenge conferred by a polymicrobial, biofilm-induced disease such as periodontitis [@b61]. Thus, *in vitro* multispecies dental biofilm settings have been proposed as laboratory models that better mimic the environment of chronic periodontitis [@b62]--[@b64]. Finally, cell invasion is a common strategy of pathogens that facilitates their escape from the host immune system, access to nutrients, persistence and spread into tissues [@b65]. Recent studies have used viable bacteria as models for the analysis of host cell invasion processes such as bacterial adherence and internalization by cells [@b66],[@b67]. However, the subgingival bacteria that are closely correlated with periodontitis are mainly anaerobes. The co-culture of these bacteria with oxygen-requiring cells in conventional systems is not possible [@b68]. Therefore, one weak point of the experimental studies on periodontal infection is the fact that most *in vitro* settings are conducted under aerated conditions. Given the fact that the aerotolerance of strictly anaerobic pathogens like *P. gingivalis* is very low, the interpretation of such experimental results may not directly reflect the *in vivo* situation [@b69]. Until now, only a few models have been proposed utilizing direct contact between live obligate anaerobic bacteria and human cell lines under oxygen-free conditions [@b70],[@b71].
Influence of oral bacteria on stem cells
========================================
Effects on cell viability and proliferation of stem cells
---------------------------------------------------------
Cell proliferation is fundamental to tissue homoeostasis and can be controlled by either physiological or pathological conditions. Previous studies have demonstrated that LPS derived from periopathogenic bacteria may induce conflicting effects on the proliferation of periodontal ligament fibroblasts [@b72]--[@b74]. Currently, the possible effect of bacteria on the proliferative rates of multipotent cells is a focus of interest for several research groups. Kato *et al*. demonstrated that *P. gingivalis* LPS promoted cell proliferation in periodontal ligament stem cells (PDLSCs) [@b53]. Stimulation of TLR2 also led to enhanced proliferation of adult BMSCs [@b75]. Further, Jiang *et al*. and Buchon *et al*. proposed that intestinal stem cells are able to maintain tissue homoeostasis by increasing their proliferation rates to repair tissue damage at sites of infection through the JAK-STAT signalling pathway [@b76],[@b77].
In contrast, according to an *in vitro* study on canine adipose-derived MSCs (ADSCs), gastrointestinal microbes neither induced cell death nor diminished cell proliferation. Previous studies on dental follicle progenitors also demonstrated that cell viability of both dental follicle progenitor cells (DFPCs) and BMSCs was not affected by *P. gingivalis* LPS treatment [@b47],[@b78]. In addition, TLR ligands such as LPS and flagellin do not alter proliferation rates of a newly identified population of pluripotent UCB cells, which are termed unrestricted somatic stem cells [@b79]. Nevertheless, treatment with LPS and extracts from *Streptococcus mutans* was able to inhibit the proliferation of dental pulp stem cells (DPSCs) *in vitro* [@b80].
These heterogeneous effects of bacteria on the induction or inhibition of cell proliferation across studies could imply the complexity of the underlying mechanisms that rule the interactions between host cells and bacteria. Specifically, cell response to bacterial stimuli seems to be associated with the cell type, bacterial strain and specific bacterial components used in each experimental setting [@b81].
Effects on differentiation capacity of stem cells
-------------------------------------------------
The ability of stem cells to differentiate into multiple lineages is well documented. In particular, the differentiation capacity of DSCs along the osteogenic, chondrogenic, adipogenic and neurogenic lineages has been demonstrated by several research groups in recent years [@b82],[@b83]. Nevertheless, the impact of bacteria on the differentiation capacity of stem cells remains to be explored. In a recent study, Ronay *et al*. demonstrated that infected periodontal granulation tissues harbour cells expressing embryonic stem cell markers, and exhibit osteogenic capacities [@b84]. These results are in accordance with other studies demonstrating elevated alkaline phosphatase (ALP) activity, an early marker for osteogenic differentiation, and calcium deposition after *E. coli* LPS treatment of BMSCs [@b52].
Nevertheless, an increased ALP activity after LPS treatment may not always lead to formation of mineralized nodules *in vitro*. It is suggested that LPS may partly block the progression of molecular processes involved in osteogenic differentiation [@b47]. Interestingly, Abe *et al*. demonstrated that low concentrations of *P. gingivalis* extracts improve the osteogenic differentiation of human dental pulp-derived cells while high concentrations may inhibit ALP activity and bone sialoprotein gene expression [@b85]. High concentration of sodium butyrate, a major metabolic by-product of anaerobic Gram-negative periodontopathogenic bacteria, could inhibit the osteoblastic differentiation and mineralized nodule formation in an osteoblastic cell line *in vitro* [@b86]. In accordance with these results *P. gingivalis* LPS was shown to suppress the osteoblastic differentiation in both PDLSCs and DFPCs [@b47],[@b53]. Nomiyama *et al*. suggested that Gram-negative bacterial infection might down-regulate the odontoblastic properties of rat pulp progenitor cells after stimulation with *A. actinomycetemcomitans* LPS [@b87]. Treatment with LPS from *P. gingivalis* was shown to impair both ALP activity and the formation of mineral deposits in DPSCs [@b88]. In this context, TLR ligands have been proposed as possible regulators of stem cell differentiation state *in vitro* [@b78].
Further, *P. gingivalis* fimbriae are proposed as potent inducers of a monocyte/macrophage tumour cell line differentiation, *via* cyclic nucleotide-independent protein kinase C [@b89]. However, *P. gingivalis* fimbriae were proved unable to alter the osteoblastic differentiation and mineralization in long-term mouse calvarial osteoblast cultures [@b90].
Effects on the immunomodulatory properties of stem cells
--------------------------------------------------------
In recent years, the immunomodulatory functions of stem cells have been a focus of research [@b91]. Accumulating evidence indicates that MSCs may affect neighbouring innate and adaptive immune cells in two main ways: through direct cell--cell contact and through the release of a variety of soluble factors [@b92]--[@b97]. Gingiva-derived MSCs (GMSCs) were shown to have immunomodulatory functions. Specifically, GMSCs were able to suppress peripheral blood lymphocyte proliferation and induce expression of a wide panel of immunosuppressive factors including interleukin (IL)-10, indoleamine 2,3-dioxygenase, inducible nitric oxide synthase and cyclooxygenase 2 in response to the inflammatory cytokine interferon-γ [@b98]. However, the behaviour of cells under the direct influence of bacteria remains less understood.
Reed *et al*. recently demonstrated that human embryonic stem cell-derived endothelial cells (hESC-ECs) are TLR4 deficient but respond to bacteria *via* the intracellular receptor nucleotide-binding oligomerization domain-containing protein 1 (NOD1). The authors suggested that hESC-ECs may be protected from unwanted TLR4-mediated vascular inflammation, thus offering a potential therapeutic advantage [@b99]. On the other hand, studies on DFPCs revealed the expression of TLR2 and TLR4 at both the mRNA and protein level. Nevertheless, when these cells were treated with *P. gingivalis* LPS, no effect on the expression of pro-inflammatory cytokines was observed [@b47],[@b78]. Further, treatment with a TLR4 agonist augmented the suppressive potential of DFPCs and increased transforming growth factor-beta production [@b100]. In accordance with these results, canine ADSCs were shown to enhance immunomodulation after interacting with gastrointestinal microbes *in vitro* [@b101].
It is reported that LPS is able to induce the expression of the nuclear factor κB (NF-κB)-dependent gene IL-8 in DPSCs [@b102]. He *et al*. suggested that LPS-mediated transcriptional and post-translational up-regulation of IL-8 in DPSCs is a process that also involves TLR4, myeloid differentiation primary response gene 88 (MyD88), NF-κB and mitogen-activated protein kinases [@b103]. Further, Mei *et al*. demonstrated that MSCs improve survival in sepsis through the down-regulation of inflammation-related genes (e.g. IL-10 and IL-6) and a shift towards the up-regulation of genes involved in promoting phagocytosis and bacterial killing [@b104]. The direct interaction of MSCs with the oral bacteria *F. nucleatum* and *P. gingivalis* led to a lower secretion of IL-8 compared to a differentiated tumour cell line [@b66]. Raffaghello *et al*. support the immunomodulatory function of MSCs, showing the inhibition of neutrophil apoptosis because of the secretion of IL-6 by MSCs [@b105]. Nevertheless, these results should be interpreted carefully, as it is speculated that the cytokine induction profile of stem cells depends on the cell type, bacterial species and methodology used (*e.g*. period of stimulation) [@b79],[@b106].
Current challenges and future perspectives
==========================================
The rapid advancements in the field of dental research over the last few years could realize the promise of tissue regeneration through stem cells. Specifically, the demand for novel therapies against inflammatory diseases, like periodontitis, has created the need for a better understanding of the behaviour of the multipotent cells at sites of infection. Stem cells are supposed to support tissue homoeostasis by providing soluble factors, transdifferentiation or cell fusion [@b107]. Hence, studies demonstrating stem cell responsiveness to bacteria raise questions on the possible contribution of multipotent cells to both tissue regeneration and outbreak of inflammation. Selected reports on the impact of bacteria on stem cells are listed in Table[1](#tbl1){ref-type="table"}.
Table 1. Selected reports on the impact of bacteria on stem cells

| Biological impact | Cell populations | Bacterial species | Experimental model | Reference |
| --- | --- | --- | --- | --- |
| Cell viability | DFPCs | *P. gingivalis* | Treatment with LPS | [@b47] |
| | PDLSCs | *P. gingivalis* | Treatment with LPS | [@b53] |
| | DFPCs, BMSCs | *P. gingivalis* | Treatment with LPS | [@b78] |
| | USSCs | Undefined | Treatment with LPS and flagellin | [@b79] |
| | DPSCs | *S. mutans* | Treatment with LPS | [@b80] |
| Differentiation | DFPCs | *P. gingivalis* | Treatment with LPS | [@b47] |
| | BMSCs | *E. coli* | Treatment with LPS | [@b52] |
| | PDLSCs | *P. gingivalis* | Treatment with LPS | [@b53] |
| | USSCs | Undefined | Treatment with LPS and flagellin | [@b79] |
| | DPPCs | *A. actinomycetemcomitans* | Treatment with LPS | [@b87] |
| | DPSCs | *P. gingivalis* | Treatment with LPS | [@b88] |
| Immunomodulation | DFPCs | *P. gingivalis* | Treatment with LPS | [@b47] |
| | BMSCs | *P. gingivalis*, *F. nucleatum*, *A. actinomycetemcomitans* | Co-culture model | [@b69] |
| | DFPCs, BMSCs | *P. gingivalis*, *F. nucleatum* | Co-culture model | [@b70] |
| | DFPCs, BMSCs | *P. gingivalis* | Treatment with LPS | [@b78] |
| | USSCs | Undefined | Treatment with LPS and flagellin | [@b79] |
| | ESC-ECs | Undefined | Treatment with LPS and *C12*-*iE*-*DAP* | [@b99] |
| | DFSCs, DPSCs | Undefined | Treatment with LPS | [@b100] |
| | AMSCs | *S. typhimurium*, *L. acidophilus* | Co-culture model | [@b101] |
| | DPSCs | *P. gingivalis*, *E. coli*, *P. endodontalis* | Treatment with LPS | [@b102] |
| | DPSCs | Undefined | Treatment with LPS | [@b103] |
| | MSCs | Undefined | Polymicrobial model of sepsis | [@b104] |
Stem cells: AMSCs, adipose-derived mesenchymal stem cells; BMSCs, bone marrow stem cells; DFPCs, dental follicle progenitor cells; DPPCs, dental pulp progenitor cells; DPSCs, dental pulp stem cells; ESC-ECs, human embryonic stem cell-derived endothelial cells; MSCs, mesenchymal stem cells; PDLSCs, periodontal ligament stem cells; USSCs, unrestricted somatic stem cells. Bacteria: *A. actinomycetemcomitans*, *Aggregatibacter actinomycetemcomitans*; *E. coli*, *Escherichia coli*; *F. prausnitzii*, *Faecalibacterium prausnitzii*; *L. acidophilus*, *Lactobacillus acidophilus*; *P. endodontalis*, *Porphyromonas endodontalis*; *P. gingivalis*, *Porphyromonas gingivalis*; *S. mutans*, *Streptococcus mutans*; *S. typhimurium*, *Salmonella typhimurium*.
The notion that bacteria may stimulate and drive the regenerative potential of stem cells should be further explored. To date, data from *in vitro* studies utilizing single populations of cells challenged with bacterial components or mono-infections of planktonic bacteria may not adequately portray human periodontal diseases. Another significant parameter that should be taken into consideration is the oxygen concentration of *in vitro* models, as most of the pathogenic species implicated in the pathogenesis of periodontitis are obligate anaerobes. It is also remarkable that only a few studies have used PDLSCs, which are the main population of multipotent cells residing within the periodontium. Thus, the development of new experimental settings that better resemble the *in vivo* periodontal milieu seems to be crucial.
A better understanding of the beneficial effects of bacteria on stem cells may allow future interventions based on cell priming with bacterial components prior to transplantation into sites of tissue destruction. Even the colonization of inflamed tissues with specific bacterial species that promote the mobilization of tissue-resident multipotent cell populations could become part of new therapeutic approaches. On the other hand, the extent of stem cells' involvement in immunomodulation remains to be clarified. Both immunosuppression and stimulation of host immune responses regulated by stem cells could be used as advanced tools against bacterially induced inflammation. In conclusion, the identification of intracellular signalling pathways regulating the multipotency and immunomodulation of stem cells exposed to bacteria may enable the development of successful therapeutic interventions in inflammatory diseases.
Conflicts of interest
=====================
The authors confirm that there are no conflicts of interest.
Author contributions
====================
Kyriaki Chatzivasileiou and Katja Kriebel contributed substantially to the conception and design of the study and wrote the paper. Hermann Lang, Bernd Kreikemeyer and Gustav Steinhoff contributed to the conception and critical revision of the article and provided the final approval of the version to be published.
| |
Over the last decade, Brazil has become the world's sixth largest economy, third largest democracy, and one of the key players on the global stage. It is the world's fifth largest nation in physical size, exceeded only by Russia, China, the United States, and Canada. By far the largest country in Latin America, Brazil occupies nearly half the land mass of South America and borders every South American country except Chile and Ecuador. Brazil is a fascinating nation of contrasts and contradictions—of poverty and wealth, of the privileges and the deprivations of race and class, and of economic leaders employing cutting-edge technology while many labor under difficult conditions. After 20 years of authoritarian rule following the military coup of 1964, social movements, opposition politicians, and some social and political elites forced a negotiated end to the dictatorship and wrote the democratic constitution of 1988. The once-imprisoned labor leader Luiz Inácio Lula da Silva of the Workers' Party was then elected to two successful presidential terms. We will see that the realities of business, society and politics in Brazil are complex and fascinating as we explore this endearing nation.
This course module is a component of all other courses offered during our programs in Brazil. For example, students enrolled in our "International Business" course will have this module included (please see the Daily Schedule above). This means that no matter which course students select, they will always have this "Introduction to Brazil" component included. Our objective is to ensure that all students who attend our study abroad programs will return home with a deep insight into the region in which they are studying, understanding its culture, history, and current business and political environment.
Students are not assumed to have background familiarity with the history and geography of Latin America. It is not required for students to take any prerequisite courses before taking this class. Knowledge of Spanish or Portuguese is NOT required for this course.
Students will learn about Brazil through both professional and cultural visits. Experiences from these visits will then be discussed during our daily "Introduction to Brazil" sessions.
This material explores the country of Brazil, with an overview of culture, society, politics, business, economics and development. Special attention will be paid to the rapid growth of Brazil on the global stage. Topics may include: Brazil and the International Business Environment; Foreign Investment; Growth of the Middle Class; BRICS; Mercosul; Brazil and its Neighbors; Relations with China and other Emerging Markets; G-20 and International Politics; Preparations for the upcoming World Cup and Olympics; Intellectual Property Rights and Development; Security, Safety and the War on Drugs; Sustainable Development; Transitions from Authoritarian Rule; Civil-Military Relations; Transition to Democracy; Parties and Elections in Brazil; Religion; Political Mobilization; and Civil Society.
What has Brazil done well in its transition from dictatorship to democracy? What lessons can be learned? Our class will look at the factors that have led to Brazil's successful transition to emerging BRIC star and have allowed democracy to flourish.
Cultural aspects of Brazilians - music, traditions, dance, culture, “Jeito Brasileiro”, pragmatism, optimism, and more.
More issues to include: Language, Religion, Media, Technology, Race, Crime, Security, Inequality, Poverty, and more.
Historical importance of Sao Paulo, Belo Horizonte, Rio de Janeiro and the new capital Brasilia.
Off-shore “Pre-Sal” energy discoveries and its implications on Brazil and world energy markets.
Students may take either one or two courses.
After class students are free to lunch on their own, with friends or with professors. After lunch, students will have free time and will periodically attend cultural and professional visits. | http://www.summitstudyabroad.com/introduction-to-brazil---course-module.html |
Photo: “Election MG 3455“, by Rama, licensed under CC BY-SA 2.0 FR, Hue modified from the original
Rosenfeld, Bryn. “State Dependency and the Limits of Middle Class Support for Democracy.” Comparative Political Studies (2020): 0010414020938085.
Abstract
Scholars have long viewed the middle class as an agent of democratization. This article provides the first rigorous cross-national analysis of middle class regime preferences, systematically investigating the importance of an authoritarian state’s economic relationship with the middle class. Using detailed survey data on individual employment histories from 27 post-communist countries, I show that, under autocracy, state-sector careers diminish support for democracy, especially among middle class professionals. The results are robust to changes in the measurement of both the middle class and democracy support. I also show that neither selection nor response bias, redistributive preferences, communist socialization, or transition experiences can explain the results. The findings imply that a state-supported middle class may, in fact, delay democratization. | https://www.illiberalism.org/state-dependency-and-the-limits-of-middle-class-support-for-democracy/ |
Life Matters Coaching Ltd is based in Nottingham and offers positivity and clarity through coaching. We support the development of individuals and organisations throughout the East Midlands.
We work with organisations who are looking to develop their brightest and best and who view coaching as a powerful development tool. Coaching demonstrates you value your staff, and empowers them to make great decisions at work and in their own lives.
At Life Matters we collaborate with individuals to develop and achieve goals in their work and personal lives. This leads to increased satisfaction, well-being and productivity. Coaching offers ‘me time’ to explore life’s challenges, bringing about change in a client’s current and future behaviour. We achieve this by providing focus and awareness in our interactions between coach and client.
Get in touch to chat today. | http://www.lifematterscoaching.co.uk/ |
Quantification of geogenic and anthropogenic levels of rare earth elements and yttrium (REY) in mussels and fish (and duckweed) from major European rivers and selected lakes, and evaluation of the bioavailability of anthropogenic REY in river and lake waters.
Context: Rare earth elements and yttrium (REY) are crucial to a wide range of modern technologies owing to their chemical, optical, electro-optic, and paramagnetic properties. The increasing demand relative to global REE production makes them technologically ‘critical elements’. Their widespread use in agriculture (e.g., fertilizers), animal production (e.g., micronutrients), technology (e.g., fluid cracking catalysts), and medicine (e.g., contrast agents) leads to increased emissions into the environment. As a consequence, REY are now considered to be ‘emerging contaminants’ for which negative effects on environmental health have been suggested. It is, therefore, important to evaluate the current state of European rivers and lakes with regard to geogenic and anthropogenic REY. Under environmental conditions, natural REY tend to bind to surfaces of particles and nanoparticles/colloids, resulting in (ultra)low dissolved concentrations in natural waters. However, very little is known of the particle-reactivity of anthropogenic REY and their (eco)toxicity and potential impact on aquatic organisms.
Objectives and approach: Trace metals have been shown to bioaccumulate in organisms such as plants, mussels, shellfish, fish or mammals and this bioaccumulation of trace metals increases along the food chain. Toxic effects may occur when organisms are exposed to elevated concentrations, which is usually due to their anthropogenic release into the environment. An evaluation of the REY distribution in the tissues, organs, shells and bones of mussels and fish (and in duckweed) in European rivers and selected lakes will show if and how these organisms accumulate geogenic and anthropogenic REY and how these elements transfer along the food chain. As humans typically form the end of the food chain and anthropogenic REE have already been identified in drinking water and beverages, these results will also provide vital information on how the potentially toxic REE behave in organisms and may give new insights into potential effects of long-term exposure to elevated concentrations of (anthropogenic) REY. This project (ESR 5) is closely related to and will investigate organisms from the same rivers and lakes the waters of which are studied with a focus on their geogenic and anthropogenic REY inventory in an accompanying project (ESR 4) at Jacobs University.
Presentation of the research project (cooperative aspect)
This PhD position is within the framework of a European ITN project named PANORAMA: “EuroPean trAining NetwOrk on Rare eArth elements environMental trAnsfer: from rock to human” involving 15 PhD positions.
Under the supervision of Prof. Dr. Michael Bau (and colleagues at the different institutions), the PhD student will perform:
– sampling, on-site sample preparation and measurements at major rivers and selected lakes within the EU (field work, JUB, IST);
– sample preparation and subsequent chemical analyses of mussels (incl. shells), fish and duckweed by ICP-OES and ICP-MS using trace element separation and pre-concentration methods if necessary (JUB, IST);
– definition of baselines for REY in freshwater mussels, fish and duckweed (JUB);
– identification of “hot spots” for anthropogenic REY (micro)contamination of organisms in EU rivers and lakes (JUB, HAW);
– incubation/growth experiments using duckweed and (micro)contaminated river and lake waters (JUB, HAW).
The project involves a strong collaboration with the following institutions, including mandatory 2-months research stays (secondments): Instituto Superior Técnico (IST), Lisbon/Portugal, and University of Applied Science Hamburg (HAW), Hamburg/Germany.
The PhD student will be also involved in scientific/soft-skills meetings and in research activities conducted in other laboratories/companies from Europe and associated countries.
An important component of the training will be the participation to 4 main major training events:
WS1 (December 2020) – REE as emerging contaminants: Properties, uses and dissemination – Germany – fundamental REE biogeochemistry and currently known anthropogenic REE inputs into the environment
SS1 (May 2021) – AMD and REE contamination mitigation – Portugal – Management and remediation solutions for AMD in old mining areas and management of WEEE recycling areas
WS2 – Colloids and nanoparticles as REE vectors – France – Structural characterization of colloids and nanoparticles by innovative and fine spectroscopic and scattering techniques: X-ray absorption, fluorescence and scattering, light scattering; REE interactions with bearing phases
SS2 – (Eco)toxicology of REE – Germany – (Eco)toxicological concepts and approaches, physico-chemical properties of REE for bioavailability, ecotoxicity and environmental risk
In addition to these major milestones of the program, the PhD students will 1) continuously develop their core research skills via their own research project locally and within the network while at secondments and conferences, 2) receive a mandatory amount of hard and soft-skills training specific to their own doctoral school, along with mentoring by joint supervising bodies, 3) use conferences both as dissemination events for ESRs results and network events for progress reports and evaluations, and 4) collaborate into practical activities aimed at network-structuring legacy deliverables.
PANORAMA’s research objective is to elucidate the man-induced environmental dissemination of REE and the associated effects on the environmental health. For that purpose, interdisciplinary approaches are required combining geochemistry, ecotoxicology, hydrology, chemical analysis and coupling field monitoring, original in and ex situ experimental set-up and modelling from the element speciation to the environmental impact
PANORAMA’s key aim is to set-up an optimal scientific and non-scientific training to the understanding and forecasting of the environmental impacts of new emerging pollutants such as REE. | https://www.joshswaterjobs.com/jobs/22067/ |
RumaJapara creative workshop was established in 2015 with a mission to preserve the legacy of Indonesian local wisdom that has been living in Jepara for hundreds of years. Jepara, a small city in Central Java, is well known as the centre of wood furniture both locally and internationally.
The craftsmanship inherited from generation to generation is inseparable from the Jepara people’s way of life. But recently this craftsmanship has slowly begun to fade. RumaJapara, together with local craftsmen who master the knowledge of wood and carving, started this humble workshop to share that knowledge with people who love working and designing with carving, whether using traditional skills and motifs or going further towards a contemporary approach.
Generally we have two types of courses. The first one is ‘INGGIL’. This course is designed for people who want to bring the heritage into their design work. There are two themes, Traditional and Contemporary. Each theme is divided into three levels: Basic, Intermediate and Advanced.
The second one is ‘SENANG’. This course is designed for people who want to learn the heritage at a glance. There is a one-day creative workshop and a four-day creative workshop. | http://www.rumajapara.com/about-us
Automatic speech recognition tools have strong potential for facilitating language documentation. This blog note reports on highly encouraging tests using automatic transcription in the documentation of Yongning Na, a Sino-Tibetan language of Southwest China. After twenty months of fieldwork (spread over twelve years, from 2006 to 2017), 14 hours of speech had been recorded, of which 5.5 hours were transcribed (200 minutes of narratives and 130 minutes of morphotonology elicitation sessions). Oliver Adams, the author of Persephone, an open-source software tool for developing multilingual acoustic models, volunteered to experiment with these data. He trained a single-speaker automatic speech transcription tool over the transcribed materials and applied it to untranscribed audio files. The error rate is low: on the order of 17% of errors in phoneme identification. This makes the automatic transcriptions useful as a canvas for the linguist, who corrects mistakes and produces the translation in collaboration with language consultants.
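As a side note for readers unfamiliar with how such figures are computed: a phoneme error rate like the 17% quoted above is conventionally obtained by aligning the automatic output against the corrected reference transcription with an edit-distance algorithm, then dividing the number of substitutions, insertions and deletions by the length of the reference. The sketch below is a generic illustration of that calculation in Python; the function names and the toy phoneme sequences are invented for the example and are not taken from the Persephone tool.

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two phoneme sequences (lists of strings)."""
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i                                   # deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j                                   # insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,            # deletion
                          d[i][j - 1] + 1,            # insertion
                          d[i - 1][j - 1] + cost)     # match or substitution
    return d[len(ref)][len(hyp)]


def phoneme_error_rate(ref, hyp):
    """Edit distance normalised by the length of the reference transcription."""
    return edit_distance(ref, hyp) / len(ref)


# Toy example: one substituted phoneme out of six gives a rate of about 0.17.
reference  = ["n", "ɑ", "˩", "h", "ĩ", "˥"]
hypothesis = ["n", "ɑ", "˩", "h", "i", "˥"]
print(phoneme_error_rate(reference, hypothesis))      # 0.1666...
```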
Today I am posting my initial report to Oliver Adams, written in the field in Yunnan, on May 12th, 2017. I consider this date as a landmark in my work documenting Yongning Na! This document remained confidential until today because its online availability would have de-anonymized a submission that was under double-blind review for a workshop in Australia (the Australasian Language Technology Association Workshop, Dec. 6-8, 2017). This paper, which presents Oliver’s work to an audience of language technology specialists, is now available here. | https://himalco.hypotheses.org/date/2017/11 |
Onions are highly sensitive to underwatering and overwatering. Research has shown that a single episode of moderate water stress any time from the four-leaf stage to the eight-leaf stage can result in reduced bulb single-centeredness. With these facts in mind, it is tempting to err on the side of overwatering – however, overwatering can rot the crop, either in the field or on the shelf.
The checkbook method to watering your onions
Think of watering your onions like a checkbook. The soil is the bank account and water is either added or taken away. Rain and irrigation are deposits while water used by the crop and water evaporated from the soil through evapotranspiration are withdrawals. The goal of the Checkbook Watering Method is to estimate the amount of water in the crop root zone and prevent the crop from experiencing water stress.
Water depletion varies with the soil texture. Silt and clay are fine-textured soils and hold more water than coarse-textured soils such as sand. Sandy soils require more frequent irrigation. The amount of evapotranspiration (ET) is variable depending on the amount of solar radiation, wind, air temperature and humidity. ET data is often available from weather stations in specific production regions. Once you know how much you are losing to ET, you can resupply this loss with either irrigation or rainfall.
For most areas in the country, the daily forecasted ET falls in the 0.35-0.45 range during the summer months (July-August). During the spring (May-June) the ET will be much smaller, in the 0.1-0.3 range. In the winter, the ET can be extremely low, in the 0.01-0.1 range. That is what needs to be replaced by watering. Here’s a link to Daily FRET (Forecast Reference EvapoTranspiration).
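To make the bookkeeping concrete, here is a minimal sketch of the checkbook logic in Python. The starting balance, the allowable-depletion threshold and the daily values below are made-up illustration numbers, not Dixondale Farms recommendations; in practice you would plug in your own soil's water-holding capacity and the FRET values for your area.

```python
def checkbook_schedule(start_balance, daily_et, daily_rain, minimum_balance, refill_to):
    """Track the root-zone water balance day by day and return the irrigation
    amount applied each day (same units as the inputs, e.g. inches)."""
    balance = start_balance
    irrigation_applied = []
    for et, rain in zip(daily_et, daily_rain):
        balance += rain - et                  # rain is a deposit, ET a withdrawal
        irrigation = 0.0
        if balance < minimum_balance:         # account running low: irrigate
            irrigation = refill_to - balance
            balance = refill_to
        irrigation_applied.append(round(irrigation, 2))
    return irrigation_applied


# One illustrative mid-summer week with a single small rain event.
et_forecast = [0.40, 0.42, 0.38, 0.45, 0.41, 0.39, 0.44]
rainfall    = [0.00, 0.00, 0.25, 0.00, 0.00, 0.00, 0.00]
print(checkbook_schedule(1.5, et_forecast, rainfall, minimum_balance=1.0, refill_to=1.5))
```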
When to water your onions
Onions should be watered immediately after planting. During the first month after planting, water demand is fairly low. Onions are developing new roots, and most of these initial roots are shallow compared to other crops. Most of the roots are within the top 10″ of soil. Light, frequent irrigations should be applied.
During the second month after planting, the root system of the onion begins to expand into the top 20″ of the soil. As the onion grows, foliage becomes denser and leaf area increases, which leads to more transpiration and increased irrigation needs.
The bulbing stage in onion growing
During the bulbing stage, water demand gradually increases to fill the rings with water and complete the bulbing process. In addition, this usually equates to warmer weather when the ET will be at its highest. Frequent, heavy irrigations will be required until the bulbs reach the marketable size and the tops start to fall over. At that time, irrigation should cease to allow the crop to dry in the field and increase shelf life.
Have more questions about watering your onions? Contact Customer Service at (830) 876-2430 or email [email protected]. We look forward to another season of providing the highest quality onion plants for you!
4 Comments
kola starnes (February 23, 2021 at 5:59 pm):
Hello, I’m in California. I’m a first time onion grower. I’m growing short day onions that I purchased from you. I planted the onions in a raised bed in November 2020. I notice that my onion leaves are leaning forward and not straight up and some are to the side. Is this normal? Should I be concerned? Please advise.

Emily (February 28, 2021 at 10:44 pm):
You can email a picture of them to us to look over if you’d like. From what I can read, they may have a bit too much water. When they have too much water, the tops can get heavy and fall prematurely due to the weight of the water content.

Harry J. Lyness (March 31, 2021 at 2:17 pm):
Can I use corn gluten as a weed preventer? Can I sprinkle it on the ground after putting the onions in the ground?

Emily (March 31, 2021 at 8:02 pm):
Yes, that will work as a pre-emergent herbicide. That’s what is in our Natural Weed and Feed product. | https://www.onionpatch.dixondalefarms.com/checkbook-method-to-watering-onions/
Officially founded in 1984 as the 84th city in Los Angeles County, West Hollywood is bordered on the north by the Hollywood Hills neighborhood of Los Angeles, on the east by the Hollywood District of Los Angeles, on the west by the city of Beverly Hills, and on the south by the Fairfax District of Los Angeles. The irregular border of the city is highlighted in the city logo and was largely formed from the unincorporated Los Angeles County area that had not become part of the surrounding cities. West Hollywood benefits from a very dense, compact urban form with small lots, a mix of land uses, and a walkable street grid. Commercial corridors include the nightlife and dining focused on the Sunset Strip, along Santa Monica Boulevard, and the Avenues of Art & Design along Robertson, Melrose, and Beverly near the Pacific Design Center. West Hollywood is a 1.9 square mile city situated between glitzy Beverly Hills on the west and Hollywood to the east, a destination often referred to as the playground of the stars due to its entertainment industry influences and its position at the southern base of the Hollywood Hills, a hilly area dotted with celebrity homes. | http://luxurylahomes.com/areas/west-hollywood/
The utility model provides a herbal tea preparation system. The system comprises a purification system, a filling device and a stacking device which are connected in sequence, wherein a bottle taking claw for grabbing bottles is arranged in the stacking device, the purifying system comprises a main filtering device, the main filtering device comprises a pre-filter and a final filter which are arranged side by side, the final filter is connected with a cleaning device, and the main filtering device is further connected with a stock solution inlet, a pure water inlet and a filling system. Through effective purification, the purity and quality of herbal tea production are improved, and the herbal tea is safe to drink. | |
The Apoctolith is a craftable Hardmode rogue weapon that throws a gravity-affected abyssal brick that shatters on contact with blocks and enemies, and inflicts the Crush Depth debuff. Whenever the brick critically hits an enemy, the struck target loses 15 defense.
Performing a stealth strike with the Apoctolith will cause the next brick to stun enemies with the Eutrophication debuff.
Its best modifier is Flawless.
Crafting

Recipe

Crafting Station: Mythril Anvil or Orichalcum Anvil

| Ingredient(s) | Amount |
| --- | --- |
| Throwing Brick | 100 |
| Voidstone | 20 |
| Lumenyl | 8 |

Result: Apoctolith (1)
Notes
- This weapon does not benefit from the effects of the Invisibility Potion or the Shadow Potion. | https://calamitymod.fandom.com/wiki/Apoctolith |
Think about your city. What if the needs of young children and their caregivers were prioritised (not just considered) in city decision making? How would it look different from what it is today?
Decades of city planning that prioritised car mobility over the health and happiness of people have created built environments that are often hostile to young children and their caregivers.
Cracks in a sidewalk, long wait times for buses, or streets without benches are inconvenient for everyone, but can severely limit the freedom of mobility for young children, caregivers with strollers, or older adults with mobility devices. Similarly, a lack of good public spaces and gathering places limits opportunities for connection and can increase social isolation (a significant problem among older adults and first time parents).
When we create streets that are safe for children to toddle, walk, bike, skip, or pause to explore we create a public asset that works for all residents. When we create parks and public spaces that are inviting for young children and their caregivers, we not only provide direct health benefits to those users, but we improve the overall quality of life for all people living in the city.
Credits: Gyorgy Papp Photography (http://www.papphoto.com/)
How can our cities do better? How can we plan, design, build, manage, and govern our cities so that their youngest citizens can thrive and meet their full potential? One very simple step is to give them the opportunity to participate and be heard.
At 8 80 Cities, we wanted to understand how cities around the world are tackling this challenge to ensure the voices of young children and caregivers are represented in their city-building processes. Through our own work as community engagement specialists, we know there is a critical gap in knowledge and research on these topics.
Partnering with the Bernard van Leer Foundation earlier this year, we set out to find cities and communities that are leading the charge when it comes to engaging young children and caregivers. After all, research tells us that experiences in the earliest years of life have the most profound effect on a person’s long-term healthy development.
We compiled background research and conducted interviews with leading practitioners in the field. We uncovered stories and ideas from cities around the world that demonstrate creative and effective methods for engaging pregnant women, young children, and caregivers in diverse projects related to the built environment as well as for the delivery of services. With 21 case studies from 16 different countries, we were successful in identifying innovative and effective techniques for engaging this important but underrepresented demographic.
While engagement of these groups is crucial, it is not a measure of success in and of itself. It’s what cities do with that engagement that counts. Engagement is a step in the process towards creating cities that work for everyone, including our youngest and most vulnerable.
We were disappointed to find that there were no model cities embedding the larger principles of inclusive civic engagement and the perspectives of young children and their caregivers into broader decision-making processes. No matter how innovative or successful the case studies are, the impact they had was limited to the scope of whatever project they were a part of. They are inspiring and effective examples that other cities can and should learn from, but they are not scaled to the needs and pressures that cities face today.
This report is a starting point, an important reminder of how far we still need to go. Since most children on this planet are now living in cities, isn’t it time we give them a voice in shaping the cities they want to grow up in? | https://www.880cities.org/lets-hear-children-engaging-young-children-parents-caregivers-city-building-means-creating-better-cities/ |
The Visual Effects Producer is at the heart of a production alongside the Visual Effects Supervisor. You are required to manage all aspects of a show, typically involving: planning and scheduling of resources, client management, keeping track of scope/budget changes and ensuring the project comes in on time and on budget, whilst maintaining the highest quality of work.
VFX Producer Duties/Responsibilities
• Work with Exec producer in keeping the client up to date with scope changes and bids.
• Plan and schedule resources with the Visual Effects Supervisor and HOD’s to create an effective approach to the visual effects work
• Track costs and progress of project by reporting and send weekly to Head of Production & Executive management team
• Supervise assigned production staff
• Responsible for ensuring the project delivers on time and to agreed margin
• Key frontline contact to client
• Streamlining production processes and workflow
• Ensure the progression of work to the client’s satisfaction
• Ensure all facility departments are aware of the production’s requirements, for example disk space and rendering
Required skills and experience
• Strong understanding of the VFX workflow
• Proven experience in similar role
• Advanced knowledge of shotgun software
• Self-driven, good communicator and a great team player
• Excellent organisational skills
• Experience in multi-tasking and problem solving
• Must be fluent in English, spoken and written
• Calm, considerate, and friendly
Axis is committed to equal opportunities. Click here to find out more. | https://axisstudiosgroup.com/careers/jobs/vfx-producer/ |
Shannon is famous for having founded information theory with one landmark paper published in 1948. But he is also credited with founding both digital computer and digital circuit design theory in 1937, when, as a 21-year-old master's student at MIT, he wrote a thesis demonstrating that electrical application of Boolean algebra could construct and resolve any logical, numerical relationship. It has been claimed that this was the most important master's thesis of all time. Shannon contributed to the field of cryptanalysis during World War II and afterwards, including basic work on code breaking.
Biography
Shannon was born in Petoskey, Michigan. His father, Claude Sr (1862–1934), a descendant of early New Jersey settlers, was a self-made businessman and for a while, Judge of Probate. His mother, Mabel Wolf Shannon (1890–1945), daughter of German immigrants, was a language teacher and for a number of years principal of Gaylord High School, Michigan. The first 16 years of Shannon's life were spent in Gaylord, Michigan, where he attended public school, graduating from Gaylord High School in 1932. Shannon showed an inclination towards mechanical things. His best subjects were science and mathematics, and at home he constructed such devices as models of planes, a radio-controlled model boat and a wireless telegraph system to a friend's house half a mile away. While growing up, he worked as a messenger for Western Union. His childhood hero was Thomas Edison, who he later learned was a distant cousin. Both were descendants of John Ogden, a colonial leader and an ancestor of many distinguished people.
Boolean theory
In 1932 he entered the University of Michigan, where he took a course that introduced him to the works of George Boole. He graduated in 1936 with two bachelor's degrees, one in electrical engineering and one in mathematics. Later he began his graduate studies at the Massachusetts Institute of Technology (MIT), where he worked on Vannevar Bush's differential analyzer, an analog computer.
While studying the complicated ad hoc circuits of the differential analyzer, Shannon saw that Boole's concepts could be used to great utility. A paper drawn from his 1937 master's thesis, A Symbolic Analysis of Relay and Switching Circuits, was published in the 1938 issue of the Transactions of the American Institute of Electrical Engineers. It also earned Shannon the American Institute of Electrical Engineers' Alfred Noble Award in 1940. Howard Gardner, of Harvard University, called Shannon's thesis "possibly the most important, and also the most famous, master's thesis of the century."
Victor Shestakov, at Moscow State University, had proposed a theory of electric switches based on Boolean logic earlier than Shannon, in 1935, but the first publication of Shestakov's result took place in 1941, after the publication of Shannon's thesis.
In this work, Shannon proved that Boolean algebra and binary arithmetic could be used to simplify the arrangement of the electromechanical relays then used in telephone routing switches, then expanded the concept and also proved that it should be possible to use arrangements of relays to solve Boolean algebra problems. Exploiting this property of electrical switches to do logic is the basic concept that underlies all electronic digital computers. Shannon's work became the foundation of practical digital circuit design when it became widely known among the electrical engineering community during and after World War II. The theoretical rigor of Shannon's work completely replaced the ad hoc methods that had previously prevailed.
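To illustrate the idea in modern terms rather than in relays: a Boolean expression can be read directly as a switching circuit (series contacts for AND, parallel contacts for OR), and conversely any such circuit computes a Boolean function. The one-bit adder below is a standard textbook example of the kind of arithmetic-from-logic construction Shannon's thesis made systematic; it is an illustration written for this article, not a circuit taken from the thesis itself.

```python
def full_adder(a, b, carry_in):
    """Add three input bits using only Boolean operations (the kind of function
    Shannon showed could be realised by an arrangement of relay contacts)."""
    sum_bit   = a ^ b ^ carry_in                  # 1 when an odd number of inputs are 1
    carry_out = (a & b) | (carry_in & (a ^ b))    # 1 when at least two inputs are 1
    return sum_bit, carry_out


def add_binary(x_bits, y_bits):
    """Ripple-carry addition of two equal-length little-endian bit lists."""
    carry, result = 0, []
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    result.append(carry)
    return result


# 3 (011) + 6 (110) = 9 (1001); bits are listed least-significant first.
print(add_binary([1, 1, 0], [0, 1, 1]))           # [1, 0, 0, 1]
```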
Vannevar Bush suggested that Shannon, flush with this success, work on his dissertation at Cold Spring Harbor Laboratory, funded by the Carnegie Institution headed by Bush, to develop similar mathematical relationships for Mendelian genetics, which resulted in Shannon's 1940 PhD thesis at MIT, An Algebra for Theoretical Genetics.
In 1940, Shannon became a National Research Fellow at the Institute for Advanced Study in Princeton, New Jersey. At Princeton, Shannon had the opportunity to discuss his ideas with influential scientists and mathematicians such as Hermann Weyl and John von Neumann, and even had the occasional encounter with Albert Einstein. Shannon worked freely across disciplines, and began to shape the ideas that would become information theory.
Wartime research
Shannon then joined Bell Labs to work on fire-control systems and cryptography during World War II, under a contract with section D-2 (Control Systems section) of the National Defense Research Committee (NDRC).
He met his wife Betty when she was a numerical analyst at Bell Labs. They married in 1949.
For two months early in 1943, Shannon came into contact with the leading British cryptanalyst and mathematician Alan Turing. Turing had been posted to Washington to share with the US Navy's cryptanalytic service the methods used by the British Government Code and Cypher School at Bletchley Park to break the ciphers used by the German U-boats in the North Atlantic. He was also interested in the encipherment of speech and to this end spent time at Bell Labs. Shannon and Turing met at teatime in the cafeteria. Private archives from Bell Labs suggest that a Visual Binary encoding system was developed via their collaboration at this time.
Turing showed Shannon his seminal 1936 paper that defined what is now known as the "Universal Turing machine" which impressed him, as many of its ideas were complementary to his own.
In 1945, as the war was coming to an end, the NDRC was issuing a summary of technical reports as a last step prior to its eventual closing down. Inside the volume on fire control a special essay titled Data Smoothing and Prediction in Fire-Control Systems, coauthored by Shannon, Ralph Beebe Blackman, and Hendrik Wade Bode, formally treated the problem of smoothing the data in fire-control by analogy with "the problem of separating a signal from interfering noise in communications systems." In other words it modeled the problem in terms of data and signal processing and thus heralded the coming of the information age.
His work on cryptography was even more closely related to his later publications on communication theory. At the close of the war, he prepared a classified memorandum for Bell Telephone Labs entitled "A Mathematical Theory of Cryptography," dated September, 1945. A declassified version of this paper was subsequently published in 1949 as "Communication Theory of Secrecy Systems" in the Bell System Technical Journal. This paper incorporated many of the concepts and mathematical formulations that also appeared in his A Mathematical Theory of Communication. Shannon said that his wartime insights into communication theory and cryptography developed simultaneously and "they were so close together you couldn’t separate them". In a footnote near the beginning of the classified report, Shannon announced his intention to "develop these results ... in a forthcoming memorandum on the transmission of information."
While at Bell Labs, he proved that the one-time pad is unbreakable in his World War II research that was later published in October 1949. He also proved that any unbreakable system must have essentially the same characteristics as the one-time pad: the key must be truly random, as large as the plaintext, never reused in whole or part, and kept secret.
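A minimal sketch of the scheme whose security that work established: encryption and decryption are the same XOR operation, and the security argument rests entirely on the key-handling rules listed above (a truly random key, as long as the message, never reused, kept secret). The Python below is only an illustration of the construction, not code connected to Shannon's proof.

```python
import secrets

def otp_xor(data: bytes, key: bytes) -> bytes:
    """XOR each byte of the message with the corresponding key byte.
    The same function encrypts and decrypts, since (p ^ k) ^ k == p."""
    assert len(key) == len(data), "the key must be exactly as long as the message"
    return bytes(d ^ k for d, k in zip(data, key))


message    = b"ATTACK AT DAWN"
key        = secrets.token_bytes(len(message))   # random key, as long as the plaintext
ciphertext = otp_xor(message, key)
assert otp_xor(ciphertext, key) == message       # decryption recovers the plaintext
# Perfect secrecy holds only if the key is never reused, in whole or in part.
```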
Postwar contributions
In 1948 the promised memorandum appeared as "A Mathematical Theory of Communication", an article in two parts in the July and October issues of the Bell System Technical Journal. This work focuses on the problem of how best to encode the information a sender wants to transmit. In this fundamental work he used tools in probability theory, developed by Norbert Wiener, which were in their nascent stages of being applied to communication theory at that time. Shannon developed information entropy as a measure for the uncertainty in a message while essentially inventing the field of information theory.
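The central quantity of that 1948 paper can be written down in a few lines: for a source emitting symbol i with probability p_i, the entropy H = -Σ p_i log2(p_i) measures the average uncertainty, in bits per symbol. The snippet below is a generic illustration of the formula, not code from any particular implementation.

```python
from collections import Counter
from math import log2

def entropy(probabilities):
    """Shannon entropy in bits: H = -sum(p * log2(p)) over nonzero probabilities."""
    return -sum(p * log2(p) for p in probabilities if p > 0)


def empirical_entropy(text):
    """Entropy of the single-symbol frequency distribution of a string."""
    counts = Counter(text)
    total = sum(counts.values())
    return entropy(c / total for c in counts.values())


print(entropy([0.5, 0.5]))                 # 1.0 bit: a fair coin toss
print(entropy([1.0]))                      # 0.0 bits: a certain outcome
print(empirical_entropy("abracadabra"))    # about 2.04 bits per letter
```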
The book, co-authored with Warren Weaver, The Mathematical Theory of Communication, reprints Shannon's 1948 article and Weaver's popularization of it, which is accessible to the non-specialist. Shannon's concepts were also popularized, subject to his own proofreading, in John Robinson Pierce's Symbols, Signals, and Noise.
Information theory's fundamental contribution to natural language processing and computational linguistics was further established in 1951, in his article "Prediction and Entropy of Printed English", proving that treating whitespace as the 27th letter of the alphabet actually lowers uncertainty in written language, providing a clear quantifiable link between cultural practice and probabilistic cognition.
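As a small illustration of the quantity involved, the zero-order (frequency-only) entropy H = -Σ p_i log2 p_i of a text over the 27-symbol alphabet of letters plus space can be computed as in the sketch below. The sample string is arbitrary, and this frequency-only figure is only an upper bound: Shannon's prediction experiments indicated that, once longer-range structure is taken into account, printed English carries on the order of one bit per character.

```python
from collections import Counter
from math import log2

def zero_order_entropy(text: str) -> float:
    # Empirical entropy in bits per symbol over the 27-symbol alphabet
    # (letters plus the space), the setting of "Prediction and Entropy
    # of Printed English".
    symbols = [c for c in text.lower() if c.isalpha() or c == " "]
    counts = Counter(symbols)
    total = len(symbols)
    return -sum((n / total) * log2(n / total) for n in counts.values())

sample = "information theory studies the quantification of information"
print(round(zero_order_entropy(sample), 2))  # about 3.8 bits/symbol for this short sample
```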
Another notable paper published in 1949 is "Communication Theory of Secrecy Systems", a declassified version of his wartime work on the mathematical theory of cryptography, in which he proved that all theoretically unbreakable ciphers must have the same requirements as the one-time pad. He is also credited with the introduction of sampling theory, which is concerned with representing a continuous-time signal from a (uniform) discrete set of samples. This theory was essential in enabling telecommunications to move from analog to digital transmission systems in the 1960s and later.
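For reference, the sampling theorem mentioned here is usually stated as follows; the notation is the standard textbook form rather than a quotation from Shannon's paper.

```latex
% Nyquist–Shannon sampling theorem (standard statement): if a signal x(t)
% contains no frequencies above B hertz, it is completely determined by
% samples taken at any rate f_s \ge 2B, and can be reconstructed as
x(t) = \sum_{n=-\infty}^{\infty} x\!\left(\frac{n}{f_s}\right)
       \operatorname{sinc}\!\left(f_s t - n\right),
\qquad \operatorname{sinc}(u) = \frac{\sin(\pi u)}{\pi u}.
```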
He returned to MIT to hold an endowed chair in 1956.
Hobbies and inventions
Outside of his academic pursuits, Shannon was interested in juggling, unicycling, and chess. He also invented many devices, including rocket-powered flying discs, a motorized pogo stick, and a flame-throwing trumpet for a science exhibition. One of his more humorous devices was a box kept on his desk called the "Ultimate Machine", based on an idea by Marvin Minsky. Otherwise featureless, the box possessed a single switch on its side. When the switch was flipped, the lid of the box opened and a mechanical hand reached out, flipped off the switch, then retracted back inside the box. Renewed interest in the "Ultimate Machine" has emerged on YouTube and Thingiverse. In addition he built a device that could solve the Rubik's cube puzzle.
He is also considered the co-inventor of the first wearable computer along with Edward O. Thorp. The device was used to improve the odds when playing roulette.
Legacy and tributes
Shannon came to MIT in 1956 to join its faculty and to conduct work in the Research Laboratory of Electronics (RLE). He continued to serve on the MIT faculty until 1978. To commemorate his achievements, there were celebrations of his work in 2001, and there are currently six statues of Shannon sculpted by Eugene L. Daub: one at the University of Michigan; one at MIT in the Laboratory for Information and Decision Systems; one in Gaylord, Michigan; one at the University of California, San Diego; one at Bell Labs; and another at AT&T Shannon Labs. After the breakup of the Bell system, the part of Bell Labs that remained with AT&T was named Shannon Labs in his honor.
Robert Gallager has called Shannon the greatest scientist of the 20th century. According to Neil Sloane, an AT&T Fellow who co-edited Shannon's large collection of papers in 1993, the perspective introduced by Shannon's communication theory (now called information theory) is the foundation of the digital revolution, and every device containing a microprocessor or microcontroller is a conceptual descendant of Shannon's 1948 publication: "He's one of the great men of the century. Without him, none of the things we know today would exist. The whole digital revolution started with him."
Shannon developed Alzheimer's disease, and spent his last few years in a Massachusetts nursing home. He was survived by his wife, Mary Elizabeth Moore Shannon; a son, Andrew Moore Shannon; a daughter, Margarita Shannon; a sister, Catherine S. Kay; and two granddaughters.
Shannon was oblivious to the marvels of the digital revolution because his mind was ravaged by Alzheimer's disease. His wife mentioned in his obituary that had it not been for Alzheimer's "he would have been bemused" by it all.
Other work
Shannon's mouse
Theseus, created in 1950, was a magnetic mouse controlled by a relay circuit that enabled it to move around a maze of 25 squares. Its dimensions were the same as those of an average mouse. The maze configuration was flexible and could be modified at will. The mouse was designed to search through the corridors until it found the target. Having travelled through the maze, the mouse could then be placed anywhere it had been before and, because of its prior experience, go directly to the target. If placed in unfamiliar territory, it was programmed to search until it reached a known location, and then it would proceed to the target, adding the new knowledge to its memory and thus learning. Shannon's mouse appears to have been the first artificial learning device of its kind.
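The learning behaviour described above can be mimicked in a few lines of software. The sketch below is only a toy analogue (Theseus itself was a relay circuit, and the maze, square labels and variable names here are invented for illustration); it captures the essential trick that, once the target has been found, the device remembers, for every square it has visited, which neighbouring square leads toward the target.

```python
from collections import deque

# Toy software analogue of Theseus: the maze is an adjacency list of open
# passages between squares, and the goal is the square marked "target".
maze = {
    "A": ["B"], "B": ["A", "C", "E"], "C": ["B", "D"],
    "D": ["C"], "E": ["B", "F"], "F": ["E"],
}
target = "D"

# A breadth-first pass from the target over the explored maze builds the
# "memory": next_step[s] is the move to make from square s toward the target.
next_step = {}
frontier = deque([target])
while frontier:
    square = frontier.popleft()
    for neighbour in maze[square]:
        if neighbour != target and neighbour not in next_step:
            next_step[neighbour] = square
            frontier.append(neighbour)

# Placed on any previously visited square, the "mouse" now goes straight to the target.
position = "F"
path = [position]
while position != target:
    position = next_step[position]
    path.append(position)
print(path)  # ['F', 'E', 'B', 'C', 'D']
```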
Shannon's computer chess program
In 1950 Shannon published a groundbreaking paper on computer chess entitled Programming a Computer for Playing Chess. It describes how a machine or computer could be made to play a reasonable game of chess. His process for having the computer decide on which move to make is a minimax procedure, based on an evaluation function of a given chess position. Shannon gave a rough example of an evaluation function in which the value of the black position was subtracted from that of the white position. Material was counted according to the usual relative chess piece values (1 point for a pawn, 3 points for a knight or bishop, 5 points for a rook, and 9 points for a queen). He considered some positional factors, subtracting ½ point for each doubled pawn, backward pawn, and isolated pawn. Another positional factor in the evaluation function was mobility, adding 0.1 point for each legal move available. Finally, he considered checkmate to be the capture of the king, and gave the king the artificial value of 200 points. Quoting from the paper:
- The coefficients .5 and .1 are merely the writer's rough estimate. Furthermore, there are many other terms that should be included. The formula is given only for illustrative purposes. Checkmate has been artificially included here by giving the king the large value 200 (anything greater than the maximum of all other terms would do).
The evaluation function is clearly for illustrative purposes, as Shannon stated. For example, according to the function, pawns that are doubled as well as isolated would have no value at all, which is clearly unrealistic.
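A minimal sketch of the kind of evaluation function described above, using the material values and the 0.5 and 0.1 coefficients quoted from the paper. The position encoding and the pre-computed pawn-structure and mobility counts are placeholders invented for this example, not part of Shannon's paper.

```python
# Shannon's illustrative evaluation f(P): White's score minus Black's score.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 200}

def evaluate(white, black):
    """white/black are dicts of piece counts plus pre-computed positional terms."""
    def side_score(side):
        material = sum(PIECE_VALUES[p] * n for p, n in side["pieces"].items())
        weak_pawns = side["doubled"] + side["backward"] + side["isolated"]
        return material - 0.5 * weak_pawns + 0.1 * side["mobility"]
    return side_score(white) - side_score(black)

start = {"pieces": {"P": 8, "N": 2, "B": 2, "R": 2, "Q": 1, "K": 1},
         "doubled": 0, "backward": 0, "isolated": 0, "mobility": 20}
print(evaluate(start, start))  # 0.0 -- the symmetric starting position is balanced
```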
The Las Vegas connection: information theory and its applications to game theory
Shannon and his wife Betty also used to go on weekends to Las Vegas with M.I.T. mathematician Ed Thorp, and made very successful forays into blackjack using game-theory-type methods co-developed with fellow Bell Labs associate, the physicist John L. Kelly Jr., based on principles of information theory. They made a fortune, as detailed in the book Fortune's Formula by William Poundstone and corroborated by the writings of Elwyn Berlekamp, Kelly's research assistant in 1960 and 1962. Shannon and Thorp also applied the same theory, later known as the Kelly criterion, to the stock market with even better results. Over the decades, Kelly's formula has become part of mainstream investment theory, and prominent, successful billionaire investors such as Warren Buffett, Bill Gross and Jim Simons use Kelly methods. Warren Buffett met Thorp for the first time in 1968. It is said that Buffett uses a form of the Kelly criterion in deciding how much money to put into various holdings. Elwyn Berlekamp also applied the same approach at Axcom Trading Advisors, an alternative investment management company that he had founded. Berlekamp's company was acquired by Jim Simons and his Renaissance Technologies Corp hedge fund in 1992, after which its investment instruments were subsumed into (or essentially renamed as) Renaissance's flagship Medallion Fund. But as Kelly's original paper demonstrates, the criterion is only valid when the investment or "game" is played many times over, with the same probability of winning or losing each time, and the same payout ratio.
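For readers who want the formula itself: in its standard form (not quoted from Kelly's paper), the criterion says that on a bet paying net odds of b to 1 with win probability p, the optimal fraction of bankroll to stake is f* = (bp - q)/b, where q = 1 - p. A minimal sketch:

```python
def kelly_fraction(p: float, b: float) -> float:
    # Standard Kelly criterion: win probability p, net odds of b-to-1.
    # Returns the fraction of bankroll to wager (0 means "don't bet").
    q = 1.0 - p
    return max(0.0, (b * p - q) / b)

# Example: a 52% chance of winning an even-money bet (b = 1) -> stake 4% of bankroll.
print(round(kelly_fraction(0.52, 1.0), 4))  # 0.04
```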
The theory was also exploited by the famous MIT Blackjack Team, which was a group of students and ex-students from the Massachusetts Institute of Technology, Harvard Business School, Harvard University, and other leading colleges who used card-counting techniques and other sophisticated strategies to beat casinos at blackjack worldwide. The team and its successors operated successfully from 1979 through the beginning of the 21st century. Many other blackjack teams have been formed around the world with the goal of beating the casinos.
Claude Shannon's card count techniques were explained in Bringing Down the House, the best-selling book published in 2003 about the MIT Blackjack Team by Ben Mezrich. In 2008 the book was adapted into a drama film titled 21.
Shannon's maxim
Shannon formulated a version of Kerckhoffs' principle as "the enemy knows the system". In this form it is known as "Shannon's maxim".
Awards and honors list
- Alfred Noble Prize, 1939
- Morris Liebmann Memorial Prize of the Institute of Radio Engineers, 1949
- Yale University (Master of Science), 1954
- Stuart Ballantine Medal of the Franklin Institute, 1955
- Research Corporation Award, 1956
- University of Michigan, honorary doctorate, 1961
- Rice University Medal of Honor, 1962
- Princeton University, honorary doctorate, 1962
- Marvin J. Kelly Award, 1962
- University of Edinburgh, honorary doctorate, 1964
- University of Pittsburgh, honorary doctorate, 1964
- Medal of Honor of the Institute of Electrical and Electronics Engineers, 1966
- National Medal of Science, 1966, presented by President Lyndon B. Johnson
- Golden Plate Award, 1967
- Northwestern University, honorary doctorate, 1970
- Harvey Prize, the Technion of Haifa, Israel, 1972
- Royal Netherlands Academy of Arts and Sciences (KNAW), foreign member, 1975
- University of Oxford, honorary doctorate, 1978
- Joseph Jacquard Award, 1978
- Harold Pender Award, 1978
- University of East Anglia, honorary doctorate, 1982
- Carnegie Mellon University, honorary doctorate, 1984
- Audio Engineering Society Gold Medal, 1985
- Kyoto Prize, 1985
- Tufts University, honorary doctorate, 1987
- University of Pennsylvania, honorary doctorate, 1991
- Basic Research Award, Eduard Rhein Foundation, Germany, 1991
- National Inventors Hall of Fame inducted, 2004
See also
- Shannon–Fano coding
- Shannon–Hartley theorem
- Nyquist–Shannon sampling theorem
- Noisy channel coding theorem
- Rate distortion theory
- Information theory
- Channel Capacity
- Confusion and diffusion
Further reading
- Claude E. Shannon: A Mathematical Theory of Communication, Bell System Technical Journal, Vol. 27, pp. 379–423, 623–656, 1948.
- Claude E. Shannon and Warren Weaver: The Mathematical Theory of Communication. The University of Illinois Press, Urbana, Illinois, 1949. ISBN 0-252-72548-4
- Rethnakaran Pulikkoonattu - Eric W. Weisstein: Mathworld biography of Shannon, Claude Elwood (1916–2001)
- Claude E. Shannon: Programming a Computer for Playing Chess, Philosophical Magazine, Ser.7, Vol. 41, No. 314, March 1950. (Available online under External links below)
- David Levy: Computer Gamesmanship: Elements of Intelligent Game Design, Simon & Schuster, 1983. ISBN 0-671-49532-1
- Mindell, David A., "Automation's Finest Hour: Bell Labs and Automatic Control in World War II", IEEE Control Systems, December 1995, pp. 72–80.
- David Mindell, Jérôme Segal, Slava Gerovitch, "From Communications Engineering to Communications Science: Cybernetics and Information Theory in the United States, France, and the Soviet Union" in Walker, Mark (Ed.), Science and Ideology: A Comparative History, Routledge, London, 2003, pp. 66–95.
- Poundstone, William, Fortune's Formula, Hill & Wang, 2005, ISBN 978-0-8090-4599-0
- Gleick, James, The Information: A History, A Theory, A Flood, Pantheon, 2011, ISBN 9780375423727
Shannon videos
- Shannon's video machines
- Shannon - father of the information age
- AT&T Tech Channel's Tech Icons - Claude Shannon
External links
- C. E. Shannon, An algebra for theoretical genetics, Massachusetts Institute of Technology, Ph.D. Thesis, MIT-THESES//1940–3 (1940) Online text at MIT
- Shannon's math genealogy
- Shannon's NNDB profile
- Works by or about Claude Shannon in libraries (WorldCat catalog)
- A Mathematical Theory of Communication
- Communication Theory of Secrecy Systems
- Communication in the Presence of Noise
- Summary of Shannon's life and career
- Biographical summary from Shannon's collected papers
- Video documentary: "Claude Shannon - Father of the Information Age"
- Mathematical Theory of Claude Shannon In-depth MIT class paper on the development of Shannon's work to 1948.
- Retrospective at the University of Michigan
- Shannon's University of Michigan profile
- Notes on Computer-Generated Text
- Shannon's Juggling Theorem and Juggling Robots
- Color Photo of Shannon, Juggling
- Shannon's paper on computer chess, text
- Shannon's paper on computer chess, PDF (175 KiB)
- Shannon's paper on computer chess, text, alternate source
- A Bibliography of His Collected Papers
- A Register of His Papers in the Library of Congress
- The Technium: The (Unspeakable) Ultimate Machine
- The Most Beautiful Machine. (aka the "Ultimate Machine") It's a communication based on the functions ON and OFF.
- Guizzo, "The Essential Message: Claude Shannon and the Making of Information Theory"
- Claude Shannon, Edward O. Thorp, Fortune's Formula
- Claude Shannon: Founding Father of Electronic Communication Age, Dream 2047, December 2006, Shivaprasad Khened
| https://en-academic.com/dic.nsf/enwiki/3094
Pat Brister, St. Tammany Parish President, announced the adoption of 43 animals from the St. Tammany Parish Department of Animal Services shelter during their participation in the nationwide 2018 Clear the Shelters event on Saturday, August 18, 2018. During this event, all adoption fees were waived. According to www.cleartheshelters.com, over 90,000 pets were adopted nationwide on this day.
“In 2017, the Department of Animal Services adopted out nearly 1000 animals, and returned 362 lost pets to their owners,” said Pat Brister, St. Tammany Parish President. “We can only do this through the help of our neighbors and friends — our residents — who choose to adopt an animal in need. We encourage everyone to consider adoption first when choosing a pet.”
All animals adopted from the Department of Animal Services are socialized, spayed or neutered, up-to-date on their vaccines, and microchipped. Animals available for adoption can be viewed at www.stpgov.org/departments/animal-services by clicking on the Pet Harbor and Pet Finder links. Animal Services is located at 31078 Highway 36 in Lacombe, with adoption hours from 8 a.m. to 4 p.m., Monday through Saturday.
See a photo gallery of animals who found their forever homes on Saturday, here.
The mission of Animal Services is to balance the health, safety and welfare needs of people and animals in St. Tammany Parish by: protecting the rights of people from dangers and nuisances caused by uncontrolled animals, ensuring the legal protection of animals from mistreatment, and promoting, motivating and enforcing responsible pet ownership. The Animal Shelter provides a safe and clean facility for stray animals or unwanted pets. Follow us on Facebook at www.facebook.com/animalservices. | http://stpgov.org/residents/news/item/3761-43-pets-from-the-department-of-animal-services-find-families-during-the-2018-clear-the-shelters-event |
Twenty-six people, including two children, were bitten by a stray dog at Vallam village in Chengalpattu, reports the New Indian Express.
The victims were treated at the Government Hospital in Chengalpattu Medical College hospital but the incident has reportedly created a rabies scare in the area.
Dr Palani, deputy Director of Health Service, Chengalpattu, told TNIE, "The dog has bitten 26 people including 7 women. However officials are yet to confirm whether the dog was infected with rabies. The field medical officers will identify the dog tomorrow and follow up actions will be taken."
The victims of the attack have all reportedly been administered anti-rabies injections. Most of them, according to the report, suffered injuries to their hands and legs.
Meanwhile, the dog that went on a biting spree was caught by a team of dog catchers from Blue Cross on Wednesday evening, as per a report in The Times of India.
The last such reported incident in Tamil Nadu was in 2014, when a stray dog went on a biting spree injuring 45 people at different places in Rajapalayam town in the district of Virudhunagar.
In the neighbouring state of Kerala however, the stray dog menace has led to several deaths and even became a matter of debate in the Assembly.
A Supreme Court appointed panel found 5,948 cases of stray dog attacks in the capital city of Thiruvananthapuram in 2015-16 alone.
In addition to a lack of expert dog catchers, trained vets and poor infrastructural support, the panel found that lack of waste management was the main contributing factor to the rise of stray dogs. | https://www.thenewsminute.com/article/26-people-tn-s-chengalpattu-area-bitten-stray-dog-57658 |
Despite recent electoral defeats for the left, social protest and mobilisation in Argentina suggest the Pink Tide's decline may be overstated
The resurgence of the right in Latin America – from the recent electoral victories of the MUD alliance in Venezuela to the attempted impeachment (and possible constitutional coup) against Dilma Rousseff and the PT in Brazil to the presidential victory of perennial right-wing politician Mauricio Macri in Argentina – has upended politics in the region and left many decrying (or proclaiming) the death of the “Pink Tide”, which has seen left-wing governments elected across the continent since the turn of the century. It is on the latter case – to the recent and ongoing changes in Argentina – that this post concentrates.
Over the past decade, Latin America has been a beacon of hope for those on the left – from the radical anti-imperialist and socialist rhetoric and (to a lesser extent) political and economic transformation led by Hugo Chávez in Venezuela and Evo Morales in Bolivia, to the relative successes of significant social spending and state-sponsored industrial and export-led development implemented by Lula and Rousseff in Brazil and Nestor Kirchner and Cristina Fernández de Kirchner in Argentina.
But for critics, it has been a period that mirrors the worst excesses of the populism of the twentieth century, with inefficient state patronage driving unsustainable, inefficient, and ultimately corrupt patterns of economic growth.
The new right has used this perception to its advantage, gaining domestic and international support amongst constituencies that oppose the redistributive programmes and nationalisations that have constituted the most high-profile changes brought about by the leftist governments of various stripes.
And, for now at least, it is a strategy that appears to be working.
But does this rapid rise of the right across the region constitute an end to progressive politics in Latin America? Or does such a fatalistic analysis – a view found on both the left and right – misunderstand the Pink Tide itself?
To understand these transformations requires stepping away from the institutional transformations at the level of the state and public policy that have been so prominent. It requires an understanding of the purpose(s) and practice(s) of resistance by the array of social movements, labour activists and organisations, peasant movements, land and workplace occupations, and even insurgent movements that made this progressive moment (and perhaps also the backlash to it we are now seeing) possible.
Taking Argentina as an example, the explosion of social protest that met the 2001 debt crisis laid the foundations for a seemingly dramatic shift in the political economy of development. From decades of neoliberal transformation under first the military dictatorship of the 1970s and 1980s and then the civilian governments of Raul Alfonsín and Carlos Menem, the 2003 election of Nestor Kirchner appeared to be a direct response to the demands of the mobilised masses of the unemployed, the newly-impoverished middle classes, and the popular sectors of the left for a reversal of privatisation, fiscal austerity, and state retrenchment.
Under Kirchner (and later under Cristina Fernández) employment programmes increased exponentially, with poverty levels declining from 57 percent in 2003 to 30 percent in 2007 and unemployment from 20 percent to 8 percent in the same period. Alongside this, minimum wage legislation lifted the minimum wage from representing only 29.9 percent of the basic basket of goods in 2003 to 100.4 percent in 2007.
This social policy shift, moreover, was made possible by the context of rapid growth – particularly in the recovering industrial manufacturing sector. As demonstrated by Christopher Wylde, targeted policy measures after 2003, focused on dynamic industrial manufacturing and including a surge in manufactured exports to 31 percent of GDP, have been at the crux of new employment generation and associated poverty reduction.
Yet whilst addressing many of the immediate concerns and demands of the protesters, it has become increasingly clear that this new model for development was limited in its scope.
What marked out the protests and mobilisations that occurred around and after 2001 was the deliberate effort to confront and transform not just immediate issues of poverty and unemployment, but the deeper structural conditions that brought these about.
From unemployed workers’ movements to worker-recuperated factories, new realities were conceived of based on an alternative vision of society and, in many cases, went beyond simple defensive mobilisations designed to ameliorate the worst effects of the crisis.
In response, the Kirchners have led a de-politicisation of mobilisation by radical unemployed workers' organisations and the worker-recuperated factories through social policy programmes. An expansion of the role of the National Institute for Associative Activities and the Social Economy (INAES) and the Ministry for Social Development brought social demands under the remit of the state and institutionalised them, for example by confining access to state funds to organisations legally registered as NGOs.
So what does this process in Argentina tell us about the Pink Tide? And what are the implications for the emerging fatalistic prognosis of progressive governance in the region?
Importantly it is clear that this doesn’t echo many of the critiques from the left that the Pink Tide (and “Kirchnerismo” in particular) was a betrayal of the protesters and accompanying social movements, nor was it simply a continuation of the neoliberal political economy masked beneath limited social reform.
Instead, it demonstrates that despite the veracity of protest, mobilisation, and autonomous organising after 2001, there was never sufficient scope within existing institutions for the emergence of a concrete alternative.
Taking MacDonald and Ruckert’s definition of the emerging post-neoliberal consensus as a starting point, the Pink Tide represented a significant discontinuity of the progressive political economy strategies within the widely acknowledged macroeconomic continuities with neoliberalism.
What defines the Pink Tide – and perhaps can offer hope in spite of the dramatic resurgence of the right in the region – is that this continuity of protest and mobilisation (albeit in discrete and fragmented forms) and the persistent attempts to suppress it – either by the Kirchners or Macri – represents a continuity within this discontinuity itself.
This continuity can be seen, for example, with the continuing growth of the empresas recuperadas – the “reclaimed factories” of worker self-managed enterprises that, springing up in response to the 2001 crisis, have developed a momentum of their own. The Open Faculty Programme reports that of the 311 establishments currently occupying 13,462 workers, 63 occupying over 3,000 workers were established between 2010 and 2013.
As Dinerstein argues in the case of changing social mobilisation in Argentina:
“nothing ‘went wrong’ with revolution in Argentina in 2001 as many left activists asked themselves at party meetings…QSVT [que se vayan todos] transformed autonomous organising into the art of organising hope…this is a demand that contains the not yet within it, according to the concrete and material conditions provided by the context and relations that produce the utopian demand. Concrete utopias cannot remain intact as abstract utopias do, for they belong to the material world and are constantly reshaped by struggles”
It is the persistence – and now emerging failure and replacement – of efforts to contain and appropriate these struggles from below that was the most important feature of the Pink Tide and that, in turn, undermines the fatalistic prognosis of its demise.
Mobilisations have been and continue to be the source of genuine, progressive – even radical – change and with their continuity, to echo Dinerstein once more, there remains hope. | http://speri.dept.shef.ac.uk/2016/05/02/looking-beyond-the-end-of-the-latin-american-pink-tide/ |
Quick reference for a 50 amp RV plug wiring diagram. ... What I would like is a schematic from the breaker to the junction box to the RV.
Installing the 50 amp 120/240 volt 3 pole 4 wire grounding service ... It is a misconception that the 50 amp RV service is something special.
The 50 amp 120/240 volt 3 pole 4 wire grounding service: ... Almost ALL 50-amp-wired RVs use both sides of the service separately as 120 volts on each leg. | http://3d-drucker.me/post/50-amp-rv-schematic-wiring
Calligraphy For Beginners
Sunday, 12 April 2015
In this post, I have put together a simple project about how to write upper case Gothic style letters using a calligraphy pen. There are many variations of Gothic styles, and here is an example, shown in the YouTube clip.
As you can see, the letterforms have extra details added to enhance the appearance of the letters. I used a broad-nibbed calligraphy pen and black ink, and I drew some guidelines to keep the letters even. The letters were about 6 nib widths high.
5 Steps to Write Impressive Fancy Letters
Gothic Black Letters, or Majuscule style letters, are quite tall; normally they are taller than the ascenders. In this example, I have drawn diamond shapes within the letters, created by drawing short strokes with a broad-nibbed calligraphy pen. They help to decorate the letters and make them appear grand. I have also drawn thinner vertical lines, decorations, flairs and flicks within the letters. I held the pen nib at a constant 45-degree angle.
1) First, map your work out by drawing faint straight lines on some paper using a ruler and a sharp, hard-nibbed pencil.
2) Draw the outline of your capital letters with a faint pencil for guidelines.
3) Holding your broad-nibbed calligraphy pen at a constant 45-degree angle, carefully draw the letters as illustrated in the clip and draw the ticks at the vertical edges of the letterforms.
5) Draw the diamond shapes carefully by keeping your calligraphy pen nib at a 45-degree angle and drawing very short lines so they appear like diamond shapes. Draw them by the thin vertical lines within the letters.
Gothic Upper-case Alphabet
Majuscule Letters in Gothic Calligraphy
Some letters are quite wide, such as the letters S, Z, H, A, and D. Other letterforms are based on the shape of the letter O; they are quite wide and fit within a square. These letters are O, Q, C, G, and T. Narrow letters are B, F, J, N, U, V, L and Y.
Sunday, 29 March 2015
For this project, I have drawn a simple Celtic knot design, using some inexpensive art materials, some paper and a gold edging marker pen.
The Celtic Knot Shape
Celtic knots resemble interwoven ribbon designs that go around forever, and they can easily be incorporated into any calligraphy project or onto a gift card. Celtic knots are commonly used in tattoo designs and printed on fabric.
My Celtic Knot Project 2
I started this project by drawing a Celtic knot freehand, using a hard-nibbed 2H pencil and layout paper. The materials I used were a black pen, some pencils for shading and a gold pen.
I first drew the outline of the Celtic knot in fine pencil, then went over the outline with the black pen, and then shaded the Celtic knot using 2H and 2B pencils. The 2B pencil was used for the darker shading. You can use any medium for this project, such as charcoal or coloring pencils. You can get watercolor pencils from any art shop, and they are great for adding some color to your artwork. You can dip the pencil in some water and apply it to the paper, and this will give the appearance of watercolor paints.
When mapping out the Celtic knot shape, be mindful that you keep the ‘ribbon’ width consistent throughout.
A Simple Celtic Knot Design
Celtic Knots and Calligraphy
Celtic knots are used for ornamentation, monuments and manuscripts, and were used extensively in the 8th-century Gospels and the Book of Kells. There is an endless variety of Celtic knot designs, and they can look like basket-weave knots or ribbons. Celtic knots can be used in all sorts of calligraphy projects and are a great way to introduce designs into any calligraphy work.
Celtic knots look complex and appealing, but they are easy to draw and look great in any project, with introductions of color and shading to really set off your work. They are like plaits on paper and have an interlaced-looking structure. They appear to have ropes crossing over each other, over and under, like ribbons, for ever.
I will be posting other Celtic knot designs so they can be introduced and used for other calligraphy projects. Celtic knots are relaxing and fun to draw.
I love Celtic knots because they are quite simple to draw, but they look beautiful and set off any artwork really well.
Saturday, 28 March 2015
Easter, also known as Resurrection Sunday, is one of the most important religious dates in the Christian calendar. Jesus was crucified on Good Friday, his body was taken from the cross and placed in a tomb, and on the third day he rose again. The week leading up to Easter is called Holy Week. Maundy Thursday is the Thursday before Easter Day and Palm Sunday is the Sunday before Easter Day. The Easter holidays are enjoyed by many and are a good time to have a go at some Easter art.
A Simple Easter Project for Easter Cards
For this Easter project, I have created a couple of Easter creations. The first one is a simple Easter card that I wrote in calligraphy using red ink, a fine calligraphy pen and a small nib. I used a basic calligraphy style. I also drew a cross using a fine black biro. I first traced out an outline before drawing the cross. To do this, I used some plain paper, a hard-nibbed pencil, such as a 2H, and a black pen. For the ‘Easter time’ lettering, I used a broader calligraphy nib and ink. For the cross, I simply mapped out a picture of the cross and drew it with black ink.
Many people celebrate Easter by giving chocolate Easter eggs or other gifts, but many would rather give out an Easter card during this time. Easter is also associated with Easter chickens, or the Easter Bunny. Easter chickens are associated with Easter because this came from Pagan days, when they were a sign of New Life. Later on, the Christians took this meaning as a symbol of remembering the Resurrection of Jesus and a New Life. Decorated eggs came from an older culture, where eggs were colored red to signify the blood of Christ at his crucifixion. This tradition was adopted by Christianity. The painted eggs are now replaced by chocolate eggs.
Other Easter Designs
There are many ideas that you can use for Easter, such as painting a picture of an Easter egg, an Easter Bunny, some daffodils or a religious theme. Yellow is associated with Easter, and this color is related to the spring and longer days.
Designs for Easter Holidays
Other Colors of Easter for your Designs
There are other colors you can use for your designs at Easter time, such as red, which represents the blood of Christ; gold, which is associated with celebration and richness; and white, which represents purity.
Thursday, 26 March 2015
For this project, I used a Pilot Parallel Pen with a 2.4mm nib and green cartridges. There is an assortment of cartridges in the pack, which makes for great color choices. You can create a gradient of colors by touching one nib against another of a different color, and this will create a gradual change of color as you write, which makes for some beautiful effects.
Calligraphy Is Fun
Parallel Pen Nib Sizes
The Pilot Parallel pen comes in four nib sizes, which are 0.5mm, 2.4mm, 3.8mm and, the broadest, 6.0mm. The colors available are black, blue, green, red, yellow and purple. The great thing about these pens is that you can create very fine lines by holding the pen sideways so the nib is vertical.
The Pilot Parallel pen's nibs are sharper than those of conventional fountain pens. The nibs consist of two parallel plates, which help to hold the ink, and they also have sharp corners, so the writing appears cleaner, sharper, crisper and more impressive. The parallel plate is a unique design, and you can achieve sharper, smoother and neater calligraphy handwriting than with conventional calligraphy pens. The ink flow is excellent, and you can pick the pen up again after months of not being used and the ink will flow easily, as if it had only been used yesterday. There is a converter in the pack, which is used to clean and flush the pen out after use or for ink refills. The pack also has a shim, which is used to clean the nib between the plates.
Paper For Pilot Parallel Pens
The paper I used was A4-sized sketch paper, a high-quality 130gsm white cartridge paper. However, I found that the paper could not cope with the amount of ink used. The paper started to soak up the ink, which began to bleed and became feathery. This was rather disappointing, but I do like the colors. For this project, I also used a fine-nibbed conventional fountain pen and deep red ink.
I was excited by the new pens, but it is important to be mindful of the quality of paper used. I feel that hot-pressed paper of a higher quality is better for these pens because of the amount of ink that comes out of the nibs. These pens can use a lot of ink, so it may be a cheaper alternative to keep the ink cartridges and refill them with good-quality ink, but be careful when choosing ink as some may clog the pen.
Saturday, 21 March 2015
Mapping out a menu is a great way to practice calligraphy and laying out your work. It is a simple project that is productive and also fun to do. You can use this technique for writing menus on a chalkboard.
For this project, I wrote a menu by mapping out the menu layout first, using a faint pencil and a ruler.
1. I mapped out my work using a hard-nibbed pencil and a ruler. I drew straight lines and marked the centre of the paper by dividing the paper's width measurement into two, then drew a line down the centre of the paper. I used a set square to ensure the measurements were straight and at a right angle.
2. I mapped out the lettering by first counting each letter and letter-space of each line. I then divided the total number of letters and letter-spaces into two and worked out which letter or letter-space is central.
3. I wrote this central letter or letter-space on the central vertical line drawn on the paper, and then carefully mapped out the other letters before and after this central letter to ensure the sentences are centred on the paper. I wrote the letters from the centre, outwards.
4. I mapped out all the letters carefully, using a faint pencil for guidelines, before writing the letters with a calligraphy pen.
5. In the example shown, I used black and red ink and a fine-nibbed calligraphy pen.
6. I introduced some swirls, using a broader-nibbed calligraphy pen.
7. I always remember to check the spelling!
Other Hints and Tips
Don’t forget to check your calligraphy pen nib size so the lettering is not too big or small, and fits comfortably onto the page.
Writing menus is a fun way of practising how to lay out your work and practise centring the lettering.
How to Practise Writing Menus
You can make up your own menu design and practise writing the layout of your menu on some stiff card or fancy paper. You can also introduce some swirls, patterns or even some Celtic knots to customise your menu. You can draw some frames or margins to set off your work. It is special to make your own home-made custom menus, and you can give them that individual and unique touch.
Putting Color into Menus
Why not introduce some pastel shading, watercolors or color to your design with pencils, or even draw some illustrations of food to enhance your menu?
Thursday, 19 March 2015
Finding every opportunity to practise writing in a calligraphy style is a great way to keep your skill alive and refine your technique. You can find lots of opportunities to do this, for example when writing out notes, a shopping list, a menu or a task list, or even when doing a crossword puzzle or Arrow Words! Anything that needs handwriting can be written in a calligraphy style. You will find that your calligraphy will look better with time, it will become second nature, and you may also find that you can write calligraphy faster than before.
Shown below is an example of how you can use calligraphy in everyday situations, such as a simple shopping list. The YouTube clip illustrates how to write a short shopping list in Gothic style, using black ink and a broad-nibbed calligraphy pen. Of course, if you are in a rush, then it is not practical to do this. I used a Manuscript calligraphy pen, and the nib size used was a 4B or 2.8, which is one of the broader-sized nibs. You can write out a list of anything you like, such as a grocery list, things to do or other lists in general.
A collage of calligraphy writing
Keep Practising Calligraphy for a Professional Look
Writing gift cards, Christmas cards, birthday cards and cards for festive events or any other special occasion is a great way to practice gorgeous handwriting. Writing a letter for a special occasion, and writing on an envelope in beautiful styles, is a lovely and creative personal touch for that someone special. Writing calligraphy is very relaxing if you know how, and writing a list is a great start in practising calligraphy styles, but it is important to enjoy it too.
Monday, 2 March 2015
Mother’s Day, or Mothering Sunday, has been celebrated for many years around the world, and the tradition originated in North America. It is a celebration of motherhood and family bonds. In the UK, Mother’s Day moves every year, usually falling around springtime. Families traditionally invite relatives around for Mother’s Day and treat their mothers with gifts, days out, chocolates and cards. In the US, Mothering Sunday is a national holiday and is a celebration of the importance of mothers.
In the UK,
this day is celebrated on the fourth Sunday in Lent, so the date is not fixed. Mothering Sunday was celebrated in England long before the American Mother’s Day was established. The traditional Mother’s Day cake is a Simnel cake, a light fruit cake topped with almond paste.
How to
Write a Mother’s Day Card Using a Calligraphy Pen
Here is an example of how to write a Mother’s
day card using a broad nibbed calligraphy pen, black ink and writing in Gothic style. I used black ink cartridges and a
Manuscript calligraphy pen. The broad
nib size used was a 4B or 2.8. When
writing on card, be careful to choose one that is not too shiny or rough. Shiny card will make the ink smudge and run, and rough surfaces will make your nib snag; both will spoil your calligraphy work. Before writing your card, think of a short
and simple poem or think about what you’d like to write in the card. Draw some faint pencil lines with a ruler for
guidelines before you write in your card.
YouTube
Clip of How to Write in Calligraphy
This video
clip gives an example of how to write out a Mother’s Day card using a Gothic
style, but you can choose another style, such as Italic or Cursive. You can draw patterns or swirls to set your
work off nicely and introduce some colors to create some stunning designs. You can also use fancy calligraphy writing
for gift tags too.
Practice
your writing on a scrap bit of paper before writing in the Mother’s Day card,
and make sure your work is evenly spaced and pleasing to the eye.
Karim and Maya are lovers. They share a home, they worry about money, and then Maya falls pregnant. But Karim is still finishing his film degree, pushing against his tutors’ insistence that his art must be Arab like him. And Maya, working a zero-hours job and fretting about her family, can’t find the time to quit smoking, let alone have a child.
Framed with fragments and peppered with footnotes, Exquisite Cadavers is at once a bricolage of influence and a love story that knows no borders.
About the author:
Meena Kandasamy is a poet, fiction writer, translator and activist. Most of her works are centered on feminism and the anti-caste Caste Annihilation Movement of the contemporary Indian milieu. Exquisite Cadavers is her third novel. Her second novel ‘When I Hit You: Or, A Portrait of the Writer as a Young Wife’ was shortlisted for the Women’s Prize for Fiction (2018).
Review:
How much of an author is in a book? Do literature and art take shape and form from the realities around the writer/artiste? How much do an author’s real-life experiences or belief systems influence her book? Can an author writing literature walk away from it all? These are the questions that Exquisite Cadavers leaves the readers with.
There is no other way to say this: Meena Kandasamy’s latest fiction is a demanding book. It asks you to be attuned to the socio-political situation of the country; it asks you to question literary forms and structure. If you are willing to give in to its demands, this intense rumination over where an artistic creator hovers around his or her work is addressed in two tracks: one is the story of Karim, an Arab filmmaker, and his wife Maya, who is caught in an unstable job; the other is the author’s freewheeling (or are they selective?) thoughts written in the margins of the story, telling the reader a bit about her world, the thoughts she is writing with, and why she has structured her characters and settings in the milieu they inhabit. Are the two tracks parallel, do they meet at some point, or wait, are they entirely separate? Are Karim and Maya the way they are because of the author’s own life and beliefs? Meena Kandasamy throws these questions as sharp gauntlets to the reader and to literary critics, and in the process weaves a spell.
The writing is sharp and cleaves at you with clinical precision and poetic elegance, even as the author teases the reader with the way the main characters have been fashioned – as outsiders with the baggage of their roots, sometimes questioned, sometimes judged and sometimes made exotic. And then in the margin, you read about the author being asked by a white woman whether her son (born to a white man) is indeed her child. Karim’s character, set in the mould of a filmmaker who must put in applications for film grants, and the author’s thoughts on how she is expected to write only about the many ills that the country of her roots (India) suffers from, together make for astute commentary on the expectations of consumers of art. It tells you how the market demands that artistes peddle certain narratives, thereby curtailing their freedom and artistic creativity.
The writing structure is arranged such that there is a ‘main story’ that the author creates for readers (that of Karim and Maya), while the writing in the margins of the pages is the expression of the author herself: what was happening around her, what she had been reading that made her think. It is a writing device that is novel and makes the reading that much more intimate and personal. Readers who like to stick to conventional text presentation might well find the blend of fiction and author insights disarming and even distracting, for yes, you do wonder how to read them both. I ended up reading the ‘main story’ from end to beginning and then reading the author’s thoughts in the pages’ margins, and then re-read the entire book chapter by chapter by flipping the sequence. It ended up adding more nuance to the reading experience.
Reading Exquisite Cadavers is like being pulled into the soul of a painting and seeing how the colours and forms on the canvas came to be, and with what thoughts from the painter. Read it if you are prepared to be rattled.
On October 9th, 2012, TSSS began its 6th season with an animated presentation and discussion with Dr. Matthew Kiernan, President and CEO of Inflection Point Capital Management and author of Investing in a Sustainable World and The 11 Commandments of 21st Century Management.
Environmentally irresponsible investing
Dr. Kiernan challenged the audience to imagine a world where investors universally recognize the merits of an approach that considers environmental, social and governance (ESG) factors as one that is unambiguously the right thing for the environment, for people and for investments. A vision of such a world, at this point, requires significant powers of imagination – but why? Why are trillions of dollars invested in world markets with little if any consideration of environmental or social concerns? Why do the UN Staff Pension Fund, the World Bank Staff Pension Fund, the Gates Foundation, and even the Nature Conservancy invest their dollars with absolutely no criteria for sustainable investing? When research studies, including one by the established and traditional Deutsche Bank, consistently show conclusive evidence linking superior company sustainability performance with superior financial returns, why does the investment world continue to steadfastly cling to outdated models of analysis that haven’t changed in any significant way in over 30 years?
Cognitive impairments and myths
The answers to these questions lie in a series of cognitive impairments and myths that were summarized by Dr. Kiernan. Despite research to the contrary, there persists a view in the investment community that ESG considerations are at best immaterial, and at worst actually injurious, to financial returns. Investment professionals will thus claim that to consider such factors would actually be incompatible with their fiduciary responsibility (despite clear evidence to the contrary, as shown in the 2005 Freshfields Report, which looked at case law and legislation and determined that there is no dereliction of fiduciary responsibility in considering these issues, as they can (and do!) influence risk/return).
Another persistent myth is that ESG research is inevitably more imprecise and unreliable than mainstream investment research. And yet, while financial analysts consistently claim “Management Quality” to be the primary driver of company performance, they are also consistently hard pressed to define how management quality is measured – this does make one wonder about the question of ‘imprecise’ research, doesn’t it? This double standard is not the only one that exists when comparing traditional mainstream investment analysis with one that considers sustainability – “sustainable” strategies have to work ALL of the time or they tend to be discredited while mainstream approaches generally work 30% of the time, and that is considered acceptable performance.
So, we must ask ourselves, how do we combat these double standards, myths and cognitive impairments? First, we must recognize that investors are well aware that if they do fail, it is better to do so using conventional analysis techniques – the truth is that often people are far more concerned with managing career risk than investment risk. We must find a way to shift the investment community perception of reality to one where the career risk lies not in the choice to consider sustainability concerns but rather in the choice to ignore them.
21st century global megatrends
There is no doubt that the 21st century presents a new global reality for investors. Kiernan discussed global population growth (90% of that growth in Global Emerging Markets (GEMs)) as the world economic centre of gravity shifts to GEMs while mature economies face stagnation and low growth. There is increasing demand for scarce raw materials, significant energy demand growth, growing demand for food and pressure on land resources and biodiversity, dramatically increasing urbanization and infrastructure demand, tightening regulatory and taxation regimes for pollution and climate change, changing consumer demographics with an ever expanding middle class, and significant global healthcare burdens. These powerful global megatrends are creating an entirely new set of ‘non-traditional’ risk and return drivers for companies that we can term ‘sustainability’ factors. Kiernan described a binary decision facing investors: “Check your watch. It’s the 21st century. Do you want to invest considering that fact or not?”
Kiernan’s answer for investing in the 21st century is to embrace “Strategically Aware Investing” (SAI). He coined this term both to overcome the “cornucopia of acronyms that afflicts us” (such as ESG, CSR, SRI, etc.) and to reframe the debate to be one that goes beyond the myths and cognitive impairments that have plagued the push for ‘sustainable’ investing.
Strategically Aware Investing
Strategically Aware Investing recognizes that investors, their research and their analysis techniques must understand and reflect today’s world. SAI recognizes that investment professionals must be aware of the modern reality of growing stakeholder influence, greater investor awareness, a new fiduciary paradigm with rising stakeholder expectations, greater information transparency and real-time global communications. SAI clearly considers mainstream investment principles, such as financial results, but also recognizes Kiernan’s assertion that 75-80% of companies’ true risk profile and value potential lies below the surface, and cannot be captured by traditional financial analysis. SAI analyzes all factors that contribute to risk/potential: innovation capacity, adaptability and responsiveness, environmental sustainability, human capital and organizational capital. These are the factors that must be considered if investors want to define the out-performing companies of the future. And, not surprisingly, many of these factors will show high correlation with management quality, already defined by those in the mainstream as the primary driver of company performance.
A paradigm shift
How can we push the investment community to embrace SAI? Kiernan explained that we must restore the integrity of the “investment food chain” – the owners of the assets should be at the TOP! We must educate, train and empower fund trustees and investment consultants, and if money managers are not willing to be SAI savvy, then fire them! We must create incentive structures that encourage innovation and experimentation, and we must radically restructure business and management education (e.g. MBA, CFA) to integrate ESG models.
As our world changes and evolves, so must our investment strategies. Both the complexity and velocity of change in companies’ competitive environments are accelerating dramatically. New skills and mindsets are required to compete successfully. Traditional financial analysis is of only limited use in helping investors identify those companies that are the most innovative, agile, and forward-looking in this emerging environment. By contrast, companies’ ability to manage sustainability-driven risks and opportunities better than their competitors is proving to be an increasingly robust proxy and leading indicator for these new skills and mindsets.
Consider Kiernan’s assertion that current research and analysis shows significant variation among company-specific exposures to emerging megatrends-driven risk/return factors (as much as a factor of 30 within companies in the same industry!). There is little doubt that any responsible investor must consider a holistic SAI approach.
The beauty of Kiernan’s SAI approach lies in its full integration of forward-looking sustainability insights with more traditional fundamental and technical financial analysis – from the very outset. Investors would be wise to heed Dr. Kiernan’s call for change – or start looking for a new line of work, where their reluctance to evolve to a changing world might be more appreciated. Speaking of the dinosaurs…
____________________________
Toronto Sustainability Speaker Series (TSSS) is widely recognized as Canada’s premier forum for dialogue and problem solving among sustainability professionals. Each year over 1000 sustainability change agents attend TSSS events to exchange ideas, to network and to be inspired by leading companies that have integrated sustainability into their business practices.
Study indicates snacking is a "missed opportunity"
From animal crackers to gummy fruit snacks and calorie-laden juice drinks, kids in child care are not getting the nutrition they need from daily snacks, according to a new study from Cincinnati Children's Hospital Medical Center published online in the journal Childhood Obesity.
The study - the first of its kind to compare meals to snacks - shows that despite efforts to improve the diets of children in child care settings, meals - and particularly snacks - still lack nutritional quality. Snacks, while smaller than meals, are an integral part of preschool-aged children's diets, typically comprising 26 percent of their daily calorie intake.
Researchers from Cincinnati Children's reviewed menus at 258 child care centers in southwestern Ohio, analyzing the average weekly frequency for servings of fruits, vegetables, lean meats, juice (100 percent) and sweet or salty foods. They found that the composition of lunches differed from snacks in all food categories.
Fruits, vegetables and meats were rarely included in snacks, but were listed almost daily as a component of lunches. Conversely, 87 percent of centers served sweet and salty foods - such as gummy snacks, pretzels and crackers - at snack time more than three times per week, but rarely at lunch.
The study also found that:
- 87 percent of centers rarely listed non-starchy vegetables for snacks, but 67 percent included them at lunch more than three times per week.
- 100 percent fruit juice was listed as a component of snack at least three times per week in over a third of the centers surveyed, but rarely with lunch.
- 60 percent of the centers reported serving 2 percent milk to children older than age three; 31 percent served whole milk.
The USDA is expected later this month to release new guidelines for meals and snacks served in child care programs under the Child and Adult Care Food Program. These guidelines are expected to call for increased variety in the types of beverages and foods served at snack times among participating programs, and specifically to include more fresh fruits and vegetables and meat or meat alternatives as snacks.
"Our findings suggest that these guidelines would represent a sharp departure from what is typically served in these programs at snack times, which for centers in our study was typically juice and a refined grain such as crackers," said Dr. Copeland.
"We may need to think about the messages we are teaching children by serving these types of foods at snacks. If bitter-tasting vegetables are reserved for meals, but snacks are filled with tasty crackers and sweets, it's no wonder that children may start to prefer to eat at snack times rather than meals."
"Theoretically, there is no reason for the nutritional value of snacks to differ from lunches," said Dr. Copeland.
Highlighting others' research that shows that the preschool years are a time for establishing eating habits, she said, "Parents and child-care providers play an important role in shaping the eating habits and food preferences of young children. We might want to rethink how we're looking at snacks, both as a source of nutrition and for promoting healthy eating habits." | https://www.medicalnewstoday.com/releases/260319 |
represents the gradient of the output of a net with respect to the value of the specified input port.
represents the gradient of the output with respect to a learned parameter named param.
represents the gradient with respect to a parameter at a specific position in a net.
net[data,NetPortGradient[iport]] can be used to obtain the gradient with respect to the input port iport for a net applied to the specified data.
net[<|…,NetPortGradient[oport]->ograd|>,NetPortGradient[iport]] can be used to impose a gradient at an output port oport that will be backpropagated to calculate the gradient at iport.
NetPortGradient can be used to calculate gradients with respect to both learned parameters of a network and inputs to the network.
For a net with a single scalar output port, the gradient returned when using NetPortGradient is the gradient in the ordinary mathematical sense: for a net computing f(x, θ), where x is the array whose gradient is being calculated and θ represents all other arrays, the gradient is an array of the same rank as x, whose components are given by ∂f/∂x_i. Intuitively, the gradient at a specific value of x is the “best direction” in which to perturb x if the goal is to increase f, where the magnitude of the gradient is proportional to the sensitivity of f to changes in x.
For a net with vector or array outputs, the gradient returned when using NetPortGradient is the ordinary gradient of the scalar sum of all outputs. Imposing a gradient at the output using the syntax <|…,NetPortGradient[oport]->ograd|> is equivalent to replacing this scalar sum with a dot product between the output and ograd.
Using NetPortGradient to calculate the gradient with respect to the learned parameters of the net will return the sum of the gradient over the input batch. | https://reference.wolfram.com/language/ref/NetPortGradient.html |
Horvatić, Barbara (2014) Forms of social behavior of ants. Bachelor's thesis, Faculty of Science > Department of Biology.
Abstract
Ants are a large group of insects which live in colonies, which may consist of one or more queens, soldiers and workers. Communication is very well developed and is a major factor in their success in founding a colony, finding food and defending the colony. A brief description of ant biology, life style and communication is presented. Some different forms of social behaviour of warrior ants, weaver ants, desert ants and honeypot ants are also given.
Lighting up a room is not just a matter of plugging in a lamp; getting the right lighting for your room is important because one mistake can leave your place feeling off, with dark spots all over the room. So what should you consider to ensure that you have enough lighting for your room? How do you know the number of LED spotlights your indoor space needs? You may have beautiful furniture and exquisite interior decor, but with poor lighting all the beauty in the room is lost. Do you know how many spotlights you need to brighten your room?
Choosing how many LED downlights are needed to create the effect you’re looking for in any room can be a daunting task.
What is enough light?
The question seems easy enough, but when faced with calculating the lighting needed to create a well-lit space, it becomes more complicated.
Soul is a warm-hearted and compelling film with delightful visual animation. It is very enjoyable to watch, but lacks a coherent and cohesive storyline, which may leave both adults and children wondering what the film is actually about. Apart from a couple of dramatic and emotional scenes, there is not much in this film that small children would be scared by; however, parents should be aware that this film explores quite mature concepts that may confuse some children and require an explanation. It is therefore not suitable for children under 8, and parental guidance is recommended for children up to 10.
The main messages from this movie are that it is important not to take our life for granted, to appreciate the simple pleasures of life and not be caught in the trap of ambition.
Values in this movie that parents may wish to reinforce with their children include:
- Slowing down to appreciate beauty and importance in simple things.
- Living in the moment.
- Finding meaning and purpose in life.
This movie could also give parents the opportunity to discuss with their children attitudes and behaviours, and their real-life consequences, such as:
- In this film, there are some ‘lost souls’ who have lost sight of the true meaning in life. What happens in real life when some people get ‘lost’?
- When soul number 22 becomes a lost soul, Joe hears all the negative thoughts she has inside her. Parents may like to discuss the concept of negative self-talk with their children, and ways that we can counter those thoughts when they happen in our own lives (feelings of worthlessness, depression etc.). | https://www.childmags.com.au/movie-review-soul/ |
Statement necklaces worn by hosts and models.
I was watching Slinky offerings this weekend and admiring some of the necklaces and pendants worn by people on set. Are they available for purchase?
The findings come from an excavation site on the Peruvian coast.
John Verano, an anthropology professor at Tulane University, has spent his summers at digs in Peru for the last 30 years. But his most recent project might have yielded his most surprising discovery yet — 600-year-old sacrifices of children and llamas.
"This is unusual, and not what we've seen before, especially on the coast of Peru," Verano told Phys.org. "What it means exactly, I'm not sure. But it is an exciting discovery."
Verano, along with Peruvian archaeologist Gabriel Prieto, first found evidence of child sacrifices in Huanchaquito, a coastal village in Peru, in 2011. They discovered the remains of 42 children and 76 llamas, which they suspect were sacrificed during a religious ceremony.
Verano and Prieto completed their study of the 2011 finds this year. They also expanded their dig, Phys.org reports, and the new excavation revealed more sacrificial victims. Huanchaquito is in an area once dominated by the Chimu state, from 1100 to 1470 C.E., until it was conquered by the Inca Empire, and Verano noted that it's an unlikely location for the finds.
Phys.org reports that the new findings will "allow for a more detailed reconstruction of this unusual event." The researchers speculate that the children may have been sacrificed as an offering to the sea after El Nino flooding, and the llamas were "intended to transport the victims to the afterlife." | https://theweek.com/speedreads/442985/archaeologists-discover-unusual-sacrifices-children-llamas |
Urban health indicators and indices—current status
BMC Public Health volume 15, Article number: 494 (2015)
Abstract
Though numbers alone may be insufficient to capture the nuances of population health, they provide a common language of appraisal and furnish clear evidence of disparities and inequalities. Over the past 30 years, facilitated by high speed computing and electronics, considerable investment has been made in the collection and analysis of urban health indicators, environmental indicators, and methods for their amalgamation. Much of this work has been characterized by a perceived need for a standard set of indicators. We used publication databases (e.g. Medline) and web searches to identify compilations of health indicators and health metrics. We found 14 long-term large-area compilations of health indicators and determinants and seven compilations of environmental health indicators, comprising hundreds of metrics. Despite the plethora of indicators, these compilations have striking similarities in the domains from which the indicators are drawn—an unappreciated concordance among the major collections. Research with these databases and other sources has produced a small number of composite indices, and a number of methods for the amalgamation of indicators and the demonstration of disparities. These indices have been primarily used for large-area (nation, region, state) comparisons, with both developing and developed countries, often for purposes of ranking. Small area indices have been less explored, in part perhaps because of the vagaries of data availability, and because idiosyncratic local conditions require flexible approaches as opposed to a fixed format. One result has been advances in the ability to compare large areas, but with a concomitant deficiency in tools for public health workers to assess the status of local health and health disparities. Large area assessments are important, but the need for small area action requires a greater focus on local information and analysis, emphasizing method over prespecified content.
Introduction
“When we look at health problems on a world scale, we see bewildering diversity.” John Bryant’s classic 1969 work, Health and the Developing World, begins with a dictum no less true today. Early in the book, he cites a composite index of human resource development based entirely on two measures of education (enrollment in the second level of education plus enrollment at the third level of education times five ), and stresses that “no weight should be put on the precise location of any one country in this ranking.” Thus, nearly 50 years ago, some of the chief problems with indicators and indices were well understood.
Against a backdrop of chaos and development, improvement in data systems and technology made health data more available in the ensuing decades, but the problems of summarization and interpretation persist. Scale is one of the critical factors in developing indicators and indices. The type and number of indicators, how they are presented, transformed and combined, the size of the targeted area, the relative placement of geographic units—all are scalable factors in the construction of an overall assessment. Indeed, the audience for the assessment is also scalable—from neighborhood groups to global agencies. Issues of scale, and the tension between multiple indicators and single statistics, suggest the need for a variety of alternative approaches.
A recent compendium of composite measures of human progress provides an exhaustive listing of extant indices from many areas of human endeavor . This review, whose content overlaps in part with that compendium, will focus on indicators and indices that are relevant to urban health and urban health disparities in countries with advanced economies as well as low and middle income countries (LMICs). The emphasis will be on the types of urban metrics that are extant, the measures and methodologies used to assess health disparities, the comparability of these measures, and the extent to which single (vs. multiple vs. parsimonious) measures have been used to assess urban health and health disparities. Though of substantial importance in the construction of metrics, the statistical methodology has been discussed and reviewed in detail, and will not be a major focus here. Nor will specific disparities be featured. But in light of the urban orientation of the review, measures of health and environment will be paramount, together with mechanisms that have been used to amalgamate them.
Measurement of health and disparities
A convenient framework for classifying the available measures establishes three levels of measurement: Rubrics, Domains, and Indicators (Fig. 1). The descriptive names used here—there are many valid alternatives—are a convenience for stressing the distinction among logical types. Rubrics represent societal-level factors that affect health, either directly, or as determinants. Domains are specific factors within a Rubric for which measurements are available. For example, the Rubric “Environment” includes the Domain “Air Quality” that contains a set of Indicators (e.g. “Proportion of households living within 300 m of major industrial stationary sources of air pollution”) from which potential disparities can be derived. To complete the vocabulary, for this review we will use “Index” to refer to a single measure, figure or picture that is constructed from Indicators. The term “metric” is used generically to refer to any measurement.
General properties of indicators
Soon after the Millennium Declaration, a Health Metrics Network, funded by the Gates Foundation, was established to assist member nations in developing and interpreting health data. This partnership has recognized the key relationship between an indicator of health and an indicator of health disparity, and has provided leadership in deriving the latter from the former. An important contribution of those involved in the Network is a concise summary of the methodologies available to examine indicators and assess inequalities (first published in Spanish and subsequently in English [6–8]; this discussion was based on Mackenbach and Kunst; see also Houweling et al.).
The authors point out that the vast majority of Indicators are based on data aggregated by geopolitical unit. A subset are cumulative markers (Gross National Product (GNP), percent of literacy, unemployment rate) that lack meaning at the individual level . It is readily inferred that most analysis of Indicators is ecological, and thus constrained by the statistical limitations of correlational analysis. For example, in constructing an Index, indicators that co-vary do not necessarily increase the amount of information in the Index (though in some instances, such as a latent variable construct, they may). But conversely, indicators that are not correlated render interpretation of the Index problematic, since the level and trajectory of the Index then results from the complex interaction of disparate measures.
Capacity for demonstrating disparity
Perhaps more important, the authors point out that indicator selection should be predicated on the ability to demonstrate disparity. By using information on the total population, and being sensitive to the size and distribution of the population along socioeconomic groupings, an overall indicator (say, GNP) can be transformed into a measure of inequality. In these articles, the authors then describe the major ways in which disparities are expressed (see Table 1). In fact, all of these are some form of ratio or difference, manipulated to highlight certain aspects of the contrast. For example, the measures that are based on the slope of a regression line (the ratio of a change in a variable compared to a unit change in another) simply provide a model-based contrast as opposed to the simple empirical observation of say, the ratio of highest percentile to lowest. The Lorenz Curve and the Concentration Curve are more complicated ratio measures, and have been shown to be specific examples of the class known as Relative Distribution Measures. Such measures raise a fundamental question for all methods of representing disparities. A simple ratio is dimensionless, and can thus be used for the comparison of many populations. A simple difference is in units of the underlying measures (money, frequency, incidence, area, etc.); some of these units permit direct comparability and others do not. Other measures of disparity compare observed data to an absolute standard (such as one of no disparity, as with the Lorenz curve and Gini coefficient). Still others use a standard embedded in the total data (say, highest or mean value). Relative distribution measures, [11, 12] on the other hand, compare two distributions directly so that, for example, the level of disparity in two urban areas can be directly described. Other measures permit this, but may require extra steps. In addition, the common practice of rank ordering, if performed without reporting the actual disparities, would not be sufficient to provide the actual difference between two areas since the space between ranks is not uniform.
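To make the simpler measures in this family concrete, the short Python sketch below computes a rate ratio, a rate difference and a population-weighted Gini coefficient from grouped data. It is not drawn from any of the tools cited above, and the group rates and population shares are invented for illustration.

```python
# A minimal sketch (invented data): three simple disparity measures for
# grouped data: rate ratio, rate difference and a population-weighted Gini.

import numpy as np

# Hypothetical mortality rates (per 1,000) and population shares for five
# socioeconomic groups, ordered from most to least deprived.
rates = np.array([42.0, 30.0, 22.0, 15.0, 9.0])
pop_shares = np.array([0.15, 0.20, 0.25, 0.25, 0.15])

rate_ratio = rates[0] / rates[-1]        # dimensionless, comparable across settings
rate_difference = rates[0] - rates[-1]   # expressed in the units of the underlying rate

def gini(rates, pop_shares):
    """Population-weighted Gini: mean absolute difference between all pairs of
    groups, scaled by twice the overall mean rate (0 = no disparity)."""
    mean_rate = np.sum(rates * pop_shares)
    pairwise_diff = np.abs(rates[:, None] - rates[None, :])
    pair_weights = pop_shares[:, None] * pop_shares[None, :]
    return np.sum(pair_weights * pairwise_diff) / (2.0 * mean_rate)

print(f"rate ratio      : {rate_ratio:.2f}")
print(f"rate difference : {rate_difference:.1f} per 1,000")
print(f"Gini coefficient: {gini(rates, pop_shares):.3f}")
```

The ratio and the Gini coefficient are dimensionless and can be compared across populations, whereas the difference keeps the units of the underlying rate, echoing the distinction drawn above.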
More recently, Talih introduced the symmetrized Renyi Index, based on prior work using entropy measures to assess disparities. An important advantage of this measure is its invariance with respect to a reference group (say, the population average or the least well off group). In addition, population-weighted and equal-weighted versions can be calculated, and an “aversion” parameter can be included that reflects the investigator’s judgment as to values that society attributes to inequality .
Criteria for indicators
The choice of indicators, from the myriad available, should be predicated on some agreed upon set of criteria. Flowers et al. provide a checklist of 20 facets of a proposed indicator: several are descriptive (title, origin, rationale, routine or special collection, frequency); others deal with general characteristics (strengths, weaknesses, perverse incentives, influence on practice or behavior). A simpler, and perhaps more forceful, summary of the ideal characteristics of indicators is provided by Etches et al., whose keywords bear repeating: consensual, conceptual, valid, sensitive, specific, feasible, reliable, sustainable, understandable, timely, comparable, and flexible. These authors stress as well the need for a conceptual framework (their Fig. 1, p. 34) from which the appropriate indicators can be drawn and indices can be constructed. Such a framework can be the basis for multilevel modeling and for causal analysis. Though neither of these approaches is necessarily involved in the formation of indices, they are part of the intellectual basis for prior and subsequent analysis.
Pitfalls and problems
Several statistical problems bedevil indicators. The Will Rogers phenomenon, for example, is the paradox observed “when moving an item from one set to another moves the average values of both sets in the same direction” (p. 243), and refers to migration of an item to a group vastly different from its own. Such a situation obtains when the highest value in one population is less than the lowest value in another, so that movement of the highest item lowers the mean of both groups. Indicators are also subject to regression to the mean, a phenomenon that reflects the random distribution of measurement error. A more extreme value will likely be followed by one less extreme because, based on typical distributions, the error in measurement of the second value is likely to be less extreme than that of the first value. As noted earlier, indicators or indices are often presented as ranks, which are ordinal rather than interval or ratio quantities despite the use of integers. Ranks convey a sense of better or worse that may not be merited by the underlying data. In addition, entities are not equally separated, and some may be bunched so that ties are resolved by resorting to a non-meaningful number of significant digits. Most indicator assessments and ranking procedures do not contain an appropriate estimate of uncertainty, and an assumed difference may be spurious. Flowers et al. suggest the use of such devices as funnel plots (a standard part of the meta-analysis armamentarium) to detect aberrations in the distribution of values that may point to real differences.
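A tiny numeric example (with invented values) makes the Will Rogers phenomenon concrete: when every value in one group lies below every value in another, moving the first group's highest value into the second lowers the mean of both groups at once.

```python
# Will Rogers phenomenon with made-up numbers: moving A's highest value into B
# lowers the mean of both groups, because it is above A's mean but below B's.

def mean(values):
    return sum(values) / len(values)

group_a = [10, 20, 30]
group_b = [100, 110, 120]
print(mean(group_a), mean(group_b))   # 20.0 110.0

moved = max(group_a)                  # 30: the highest value in A, lower than all of B
group_a.remove(moved)
group_b.append(moved)
print(mean(group_a), mean(group_b))   # 15.0 90.0 (both means have fallen)
```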
Aggregation of indicators
As described by Saltelli et al., indices are composite statistics that have generated polarized views of their value: either a mashed-together collection of unrelated numbers or a usable distillation of reality. But these authors go on to point out that such statistics are really mathematical models developed through a social process: the community of scientists, policy makers, and practitioners must largely agree on their makeup and utility. The European Commission Joint Research Centre group on Composite Indicators has explored the mathematical, political, social, and economic aspects of composite indicators in detail [19, 21–26]. This complex analysis provides a rigorous basis for combining criteria and for legitimate ranking schemes. As these and other investigators point out, linear aggregation, with either equal weighting or some other weighting scheme, is simplest, most commonly used, and often least reproducible, in that it does not derive from pre-established criteria, but rather from experience and negotiation. Geometric aggregation, usually by taking the nth root of the product of n items, has been used successfully by the UNDP Human Development Index but does require higher technical capacity. The most complicated of the approaches—multi-criteria analysis [28, 29]—is less adaptable for use on the local level, but a toolbox of techniques has been developed [21, 22], and the use of this approach is a good example of the potential value of an academic and public health partnership.
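To make the contrast concrete, the sketch below normalizes three invented indicator values to a common 0–1 scale and then combines them by equal-weight linear aggregation and by geometric aggregation. The indicator values, ranges and weights are illustrative assumptions, not figures from any of the schemes cited above.

```python
# A minimal sketch (invented values): min-max normalization of three indicators
# followed by weighted linear aggregation and geometric aggregation.

import numpy as np

values = np.array([72.0, 0.55, 31000.0])   # e.g. life expectancy, literacy, income
minima = np.array([45.0, 0.20, 500.0])     # observed minima across all areas
maxima = np.array([85.0, 1.00, 60000.0])   # observed maxima across all areas
weights = np.array([1/3, 1/3, 1/3])        # equal ("unit") weights

normalized = (values - minima) / (maxima - minima)   # each indicator now in [0, 1]

linear_index = np.sum(weights * normalized)       # weighted arithmetic mean
geometric_index = np.prod(normalized ** weights)  # weighted geometric mean

print(normalized.round(3), round(linear_index, 3), round(geometric_index, 3))
```

The geometric form is pulled down sharply when any single normalized indicator approaches zero, so weak performance on one dimension cannot be fully offset by strength on the others; the linear form allows exactly that kind of substitution.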
The major compilations of indicators
The current large collections of indicators differ substantially in genesis and purpose (Table 2). WHO’s Urban HEART, the Michigan Critical Health Indicators, and San Francisco’s Healthy Development Measurement Tool (now renamed the Sustainable Communities Index) were all constructed, in part, to permit local areas to assemble and assess their own data. The United States’ Healthy People 2020 was constructed as a mechanism for tracking progress toward national health goals, and focuses predominantly on individual risk. The Community Health Status Indicators are an interactive tool for localities to assess their situation. Cities Environment Reports on the Internet (CEROI), the CDC’s Environmental Health Indicators, and California’s Environmental Health Indicators focus primarily on environmental measures, many of which are urban. Women’s Health Indicators are a compendium from many sources whose focus is how the indicators apply to women. Similarly, UNICEF’s compilation applies to children, and is a tool for tracking the Millennium Development Goals. The WHO Indicator Compendium (on a large scale) and the Social Health of the States (on a smaller scale) are general sets of measures that include elements of both personal and environmental health. The World Bank’s World Development Indicators is primarily economic and political in orientation but has considerable information on health and urban development. Global Cities Indicators, a set of measures on 20 themes that measure city services and quality of life, have been developed by the Global Cities Institute of the University of Toronto, and are available to member cities only.
With such diversity of purpose, it is no surprise that there is little concordance in the naming of Rubrics, Domains, or Indicators, or in the number of indicators. World Development Indicators, for example, has collected a set of 508 indicators on 217 countries for the period 1960 to 2013. Seventy-six of these relate directly to health and the urban environment. At the other end of the spectrum, the UN Habitat Agenda Indicators number 26, and provide a good example of the type of informed choices that are made. Under a Domain heading that they call “Social Development and the Eradication of Poverty,” they choose six Indicators in order to capture the essence of the Domain: Under-5 mortality, Homicides, HIV prevalence, Literacy rates, School enrollment, and Women Councilors. In a similar vein, the WHO Kobe Center’s Urban HEART lists 12 “core” indicators, and 18 “strongly recommended” measures. Its rough analogy to the UN Habitat Agenda Indicators is a Domain called “Core indicators: health determinants” that contains: access to safe water; access to improved sanitation; completion of primary education; skilled birth attendance; fully immunized children; prevalence of tobacco smoking; unemployment; and government spending on health. Both lists of indicators are worthy, but they clearly take different routes to a similar goal. A cursory look at the remaining Indicator projects reinforces the sense of plethora rather than parsimony. But a more detailed look suggests a somewhat different picture. If similar or identical Domains in each major compilation are given a common name, a pattern of concordance emerges. Nine Domains appear in more than half of the aggregations, and three of them (health care, infant mortality, and education) appear in more than two-thirds. The qualitative impression is that there is a vast array of specific indicators, with little commonality among projects, but a relatively limited number of Domains that appear in many, if not most projects. These Domains deal largely with health care outcomes, though several social determinants of health (for example, education, poverty and environment) are represented as well. Thus, despite disagreement about detail, there is some evidence of agreement about basic content. This observation augurs well for the construction of more flexible indices that permit interchangeability of indicators.
The properties of indices
There are only a few indices that are specifically urban in orientation, but a substantial number of congeners have been developed for other purposes. Consideration of the range of indices provides some insight into the appropriate methodologies for construction and validation (Table 3).
Simple indices
In its simplest form, an index is constructed from a set of indicators that have been transformed (standardized, normalized, scaled) so that they are directly comparable, and then added together. Simple arithmetic combination, often mistakenly called “unweighted,” implies that each indicator is given the same unit weight. The resulting Index may be bounded (such as a proportion or percent) or unbounded at one or both ends. A simple example is the Social Health of the States, a long-running Index from the Institute for Innovation in Social Policy. It combines 16 indicators that have been scaled and averaged so that the worst possible score is 50 (smaller is better). The difference between a state’s actual average and 50 is then expressed as a percentage of 50. The states are rank-ordered and grouped in quintiles (1–10 are excellent; 41–50 are poor). Similarly, the Michigan Index of Urban Prosperity —one of the specifically urban indices—combines nine indicators from multiple sources (crime rate; property value change; median household income; employment rate; employment change; graduation rate; Michigan Education Assessment Program passing rate; young adults; population change). It uses the ratio of each site-specific indicator to the overall state indicator (actually, to the overall mean) and averages them, deriving a number in the vicinity of 1.0. A somewhat more complicated urban metric is the Index of Resident Economic Well Being, which combines indicators from five Domains (unemployment rate; poverty rate; labor force participation; median household income; per capita income) by using a linear combination of N-scores (deviations from the median, as opposed to z-scores, which are standard deviations from the mean).
More complex indices
Perhaps the most important of these is the Human Development Index (HDI), now in its 25th year, published by the United Nations Development Program (UNDP). The measure is constructed from life expectancy at birth, measures of schooling and expected years of schooling, and gross national income per capita. Each of these is standardized by taking the country value as a percent of the range of the most extreme values for any participating country over the past 20 years compared to subsistence value: ([country value – subsistence value]/[maximum value – subsistence value]). The resulting value is a proportion between 0.0 and 1.0. The two education values are combined by taking their arithmetic mean, and the result is combined with the other two measures using their geometric mean ([life expectancy^(1/3) × schooling^(1/3) × income^(1/3)]). A country’s HDI is then rank-ordered among all the others, and its place over time can provide the size and direction of relative progress (“relative” because a country’s change in rank may not reflect its change in absolute values). Since the standards for “best” and “worst” are fixed, and each nation’s values are placed on a scale with that range, the concept of disparity is an integral part of the measure. Though urbanicity is not the focus of the HDI, its approach and methodology are suited to the development of a measure of urban health and disparities. In addition, the UNDP introduced an inequality-adjusted HDI, a measure that accounts for inequality by adjusting each indicator’s value by its level of inequality, based on work by Atkinson wherein he used the analogy between ranking inequality distributions and ranking probability distributions based on utility. Each of the three indicators is adjusted by the ratio of the geometric mean of the distribution to its arithmetic mean. Using a similar statistical approach, they have also introduced a Gender Inequality Index that captures the difference in reproductive health, empowerment and the labor market for men and women. A third measure—the Multidimensional Poverty Index—diverges from the Human Development Index by using microdata from household surveys. Each person is classified as poor or non-poor based on his or her family deprivation and the data are aggregated to form a national index. The actual computation bears considerable resemblance to previously discussed aggregations of indicators (weighted linear combinations), though the mechanism for combining information on the 10 indicators used is complex.
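The HDI-style recipe described above can be written in a few lines; the sketch below uses invented country values and illustrative goalposts, and it simplifies the official UNDP procedure (for instance, income is normally log-transformed before rescaling), so it should be read as a schematic of the method rather than a reproduction of the published index.

```python
# A schematic HDI-style calculation with invented values and illustrative
# goalposts; the official UNDP procedure differs in detail.

import numpy as np

def rescale(value, minimum, maximum):
    """Express a value as a proportion of the range between fixed goalposts."""
    return (value - minimum) / (maximum - minimum)

life_index      = rescale(71.0, 20.0, 85.0)          # life expectancy at birth (years)
mean_school     = rescale(8.5, 0.0, 15.0)            # mean years of schooling
expected_school = rescale(12.0, 0.0, 18.0)           # expected years of schooling
income_index    = rescale(11000.0, 100.0, 75000.0)   # GNI per capita (log step omitted)

education_index = (mean_school + expected_school) / 2.0              # arithmetic mean
hdi = (life_index * education_index * income_index) ** (1.0 / 3.0)   # geometric mean

def inequality_adjustment(distribution):
    """Atkinson-style factor used for the inequality-adjusted HDI: the ratio of the
    geometric mean of a distribution to its arithmetic mean (1.0 = no inequality)."""
    x = np.asarray(distribution, dtype=float)
    return np.exp(np.mean(np.log(x))) / np.mean(x)

# Hypothetical distribution of life expectancy across population subgroups.
adjusted_life_index = life_index * inequality_adjustment([62.0, 68.0, 73.0, 79.0])

print(round(hdi, 3), round(adjusted_life_index, 3))
```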
Another example of a more complex measure, the Bertelsmann Transformation Index (BTI) takes a wholly different approach. In their process, 17 criteria (“Domains”) are represented by 52 questions (“Indicators”) that are answered in a report completed by 128 participating nations. The answers go through two levels of review and calibration (not further defined) by experts in the responding country and by the BTI board. Scores are combined by linear aggregation (not further defined), and an overall score and sub-domain scores are calculated. The approach may be described as a modified, interactive, Delphi technique that is heavily dependent on expert opinion, and may or may not be reproducible. Such an approach, however, recognizes, and in fact embraces, the political process that is an important part of index development.
A third approach is typified by the Corruption Perception Index produced by Transparency International that ranks countries by the perception of corruption in their public sector. They collect information from a variety of sources (of which the BTI is one) and use at least three different sources for each country. This approach represents a substantial divergence from most of the others in that a uniform data set is not used for each country. Rather, they subject available information to substantial mathematical manipulation: data are standardized by using matching percentiles (reminiscent of the relative distribution methods), then undergo beta transformation, and a linear average of the transformed values is taken. The final index and ranking are substantially removed from the raw information. This approach acknowledges presumed exchangeability of indicators after mathematical manipulation.
Still another approach might be termed the “organic” Index, one that grows, shape-shifts, and is tested for its credibility and consistency. An example is the Deprivation Index, first proposed by Townsend in 1987 and Carstairs in 1989. These were constructed as the sum of four standardized variables. The Townsend Deprivation Index used percentage of unemployed people in the active population, percentage of not-owner-occupied households, percentage of households without a car, and percentage of overcrowded households. The Carstairs Index replaced no-owner-occupied households with the percent of low social class persons (a measure available in England based largely on occupation). In a subsequent review of Deprivation Indices , Carstairs describes other variations, such as the Jarman Underprivileged Score , constructed from rankings by general practitioners and subsequently used as part of a reimbursement scheme. Carstairs demonstrates that the Deprivation Index, as she developed it, was strongly correlated with measures such as overall mortality and cancer registrations.
In more recent years, the Deprivation concept has been retained, but the details altered. Sivakuman proposed a Human Deprivation Index based on percent below the poverty line, infant mortality, and illiteracy rate. One-third of each is added together to form the Index. Messer et al. constructed a Deprivation Index based on five sociodemographic domains: income/poverty, education, employment, housing, and occupation. They used principal components analysis, taking the first principal component as representative of neighborhood deprivation, an assertion supported by the consistency of component loading across study areas. Rey et al. explored the properties of their previously developed Index, FDep99, which had been constructed from: median household income; the percentage high school graduates in the population aged 15 years and older; the percentage blue collar workers in the active population; and the unemployment rate. This measure was also constructed using principal components analysis, and the first principal component accounted for 68 % of total variation in mortality. The authors provided an empirical analysis that purported to show that FDep99 was superior to their slightly altered versions of the Townsend and Carstairs Indices.
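The principal-components construction used by Messer et al. and for FDep99 can be sketched as follows. The area-level data here are simulated, and the four indicators merely echo the kinds of variables listed above; the sketch is not a reproduction of either published index.

```python
# A minimal sketch (simulated data): standardize several area-level deprivation
# indicators and take the first principal component as a deprivation index.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_areas = 200

# Hypothetical small-area indicators, built to be correlated as they often are
# in practice: % unemployed, % without a diploma, % blue-collar workers, income.
unemployment = rng.uniform(2, 20, n_areas)
no_diploma   = 0.8 * unemployment + rng.normal(0, 3, n_areas)
blue_collar  = 20 + 0.5 * unemployment + rng.normal(0, 5, n_areas)
income       = 60000 - 1500 * unemployment + rng.normal(0, 5000, n_areas)

# Flip the sign of income so that higher always means more deprived.
X = np.column_stack([unemployment, no_diploma, blue_collar, -income])
X_std = StandardScaler().fit_transform(X)

pca = PCA(n_components=1)
deprivation_index = pca.fit_transform(X_std).ravel()

# The sign of a principal component is arbitrary; orient it so that higher
# scores correspond to more deprivation before interpreting it.
print("variance explained by first component:",
      round(pca.explained_variance_ratio_[0], 2))
```

In practice one would inspect the component loadings to confirm that every indicator points in the expected direction before treating the first component as a deprivation score.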
The aforementioned Indices are instructive in providing a typology, but only touch on the extant composites that have been developed. In a systematic review, Kaltenthaler and colleagues described 18 health indices culled from the literature from 1966 to 2000, and summarized information on their origin (US, UK, Canada, and Europe), characteristics, purpose, types of indicators, methods of aggregation, data sources, and validation. Several major points emerged. First, only four of the indicators had been validated, two by professional judgment, two by inference. Second, the user groups were not clearly defined, so that the target geopolitical level was not always clear. Reasons for choice of indicator were opaque. Weights appeared to be arbitrary, or at least not justified by standard criteria. The data upon which many of these indices were based were not always publicly or universally available. The authors concluded that this set of indices would not be suitable for health policy makers in the United Kingdom (the place of origin of the study). Nonetheless, the authors reaffirm “the need for a population-based health index at either national or local level” (p. 254).
Unfortunately, the literature on Indices that reflect urban health specifically is sparse. Those that include both urbanicity and health tend to focus on the former. As an example, Shane and Graedel propose a set of indicators that includes a measure each for air, water, soils, transportation, energy, resource use, population, urban ecology, livability, and general environmental management. They do not include health measures per se, but do use the Human Development Index as an environmental measure. Instead of a composite index, they propose a novel graphic: a triangle made of four layers (planning, waste, resource, human factors). Each of the 10 metrics is represented in the triangle by a grey scale corresponding to its adequacy (high, middle or low rating). The resultant “picture” can be compared to triangles from other areas, and can be used as a marker for evaluation over time.
An exercise in index construction was conducted by Stephens et al. who used a workshop environment to build an Index of Deprivation that compared Accra, Ghana with Sao Paolo, Brazil. Interestingly, groups working on the two areas devolved on the same Domains (income, education of head of household, number of persons per room, sanitation, and safe water access), but had to use different Indicators within those Domains. The collected data produced an overall picture that concealed substantial differences between the two areas. Those differences were demonstrated, however, by a simple choropleth map comparing the two cities by using four levels of socio-environmental conditions. Nonetheless, the authors felt that the data and resulting indices did not fully capture the political and social complexity of the cities. They do, however, cite several positive policy changes that resulted from the exercise. An important message from the study is the need for greater flexibility in the choice of indicators that make up an Index, since their true function may be as a catalyst for local change.
We have recently published an Urban Health Index (UHI) that focused more on method than content. Adopting approaches used for the Human Development Index, [27, 47] the UHI permits construction of a variety of composite indices related to urban health, urban health disparities, and health determinants, and is coupled to a technique for mapping that provides visual display of disparities for contiguous small areas. Indices are standardized by transforming the values for each small area into a proportion of the range for the overall location, and are then combined by taking their geometric mean. The method, still under empirical investigation, may be of use in demonstrating health disparities and the geographic distribution of inequalities. It is an example of the reorientation of composite indices from methods for ranking to flexible tools for use by local public health workers to assess health status, needs, and disparities. In addition, it highlights the need to collect data as an integral part of the construction of indices. Small area data—differing only in scale from the more routinely collected large area data—are critical for understanding the urban microenvironment.
Measuring the urban environment
The urban milieu has produced its own set of indicators, many of which are tied to health determinants. They are in a separate sphere of research, however, largely because of a differing measurement methodology, but also because of the well-known complexities of associating specific environmental hazards with health. Recently, researchers estimated that almost 25 % of all disease burden can be attributed to the environment. The burden is estimated to be even greater—34 %—in children under 15 years of age, and to be of far greater consequence in LMICs compared to more developed countries [63, 64]. There is a growing need to be able to measure and use indicators of environmental health since they are a crucial link in the data and decision-making process (Ch. 3). The purpose of the indicators is to express the linkage between an environmental condition and a health effect relevant at the policy level, which may then facilitate effective decision-making.
Two general types of environmental health indicators have been described: exposure-based indicators and effect-based indicators (Ch. 3). Exposure-based indicators measure environmental exposures with established health effects, such as particulate matter in relation to respiratory disease. Effect-based indicators typically measure a health effect that is commonly associated with an environmental exposure: for example, diarrheal disease and drinking water quality. Corvalan and colleagues have suggested that environmental indicators must meet a dual standard: to be scientifically valid and politically relevant. The latter would include being related to conditions that can be changed, easy to understand, acceptable to all stakeholders, and temporally cogent.
A variety of frameworks has been developed to assist with indicator creation and use. The most commonly cited framework for environmental health indicators is the “Driving forces, Pressures, State, Exposures, Effects and Action” or DPSEEA framework. While based in part on the simpler pressure-state-response framework, this modified version has expanded to include the role of driving forces, which are thought to be the key components that push environmental processes forward. As presented by Briggs (Fig. 1 in his publication), the framework can provide a guide for the development of appropriate environmental health indicators for a range of situations. It also provides a tool to consider the various levels of environmental health interventions and how they may have an impact on the different components of the model, as provided in the “Action” component of the framework.
Over the last thirty years, multiple projects have been undertaken to develop environmental health indicators. A composite set of indicators has not been developed, although, as the comparisons of other indicator sets show, many of the sets overlap. Even where these indicators overlap, few have been specific to urban environments. Lawrence reviewed the body of work on environmental health indicators (Table 4) with a specific emphasis on those that have focused on cities. Lawrence puts forward a new research agenda for urban health indicators. He suggests that researchers “use indicators to identify sets of contextually defined components of each human settlement and its neighborhoods.” He also recommends identifying comparable sets of indicators that are useful for comparison across different types of “human settlements.” Finally, he stresses the need for spatial and temporal measures at the local level.
The compilation of environmental urban indicators has many features in common with the corresponding health indicators. There are many variations (Indicators) on several themes (Domains). A host of individual indicators have been considered, but many of them are potentially interchangeable. Little empirical information, however, is available on their co-variation or their exchangeability. The exact balance of environmental indicators, social and economic determinants, and health outcomes in creating Indices is still an open issue though there appears to be general agreement that all should be part of such an Index.
Looking ahead—geospatial measurement of health and health disparity
A complementary approach to the assessment of health disparities is the burgeoning field of geovisualization. The growing armamentarium of data and geographic tools has given rise to alternate methods for measuring disparities, some of which can be married to the indicators and indices just described. For example, measures of urban design (enclosure, scale, transparency, complexity) can be obtained directly from digital sources and used to define urban space that may house the disadvantaged. Remote sensing has been coupled with GIS methods (in Bangladesh, for example) to demonstrate concentrations of poverty and the heterogeneity within impoverished areas. Techniques for assessing access to parks and other environmental landmarks have been used to provide measures of the availability of health activities within an urban space. Google StreetView makes possible measures of local food availability with easy connection to population density and other factors that may affect disparities.
A series of studies from Australia demonstrates the potential melding of health and environmental indicators and geovisualization. Badland and colleagues identified 11 domains for “liveability” that included 61 usable indicators, and developed a framework that connected these indicators to social determinants of health. They applied this concept to demonstrate the connection between Public Open Spaces (POS) and the mechanisms by which they influence health. Similarly, a set of public transport indicators was developed, and their pathways of connection to population health were explored. This work-in-progress promises to bring environmental factors (open spaces, transport) together with health determinants in real space, and to serve as a complex metric for identifying health disparities.
Conclusions
Despite the plethora of domains and indicators there are substantial commonalities among the major projects that have attempted to characterize health and disparities. These domains deal largely with health, irrespective of geo-location, and are usually at the regional or national level. Those that focus on urban issues often include environmental markers that affect health as well. The commonalities suggest that investigators share a common set of priorities but differ over the available welter of detail. An important area for further investigation is to explore that common ground, and determine—empirically, if not theoretically—the extent of correlation and exchangeability among indicators. Local urban areas would then have flexibility in the formation of indices based on locally available data.
There are commonalities among the approaches to Index formation as well. Several techniques for amalgamation of indicators are available, from simple linear combination to more sophisticated mathematical transformations and combinations. Many of these methods are transparent, and would be available to practitioners at the local level as well.
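To make the two ends of that spectrum concrete, with each indicator first rescaled to the overall range, the simplest forms can be written as follows (notation added here for illustration, not drawn from the sources above):

x_i = \frac{v_i - v_i^{\min}}{v_i^{\max} - v_i^{\min}}, \qquad
I_{\mathrm{linear}} = \sum_{i=1}^{n} w_i x_i, \qquad
I_{\mathrm{geometric}} = \Big( \prod_{i=1}^{n} x_i \Big)^{1/n}

The linear form requires an explicit choice of weights w_i, while the geometric mean weights indicators equally but penalizes an area that performs poorly on any single indicator, one reason it has been adopted for composite indices such as the HDI and the UHI discussed above.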
Measures that use indicators and indices to demonstrate disparities have been more elusive. Though considerable statistical development has gone into measures of disparity (see Table 1), those measures are largely a calculation created after the fact. (An example of an exception would be the inequality-weighted Human Development Index, a valid and sophisticated measure, but one whose complexity hides raw differences.) The issue of demonstrating disparity re-invokes the question of scale. When applied globally, the disparity implicit in rank ordering of nations simply reports the difference between rich and poor. Attention to the detail within such ranking ignores Bryant’s admonition from 50 years ago. It ignores, as well, the spectrum of data and approaches required by the continuum from affluence to indigence. Issues of consummate importance for the latter (environmental quality, resource availability, public services, basic sanitation) have less immediacy for developed urban areas, though the microenvironment of some presumably affluent urban areas may well be substantially disadvantaged. Perhaps the real power of Indicators and Indices is to demonstrate disparity on the local level—a place where significant change may be possible. Locally collected data and simple, flexible tools for amalgamation, rather than fixed packages, may be a fruitful approach to understanding health disparity.
Abbreviations
- CEROI: Cities Environment Reports on the Internet
- GNP: Gross National Product
- HDI: Human Development Index
- HEART: Health Equity Assessment and Response Tool
- UNDP: United Nations Development Program
References
- 1.
Bryant J. Health and the Developing World. Ithaca and London: Cornell University Press; 1969.
- 2.
Harbison F, Myers CA. Education, manpower, and economic growth. New York: McGraw-Hill; 1964.
- 3.
Yang L. An inventory of composite measures of human progress. http://hdr.undp.org/sites/default/files/inventory_report_working_paper.pdf, 2015 (Accessed April 12, 2015).
- 4.
Vidaurre-Arenas M, Martinez-Piedra R. Health metrics network: a global partnership to improve access to information for health care practitioners and policy makers. Epidemiol Bull. 2005;26(2):1–8.
- 5.
Schneider MC, Castillo-Salgado C, Bacallao J, Loyola E, Mujica OJ, Vidaurre-Arenas M, et al. Métodos de medición de las desigualdades de salud. Rev Panam Salud Publica. 2002;12(6):398–415.
- 6.
Schneider MC, Castillo-Salgado C, Bacallao J, Loyola E, Mujica OJ, Vidaurre-Arenas M, et al. Methods for measuring health inequalities (Part I). Epidemiol Bull. 2004;25(2):12–4.
- 7.
Schneider MC, Castillo-Salgado C, Bacallao J, Loyola E, Mujica OJ, Vidaurre-Arenas M, et al. Methods for measuring health inequalities (Part II). Epidemiol Bull. 2004;25(2):12–4.
- 8.
Schneider MC, Castillo-Salgado C, Bacallao J, Loyola E, Mujica OJ, Vidaurre-Arenas M, et al. Methods for measuring health inequalities (Part III). Epidemiol Bull. 2005;26(2):12–5.
- 9.
Mackenbach JP, Kunst A. Measuring the magnitude of socio-economic inequalities in health: an overview of available measures illustrated with two examples from Europe. Soc Sci Med. 1997;44(6):757–71.
- 10.
Houweling TAJ, Kunst AE, Mackenbach JP. World Health Report 2000: inequality index and socioeconomic inequalities in mortality. Lancet. 2001;357:1671–2.
- 11.
Handcock M, Morris M. Relative distribution methods. Sociol Methodol. 1998;28:53–97.
- 12.
Handcock M, Morris M. Relative distribution methods in the social sciences. New York: Springer-Verlag, Inc.; 1999.
- 13.
Talih M. A reference-invariant health disparity index based on Renyi divergence. Ann Appl Stat. 2013;7(2):1217–43.
- 14.
Rossen LM, Talih M. Social determinants of disparities in weight among US children and adolescents. Ann Epidemiol. 2014;24:705–13.
- 15.
Flowers J, Hall P, Pencheon D. Public health indicators. Public Health. 2005;119:239–45.
- 16.
Etches V, Frank J, DiRuggiero E, Manuel D. Measuring population health: a review of indicators. Annu Rev Public Health. 2006;27:29–55.
- 17.
Diez-Roux A. Bringing context back into epidemiology: variables and fallacies in multilevel analysis. Am J Public Health. 1998;88(2):216–22.
- 18.
Fleischer NL, Diez-Roux A. Using directed acyclic graphs to guide analyses of neighbourhood health effects: an introduction. J Epidemiol Community Health. 2007;62:842–6.
- 19.
Saltelli A, Nardo M, Saisana J, Tarantola S. Composite indicators: the controversy and the way forward. (http://www.oecd.org/dataoecd/40/50/33841312.doc). 2005.
- 20.
European Commission Joint Research Centre. Composite indicators. https://ec.europa.eu/jrc/en/coin, 2013 (Accessed May 4, 2014).
- 21.
Nardo M, Saisana M, Saltelli A, Tarantola S, Hoffman A, Giovannini E. Handbook on constructing composite indicators: methodology and user guide. OECD Statistics Working Papers, 2005/3, OECD Publishing 2005.
- 22.
Nardo M, Saisana J, Saltelli A, Tarantola S. Tools for composite indicators building. from European Commission: Joint Research Centre. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.114.4806&rep=rep1&type=pdf, 2005 (Accessed March 10, 2015).
- 23.
Saisana J, Saltelli A, Tarantola S. Uncertainty and sensitivity analysis techniques as tools for the quality assessment of composite indicators. J R Stat Soc A. 2005;168(Part 2):307–23.
- 24.
Saltelli A, Funtowicz S. The precautionary principle: implications for risk management strategies. Int J Occup Med Environ Health. 2004;17(1):47–57.
- 25.
Tarantola S, Saltelli A. Composite indicators: the art of mixing apples and oranges. http://kolloq.destatis.de/2007/tarantola.pdf, 2007 (Accessed March 10, 2015).
- 26.
Paruolo P, Saisana M, Saltelli A. Ratings and rankings: voodoo or science? J R Stat Soc. 2012;176(3):609–34.
- 27.
United Nations Development Programme. Human Development Report 2013. http://hdr.undp.org/en/2013-report, 2013 (Accessed March 10, 2015).
- 28.
Munda G. Social multi-criteria evaluation: methodological foundations and operational consequences. Eur J Oper Res. 2004;158:662–77.
- 29.
Saltelli A, Tarantola S, Chan K. A role for sensitivity analysis in presenting the results from MCDA studies to decision makers. J Multi Crit Decis Anal. 1999;8(3):139–45.
- 30.
Murray CJL, Lopez AD. Production and analysis of health indicators: the role of academia. PLoS Med. 2010;7(11):e1001004.
- 31.
World Health Organization: Centre for Health Development. Urban Health Equity Assessment and Response Tool (Urban HEART). http://www.who.int/kobe_centre/measuring/urbanheart/en/, 2014 (Accessed March 10, 2015).
- 32.
Michigan Department of Community Health. Michigan Critical Health Indicators. http://www.michigan.gov/mdch/0,4612,7-132-2944_5327_47055---,00.html, 2011 (Accessed March 10, 2015).
- 33.
Wall M, Farhang L, Bhatia R. Sustainable Communities Index. http://www.sustainablecommunitiesindex.org/, 2012 (Accessed March 10, 2015).
- 34.
Office of Disease Prevention and Health Promotion, US Department of Health and Human Services. Health People 2020. http://www.healthypeople.gov/2020/default.aspx, 2015 (Accessed March 10, 2015).
- 35.
US Department of Health and Human Services. Community Health Status Indicators. http://wwwn.cdc.gov/communityhealth/, 2015 (Accessed March 10, 2015).
- 36.
United Nations Environment Program (UNEP). Cities Environment Reports on the Internet (CEROI). www.unep.org/ieacp/files/pdf/Geo_Cities_Manual_ECCA.pdf, 2014 (Accessed May 2, 2015).
- 37.
U.S. Department of Health and Human Services, Centers for Disease Control and Prevention. Environmental Hazards and Health Effects: Environmental Public Health Indicators Project. http://ephtracking.cdc.gov/showIndicatorsData.action, 2013 (Accessed March 10, 2015).
- 38.
California Department of Health Services, Environmental Health Investigations Branch. California Environmental Health Indicators. http://www.ehib.org/papers/health_indicators.pdf, 2002 (Accessed March 10, 2015).
- 39.
Office on Women’s Health, US Department of Health and Human Services. Quick Health Data Online. http://www.healthstatus2020.com/index.html, 2014 (Accessed March 10, 2015).
- 40.
UNICEF. Data: Monitoring the situation of Children and Women. http://data.unicef.org/index.php?section=unicef_aboutus, 2014 (Accessed March 10, 2015).
- 41.
World Health Organization. World Health Statistics 2013. Indicator compendium. http://www.who.int/gho/publications/world_health_statistics/WHS2013_IndicatorCompendium.pdf, 2013 (Accessed March 10, 2015).
- 42.
Opdycke S, Miringoff M-L. The Social Health of the States 2008. http://iisp.vassar.edu/SocialHealthofStates.pdf, 2008 (Accessed March 10, 2015).
- 43.
The World Bank. World Development Indicators. http://data.worldbank.org/products/wdi, 2014 (Accessed March 10, 2015).
- 44.
Global Cities Institute. Global City Indicators. http://www.cityindicators.org/Default.aspx#, 2014.
- 45.
Michigan Department of Community Health. Index of Urban Prosperity. http://blogpublic.lib.msu.edu/index.php/state_of_michigan_cities_an_index_of_urb?blog=5, 2015 (Accessed March 10, 2015).
- 46.
Wolman HL, Ford CC, Hill E. Evaluating the success of urban success stories. Urban Studies. 1994;31(6):835–50.
- 47.
United Nations Development Programme. Human Development Indices: technical notes. http://hdr.undp.org/sites/default/files/hdr14_technical_notes.pdf, 2015 (Accessed March 10, 2015).
- 48.
Atkinson AB. On the measurement of inequality. J Econ Theory. 1970;2:244–63.
- 49.
Bertelsmann Stiftung. Bertelsmann Transformation Index. http://www.bertelsmann-transformation-index.de/en/, 2015 (Accessed March 10, 2015).
- 50.
Transparency International. Corruption Perceptions Index 2010. http://www.transparency.org/cpi2014/in_detail, 2015 (Accessed March 10, 2015).
- 51.
Townsend P. Deprivation. Int Soc Pol. 1987;16(2):125–48.
- 52.
Carstairs V, Morris R. Deprivation: explaining differences in mortality between Scotland and England and Wales. Br Med J. 1989;299(6704):886–9.
- 53.
Carstairs V. Deprivation indices: their interpretation and use in relation to health. J Epidemiol Community Health. 1995;49(Suppl2):S3–8.
- 54.
Jarman B. Identification of underprivileged areas. Br Med J. 1983;286:1705–9.
- 55.
Sivakuman M. Human Deprivation Index: a measure of multidimensional poverty. MPRA Paper No. 22337, posted 26. April 2010 / 13:27. http://mpra.ub.uni-muenchen.de/22337/, 2010 (Accessed March 10, 2015).
- 56.
Messer LC, Laraia BA, Kaufman JS, Eyster J, Holzman C, Culhane JEI, et al. The development of a standardized neighborhood deprivation index. J Urban Health. 2006;83(6):1041–62.
- 57.
Rey G, Jougla E, Fouillet A, Hemon D. Ecological association between a deprivation index and mortality in France over the period 1997 – 2001: variations with spatial scale, degree of urbanicity, age, gender and cause of death. BMC Public Health. 2009;9. doi:10.1186/1471-2458-9-33.
- 58.
Kaltenthaler E, Maheswaran R, Beverley C. Population-based health indexes: a systematic review. Health Policy. 2004;68(2):245–55.
- 59.
Shane AM, Graedel TE. Urban environmental sustainability metrics: a provisional set. J Environ Plann Manage. 2000;43(5):643–63.
- 60.
Stephens C, Akerman M, Avle S, Maia PB, Companario P, Doe B, et al. Urban equity and urban health: using existing data to understand inequalities in health and environment in Accra, Ghana and Sao Paulo, Brazil. Environ Urban. 1997;9:181–202.
- 61.
Rothenberg RB, Weaver SR, Dai D, Stauber C, Prasad A, Kano M. A flexible urban health index for small area disparities. J Urban Health. 2014;91(5):823–35.
- 62.
Spielman SE, Yoo EH. The spatial dimensions of neighborhood effects. Soc Sci Med. 2009;68:1098–105.
- 63.
Pruss-Ustun A, Corvalan C. How much disease burden can be prevented by environmental interventions? Epidemiology. 2007;18(1):167–78.
- 64.
Pruss-Ustun A, Bonjour S, Corvalan C. The impact of the environment on health by country: a meta-synthesis. Environmental Health 2008;7(7). doi:10.1186/1476-069X-7-7.
- 65.
Corvalan C, Briggs D, Zielhuis G. Decision-making in Environmental Health. London: Taylor & Francis Group; 2000.
- 66.
Briggs D. Environmental Health Indicators: Framework and Methodologies. WHO/SDE/OEH/99.10. http://whqlibdoc.who.int/hq/1999/WHO_SDE_OEH_99.10.pdf, 1999 (Accessed May 4, 2014).
- 67.
Lawrence RJ. Urban environmental health indicators: appraisal and policy directives. Rev Environ Health. 2008;23(4):299–326.
- 68.
Purciel M, Neckerman KM, Lovasi GS, Quinn JW, Weiss C, Bader MDM et al. Creating and validating GIS measures of urban design for health research. J Environ Psychol. 2009;29:457–66.
- 69.
Angeles G, Lance P, Barden-O’Fallon J, Islam N, Mahbub AQM, Nazem NI. The 2005 census and mapping of slums in Bangladesh: Design, select results and application. International J Health Geographics 2009;8(32). doi:10.1186/1476-072X-8-32.
- 70.
Maroko AR, Maantay JA, Sohler NL, Grady KL, Arno PS. The complexities of measuring access to parks and physical activity sites in New York City: A quantitative and qualitative approach. International Journal of Health Geographics 2009;8(34). doi:10.1186/1476-072X-8-34.
- 71.
Clarke P, Ailshire J, Melendez R, Bader M, Morenoff J. Using Google Earth to conduct a neighborhood audit: reliability of a virtual audit instrument. Health Place. 2010;16:1224–9.
- 72.
Badland H, Whitzman C, Lowe M, Davern M, Aye L, Butterworth I, et al. Urban liveability: emerging lessons from Australia for exploring the potential for indicators to measure the social determinants of health. Soc Sci Med. 2014;111:64–73.
- 73.
Villanueva K, Badland H, Hooper P, Koohsari MJ, Movoa S, Davern M, et al. Developing indicators of public open space to promote health and well being in communities. Appl Geogr. 2015;57:112–9.
- 74.
Badland H, Mavoa S, Villanueva K, Roberts R, Davern M, Giles-Corti B. The development of policy-relevant transport indicators to monitor health behaviours and outcomes. J Transp Health. 2015. Epub. http://dx.doi.org/10.1016/j.jth.2014.07.005i.
- 75.
Claussen B, et al. EURO-URHIS 2, Urban Health Monitoring and Analysis System to Inform Policy. http://www.urhis.eu/, 2010 (Accessed March 10, 2015).
- 76.
United Nations Human Settlements Programme. Urban indicators guidelines: monitoring the habitat agenda and the millennium development goals. http://ww2.unhabitat.org/programmes/guo/documents/urban_indicators_guidelines.pdf, 2004 (Accessed March 10, 2015).
- 77.
United Nations Development Programme. Human Development Reports. http://hdr.undp.org/en/statistics/hdi, 2014 (Accessed March 10, 2015).
- 78.
ICLEI. Local Governments for Sustainability. http://www.iclei.org/our-activities/our-agendas/sustainable-city.html, 2014 (Accessed March 10, 2015).
- 79.
European Commission. European Common Indicators. http://ec.europa.eu/environment/urban/common_indicators.htm, 2014 (Accessed March 10, 2015).
- 80.
UK Department for Environment Food and rural Affairs. Sustainable Development Indictors. https://www.gov.uk/government/collections/sustainable-development-indicators, 2014 (Accessed March 10, 2015).
- 81.
WHO European Healthy Cities Network. http://www.euro.who.int/en/health-topics/environment-and-health/urban-health/activities/healthy-cities/who-european-healthy-cities-network, 2014 (Accessed March 10, 2015).
Acknowledgements
This study was supported by a grant from the WHO Center of Health Development (the WHO Kobe Center). Research support was also provided by the National Institute on Minority Health and Health Disparities of the National Institutes of Health under award number 1P20MD004806. The authors would like to acknowledge the participation of Jeremy Crampton, PhD, in the initial versions of this work. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health or the World Health Organization.
Additional information
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
RR carried out the initial literature review and wrote a portion of the original draft. CS wrote a portion of the initial draft and provided environmental assessment. SW edited the draft and confirmed the references. DD edited the draft and provided consultation on geographic issues. AP and MK helped to conceptualize the review, provided fugitive references, and edited the final draft. All authors approved the final draft.
Rights and permissions
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
About this article
Cite this article
Rothenberg, R., Stauber, C., Weaver, S. et al. Urban health indicators and indices—current status. BMC Public Health 15, 494 (2015). https://doi.org/10.1186/s12889-015-1827-x
Nicole Alvarez, LMHC
Our stories are comprised of unique experiences that both consciously and unconsciously shape who we are. I believe that by seeking to understand the ways in which these experiences have influenced us, a pathway for growth and healing is unveiled. My goal is to create a space that is welcoming, warm and nonjudgmental. One that supports the development of self and will encourage each client to remain steadfast in their journey towards living their most authentic life.
My integrative approach to treatment allows for a uniquely tailored therapeutic experience. One where your individual needs will be taken into careful consideration and handled with the utmost care. With your commitment to this process, my hope is that together, we can work to understand what hinders you and collaboratively seek solutions that leave you in alignment with your identified goals.
While I have previously worked closely with those struggling with substance use disorders and severe and persistent mental illness, I am most passionate about working with problems related to varying types of trauma, the management of anxiety and depression and issues concerning the LGBTQ+ community. | https://coopertherapy.com/staff/nicole-alvarez-lmhc/ |
- Why Be Moral? Comments on Yong Huang's Book on the Cheng Brothers
In Why Be Moral: Learning from the Neo-Confucian Cheng Brothers, Yong Huang presents a comparative study of the moral philosophy of the Cheng brothers as an example of how comparative philosophy should be done: engaging with contemporary philosophical problems and proposing solutions that can be gleaned from the ideas of ancient Chinese philosophers. His analysis provides a paradigm for comparative philosophy. I think this is the right way to do comparative philosophy—to focus on problem solving rather than textual comparison. I am very impressed both with the breadth of his knowledge of Western moral philosophy and with the depth of his analysis of the moral philosophy of the Cheng brothers. Since I share his view on how to do comparative philosophy, I will now engage Huang's book philosophically as well. In what follows, I will focus on four of the philosophical problems raised in this book, and examine whether the solutions presented by Huang on behalf of the Cheng brothers are really good solutions. I will also briefly touch on some interpretative issues at the end.
Why Be Moral?
The question considered in chapter 1 of the book is: why should I be moral? Huang says, "Obviously, this is a question raised by an egoist who is first of all concerned with his or her self-interest" (p. 27). How could an ethicist provide any convincing answer for such a person? Huang first discusses the nature of the question "Why should I do what I should do?" and he dismisses the following suggestions: (1) that the question is "illegitimate" because it is simply asking a tautological question "Why should I be moral?" (p. 31; quoting Toulmin 1964, p. 162), and (2) that the question is an unreasonable one because it is "self-contradictory"—the person asking this question is taking morality "as a mere means to an ulterior end," but morality should be the end in itself (p. 32; quoting Bradley 1935, pp. 61–62).
According to Huang, the question is really about moral motivation—"What motivations do or can I have to be moral?" He says, "The person who asks the question is not a moral skeptic. She knows clearly that she should be moral but lacks the motivation to be so." A person who is motivated to be moral will never ask the question "Why should I be moral?" (p. 33). Huang thinks that this is a perfectly legitimate question. After considering how Western ethical theorists (Plato, Hobbes, Hume, Aristotle, and Kant) have failed to provide a sufficiently convincing answer, Huang argues that the Chengs' answer is "more promising." The answer, in a nutshell, is that I should be moral because I can find joy in being moral.
This answer, however, is not really satisfactory. To begin with, the kind of joy that Confucius and his prize student Yan Hui found in learning, practicing what one learns, and realizing the virtue of humaneness in oneself is not appreciated by most other people—including the other students who were following Confucius at the time. The Chengs acknowledge that when "living in poverty, while other people would be worried, only Yan Hui can find joy. This is because of his [virtue of humaneness]" (p. 45). In other words, not everyone will find joy in being moral. As a result, only someone who can be joyful in being moral will find the answer satisfactory. And yet this person would not have asked the question to begin with, since he or she is already experiencing the supreme joy. As Huang puts it, "One needs to first have knowledge and only then can one find joy" (p. 51). It would thus seem that finding joy is the outcome of one's being moral (having the knowledge of virtue and possessing the constant virtue of humaneness), rather than a goal with which to motivate oneself to be moral.
To be more precise, we need to ask what constitutes joy. Huang explains that the Confucian notion of joy is different from the common conception of joy. For the Cheng brothers, "joy" means "to be... | https://muse.jhu.edu/article/724179/ |
For patients with lumbosacral disk herniation (LDH) being considered for epidural steroid injection (ESI) to treat back and radicular pain, transforaminal ESI (TFESI) may be superior to caudal ESI (CESI) in terms of clinical outcomes, according to findings published in The Spine Journal, although results from this systematic review and meta-analysis did not reach statistical significance.
Although CESI is effective and more commonly used for the treatment of LDH-related pain, the use of TFESI may offer better outcomes because it allows the delivery of steroid medications directly to the target area.
For this analysis, the investigators searched MEDLINE, EMBASE, KoreaMed, and Cochrane review databases for studies published through July 2017 that examined the efficacy of TFESI vs CESI for the treatment of LDH-related leg pain and low back pain. From 6711 articles reviewed, 6 (4 randomized controlled trials [RCT] and 2 non-RCT) were selected for qualitative analysis; 4 (3 RCT and 1 non-RCT) of these studies were also included in a quantitative analysis to evaluate statistical significance and effect size.
Outcomes of interest included pain — as assessed with the visual analog scale or numeric rating scale — and functional limitation, which was measured using the Oswestry disability index. The Grading of Recommendations Assessment, Development and Evaluation method was used to determine level and quality of evidence.
In the qualitative synthesis, 4 of the 6 chosen studies indicated that TFESI was superior to CESI, 1 study found CESI to be superior to TFESI, and 1 study found the 2 techniques comparable. When the techniques were compared by quantitative meta-analysis, investigators found TFESI to be clinically superior to CESI at short- and long-term follow-up (1 month and 6 months, respectively), although these results were not found to be statistically significant.
The mean difference in pain scores over 3 studies was 1.43 points (95% CI, -0.37 to 3.24; P =.12) at 1 month and 0.14 points (95% CI, -0.54 to 0.81; P =.69) at 6 months, both in favor of TFESI. The mean difference in functional improvement, as assessed with the Oswestry disability index, was 0.25 (95% CI, -10.33 to 10.83; P =.96) at 1 month and 2.30 (95% CI, -13.82 to 18.41; P =.78) at 6 months, both favoring TFESI over CESI. Heterogeneity was high for all measures, ranging from I2 = 88% to I2 = 99%.
Imprecision and inconsistency across the studies examined resulted in a low level of evidence. The investigators suggested that increasing the amount of injectate beyond that traditionally used might allow CESI to achieve results comparable with those seen with TFESI.
Study limitations included high heterogeneity, which lowered the statistical power of the meta-analysis, and weak level of evidence. | https://www.clinicalpainadvisor.com/back-spine-pain/transformainal-epidural-steroid-injection-superior-to-caudal-in-lumbosacral-disk-herniation/ |
The toxicity of organophosphates (OPs) is well described. They cause neuronal overstimulation, which can lead to death. OPs have become tools of suicide and weapons of war due to their ease of access in areas throughout the globe. Thankfully, there are drugs that can inhibit the activity of OPs at their source as well as treat certain symptoms related to exposure. However, current drugs can only treat those who have already been affected. They do not provide preventative treatment and cannot destroy existing stockpiles. The overabundance of OPs exists because of their effectiveness as herbicides, fungicides and insecticides. Presently, usage of many organophosphates has been banned in many countries, but they are still used in certain agricultural areas. This project proposes a novel type of bioscavenger that molecularly sorts OPs and their byproducts towards complete enzymatic detoxification. This research precedes the eventual goal of using DNA as a biomolecular scaffold for multienzyme structures by modulating OP binding to increase the efficacy of multistep enzymatic degradation. The studies presented here show that short sequences of DNA are able to bind organophosphates and that binding is probably sequence dependent. Insight from these experiments will lead to a better understanding of the molecular landscape of substrate-DNA interactions.
Renewable chemical production via microbial fermentation is a growing and critical industry. To overcome limitations in the productivity of commonly used hosts, nonconventional microbes that have uniquely advantageous metabolisms and phenotypes are needed. One such organism is the yeast Yarrowia lipolytica, which has a high native capacity to synthesize lipids and grow on a range of substrates. A significant bottleneck in engineering Y. lipolytica is a lack of synthetic biology tools for multiplexed genome editing and rapid strain development. To overcome these limitations, we have developed CRISPR-Cas9 based tools for (i) targeted gene disruption, (ii) expression cassette integration into predefined genomic loci, (iii) repression of native genes using CRISPR interference, (iv) activation of cryptic genes using CRISPR activation, and (v) genome-wide knockout screening. Straightforward adaptation of CRISPR-Cas9 strategies used in other eukaryotes had limited success, resulting in low gene disruption rates in Y. lipolytica. To improve this, we designed novel synthetic RNA polymerase III promoters for sgRNA expression, the best of which achieved gene disruption rates >90% and genome integration rates >50%. CRISPR-mediated markerless genome integration was used to make a strain of Y. lipolytica capable of producing over 21 mg/g DCW of the valuable carotenoid lycopene. We also adapted the CRISPR-Cas9 system for gene repression and significantly increased homologous recombination by transiently repressing genes involved in nonhomologous end-joining. The system was further adapted for gene activation and used to express natively silent β-glucosidase genes to enable growth on cellobiose. To accelerate strain engineering, we designed and constructed a library of plasmids expressing ~48,000 sgRNAs that target all protein coding sequences in the genome with 6-fold coverage. By transforming this pooled library into Y. lipolytica strains expressing Cas9 and quantifying sgRNA enrichment or depletion after outgrowth, genes important for growth under different conditions can be identified. By performing screening experiments in different strain backgrounds, determinants of CRISPR-Cas9 function can be characterized and industrially relevant phenotypes can be selected for.
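In screens of this kind, enrichment or depletion is typically scored by comparing each sgRNA's read count before and after outgrowth. A minimal sketch of that scoring step is given below in Python, assuming simple library-size normalization and a log2 fold change with pseudocounts; the guide names and counts are hypothetical, and this is not the analysis pipeline used in the work described above.

import math

def log2_fold_changes(counts_before, counts_after, pseudocount=0.5):
    # Normalize each sgRNA count to its library size, then report the
    # log2 ratio of frequencies after versus before selection.
    total_before = sum(counts_before.values())
    total_after = sum(counts_after.values())
    scores = {}
    for guide, c0 in counts_before.items():
        c1 = counts_after.get(guide, 0)
        f0 = (c0 + pseudocount) / total_before
        f1 = (c1 + pseudocount) / total_after
        scores[guide] = math.log2(f1 / f0)  # < 0 depleted, > 0 enriched
    return scores

# Hypothetical counts for three guides targeting one gene
before = {"guide_1": 120, "guide_2": 95, "guide_3": 110}
after = {"guide_1": 15, "guide_2": 22, "guide_3": 400}
print(log2_fold_changes(before, after))

Guides that drop out of the population during growth (negative scores) point to genes important for fitness under the condition tested, while consistently enriched guides suggest a growth advantage when the target is disrupted.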
Production of renewable, non-toxic and environmentally friendly biofuels and chemicals has been the focus of metabolic engineering. To achieve high yield production by microbial cell factories, it is necessary to identify highly active biocatalysts and engineer efficient biosynthetic pathways. Alcohol-O-acetyl/acyltransferase (AATase) is responsible for synthesis of fatty acid ethyl esters (FAEEs) by condensing acetyl/acyl-CoAs and ethanol in yeast and plants. This work demonstrated that S. cerevisiae is a suitable host for FAEE production, because AATase has higher specific enzymatic activity compared to that in E. coli. To enable the rapid profiling of AATase activities, a spectrophotometric-based coupled enzyme assay was developed for high throughput screening of AATase enzymatic activity. With this assay, a library of AATases was characterized for substrate specificity towards acyl-CoAs, and Atf-Sl from tomato was discovered with high activity towards various alcohols. Enzyme co-localization and substrate channeling are strategies to improve enzyme cascade reaction rate and yield. A protein-based scaffold based on oleosin-cohesin-dockerin was developed for co-localizing multienzyme pathways on the surface of intracellular lipid droplets (LDs). The upstream enzyme in yeast ester biosynthesis was recruited to LDs, the native localization of the terminal reaction step, AATase.
Fluorescence microscopy studies show that most of the endogenous AATases in S. cerevisiae are localized to LDs. To understand the localization mechanism and trafficking pathway of AATase, structure-function analysis and protein-protein interaction studies were performed for Eht1 and its paralogue Eeb1, which arose from genome duplication. N- and C-terminal regions of Eht1 are necessary for initially targeting to the ER membrane and subsequent sorting to LDs. Eht1 is a peripheral membrane protein on the ER with both termini exposed to the cytosol. The translocation of Eht1 from the ER to LDs is related to translocons on the ER. Immunoprecipitation and MS analysis have identified possible physical protein interactions of Eht1 that may assist in the trafficking from the ER to LDs. Combined, this work not only engineers AATase in S. cerevisiae by investigating highly active AATase and spatially organizing multienzyme pathways, but also elucidates the mechanism of AATase ER-dependent LD targeting.
P450s have significant industrial relevance in the field of biotechnology and sustainability. In cells, they allow for the utilization of various substrates for energy usage and storage. Their ability to oxidize various substrates can also be beneficial to create anything from biofuel precursors to oleochemicals. As Yarrowia lipolytica produces a wide variety of P450s that catalyze disparate reactions, it presents itself as an ideal target for enzyme production. Genetic engineering methods for Y. lipolytica have made enzyme overexpression in the yeast more attainable. This project aims to utilize genetic engineering tools available to Y. lipolytica in an attempt to overproduce P450s. Specific genes relating to protein expression will be targeted. This research hopes to prove the viability and capacity of Y. lipolytica as a large-scale producer of membrane-associated proteins by promoting ER proliferation through altered carbon flux in the organism. The experiments presented aim to quantify the potential of this microorganism with studies involving protein expression and cell morphology.
Acetate esters are ubiquitous in nature and broadly used for a range of applications, including as solvents, aromas, and flavours in the polymer, cosmetics, pharmaceutical and beverage industries. In yeast and fruit ripening, alcohol-o-acetyltransferase (AATase, E.C. 2.3.1.84) is responsible for the biosynthesis of a range of short- and medium-chain esters from an alcohol and an acetyl-CoA. Microbial production of acetate esters has been the focus of a number of metabolic engineering efforts; however, a poor understanding of the kinetic characteristics of the AATase family has limited the success of acetate ester biosynthesis via metabolic engineering. The overall goal of the study is to work towards the development of a biosynthesis pathway for the production of acetate esters from the metabolic engineering of AATase enzymes. These studies include 1) activity screening of AATase orthologs from Saccharomyces and non-Saccharomyces yeasts and various fruit species; 2) observation of the acetate ester effect on cell cultures; and 3) substrate channeling simulations of coupled enzyme complexes. These studies of the AATase family can lead to a better understanding of these enzymes and provide insight into the selection of the most suitable candidate to develop a biosynthetic pathway for acetate ester production.
The threat of climate change and a recent shift in popular opinion towards sustainable energy and chemicals have fueled the field of biotechnology for renewable chemicals production. To achieve process feasibility and compete with fossil-based processes, microbial production units are required to synthesize products at high titers and productivities while utilizing cheap and sustainable substrates. To fulfill these requirements, careful selection of the host organism is essential. The emergence of efficient genome editing tools has enabled engineering of thus-far intractable organisms, and allows for the selection of a host organism based on a desired phenotype that is beneficial for the process. The yeast Kluyveromyces marxianus was chosen because of its natural capacity to produce high amounts of ethyl acetate. Other characteristics such as fast growth kinetics, thermotolerance and the ability to metabolize various carbon sources make this host especially interesting for industrial applications. The development of an efficient CRISPR-Cas9 system in K. marxianus allowed us to interrogate the role of alcohol acetyltransferases and alcohol dehydrogenases in volatile metabolite production. We identified Eat1 as the critical enzyme for acetate ester production, and found that mitochondrial localization of Eat1 is essential for high ester production yields. Overexpression of Eat1 significantly increased ester production, indicating that this step is the bottleneck of the reaction. To further increase ester production, TCA cycle flux was slowed down through CRISPR interference-mediated knockdown of the TCA cycle and electron transport chain.
Chemical biosynthesis with enzymes and microorganisms holds great promise for renewable production of chemicals, fuels, and therapeutics. To achieve high productivity, identification of active enzymes and engineering of efficient metabolic or synthetic pathways are essential for large-scale production. Enzyme co-localization orchestrated by synthetic DNA and protein scaffolds has been shown to improve pathway yields in vitro and in vivo. However, investigations of how a DNA scaffold affects the activity of an enzyme-DNA nanostructure, and tools for enzyme co-localization on intracellular membranes, remain limited.
Using a model system of horseradish peroxidase and a multi-valent DNA scaffold, we demonstrated that DNA-conjugated horseradish peroxidase (HRP) activity can be enhanced by tuning the binding between substrates and DNA. The concept extracted from this work can be extended to the rational design of enzyme activity toward given substrates.
Biological production of esters can be accomplished by the enzymatic reaction catalyzed by alcohol-O-acetyltransferase (AATase), with condensation of acyl-CoA and alcohol. Ethyl acetate synthesized by Atf1, an AATase, in S. cerevisiae relies on the availability of acetyl-CoA metabolized by aldehyde dehydrogenase (Ald6) and acetyl-CoA synthetase (Acs1) from acetaldehyde and ethanol produced by alcohol dehydrogenase (Adh). Both Ald6 and Acs1 have been shown to localize to the cytosol or mitochondria, whereas Atf1 is found on the endoplasmic reticulum (ER) and lipid droplets (LDs). To engineer enzyme co-localization of Ald6, Acs1, and Atf1, we first elucidated the molecular transport of Atf1 to the ER and LDs, and the essential domains required for membrane association. We then developed an Oleosin-Cohesin-Dockerin-based synthetic protein scaffold that functionally localizes Ald6 and Acs1 to Atf1 on LDs. Such intracellular organization of the engineered pathway has improved the yield of ethyl acetate production. In addition, we also developed a spectrophotometric-based high throughput screening assay for determination of AATase activity and discovered a previously unexplored Atf-Sl from tomato with high activity toward a diverse set of alcohols and acyl-CoAs. This work not only broadens our understanding of LD biology, but also expands our capabilities to control intracellular localization of enzymes for efficient chemical conversion and to rapidly investigate the enzyme activity of AATases.
Ghost shrimp are interesting, low-maintenance aquatic pets. Also known as glass shrimp, they are most recognizable for their translucency. They’re fairly hardy, and you’ll just need to ensure the water’s temperature, chemical, pH, and oxygen levels are within healthy ranges. While they only live about a year, they tend to breed rapidly, so establishing a long-term colony is super easy!
Steps
Method 1 of 3: Setting up the Tank
1. Keep your shrimp in a 5 to 10 gallon (19 to 38 L) aquarium. Choose a tank no smaller than 5 gallons (19 L) for your pets. A larger tank is preferable if you're raising a large number of shrimp. As a rule of thumb, the tank should hold 1 gallon (3.8 L) of water for every 10 ghost shrimp it houses.
- Shop for tanks for aquatic pets online or at a pet store. Go with a tank that has a secure lid. Believe it or not, ghost shrimp can jump out of the water and escape!
- If you have an existing aquarium and want to add shrimp to it, keep in mind shrimp don’t do well with most fish species. Unless you’re raising the shrimp to feed your fish, keep them in a tank with other shrimp, snails, and docile fish, such as Cory catfish.
2. Install a sponge filter or use a filter with a covered intake. Even though ghost shrimp do much of the cleaning themselves, a filter is necessary for a healthy aquarium. For a smaller tank, use an internal sponge filter, which doesn’t generate a strong flow or pose the risk of sucking up shrimp.
- For a larger tank, go with an external aquarium filter with a sponge cover over the intake. That way, shrimp won’t accidentally get sucked into the filter.
- If you go with an external filter for a larger tank, choose one that changes 3 to 5 times the amount of water in your tank per hour. If you're not sure which product to buy, head to the pet store and ask an employee for recommendations.
3. Use an air pump to add oxygen to the water. Even if you’re using an external tank filter, it’s best to install an additional air pump, which you can find online and at pet stores. Ghost shrimp need high oxygen levels in order to breed and shed their exoskeletons.
- Keeping live plants in the tank can also help oxygenate the water.
4. Line the tank with 1 to 2 in (2.5 to 5.1 cm) of gravel and sand. Purchase chemical and dye-free aquarium gravel and sand at a pet store. Before adding it to the tank, place the sand and gravel in a fine sieve and rinse it thoroughly under running water. Add coarser gravel to the bottom of the tank, then cover it with finer gravel or sand.
- Ghost shrimp are sensitive to chemicals, dust, and debris, so be sure to rinse away any impurities before lining the tank.
- Add the gravel to the tank gently to avoid damaging the glass.
5. Add aquatic plants and hiding spots. Live plants will add oxygen to the water, promote healthy bacteria growth, and add aesthetic appeal to your aquarium. Purchase aquatic plants at the pet store (don't use wild specimens), and ask a store employee for help choosing species that are safe for shrimp.
- You could also put a cave or other decorative hiding spots in your aquarium. In addition to leafy aquatic plants, consider adding moss to the tank. Moss is low maintenance and will provide food for your shrimp.
6. Place a heater in the tank to keep the temperature around 75 °F (24 °C). Ghost shrimp can tolerate water temperatures between 65 and 85 °F (18 and 29 °C), but they do best in water that’s around 75 °F (24 °C). To maintain this temperature, purchase an aquarium heater and monitor the tank’s temperature with a thermometer.
- Look online or at your local pet store for an aquarium heater and thermometer. The right heater depends on the size of your tank. A 50-watt heater should do the trick for a 10 gallon (38 L) tank. For other sizes, use this calculator to determine the wattage your heater needs: https://aquariuminfo.org/volumecalculator.html.
Method 2 of 3: Adding Your Shrimp to the Tank
1. Cycle the tank for 2 to 8 weeks before adding the shrimp. Fill the tank with warm tap water, then add a few flakes of fish food or store-bought ammonia labeled for fishless cycling. Using aquarium water test strips, check the ammonia level in the tank after 3 to 4 days. Look for an ammonia level between 2 and 4 ppm (parts per million).
- Then, after 1 to 2 weeks, test for nitrites. Look for nitrite levels to spike, then drop after a few days to 0 ppm. When nitrite levels drop, nitrate levels should increase. After 2 to 8 weeks, ammonia and nitrite levels should stabilize at 0 ppm, and nitrate levels should be under 2 ppm.
- Cycling the tank encourages healthy bacteria to grow. These bacteria consume ammonia and nitrite, which are toxic to ghost shrimp and other aquatic pets.
2. Place the shrimp and the water from the pet store bag in a bowl. When you’re ready to introduce the shrimp to their new home, open the travel bag or container provided by the pet store. Carefully pour the shrimp and the water from the bag into a fishbowl or bucket.
- After adding the shrimp and water to the bowl, it should only be about half full. There needs to be enough room to add more water, so choose a large enough bowl.
3. Siphon water from the tank into the bowl. Place the bowl with the shrimp next to the tank. Dip a flexible tube into the tank, and twist a rubber band tightly around the other end. Lower the end with the rubber band over the bowl with the shrimp, and allow water to slowly drip into the bowl.
- Gravity will siphon water through the tube from the tank into the bowl. Monitor the water flow and, if necessary, tighten the rubber band to slow the drip. Allow water to drip into the bowl for about 30 minutes to slowly acclimate the shrimp to their new water's chemistry.
4. Transfer the shrimp to the tank with a soft mesh net. After acclimating the shrimp for 30 minutes, gently scoop up a few of them with a soft mesh net. Carefully release the shrimp into the tank, and repeat the steps until you’ve transferred all of them from the bowl to the tank.
- Don’t just dump the water from the bowl into your tank, especially if you’re adding the shrimp to an existing aquarium. Water from the pet store may contain parasites and bacteria that could contaminate your tank.
Method 3 of 3: Keeping Your Shrimp Healthy
1. Offer store-bought pellets or small bits of boiled vegetables. Ghost shrimp are not picky eaters. Look for store-bought shrimp pellets online and at pet stores. Additionally, you can feed your pets small amounts of boiled vegetables, such as zucchini or spinach.
- Ghost shrimp will also munch on waste, algae, and other matter in the tank.
2. Feed your ghost shrimp a small amount of food twice a day. You only need to feed your shrimp a tiny amount of food at a time. About 1 to 2 pea-sized amounts of vegetable matter or store-bought shrimp pellets can sustain 5 or 6 adult shrimp for a day.
- If you feed your shrimp store-bought pellets, check the instructions for the recommended amount to feed your pets.
- Watch your shrimp as they eat. Since their bodies are translucent, you’ll be able to see food make its way through their digestive systems!
3. Change 30% of the water once a week. Use a flexible tube or vacuum siphon to remove about 30% of the tank’s water. Be sure not to suck up any of your shrimp with the siphon. Then add an equal amount of clean tap water to the tank.
- Make sure the water temperature is around 75 °F (24 °C). If you’re only keeping shrimp in the tank, you shouldn’t need to do much more maintenance than water changes. However, if there are larger fish in the tank, periodically remove waste with a siphon vacuum or brush.
- Test your tap water before adding it to the tank. It should be free of heavy metals and chlorine, and ammonia and nitrite levels should be 0 ppm. If necessary, treat your water with a dechlorinator, which you can buy at the pet store, or use bottled or filtered water.
4. Choose other shrimp species, snails, or small, docile fish for tank mates. Ghost shrimp live well with other species of freshwater shrimp and non-aggressive aquatic animals, such as snails. In general, most fish that are larger than ghost shrimp aren’t suitable tank mates. Small, docile species, such as Cory or Otocinclus catfish, may get along with your shrimp.
- Unless you’re using your shrimp as food, fish species you should definitely avoid include oscars, arowanas, cichlids, angelfish, discus, and Triggerfish.
- If you want to add shrimp to your existing aquarium and don’t care if some get eaten, add at least 20 to the tank. The shrimp will be more resilient if their numbers are stronger.
- If you’re using the shrimp as food, it’s wise to establish a colony in a separate tank to replenish the population in the main aquarium.
5. Test the water’s pH and chemical levels monthly. Find aquarium water test kits at the pet store, and keep a supply on hand. Every 3 to 4 weeks, test your tank’s water to ensure the pH, ammonia, nitrite, and nitrate are within ideal ranges.
- The water’s pH, or acidity level, should be neutral. If the pH isn’t between 6.0 and 8.5, purchase an aquarium tank amendment at the pet store. Treat the water according to your product’s instructions.
- If the ammonia or nitrite levels are over 0 ppm, do a 30% water change, remove any visible waste, and consider applying ammonia neutralizing drops to the water. If you have a friend who owns a healthy aquarium, you could also add gravel from their tank to yours to introduce beneficial bacteria.
Community Q&A
- Question: Can I hold my ghost shrimp in my hands without causing harm? Community Answer: No, they need to stay in the water; if you take them out and try to hold them, the shock could kill them.
- Question: Can a ghost shrimp be held? Community Answer: Sorry, but that would not work. Ghost shrimp are like fish when it comes to handling. Keep them in the water at all times.
- Question: Where can I get the sinking pellets? Community Answer: You can get them at your local Walmart. Petsmart or pet stores specifically dedicated to aquariums should also carry various kinds of sinking pellets.
- Question: How do I clean a fish tank that has ghost shrimp? Community Answer: You can use a gravel vac. Just pay attention to where the shrimp are, and if one starts to get sucked in, plug the other end with your finger to stop the suction. He will swim back out when the suction stops.
- Question: How can I tell if they are carrying eggs? Community Answer: You will know if a female is carrying eggs just by looking at her stomach. She will appear much rounder, and you will actually be able to see the eggs! Ghost shrimp are not live bearers, so if suddenly she is thin and you see no baby shrimp, it just means she has laid her eggs.
- Question: What do pet shrimp eat? Community Answer: Pet shrimp can eat a wide selection of things. My ghost shrimp eat plain goldfish flakes and love them. They eat a ton, but keep in mind that if you have a tall tank, you might want to get some sinking pellets.
- Question: Is it okay to have only one ghost shrimp? Community Answer: Yes, they do just fine on their own.
- Question: My ghost shrimp are in my locker with a bunch of whirligig beetles, in a small bit of algae in a water bottle with a hole in it. Will they live like that? Community Answer: Not for long, no. The algae won't do much. I suggest you at least get a 5 gallon tank and food.
- Question: Do I need light to care for ghost shrimp? Community Answer: A light is most certainly not needed for shrimp, but is recommended if you have a planted tank which, for shrimp, is preferred.
- Question: If I keep them in freshwater will they breed, or do they need brackish water? Community Answer: They breed easily in freshwater and don't need to be separated in order to do so!
Tips
- Your shrimp will be easier to spot if the bottom of the tank is filled with darker material.
- Ghost shrimp tend to be more active at night so keeping your tank in a dimly lit room will encourage them to come out more.
- Ghost shrimp turn different colors based on what you feed them. Notice patterns developing on their bodies as they feed on different types of food.
- Shrimp spawn rapidly and are easy to breed. To promote breeding, purchase at least 20 ghost shrimp to ensure a healthy mix of males and females. Eggs and baby shrimp are fragile, so keep plenty of plants and other cover to offer protection. X Research source
- With the right conditions, ghost shrimp can live for a year or more. X Research source
Warnings
- Ghost shrimp can jump out of their tanks if the water is too high or the tank is lidless.
- If you’re not raising shrimp for food, buy ghost shrimp sold specifically as pets. Shrimp sold for feeding aren’t typically kept in good conditions, may not be healthy specimens, and may not live as long as shrimp sold to be kept as pets.
- Wash your hands with soap and hot water after maintaining the tank or feeding your shrimp.
References
- ↑ https://aquariuminfo.org/ghostshrimp.html
- ↑ https://aquariuminfo.org/beginner.html
- ↑ https://aquariuminfo.org/shrimptank.html
- ↑ https://aquariuminfo.org/ghostshrimp.html
- ↑ https://aquariuminfo.org/cycling.html
- ↑ https://aquariuminfo.org/shrimptank.html
- ↑ https://aquariuminfo.org/ghostshrimp.html
- ↑ https://aquariuminfo.org/ghostshrimp.html
- ↑ https://petcentral.chewy.com/keeping-and-breeding-dwarf-freshwater-shrimp/
- ↑ https://petcentral.chewy.com/keeping-and-breeding-dwarf-freshwater-shrimp/
- ↑ https://aquariuminfo.org/ghostshrimp.html
- ↑ https://petcentral.chewy.com/keeping-and-breeding-dwarf-freshwater-shrimp/
- ↑ https://aquariuminfo.org/ghostshrimp.html
- ↑ https://aquariuminfo.org/ghostshrimp.html
- ↑ https://www.theaquariumguide.com/articles/ghost-shrimp-care
About This Article
To take care of ghost shrimp, feed them a small amount of store-bought shrimp pellets twice a day. Or, you can feed your ghost shrimp small bits of boiled vegetables, like zucchini or spinach. Also, make sure you keep the temperature in their tank around 75 degrees Fahrenheit. If you want to add other aquatic animals to the tank with your ghost shrimp, stick with other shrimp species, snails, or small, docile fish. To learn how to set up a tank for ghost shrimp, keep reading!
| https://www.wikihow.com/Take-Care-of-Ghost-Shrimp
Tropical Cyclone Jaya came ashore in northern Madagascar in the morning of April 2, 2007, at around 11:00 a.m. local time (08:00 UTC). The storm formed in the Indian Ocean on March 30 and traveled westward toward Madagascar as predicted. What was not predicted, however, was its explosive growth in power from a strong tropical storm to a powerful Category 3 cyclone in just 36 hours, according to figures provided by the University of Hawaii’s Tropical Storm Information Center. Fortunately, the intensification took place while Jaya was still far from Madagascar.
This photo-like image was acquired by the Moderate Resolution Imaging Spectroradiometer (MODIS) on the Aqua satellite on April 3, 2007, at 1:15 p.m. local time (10:15 UTC). The storm was a tropical cyclone with a circular shape, but no distinct eye at its center. According to the University of Hawaii’s Tropical Storm Information Center, Cyclone Jaya’s sustained winds had fallen in strength to roughly 125 kilometers per hour (80 mph) at the time this image was acquired.
When the storm made landfall on Madagascar, sustained winds were around 150 kilometers per hour (90 miles per hour), a marked change from 200 km/hr (125 mph) just twelve hours earlier. While much weakened, Jaya remained a powerful storm. Furthermore, it struck the northern part of Madagascar where a series of other cyclones have also come ashore in recent months. Forecasters were concerned that Jaya might reform after crossing the island and head inland into Mozambique, where residents are recovering from floods caused by recent heavy rains.
The high-resolution image provided above is at MODIS’ full spatial resolution (level of detail) of 250 meters per pixel. The MODIS Rapid Response System provides this image at additional resolutions.
You can download a 250-meter-resolution Cyclone Jaya KMZ file for use with Google Earth.
NASA image by Jeff Schmaltz, MODIS Rapid Response Team, Goddard Space Flight Center. | https://earthobservatory.nasa.gov/images/18188/tropical-cyclone-jaya |
Preparation: before you start writing your Cold War DBQ essay, Progressive Era DBQ essay, or a paper on any other topic, there are some steps you need to bear in mind. Read and understand the prompt. You are allocated a specific period, roughly 15 minutes, to read and understand the topic.
Body 4 (optional): third point, with the documents and analysis that support it. Conclusion: draw a comparison to another time period or situation (synthesis). Depending on your number of body paragraphs and your main points, you may include different numbers of documents in each paragraph, or switch around where you place your contextual information, your outside example, or your synthesis.
The next section will cover time management skills. You can be as organized as this library! There could be a few things at play here: Do you find yourself spending a lot of time staring at a blank paper?
No one will look at those notes but you! Are you too anxious to start writing, or does anxiety distract you in the middle of your writing time?
Do you just feel overwhelmed? Sounds like test anxiety. Lots of people have this. You might talk to a guidance counselor about your anxiety. They will be able to provide advice and direct you to resources you can use. There are also some valuable test anxiety resources online: Are you only two thirds of the way through your essay when 40 minutes have passed?
You are probably spending too long on your outline, biting off more than you can chew, or both. Remember, an outline is just a guide for your essay—it is fine to switch things around as you are writing.
To cut down on your outline time, practice just outlining for shorter and shorter time intervals. When you can write one in 20 minutes, bring it down to 18, then down to You may also be trying to cover too much in your paper. If you have five body paragraphs, you need to scale things back to three.
If you are spending twenty minutes writing two paragraphs of contextual information, you need to trim it down to a few relevant sentences. Be mindful of where you are spending a lot of time, and target those areas.
Start with 20 minutes for your outline and 50 for your essay, or longer, if you need. Then when you can do it in 20 and 50, move back to 18 minutes and 45 for writing, then to 15 and You absolutely can learn to manage your time effectively so that you can write a great DBQ in the time allotted.
On to the next skill! In other words, how do you reference the information in the documents in a clear, non-awkward way?
All of the history exams share a DBQ rubric, so the guidelines are identical. Thesis - 2 Points One point is for having a thesis that works and is historically defensible. This just means that your thesis can be reasonably supported by the documents and historical fact.
Per the College Board, your thesis needs to be located in your introduction or your conclusion. You can receive another point for having a super thesis. How will you know whether the historical evidence agrees or disagrees?
A super thesis, however, would take the relationships between the documents and the people behind the documents! Document Analysis - 2 Points One point for using six or seven of the documents in your essay to support your argument.
You can get an additional point here for doing further analysis on 4 of the documents. This further analysis could be in any of these 4 areas: What is their position in society and how does this influence what they are saying?
What are they trying to convince their audience of? Historical context - What broader historical facts are relevant to this document? Audience - Who is the intended audience for this document? Who is the author addressing or trying to convince? Be sure to tie any further analysis back to your main argument!If they lack a sense of the overall purpose and structure of such essays, they will not see the central importance of the thesis statement within that structure.
Students in a hurry often fail to tailor the thesis statement to the exact details and . DBQ Essay Outline Guide Use the following outline to plan and write your essays, in response to a Document Based Question (DBQ). The format is of your Thesis statement B. Summarize the key idea of your argument(s) C.
Explanation of why the question is significant or important. 1. Why is the question important today? How to Write a DBQ Essay.
This packet will be your guide to writing successful DBQ essays for social studies. Keep this in your binder ALL YEAR (it will also Now that you know what you have to do, you are ready to write your thesis statement. This is your 1-sentence answer to the task question.
In other words, you need to answer all parts. Resource How to write a DBQ thesis statement (Powerpoint) How to write a DBQ thesis statement (Powerpoint) APUSH DAY 6 (Columbus DBQ Prep) SKILLS: DBQ Support; Description: PPT about the DBQ thesis statement that Ms.
Toyama got from an AP Conference.
Targets: Student Rough draft of the Columbus DBQ is due after . Write a thesis statement that will point to the direction that your essay will take. The thesis statement should not repeat the topic question.
It should contain a paragraph that explains the . Writing a thesis for a document-based question (DBQ) is not easy if you don't know how to approach the historical material.
A DBQ is an attempt to analyze history from multiple sources and to defend a thesis in your writing. | https://debodoxalov.caninariojana.com/how-to-write-a-thesis-statement-for-a-dbq-essay-33802gd.html |
FR-THEP/96-10\
hep-th/9606077\
June 1996
[**Symmetries, Currents and Conservation Laws**]{}
[**of Self-Dual Gravity**]{}
[A.D.Popov[^1]$^,$[^2], M.Bordemann and H.Römer]{}\
[*Fakultät für Physik, Universität Freiburg,\
Hermann-Herder-Str. 3, 79104 Freiburg, Germany*]{}
[[email protected]\
[email protected]\
[email protected]]{}
[**Abstract**]{}
We describe an infinite-dimensional algebra of hidden symmetries for the self-dual gravity equations. Besides the known diffeomorphism-type symmetries (affine extension of $w_\infty$ algebra), this algebra contains new hidden symmetries, which are an affine extension of the Lorentz rotations. The full symmetry algebra has both Kac-Moody and Virasoro-like generators, whose exponentiation maps solutions of the field equations to other solutions. Relations to problems of string theories are briefly discussed.
[**1. Introduction**]{}
The purpose of this paper is to describe a new infinite-dimensional algebra of hidden symmetries of the self-dual Einstein equations on a metric of signature $(+ + + +)$ or $(+ + - -)$. These equations define manifolds with self-dual Weyl tensor and vanishing Ricci tensor, which is equivalent to the self-duality equations for the Riemann tensor.
Four-dimensional self-dual Euclidean backgrounds often arise as the internal part of superstrings compactified to six dimensions in consideration of consistent string propagation (see, e.g., \[1\] and references therein). Self-dual gravity configurations also arise as consistent backgrounds for the $N=2$ closed string theory \[2,3\], and the $N=2$ string theory provides a quantization of the self-dual gravity model in a space-time with signature (2,2). Self-dual geometries are also important in compactifications of recently proposed 12-dimensional fundamental Y- and F-theories \[4\]. It is believed that discrete subgroups of the classical symmetry group of consistent string backgrounds are symmetries of string theory and that these subgroups of a large hidden symmetry group of string theory become visible for various compactifications \[5\]. Therefore hidden symmetries of the self-dual gravity equations are relevant to the symmetries of string theories.
The study of these symmetries is important for an understanding of non-perturbative properties and quantization of gravity and string theories. Euclidean solutions of the self-dual gravity equations (gravitational instantons) give a main contribution to a path integral of quantum gravity (see, e.g., \[6\]), and quantization of the self-dual gravity model itself may provide useful hints for full quantum gravity (see, e.g., \[7\]).
The self-duality equations on the curvature of a metric in four dimensions are an important example of a multidimensional integrable system, which can be solved by a twistor geometric construction \[8–11\]. The discussion of hidden symmetries of this model was started in the papers \[12\] on the basis of Plebański’s equations \[13\], and has been continued by many authors (see, e.g., \[14–18\]). For the study of hidden symmetries the reformulation of the self-dual gravity equations as (reduced) self-dual Yang-Mills equations with infinite-dimensional gauge group was very useful \[19,16,20\] (see also the clear exposition in \[21\]). It was shown that the self-dual gravity equations are invariant with respect to a group, whose generators form the affine Lie algebra $w_\infty\otimes C[ \l , \l^{-1}]$, $\l\in C$, associated with the Lie algebra $w_\infty$ of area-preserving diffeomorphisms of a certain (null) surface \[12,14–18\].
We shall make a further step in the investigation of hidden symmetries of the self-dual gravity (SDG) equations. Our main results are the following:
- To each Lorentz rotation of the tangent space we associate an infinite number of new symmetries of the SDG equations and conserved currents. We show that these symmetries form a Kac-Moody-Virasoro type algebra, in fact the same as the one considered in \[22\]. These symmetries underlie the cancellation of almost all amplitudes in the theory of $N=2$ closed self-dual strings \[2, 3\].
- We define the action of the classical algebra $w_\infty$ on the (conformal) tetrad and, using certain operator product expansion type formulae, we present a new derivation of the symmetry algebra $w_\infty\otimes C[\l ]
\subset w_\infty\otimes C[\l , \l^{-1}]$ of the SDG equations. We also describe the commutation relations between the generators of the ‘old’ and the ‘new’ symmetries.
- It is well-known that for metrics with ‘rotational’ Killing symmetry the SDG equations are reduced to the continual Toda equation ($sl(\infty )$-Toda field equation) \[23\], and for metrics with ‘translational’ Killing symmetry the SDG equations are reduced to the Gibbons-Hawking equations \[24\] in three dimensions. By reduction of the symmetry algebra of the SDG equations we obtain the well-known symmetry algebra $w_\infty$ of the continual Toda equation \[25\] and the symmetry algebra of the Gibbons-Hawking equations, which has not appeared in the literature before.
- Recently, it was found that the T-duality transformation with respect to the rotational Killing vector fields (i.e. those which do not in general preserve the complex structure(s)) does not preserve the self-duality conditions, that leads to apparent violations of the $N=4$ world-sheet supersymmetry \[26\] (see also \[27\]). The T-duality transformation with respect to the translational Killing vector fields (i.e. those which preserve the complex structure(s)) preserves the self-duality conditions. We show that the translational vector fields generate the Abelian loop group $LU(1)=C^\infty(S^1, U(1))$ of symmetries of the SDG equations, and the rotational vector fields generate the non-Abelian Virasoro symmetry group Diff($S^1$). This “non-Abelian nature" of the rotational Killing vector fields underlies the nonpreservation of the local realizations of the world-sheet and space-time supersymmetries under the T-duality transformation with respect to such Killing vector fields.
In this paper we describe new hidden symmetries of the SDG equations omitting direct computations and writing out only the final formulae.
[**2. Manifest symmetries of self-dual gravity**]{}
Let $M^4$ be a complex four-dimensional manifold with a nondegenerate complex holomorphic metric $g$. We shall suppose that $M^4$ is oriented and denote by $\o$ a complex holomorphic volume four-form. Consider the infinite-dimensional algebra sdiff$(M^4)$ of volume-preserving vector fields on $M^4$. For $N=N^\mu\p_\mu\in$ sdiff$(M^4)$ ($\mu ,\nu,...=1,...,4\ $) the Lie derivative of $\o$ along $N$ should vanish (divergence free vector fields). Here, and throughout the paper, we use the Einstein summation convention.
Self-dual vacuum (i.e. Ricci flat) metrics may be constructed as follows \[19\]: For four pointwise linearly independent vector fields $B_\a \in$ sdiff$(M^4)$ let us consider the following equations: $$\frac{1}{2}\e_{\a\b}\,^{\g\d} [B_\g , B_\d ]=[B_\a , B_\b ],
\eqno(1)$$ where $\a ,\b ,...=1,...,4$ are Lorentz indices. If one introduces the vector fields $$V_1=\frac{1}{2}(B_1-iB_2), \ V_{\t 1}=\frac{1}{2}(B_1+iB_2),
\ V_2=\frac{1}{2}(B_3-iB_4), \ V_{\t 2}=\frac{1}{2}(B_3+iB_4)
,\eqno(2)$$ then one may rewrite eqs.(1) in the form $$[V_{\t 1}, V_{\t 2}]=0, \ [V_{\t 1}, V_1]-[V_{\t 2}, V_2]=0,\ [V_1, V_2]=0
.\eqno(3)$$ Finally, let $f$ be a scalar, a conformal factor, defined by $f^2=\o (V_1, V_2, V_{\t 1}, V_{\t 2})$. Then one may define a (contravariant) metric
$$g=f^{-2}(V_1\otimes V_{\t 1} + V_{\t 1}\otimes V_1-
V_2\otimes V_{\t 2} - V_{\t 2}\otimes V_2)\ \Leftrightarrow
\eqno(4a)$$ $$g^{\mu\nu}=f^{-2} g^{A\t A}(V_A^\mu V_{\t A}^\nu +V_A^\nu V_{\t A}^\mu ),
\eqno(4b)$$ where $g^{1\t 1}=g^{\t 1 1}=-g^{2\t 2}=-g^{\t 2 2}=1,\ A,B,...=1,2,\
\t A, \t B,...=1,2,\ $ and the Riemann tensor of this metric will be self-dual. Conversely, every self-dual vacuum metric arises in this way. For proofs and discussions see \[19–21\]. We call eqs.(3) (and eqs.(1)) the self-dual gravity (SDG) equations. Notice, that $\{
f^{-1}V_{\t A}, f^{-1}V_A\}$ is a null tetrad for the self-dual vacuum metric (4).
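The construction (2)–(4) is straightforward to prototype numerically. The following minimal SymPy sketch is an illustration added for the reader (the helper name metric_from_tetrad and the choice of the coordinate volume form for $\o$ are ours, and a nondegenerate tetrad with $f^2\ne 0$ is assumed): it assembles $f^2$ and the contravariant metric (4b) from the components of a tetrad, and reproduces the flat double-null metric from the trivial tetrad $V_A=\p_A$, $V_{\t A}=\p_{\t A}$.

```python
import sympy as sp

# Hypothetical helper: assemble f^2 and g^{mu nu} of eq. (4) from tetrad components.
# Each vector field is given by its 4 components in the coordinate basis (y, z, ty, tz),
# and omega is taken to be the coordinate volume form dy ^ dz ^ dty ^ dtz.
def metric_from_tetrad(V1, V2, Vt1, Vt2):
    f2 = sp.Matrix([V1, V2, Vt1, Vt2]).det()   # f^2 = omega(V_1, V_2, V_t1, V_t2)
    g = sp.zeros(4, 4)
    for mu in range(4):
        for nu in range(4):
            g[mu, nu] = (V1[mu]*Vt1[nu] + V1[nu]*Vt1[mu]
                         - V2[mu]*Vt2[nu] - V2[nu]*Vt2[mu]) / f2
    return f2, g

# Trivial check: the flat tetrad V_A = d_A, V_tA = d_tA gives f^2 = 1 and the
# flat double-null metric g^{y ty} = g^{ty y} = 1, g^{z tz} = g^{tz z} = -1.
f2, g = metric_from_tetrad([1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1])
print(f2)   # 1
print(g)    # +1 at entries (0,2),(2,0) and -1 at (1,3),(3,1), zero elsewhere
```

Feeding in the $\O$-parametrized tetrad of (13a) below instead gives $f^2=\O_{1\t 2}\O_{2\t 1}-\O_{1\t 1}\O_{2\t 2}$, which equals 1 precisely on solutions of (13b).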
An infinitesimal symmetry transformation of a system of partial differential equations is a map $\d :\ s\rightarrow \d s$, which to each solution $s$ of the system assigns a solution $\d s$ of the linearized (around $s$) form of the system. The linearized form of the system may be derived by substituting $s+\e\d s$ into the system, and keeping only terms of the first order in the parameter $\e$. In particular, for eqs.(1) we obtain the following equations on $\d B_\a$: $$\e_{\a\b}\,^{\g\s} [B_\g , \d B_\s ]=[B_\a , \d B_\b ] +[\d B_\a , B_\b ]
.\eqno(5)$$
For any two vector fields $M, N$ in the algebra sdiff$(M^4)$ we define the transformations of the vector fields $\{ B_\a\}$ as follows: $$\d^0_M B_\a := [M, B_\a ]\ \Rightarrow \
[\d^0_M, \d^0_N]B_\a =\d^0_{[M,N]} B_\a
.\eqno(6)$$ Substituting (6) into (5) and using the Jacobi identities, it is not hard to show that $\d^0_MB_\a$ satisfy eqs.(5), i.e. $\d^0_M$ is a symmetry of eqs.(1).
Let us now consider global (not depending on coordinates) Lorentz rotations, which form the algebra $so(4,C)\simeq sl(2, C)\oplus sl(2, C)$, with the generators $\{W_i\,_\a^\b \}=\{X_a\,_\a^\b, X_{\hat a}\,_\a^\b\}$: $$[X_a, X_b]= f_{ab}^c X_c, \ [X_a, X_{\hat b}]=0, \
[X_{\hat a}, X_{\hat b}]= f_{\hat a\hat b}^{\hat c} X_{\hat c}
,\eqno(7)$$ where $i, j,...=1,...,6;\ a, b,...=1,2,3;\ \hat a, \hat b,...=1,2,3;
$ and $f_{12}^3=f_{\hat 1\hat 2}^{\hat 3}=-f_{23}^1=$ $
-f_{\hat 2\hat 3}^{\hat 1}=-f_{31}^2=-f_{\hat 3\hat 1}^{\hat 2}=1$ are the structure constants of the algebra $sl(2, C)$. Let us define the following transformations $\D_{W_i}$ of the vector fields $\{B_\a\}$: $$\D_{W_i} B_\a :=W_i\,_\a^\b B_\b \ \Rightarrow \ [\D_{W_i}, \D_{W_j}]B_\a
= -\D_{[W_i,W_j]} B_\a
.\eqno(8)$$ One may consider $\{B_\a\}$ as a vector field with extra Lorentz index $\a$. We write out the explicit formulae for the components $W_i\,_\a^\b$ of the matrices $W_i$, defining the action of the transformations (8) on the vector field with components $\{V_{\t A},
V_A\}$ in the null frame: $$\D_{X_1}V_{\t 1} = - \frac{i}{2}V_{\t 2},\
\D_{X_1}V_{\t 2} = \frac{i}{2}V_{\t 1},\
\D_{X_1}V_{ 1} = \frac{i}{2}V_{ 2}, \
\D_{X_1}V_{ 2} = - \frac{i}{2}V_{1},
\eqno(9a)$$ $$\D_{X_2}V_{\t 1} = \frac{1}{2}V_{\t 2},\
\D_{X_2}V_{\t 2} = \frac{1}{2}V_{\t 1},\
\D_{X_2}V_{ 1} = \frac{1}{2}V_{ 2}, \
\D_{X_2}V_{ 2} = \frac{1}{2}V_{1},
\eqno(9b)$$ $$\D_{X_3}V_{\t 1} = - \frac{i}{2}V_{\t 1},\
\D_{X_3}V_{\t 2} = \frac{i}{2}V_{\t 1},\
\D_{X_3}V_{ 1} = \frac{i}{2}V_{ 1}, \
\D_{X_3}V_{ 2} = - \frac{i}{2}V_{2},
\eqno(9c)$$
$$\D_{X_{\hat 1}}V_{\t 1} = - \frac{i}{2}V_{ 2},\
\D_{X_{\hat 1}}V_{\t 2} = - \frac{i}{2}V_{ 1},\
\D_{X_{\hat 1}}V_{ 1} = \frac{i}{2}V_{\t 2}, \
\D_{X_{\hat 1}}V_{ 2} = \frac{i}{2}V_{\t 1},
\eqno(10a)$$ $$\D_{X_{\hat 2}}V_{\t 1} = \frac{1}{2}V_{2},\
\D_{X_{\hat 2}}V_{\t 2} = \frac{1}{2}V_{1},\
\D_{X_{\hat 2}}V_{ 1} = \frac{1}{2}V_{\t 2}, \
\D_{X_{\hat 2}}V_{ 2} = \frac{1}{2}V_{\t 1},
\eqno(10b)$$ $$\D_{X_{\hat 3}}V_{\t 1} = - \frac{i}{2}V_{\t 1},\
\D_{X_{\hat 3}}V_{\t 2} = - \frac{i}{2}V_{\t 2},\
\D_{X_{\hat 3}}V_{ 1} = \frac{i}{2}V_{ 1}, \
\D_{X_{\hat 3}}V_{ 2} = \frac{i}{2}V_{2}
.\eqno(10c)$$ It is obvious that $$[\d^0_M, \D_{W_i}] B_\a =0
,\eqno(11)$$ i.e. the transformations (6) and (9), (10) commute.
The symmetries under the transformations (6) in the group SDiff$(M^4)$ are gauge symmetries, and we may use them for the partial fixing of a coordinate system. Namely, from eqs.(3) it follows that one can always introduce coordinates $(y, z, \t y, \t z)$ so that $V_{\t 1}$ and $V_{\t 2}$ become coordinate derivatives (Frobenius theorem), i.e. $V_{\t A} =\p_{\t A}$, where $\p_{\t 1}\equiv \p_{\t y}, \ \p_{\t 2}\equiv \p_{\t z}.$ Then $[V_{\t 1}, V_{\t 2}]\equiv 0$, and the SDG eqs.(3) are reduced to $$g^{\t AA}\p_{\t A} V_A =0\ \Leftrightarrow\ \p_{\t 1} V_1- \p_{\t 2} V_2=0
,\eqno(12a)$$ $$\e^{AB}V_A V_B=0\ \Leftrightarrow \ [V_1, V_2]=0,
\eqno(12b)$$ where $\e^{12}=-\e^{21}=1,$ and we have used the fact that $[\p_{\t A}, K]=\p_{\t A}K$ for any vector field $K$.
[**Remark.**]{} If we put $\t y=\bar y,\ \t z=\bar z$ (where $\bar y$ and $\bar z$ are complex conjugate to $y$ and $z$), then the solutions of eqs.(12) will define a tetrad on a real self-dual manifold with metric (4) of signature $(2,2)$. If we put $\t y=\bar y,\ \t z=-\bar z$, then solutions of eqs.(12) will define a tetrad on a real self-dual manifold with metric (4) of signature $(4,0)$ (hyper-Kähler manifolds).
The vector fields $\{V_A\}$ from (12) may be parametrized by a scalar function (the only degree of freedom of self-dual metrics) in a different way, and then eqs.(12) will be reduced to different nonlinear equations on the scalar function (see \[13–21\]). For example, if we choose $$V_1 =\O_{2\t 2}\p_1 - \O_{1\t 2}\p_2, \
V_2 =\O_{2\t 1}\p_1 - \O_{1\t 1}\p_2, \
\O_{A\t A}\equiv \p_A\p_{\t A}\O ,\
\p_1\equiv \p_y, \ \p_2\equiv \p_z,
\eqno(13a)$$ then eqs.(12) are reduced to Plebański’s first heavenly equation \[13\]: $$\O_{1\t 2}\O_{2\t 1}-\O_{1\t 1}\O_{2\t 2}=1
.\eqno(13b)$$ We shall not perform these reductions, because eqs.(12) are more fundamental than various scalar equations \[13,16–18\], obtained from (12) and carrying information about different parametrization of the vector fields $\{V_A\}$.
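As an independent sanity check (not contained in the original text), the reduction to (13b) can be verified symbolically with an arbitrary function $\O$: eq.(12a) holds identically for the ansatz (13a), and the components of $[V_1,V_2]$ are derivatives of $\O_{1\t 2}\O_{2\t 1}-\O_{1\t 1}\O_{2\t 2}$, so eqs.(12) hold exactly when this combination is constant (normalized to 1 in (13b)). A minimal SymPy sketch:

```python
import sympy as sp

# Coordinates (y, z, ty, tz) and an arbitrary potential Omega(y, z, ty, tz).
y, z, ty, tz = sp.symbols('y z ty tz')
Om = sp.Function('Omega')(y, z, ty, tz)
D = sp.diff

# Components (along d_y, d_z) of V_1, V_2 from the ansatz (13a):
#   V_1 = Om_{2~2} d_1 - Om_{1~2} d_2 ,   V_2 = Om_{2~1} d_1 - Om_{1~1} d_2
V1 = (D(Om, z, tz), -D(Om, y, tz))
V2 = (D(Om, z, ty), -D(Om, y, ty))

def bracket(V, W):
    """Lie bracket of vector fields tangent to the (y, z) leaves:
       [V, W]^m = V^n d_n W^m - W^n d_n V^m."""
    x = (y, z)
    return tuple(sp.expand(sum(V[n]*D(W[m], x[n]) - W[n]*D(V[m], x[n])
                               for n in range(2))) for m in range(2))

# eq. (12a) holds identically because mixed partial derivatives commute:
print([sp.simplify(D(V1[m], ty) - D(V2[m], tz)) for m in range(2)])   # [0, 0]

# eq. (12b): [V_1, V_2] = (-d_z F, +d_y F) with F = Om_{1~2} Om_{2~1} - Om_{1~1} Om_{2~2},
# so [V_1, V_2] = 0 exactly when F is constant, i.e. eq. (13b) after normalisation.
F = D(Om, y, tz)*D(Om, z, ty) - D(Om, y, ty)*D(Om, z, tz)
br = bracket(V1, V2)
print(sp.simplify(br[0] + D(F, z)), sp.simplify(br[1] - D(F, y)))     # 0 0
```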
It is obvious that eqs.(12), derived from (3) by partial fixing of the coordinate system (in which $ V_{\t A}=\p_{\t A}$ should not change), will be not invariant under all the transformations from (6) and (8). The discussion of residual gauge invariance and of hidden symmetries will be the topic of the following Sections.
[**3. Affine extension of the $w_\infty \simeq$ sdiff$(\Sigma^2)$ algebra**]{}
It is easy to see that the symmetries of eqs.(12) have to satisfy the equations: $$\d V_{\t A}\equiv \d \p_{\t A}=0,
\eqno(14a)$$ $$\p_{\t 1}\d V_1- \p_{\t 2}\d V_2=0
,\eqno(14b)$$ $$[V_1, \d V_2] + [\d V_1, V_2] =0
.\eqno(14c)$$ As to the transformations (6) from the algebra sdiff$(M^4)$, it is evident that eqs.(12) will be invariant only under the subalgebra sdiff$(\Sigma^2)
\subset\ $sdiff$(M^4)$ of those vector fields $M, N,...,$ which satisfy $$\d^0_M \p_{\t A}:=[\psi^0_M, \p_{\t A}]=0,\
\d^0_M V_{ A}:=[\psi^0_M, V_{A}]\ \Rightarrow
\eqno(15a)$$ $$[\d^0_M, \d^0_N]\p_{\t A} =\d^0_{[M,N]} \p_{\t A}=0,\
[\d^0_M, \d^0_N]V_A =\d^0_{[M,N]} V_A
.\eqno(15b)$$ Here we have denoted by $\Sigma^2$ the isotropic two-dimensional surfaces, parametrized by the coordinates $\{y, z\}$, and $\psi^0_M:=M$.
It is not difficult to show that the transformations (15a) satisfy eqs.(14) and from eq.(14b) it follows that $\{\d^0_MV_1, \d^0_MV_2\}$ are two components of the conserved current $\d^0_MV_A$. From (14b) it also follows that there exists a vector field $\psi^1_M$ such that $$\d^0_M V_{ A}\equiv [\psi^0_M, V_{A}] = \e^{\t B}_A\p_{\t B}\psi^1_M,
\eqno(16)$$ where $\e^{\t B}_A = g^{\t BB}\e_{BA},\ \e_{12}=-\e_{21}=1 \Rightarrow
\e^{\t 2}_1=\e^{\t 1}_2=1.$ Using $\psi^1_M$, we introduce the transformation $\d^1_M$ by the formulae: $$\d^1_M \p_{\t A}:=0,\
\d^1_M V_{ A}:=[\psi^1_M, V_{A}]
.\eqno(17)$$
It is not hard to verify that by virtue of eqs.(12), $\d^1_M V_{ A}$ satisfies eqs.(14). Therefore, $\d^1_M V_{ A}$ is also a conserved current. Now we may use a standard inductive procedure that was used, for example, for the construction of (nonlocal) currents of the chiral fields model \[28,29\]. Namely, let us suppose that we have constructed $\d^n_M$ such that $$\d^n_M \p_{\t A}:=0,\
\d^n_M V_{ A}:=[\psi^n_M, V_{A}],\ n\ge 1
.\eqno(18)$$ Assuming that the current $\d^n_M V_{ A}$ is conserved implies that there exists a vector field $\psi^{n+1}_M$ such that $$[\psi^n_M, V_{A}] = \e^{\t B}_A\p_{\t B}\psi^{n+1}_M
.\eqno(19)$$ Using this we shall show that the $(n+1)$-th current $\d^{n+1}_M V_{ A}:=[\psi^{n+1}_M, V_{A}]$ is conserved, which will complete the induction: $$\p_{\t 1}\d^{n+1}_M V_1- \p_{\t 2}\d^{n+1}_M V_2=
[\p_{\t 1}\psi^{n+1}_M, V_1] - [\p_{\t 2}\psi^{n+1}_M, V_2]=$$ $$=[[\psi^n_M, V_{2}], V_1]+[[V_1, \psi^n_M], V_2]=[[V_1, V_2], \psi^n_M] =0,
\eqno(20a)$$ $$[\d^{n+1}_M V_1, V_2] + [V_1, \d^{n+1}_M V_2] =
[[\psi^{n+1}_M, V_1], V_2] + [V_1,[ \psi^{n+1}_M, V_2]] =
[\psi^{n+1}_M,[ V_1, V_2]]=0
.\eqno(20b)$$ Thus, for any $n\ge 1$ we construct a vector field $\psi^n_M$ and a conserved current $\d^{n}_M V_A$, starting from $\psi^0_M:=M$ and $\d^0_M V_A:=[M, V_A]$.
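To see the recursion at work in the simplest possible setting, one can take the trivial flat background $V_A=\p_A$ (which solves eqs.(12)) and integrate (19) in closed form. The SymPy sketch below is an added illustration, not part of the paper: it starts from an (arbitrarily chosen) area-preserving seed $\psi^0_M=M$ on $\Sigma^2$, produces $\psi^1_M$ and $\psi^2_M$ (polynomial in $\t y, \t z$), and confirms that each $\d^n_MV_A=[\psi^n_M,V_A]$ satisfies the linearized equations (14b) and (14c); on this background the checks are automatic, the point being the explicit form of the nonlocal tower.

```python
import sympy as sp

y, z, ty, tz = sp.symbols('y z ty tz')

def next_psi(psi):
    """One step of the recursion (19) on the flat background V_A = d_A:
       d_tz psi' = -d_y psi ,  d_ty psi' = -d_z psi   (componentwise)."""
    out = []
    for c in psi:
        part = sp.integrate(-sp.diff(c, y), tz)                        # tz-dependence
        part += sp.integrate(-sp.diff(c, z) - sp.diff(part, ty), ty)   # remaining ty-dependence
        out.append(sp.expand(part))
    return tuple(out)

def check_linearised(psi):
    """delta V_A = [psi, d_A] = -d_A psi must satisfy eqs. (14b) and (14c)."""
    dV1 = tuple(-sp.diff(c, y) for c in psi)
    dV2 = tuple(-sp.diff(c, z) for c in psi)
    e14b = [sp.simplify(sp.diff(dV1[m], ty) - sp.diff(dV2[m], tz)) for m in range(2)]
    e14c = [sp.simplify(sp.diff(dV2[m], y) - sp.diff(dV1[m], z)) for m in range(2)]
    return e14b, e14c

# Area-preserving seed on Sigma^2: the Hamiltonian vector field of h = y^2 z,
# i.e. psi^0_M = M = y^2 d_y - 2 y z d_z (components in the (d_y, d_z) basis).
psi = (y**2, -2*y*z)
for n in range(1, 3):
    psi = next_psi(psi)
    print(f"psi^{n} =", psi, " checks:", check_linearised(psi))
# psi^1 = (-2*tz*y, 2*ty*y + 2*tz*z), psi^2 = (tz**2, -2*ty*tz); all checks are zero.
```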
[**Remark.**]{} Using (4), (15), (18) and (19), one may show by direct calculations that $\d^0_M g^{\mu\nu}={\cal L}_{\psi^0_M}g^{\mu\nu}$, but for $n\ge 1\ $ $\d^n_M g^{\mu\nu}\ne{\cal L}_{\psi^n_M}g^{\mu\nu}$, where ${\cal L}_{\psi^n_M}$ is a Lie derivative along the vector field $\psi^n_M$. This means that $\d^0_M$ is a gauge symmetry (an infinitesimal diffeomorphism), and $\d^n_M$ with $n\ge 1$ is not a gauge symmetry.
Having an infinite number of vector fields $\psi^n_M$ on $M^4$, one can introduce the vector field $\psi_M(y,z,\t y, \t z, \l ):=
\sum^\infty_{n=0}\l^n\psi^n_M(y,z,\t y, \t z),$ depending on the complex parameter $\l\in C$. Then the infinite number of equations (19) ([*recurrence relations*]{}) may be rewritten as two linear equations on $\psi_M(\l )$: $$\begin{array}{c}
\p_{\t 1}\psi_M + \l [V_2, \psi_M]=0
\\
\p_{\t 2}\psi_M + \l [V_1, \psi_M]=0
\end{array}
\Longleftrightarrow
\ \p_{\t A}\psi_M + \l\e^{B}_{\t A} [V_B, \psi_M]=0,
\eqno(21)$$ where $\e^{A}_{\t B} = g^{A\t A}\e_{\t A\t B},
\ \e_{\t 1\t 2}= -\e_{\t 2\t 1}=1\ \Rightarrow \
\e^{ 2}_{\t 1}=\e^{ 1}_{\t 2}=1$.
[**Remark**]{}. Equations (21) can be considered as a linear system (Lax pair) for the SDG equations (12), because eqs.(12) are the compatibility conditions of eqs.(21). As a ‘canonical’ vector field one may choose, e.g., $\p_A$ and consider the linear equations (21) on $\psi_{\p_A}$.
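Indeed, applying the two operators in (21) to an arbitrary vector field $\psi_M$ in either order and subtracting, one finds, using $[\p_{\t A},K]=\p_{\t A}K$ and the Jacobi identity, $$[\p_{\t 1}+\l [V_2,\cdot\ ],\ \p_{\t 2}+\l [V_1,\cdot\ ]]\ \psi_M = \l\, [\p_{\t 1}V_1-\p_{\t 2}V_2 ,\ \psi_M ]-\l^2 [[V_1, V_2],\ \psi_M ] ,$$ so the linear system (21) is compatible for all values of $\l$ exactly when eqs.(12a) and (12b) hold, i.e. eqs.(12) arise as the zero-curvature conditions of (21).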
Instead of an infinite number of symmetry generators $\d^n_M$ one may introduce the generator $\d_M(\l ):= \sum^\infty_{n=0}\l^n\d^n_M,\ $ depending on a complex ‘spectral’ parameter $\l\in C$. It is evident that $\d^n_M =(2\pi i)^{-1} \oint_{C'}d\l\
\l^{-n-1}\d_M(\l ),\ $ where $C'$ is a contour in the $\l$-plane about the origin. Using $\psi_M(\l )$ and $\d_M(\l )$, formulae (15a) and (18) may be rewritten in the form of a one-parameter family of infinitesimal transformations $$\d_M(\l ) V_{\t A}:=0,\
\d_M(\l ) V_{ A}:=[\psi_M(\l ), V_{A}]
,\eqno(22)$$ which are symmetries of eqs.(12) for each $M\in\ $sdiff$(\Sigma^2)$.
Now we are interested in the algebraic properties of the symmetries (22). It is not difficult to show that $$\d_M(\l ) \d_N(\z )V_{ A}:=\d_M(\l )(V_A + \d_N(\z )V_{ A})-\d_M(\l )V_A =$$ $$=[\psi_M(\l ) + \d_N(\z )\psi_M(\l ), V_A + \d_N(\z )V_{ A}]-\d_M(\l )V_A$$ $$\cong [\d_N(\z )\psi_M(\l ), V_A] + [\psi_M(\l ), \d_N(\z )V_A]
,\eqno(23a)$$ $$\d_N(\z )\d_M(\l )V_A=
[\d_M(\l )\psi_N(\z ), V_A]+[\psi_N(\z ), \d_M(\l )V_A].
\eqno(23b)$$ Then the commutator of two symmetries is equal to $$[\d_M(\l ), \d_N(\z )]V_{ A}= [\d_N(\z )\psi_M(\l )-
\d_M(\l )\psi_N(\z )+[\psi_M(\l ), \psi_N(\z )], V_A]
.\eqno(24)$$ Accordingly, for the variation $\d_N(\z )\psi_M(\l )$ we have the following equations $$\p_{\t A}\d_N(\z )\psi_M(\l )+ \l \e^B_{\t A}[V_B,\d_N(\z )\psi_M(\l )]=
\l \e^B_{\t A}[\psi_M(\l ), \d_N(\z )V_B]
,\eqno(25)$$ the solutions of which have the form (cf. \[29\]): $$\d_N(\z )\psi_M(\l )=\frac{\z }{\l -\z}(\psi_{[M,N]}(\z )-
[\psi_M(\l ), \psi_N(\z )])\ \Rightarrow$$ $$\d_M(\l )\psi_N(\z )=\frac{\l }{\l -\z}(\psi_{[M,N]}(\l )-
[\psi_M(\l ), \psi_N(\z )]).
\eqno(26)$$ Substituting (26) into (24), we obtain $$[\d_M(\l ), \d_N(\z )]=
\frac{1}{\l -\z}(\l\d_{[M,N]}(\l )- \z\d_{[M,N]}(\z ))\
\Rightarrow\
[\d_M^m, \d_N^n]=\d_{[M,N]}^{m+n},\ m,n\ge 0
,\eqno(27)$$ when we consider the action on $V_A$ and $V_{\t A}$. The algebra (27) is the affine extension sdiff$(\Sigma^2)\otimes C[\l ]$ of the algebra sdiff$(\Sigma^2)$ of area-preserving diffeomorphisms. Formulae (27) give us commutators between half of the generators of the affine Lie algebra sdiff$(\Sigma^2)\otimes C[\l , \l^{-1}]$.
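The abstract algebra (27) is easy to model explicitly. In the sketch below (an added illustration; it is a toy realization of the abstract Lie algebra, not of the action (22) on the tetrad) elements of sdiff$(\Sigma^2)$ are represented by Hamiltonian vector fields $X_h$ of polynomials $h(y,z)$, a generator $\d^m_M$ is modelled by $\l^m X_h$, and SymPy confirms that the bracket of two such generators is $\l^{m+n}X_k$ with $k$ bilinear in $h$ and $g$, so the degrees add as in (27).

```python
import sympy as sp

y, z, lam = sp.symbols('y z lam')

def X(h):
    """Hamiltonian (area-preserving) vector field of h(y, z): X_h = h_z d_y - h_y d_z."""
    return (sp.diff(h, z), -sp.diff(h, y))

def bracket(V, W):
    """[V, W]^m = V^n d_n W^m - W^n d_n V^m on the (y, z) plane."""
    x = (y, z)
    return tuple(sp.expand(sum(V[n]*sp.diff(W[m], x[n]) - W[n]*sp.diff(V[m], x[n])
                               for n in range(2))) for m in range(2))

def gen(m, h):
    """Toy model of delta^m_M: the vector field lam**m * X_h."""
    return tuple(lam**m * c for c in X(h))

# Two sample generators: delta^2 attached to h and delta^3 attached to g.
h, g = y**2*z, y*z**3
lhs = bracket(gen(2, h), gen(3, g))

# Their bracket is lam**(2+3) times an area-preserving field, namely X_k with
# k = h_z*g_y - h_y*g_z (the Poisson-type bracket in the sign convention for
# which [X_h, X_g] = X_k):
k = sp.diff(h, z)*sp.diff(g, y) - sp.diff(h, y)*sp.diff(g, z)
rhs = tuple(lam**5 * c for c in X(k))
print([sp.simplify(a - b) for a, b in zip(lhs, rhs)])   # [0, 0]
```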
[**Remarks**]{}.
1. The described algebra sdiff$(\Sigma^2)\otimes C[\l ]$ of symmetries of the SDG equations (12) is known. But the formulae (15)–(27), describing the action of these symmetries on the (conformal) tetrad $\{V_{\t A}, V_A\}$, are new. These formulae may be useful for applications.
2. In the described algebra there is an Abelian subalgebra with generators $\{\d^n_{\p_A}\}$, where $\{\p_A\} = \{\p_y, \p_z\}. $ In the usual way (see, e.g., \[30\]), one can associate to this algebra of Abelian symmetries the hierarchy of the SDG equations (cf. \[15,17\] for other approaches).
3. We restrict our attention to the subalgebra sdiff$(\Sigma^2)\otimes C[\l ]$ of the symmetry algebra sdiff$(\Sigma^2)\otimes C[\l , \l^{-1}]$. The rest will be obtained if we choose the coordinates in such a way that $V_A$ will be coordinate derivatives (in Sec.2 and Sec.3 $\ V_{\t A}$ were the coordinate derivatives) and consider symmetries of eqs.(3) after this ‘dual’ partial fixing of coordinates.
[**4. Hidden symmetries from Lorentz rotations**]{}
The explicit form of the infinitesimal transformations of the vector fields $\{V_{\t A}, V_A\}$ under the action of the Lorentz group $SO(4, C)$ was written out in (9) and (10). From (9), (10) one can see that $\D_{W_i}\p_{\t A}\ne 0$, i.e. these transformations, being the symmetry of eqs.(3), are not the symmetry of eqs.(12). In other words, the transformations (9) and (10) do not preserve the chosen gauge. Nevertheless the Lorentz symmetry can be restored by compensating transformations from the diffeomorphism group SDiff$(M^4).$ The point is that from formulae (9), (10) it follows that $$\p_{\t 1}(\D_{W_i}\p_{\t 2})-\p_{\t 2}(\D_{W_i}\p_{\t 1})=0
,\eqno(28)$$ for any $W_i\in so(4, C).$ This means that there exist vector fields $\{\psi^0_{W_i}\} = \{
\psi^0_{X_a}, \psi^0_{X_{\hat a}}\}$ such that $$\D_{W_i}\p_{\t 1}=\p_{\t 1}\psi^0_{W_i}, \
\D_{W_i}\p_{\t 2}=\p_{\t 2}\psi^0_{W_i}
,\eqno(29)$$ and one can define the transformation $$\d^0_{W_i}\p_{\t A}:=\D_{W_i}\p_{\t A}+[\psi^0_{W_i}, \p_{\t A}]=0,
\eqno(30a)$$ $$\d^0_{W_i}V_{A}:=\D_{W_i}V_{A}+[\psi^0_{W_i}, V_{A}],
\eqno(30b)$$ satisfying the following commutator relations $$[\d^0_{W_i}, \d^0_{W_j}] V_{\t A}=\d^0_{[W_i,W_j]}V_{\t A}=0,\
[\d^0_{W_i}, \d^0_{W_j}] V_{A}=\d^0_{[W_i,W_j]}V_{A}
.\eqno(31)$$ Notice that the equality to zero in (30a) follows from the definition (29) of the vector fields $\{\psi^0_{W_i}\}$.
[**Remark.**]{} Using eqs. (4) and (30), one can show by direct computation that $\d^0_{W_i}$ acts on $g^{\mu\nu}$ as a Lie derivative, i.e. $\d^0_{W_i}g^{\mu\nu}={\cal L}_{\psi^0_{W_i}}g^{\mu\nu}. $ Therefore, if one defines the action of $\d^0_{W_i}$ on the vector fields $\psi_M$ from the linear system (21), then one can develop the method of reduction for the SDG equations (12) and the linear system (21) for them, analogous to the method developed for the self-dual Yang-Mills model \[31,32\].
It is not difficult to show that (30) satisfy eqs.(14). So, $\d^0_{W_i}V_A$ is a conserved current and from (14b) it follows that there exists a vector field $\psi^1_{W_i}$ such that $$\d^0_{W_i}V_A\equiv \D_{W_i}V_A + [\psi^0_{W_i}, V_A]=
\e^{\t B}_A\p_{\t B}\psi^1_{W_i}.
\eqno(32)$$ Let us define in full analogy with Sec.3 the transformations $$\d^1_{W_i}\p_{\t A}:=0, \
\d^1_{W_i}V_A:=[\psi^1_{W_i}, V_A]
.\eqno(33)$$ One can verify that (33) is a symmetry of eqs.(12). Now with the help of the inductive procedure, identical to the one described in Sec.3, it is not difficult to show that the transformations $$\d^{n+1}_{W_i}\p_{\t A}:=0, \quad
\d^{n+1}_{W_i}V_A:=[\psi^{n+1}_{W_i}, V_A]
\eqno(34)$$ are symmetries of eqs.(12), if $$\d^n_{W_i}V_A\equiv [\psi^n_{W_i}, V_A]=\e^{\t B}_A\p_{\t B}
\psi^{n+1}_{W_i},\ n\ge 1,
\eqno(35)$$ is a conserved current.
One may introduce the generating vector field $\psi_{W_i}(y,z,\t y, \t z, \z ):=$ $\sum^\infty_{n=0}\z^n \psi^n_{W_i}
(y,z,\t y, \t z),$ $ \ \z\in C.$ Then the recurrence relations (35) can be collected into the following two linear equations $$[\p_{\t A}+\z\e^B_{\t A}V_B, \psi_{W_i}(\z )]=
\D_{W_i}\p_{\t A}+\z \e^B_{\t A}\D_{W_i}V_B.
\eqno(36)$$ Analogously, introducing $\d_{W_i}(\z ):=\sum^\infty_{n=0}\z^n \d^n_{W_i}$, we obtain a one-parameter family of infinitesimal transformations $$\d_{W_i}(\z )\p_{\t A}:=0,\
\d_{W_i}(\z )V_{ A}:=[\psi_{W_i}(\z ), V_A]+\D_{W_i}V_A .
\eqno(37)$$ For each $W_i\in so(4, C)$ these transformations are new ‘hidden symmetries’ of the SDG equations (12).
After some calculations we have the following expression for the commutator of two symmetries $$[\d_{W_i}(\l ), \d_{W_j}(\z )] V_A=
\e^{\t B}_A\p_{\t B}
\{\frac{1}{\l}\d_{W_j}(\z )\psi_{W_i}(\l )-
\frac{1}{\z}\d_{W_i}(\l ) \psi_{W_j}(\z )\}+$$ $$+\frac{1}{\z}\e^{\t B}_A \D_{W_j}\,_{\t B}^C\d_{W_i}(\l )V_C
-\frac{1}{\l}\e^{\t B}_A \D_{W_i}\,_{\t B}^C\d_{W_j}(\z )V_C
. \eqno(38)$$ From eqs.(36) one obtains the equations for the variation $\d_{W_i}(\l )\psi_{W_j}(\z )$: $$[\p_{\t A}+\z \e^B_{\t A}V_B, \d_{W_i}(\l )\psi_{W_j}(\z )]
=\z \e^B_{\t A}[\psi_{W_j}(\z ), \d_{W_i}(\l )V_B]+
(\D_{W_j}\,_{\t A}^B+\z \e^C_{\t A}\D_{W_j}\,_ C^B)\d_{W_i}(\l )V_B
.\eqno(39)$$ Using the identities $$\D_{X_{a}}\,_{\t A}^{B}=0,\quad
\D_{X_{\hat a}}\,_{\t A}^{B}+ \z \e^C_{\t A}\D_{X_{\hat a}}\,_C^{B}=
(Z_{\hat a}^\z -\frac{\z}{2}\dot Z_{\hat a}^\z)\e^B_{\t A}
,\eqno(40)$$ where $Z_{\hat a}^\z$ are the components of vector fields $$Z_{\hat a} = Z_{\hat a}^\z \p_\z , \ [Z_{\hat a}, Z_{\hat b}]=
f^{\hat c}_{\hat a \hat b}Z_{\hat c}$$ $$Z_{\hat 1}^\z =-\frac{i}{2}(1+\z^2),\
Z_{\hat 2}^\z =\frac{1}{2}(1-\z^2),\
Z_{\hat 3}^\z =i\z,\
\dot Z_{\hat a}^\z\equiv\frac{d}{d\z}Z_{\hat a}^\z ,
\eqno(41)$$ we obtain the solution of eqs. (39) in the form $$\d_{W_i}(\l )\psi_{W_j}(\z )=\frac{\z}{(\l -\z )}\{\psi_{[W_i, W_j]}(\z )-
[\psi_{W_i}(\l ), \psi_{W_j}(\z )]-W_i^\z\p_\z\psi_{W_j}(\z )\}+$$ $$+\frac{\l}{(\l -\z )^2}W_j^\z\{\psi_{W_i}(\l )- \psi_{W_i}(\z )\}
,\eqno(42)$$ where $W_a^\z :=0,\ W_{\hat a}^\z := Z_{\hat a}^\z.$
Substituting (42) into (38), we obtain the following expression for the commutator of two successive infinitesimal transformations: $$[\d_{W_i}(\l ), \d_{W_j}(\z )] V_A=
\frac{1}{(\l -\z )}
\{{\l}\d_{[W_i, W_j]}(\l )-{\z}\d_{[W_i, W_j]}(\z )\}V_A+$$ $$+\frac{1}{(\l -\z )^2}\{
\frac{\z}{\l}W_i^\l (
{\z}\d_{W_j}(\z )-{\l}\d_{W_j}(\l ))
+\frac{\l}{\z}W_j^\z (
{\z}\d_{W_i}(\z )-{\l}\d_{W_i}(\l ))\}V_A+$$ $$+\frac{1}{\z}\e^{\t B}_A \D_{W_j}\,_{\t B}^C\d_{W_i}(\l )V_C
-\frac{1}{\l}\e^{\t B}_A \D_{W_i}\,_{\t B}^C\d_{W_j}(\z )V_C
+$$ $$+\frac{1}{(\l -\z )}\{W_i^\z \p_\z (
{\z}\d_{W_j}(\z ))+{W_j}^\l\p_\l (
{\l}\d_{W_i}(\l ))\}V_A
. \eqno(43)$$ In order to rewrite (43) in terms of the generators $\d_{W_i}^n=(2\pi i)^{-1}
\oint_{C'}d\l \ \l^{-n-1}\d_{W_i}(\l )$, it is convenient to introduce $Y_0, Y_\pm$ instead of $X_{\hat a}$: $$Y_0:=iX_{\hat 3},\
Y_+:=-iX_{\hat 1}+X_{\hat 2},\
Y_-:=-iX_{\hat 1}-X_{\hat 2},\
[Y_\pm , Y_0]=\pm Y_\pm ,\ [Y_+ , Y_-]=2 Y_0.
\eqno(44)$$ Using (43) and (44), we obtain $$[\d^m_{X_a}, \d^n_{X_b}]=\d^{m+n}_{[X_a,X_b]},\ m,n,...\ge 0,
\eqno(45)$$ $$[\d^m_{Y_0}, \d^n_{Y_0}]=2(m-n)\d^{m+n}_{Y_0},\
[\d^m_{Y_+}, \d^n_{Y_+}]=2(m-n)\d^{m+n-1}_{Y_+},\
[\d^m_{Y_-}, \d^n_{Y_-}]=2(m-n)\d^{m+n+1}_{Y_-},$$ $$[\d^m_{Y_0}, \d^n_{Y_+}]=\d^{m+n}_{[Y_0,Y_+]}+2m\d^{m+n-1}_{Y_0}
-2n\d^{m+n}_{Y_+},$$ $$[\d^m_{Y_0}, \d^n_{Y_-}]=\d^{m+n}_{[Y_0,Y_-]}+2m\d^{m+n+1}_{Y_0}
-2n\d^{m+n}_{Y_-},\$$ $$[\d^m_{Y_+}, \d^n_{Y_-}]=\d^{m+n}_{[Y_+,Y_-]}+2m\d^{m+n+1}_{Y_+}
-2n\d^{m+n-1}_{Y_-},
\eqno(46)$$ $$[\d^m_{Y_0}, \d^n_{X_a}]=-2n\d^{m+n}_{X_a},\
[\d^m_{Y_+}, \d^n_{X_a}]=-2n\d^{m+n-1}_{X_a},\
[\d^m_{Y_-}, \d^n_{X_a}]=-2n\d^{m+n+1}_{X_a},
\eqno(47)$$ Formulae (45) mean that $\{\d^m_{X_a}\}$ are the generators of the affine Lie algebra $sl(2,C)\otimes C[\l ]$, which is the subalgebra in $sl(2,C)\otimes C[\l , \l^{-1}].$ From (46) one can see that $\d^m_{Y_0}, \d^m_{Y_+}$ and $\d^m_{Y_-}$ generate three different Virasoro-like subalgebras of the symmetry algebra.
Thus, the new algebra of ‘hidden symmetries’ of the SDG equations with generators\
$\{\d^m_{X_1}, \d^m_{X_2}, \d^m_{X_3}, \d^m_{Y_0}, \d^m_{Y_+},
\d^m_{Y_-}\}$ forms a Kac-Moody-Virasoro algebra with commutation relations (45) – (47). This algebra has the same commutation relations as a subalgebra of the symmetry algebra of the self-dual Yang-Mills equations \[22\].
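As a quick consistency check on these relations (an added illustration, restricted to the subalgebra spanned by $\{\d^m_{X_a},\d^m_{Y_0}\}$, for which no negative degrees can be produced), the following small Python sketch encodes the structure constants exactly as quoted in (45)–(47) and verifies the Jacobi identity for all triples of generators with degrees $0\le m\le 3$.

```python
from itertools import product
from collections import defaultdict

# sl(2,C) structure constants of eq. (7): f_{12}^3 = -f_{23}^1 = -f_{31}^2 = 1.
f = defaultdict(int)
f[(1, 2, 3)], f[(2, 1, 3)] = 1, -1
f[(2, 3, 1)], f[(3, 2, 1)] = -1, 1
f[(3, 1, 2)], f[(1, 3, 2)] = -1, 1

def bracket(A, B):
    """Bracket of basis elements ('Xa', m) with a = 1,2,3 and ('Y0', m), following
       (45): [X_a^m, X_b^n] = f_ab^c X_c^{m+n}
       (46): [Y_0^m, Y_0^n] = 2(m-n) Y_0^{m+n}
       (47): [Y_0^m, X_a^n] = -2n X_a^{m+n}.
       A and B are basis elements; the result is a dict {basis: coefficient}."""
    (la, m), (lb, n) = A, B
    out = defaultdict(int)
    if la == 'Y0' and lb == 'Y0':
        out[('Y0', m + n)] += 2*(m - n)
    elif la == 'Y0':                       # lb is some X_a
        out[(lb, m + n)] += -2*n
    elif lb == 'Y0':                       # antisymmetry of the previous case
        out[(la, m + n)] += 2*m
    else:                                  # both are X's
        a, b = int(la[1]), int(lb[1])
        for c in (1, 2, 3):
            if f[(a, b, c)]:
                out[(f"X{c}", m + n)] += f[(a, b, c)]
    return out

def jacobi(A, B, C):
    """[[A,B],C] + [[B,C],A] + [[C,A],B] as a coefficient dict (empty means zero)."""
    total = defaultdict(int)
    for P, Q, R in ((A, B, C), (B, C, A), (C, A, B)):
        for basis, coeff in bracket(P, Q).items():
            for basis2, coeff2 in bracket(basis, R).items():
                total[basis2] += coeff * coeff2
    return {k: v for k, v in total.items() if v}

labels = ['X1', 'X2', 'X3', 'Y0']
basis = [(l, m) for l, m in product(labels, range(4))]   # degrees 0..3
bad = [t for t in product(basis, repeat=3) if jacobi(*t)]
print("Jacobi violations:", len(bad))                    # 0
```

Extending the check to $\d^m_{Y_\pm}$ requires keeping track of the degree shifts in (46) and (47) but proceeds along the same lines.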
[**5. Commutators of symmetries and comments**]{}
In Sec.4 and Sec.3 eqs.(36) on $\psi_{W_j},\ W_j\in so(4, C),$ and eqs.(21) on $\psi_M$, $M\in$ sdiff$(\Sigma^2)$, have been written out. From these equations one can derive the equations for the variations of the vector fields $\psi_{W_j}$ and $\psi_M$: $$[\p_{\t A}+\z\e^B_{\t A}V_B, \d_M(\l )\psi_{W_j}(\z )]=
\z\e^B_{\t A}[\psi_{W_j}(\z ), \d_M(\l )V_B]+$$ $$+(\D_{W_j}\,_{\t A}^B+\z\e^C_{\t A}\D_{W_j}\,_{C}^B)\d_M(\l )V_B,
\eqno(48a)$$ $$[\p_{\t A}+\l\e^B_{\t A}V_B, \d_{W_j}(\z )\psi_M(\l )]=
\l\e^B_{\t A}[\psi_M(\l ), \d_{W_j}(\z )V_B]
.\eqno(48b)$$ We have the following solutions of these equations: $$\d_M(\l )\psi_{W_j}(\z )=\frac{\z}{\l-\z}
[\psi_{W_j}(\z ), \psi_M(\l )]+\frac{\l}{(\l-\z)^2}W^\z_j
\{\psi_M(\l ) - \psi_M(\z ) \}
,\eqno(49a)$$ $$\d_{W_j}(\z )\psi_M(\l )=\frac{\l}{\l-\z}\{
[\psi_{W_j}(\z ), \psi_M(\l )]+W^\l_j\p_\l\psi_M(\l ) \}
.\eqno(49b)$$ Then after some computation we obtain the following expression for the commutator $$[\d_M(\l ), \d_{W_j}(\z )]V_A=\frac{1}{(\l-\z)^2}\{\frac{\l}{\z}
W^\z_j(\z \d_M(\z )-\l\d_M(\l ))\}V_A +$$ $$+\frac{1}{\z}\e^{\t B}_{A}\D_{W_j}\,_{\t B}^C\d_M(\l )V_C +
\frac{1}{(\l-\z )}W_j^\l\p_\l(\l\d_M(\l ))V_A.
\eqno(50)$$ Using the definition of $\d_M^m,\ \d_{W_j}^n$, formulae (44) and the commutator (50), we obtain $$[\d^m_{X_a}, \d^n_M]=0,\
[\d^m_{Y_0}, \d^n_M]=-2n\d^{m+n}_M,
\eqno(51a)$$ $$[\d^m_{Y_+}, \d^n_M]=-2n\d^{m+n-1}_M,\
[\d^m_{Y_-}, \d^n_M]=-2n\d^{m+n+1}_M,\
m,n,...\ge 0
.\eqno(51b)$$ Thus, the ‘hidden symmetries’ of the SDG equations (12) form the infinite-dimensional Lie algebra with the commutation relations (27), (45)–(47) and (51).
Notice, that taking $\z =0$ in (49b), we obtain the action of $\d_{W_j}^0$ on $\psi_M (\l )$: $$\d^0_{X_a}\psi_M(\l )=[\psi^0_{X_a}, \psi_M(\l )],\
\d^0_{X_{\hat a}}\psi_M(\l )=[\psi^0_{X_{\hat a}}, \psi_M(\l )]+
Z^\l_{\hat a}\p_\l\psi_M(\l )
.\eqno(52)$$ From (52) it follows that $\d^0_{X_a}$ acts on $\psi_M (\l )$ as the Lie derivative along the vector field $\psi^0_{X_a}$, and $\d^0_{X_{\hat a}}$ acts on $\psi_M (\l )$ as the Lie derivative along the “lifted" vector field $\psi^0_{X_{\hat a}}+Z_{\hat a}$ (cf. \[31,32,22\] for the SDYM case). The reason is that the vector fields $\psi^0_{X_a}$, defined on the manifold $M^4$, have the trivial lift on the twistor space $M^4\times B^2$ ($B^2\simeq S^2$ for Euclidean signature and $B^2\simeq H^2$ for the signature (2, 2)), and the lift of the vector fields $\psi^0_{X_{\hat a}}$ is nontrivial.
Remember that $\d_{W_j}^0$ act on the metric as Lie derivatives: $\d_{W_j}^0g^{\mu\nu}={\cal L}_{\psi_{W_j}^0}g^{\mu\nu}$. Therefore one can consider reductions of the SDG equations (12) and of the linear system (21) for them by imposing the invariance conditions of the tetrad and of $\psi_M$ with respect to the vector fields $\{\psi_{W_j}^0\}$. For example, conditions $\d_{Y_0}^0V_A={\cal L}_{\psi_{Y_0}^0}V_A=0$ reduce the SDG equations to the $sl(\infty )$-Toda field equation (see, e.g., \[23,26\]). Since $\d_{Y_0}^0$ generates the Lie algebra diff$(S^1)=\{\d_{Y_0}^n\}$ of the group Diff$(S^1)$, then the space of solutions of the $sl(\infty )$-Toda field equation can be obtained from the space $\cal M$ of solutions of the SDG equations by factorization under the group Diff$(S^1)$. The imposing of $\d_{Y_0}^0$-symmetry (from which there also follow the symmetries under $\d_{Y_0}^n, \ n\ge 1$), automatically reduces the algebra of hidden symmetries of the SDG equations to the well-known algebra $w_\infty\simeq$ sdiff$(\Sigma^2)$ of symmetries of the $sl(\infty )$-Toda field equation \[25\]. Namely, only the subalgebra with generators $\{\d_M^0\}$ will preserve the symmetry condition (this algebra is a normalizer of the algebra diff$(S^1)$ in the symmetry algebra).
Analogously, the conditions $\d_{X_3}^0V_A={\cal L}_{\psi_{X_3}^0}V_A=0$ reduce the SDG equations to the Gibbons-Hawking equations \[24,26 \], describing, in particular, ALE gravitational instantons. From the symmetry with respect to $\d_{X_3}^0$ there follows the symmetry with respect to the whole algebra $\{\d_{X_3}^n\}$ of the Abelian loop group $LU(1)=C^\infty(S^1,U(1))$. Therefore, the space of solutions of the Gibbons-Hawking equations is obtained from the space $\cal M$ of solutions of the SDG equations by factorization under the group $LU(1)$. From the commutation relations (45)–(47) and (51) it follows that the subalgebra with generators $\{\d_M^n, \d_{Y_0}^n, \d_{Y_+}^n,
\d_{Y_-}^n, \ n\ge 0\}$ will preserve the symmetry condition. This algebra has not been described in the literature before.
[**References**]{}
1. E.Kiritsis, C.Kounnas and D.Lüst, Int. J. Mod. Phys. [**A9**]{} (1994) 1361; M.Bianchi, F.Fucito, G.Rossi and M.Martellini, Nucl. Phys. [**B440**]{} (1995) 129; M.J.Duff, R.R.Khuri and J.X.Lu, Phys. Rep. [**259**]{} (1995) 213.
2. H.Ooguri and C.Vafa, Mod. Phys. Lett. [**A5**]{} (1990) 1389; Nucl. Phys. [**B361**]{} (1991) 469; Nucl. Phys. [**B451**]{} (1995) 121.
3. N.Berkovits and C.Vafa, Nucl. Phys. [**B433**]{} (1995) 123; N.Berkovits, Phys. Lett. [**B350**]{} (1995) 28; Nucl. Phys. [**B450**]{} (1995) 90.
4. C.M.Hull, String Dynamics at Strong Coupling, hep-th/9512181; C.Vafa, Evidence for F-Theory, hep-th/9602022; D.Kutasov and E.Martinec, New Principle for String/ Membrane Unification, hep-th/9602049.
5. A.Font, L.Ibáñez, D.Lüst and F.Quevedo, Phys. Lett. [**B249**]{} (1990) 35; A.Sen, Int. J. Mod. Phys. [**A9**]{} (1994) 3707; A.Giveon, M.Porrati and E.Rabinovici, Phys. Rep. [**244**]{} (1994) 77; E.Alvarez, L.Alvarez-Gaumé and Y.Losano, Nucl. Phys. (Proc. Sup. ) [**41**]{} (1995) 1; C.M.Hull and P.K.Townsend, Nucl. Phys. [**B438**]{} (1995) 109; Nucl. Phys. [**B451**]{} (1995) 525; J.H.Schwarz, Superstring Dualities, hep-th/9509148.
6. S.W.Hawking and G.W.Gibbons, Phys. Rev. [**13**]{} (1977) 2752; G.W.Gibbons, M.J.Perry and S.W.Hawking, Nucl. Phys. [**B138**]{} (1978) 141; G.W.Gibbons and M.J.Perry, Nucl. Phys. [**B146**]{} (1978) 90.
7. K.Yamagishi and G.F.Chapline, Class. Quantum Grav. [**8**]{} (1991) 427; K.Yamagishi, Phys. Lett. [**B259**]{} (1991) 436.
8. R.Penrose, Gen. Rel. Grav. [**7**]{} (1976) 31.
9. M.F.Atiyah, N.J.Hitchin and I.M.Singer, Proc. R. Soc. Lond. [**A362**]{} (1978) 425.
10. K.P.Tod and R.S.Ward, Proc. R. Soc. Lond. [**A386**]{} (1979) 411; N.J.Hitchin, Math. Proc. Camb. Phil. Soc. [**85**]{} (1979) 465; R.S.Ward, Commun. Math. Phys. [**78**]{} (1980) 1.
11. M.Ko, M.Ludvigsen, E.T.Newman and K.P.Tod, Phys. Rep. [**71**]{} (1981) 51.
12. C.P.Boyer and J.F.Plebański, J. Math. Phys. [**18**]{} (1977) 1022; J. Math. Phys. [**26**]{} (1985) 229; C.P.Boyer, Lect. Notes Phys. Vol.189 (1983) 25.
13. J.F.Plebański, J. Math. Phys. [**16**]{} (1975) 2395.
14. C.P.Boyer and P.Winternitz, J. Math. Phys. [**30**]{} (1989) 1081.
15. K.Takasaki, J. Math. Phys. [**30**]{} (1989) 1515; J. Math. Phys. [**31**]{} (1990) 1877; Preprint RIMS-747, 1991.
16. Q-Han Park, Phys. Lett. [**B238**]{} (1990) 287; Phys. Lett. [**B257**]{} (1991) 105; J.Hoppe and Q-Han Park, Phys. Lett. [**B321**]{} (1994) 333.
17. J.D.E.Grant, Phys. Rev. [**D48**]{} (1993) 2606; I.A.B.Strachan, J. Math. Phys. [**36**]{} (1995) 3566.
18. V.Husain, Class. Quantum Grav. [**11**]{} (1994) 927; J. Math. Phys. [**36**]{} (1995) 6897.
19. A.Ashtekar, T.Jacobson and L.Smolin, Commun. Math. Phys. [**115**]{} (1988) 631; L.J.Mason and E.T.Newman, Commun. Math. Phys. [**121**]{} (1989) 659; R.S.Ward, Class. Quantum Grav. [**7**]{} (1990) L217.
20. S.Chakravarty, L.Mason and E.T.Newman, J. Math. Phys. [**32**]{} (1991) 1458; R.S.Ward, J. Geom. Phys. [**8**]{} (1992) 317; C.Castro, J. Math. Phys. [**34**]{} (1993) 681; V.Husain, Phys. Rev. Lett. [**72**]{} (1994) 800; J.F.Plebański and M.Przanowski, Phys. Lett. [**A212**]{} (1996) 22.
21. L.J.Mason and N.M.J.Woodhouse, [*Integrability, Self-Duality and Twistor Theory*]{}, Clarendon Press, Oxford, 1996.
22. A.D.Popov and C.R.Preitschopf, Phys. Lett. [**B374**]{} (1996) 71.
23. C.Boyer and J.Finley, J. Math. Phys. [**23**]{} (1982) 1126; J.Gegenberg and A.Das, Gen. Rel. Grav. [**16**]{} (1984) 817.
24. G.W.Gibbons and S.W.Hawking, Phys. Lett. [**78B**]{} (1978) 430; Commun. Math. Phys. [**66**]{} (1979) 291.
25. I.Bakas, In: Proc. of the Trieste Conf. “Supermembranes and Physics in 2+1 Dimensions", eds. M.Duff, C.Pope and E.Sezgin, World Scientific, Singapore, 1990, p.352; Q-Han Park, Phys. Lett. [**B236**]{} (1990) 429; I.Bakas, Commun. Math. Phys. [**134**]{} (1990) 487; K.Takasaki and T.Takebe, Lett. Math. Phys. [**23**]{} (1991) 205.
26. I.Bakas, Phys. Lett. [**B343**]{} (1995) 103; I.Bakas and K.Sfetsos, Phys. Lett. [**B349**]{} (1995) 448; E.Alvarez, L.Alvarez-Gaumé and I.Bakas, Nucl. Phys. [**B457**]{} (1995) 3; Supersymmetry and Dualities, hep-th/9510028.
27. E.Bergshoeff, R.Kallosh and T.Ortin, Phys. Rev. [**D51**]{} (1995) 3003; S.F.Hassan, Nucl. Phys. [**B460**]{} (1996) 362; K.Sfetsos, Nucl. Phys. [**B463**]{} (1996) 33.
28. M.Lüscher and K.Pohlmeyer, Nucl. Phys. [**B137**]{} (1978) 46; E.Brezin, C.Itzykson, J.Zinn-Justin and J.-B.Zuber, Phys. Lett. [**82B**]{} (1979) 442; H.J. de Vega, Phys. Lett. [**87B**]{} (1979) 233; H.Eichenherr and M.Forger, Nucl. Phys. [**B155**]{} (1979) 381; L.Dolan, Phys. Rev. Lett. [**47**]{} (1981) 1371; Phys. Rep. [**109**]{} (1984) 3; K.Ueno and Y.Nakamura, Phys. Lett. [**117B**]{} (1982) 208; Y.-S.Wu, Nucl. Phys. [**B211**]{} (1983) 160; L.-L.Chau, Lect. Notes Phys. Vol. 189 (1983) 111.
29. J.H.Schwarz, Nucl. Phys. [**B447**]{} (1995) 137; Nucl. Phys. [**B454**]{} (1995) 427.
30. A.C.Newell, [*Solitons in Mathematics and Physics*]{}, SIAM, Philadelphia, 1985.
31. M.Legaré and A.D.Popov, Phys. Lett. [**A198**]{} (1995) 195; T.A.Ivanova and A.D.Popov, Theor. Math. Phys. [**102**]{} (1995) 280; JETP Lett. [**61**]{} (1995) 150.
32. T.A.Ivanova and A.D.Popov, Phys. Lett. [**A205**]{} (1995) 158; Phys. Lett. [**A170**]{} (1992) 293; M.Legaré, J.Nonlinear Math.Phys. [**3**]{} (1996) 266.
[^1]: Supported by the Alexander von Humboldt Foundation
[^2]: On leave of absence from Bogoliubov Laboratory of Theoretical Physics, JINR, Dubna, Russia
| |
What is the ancient festival of Samhain? According to Irish mythology, Samhain (like Bealtaine) was a time when the ‘doorways’ to the Otherworld opened, allowing supernatural beings and the souls of the dead to come into our world; while Bealtaine was a summer festival for the living, Samhain “was essentially a festival for the dead“.
When did the festival of Samhain start? Definition. Samhain (pronounced “SOW-in” or “SAH-win”) was a festival celebrated by the ancient Celts halfway between the autumn equinox and the winter solstice. It began at dusk around October 31st and likely lasted three days.
Why was Samhain so significant to the Celts? The Celtic Festival of Samhain
Ancient Celts marked Samhain as the most significant of the four quarterly fire festivals, taking place at the midpoint between the fall equinox and the winter solstice. During this time of year, hearth fires in family homes were left to burn out while the harvest was gathered.
Who is the demon of Halloween? Samhain, also known as the origin of Halloween, was a powerful and special demon of Hell and was one of the 66 Seals. He could only rise when summoned by two powerful witches through three blood sacrifices over three days, with the last sacrifice day on the final harvest, Halloween. | https://aquariusage.com/what-is-the-ancient-festival-of-samhain/ |
I am a hands-on, curious, creatively driven packaging and branding designer based in New York and Connecticut, and I’ve been really lucky to work with some of the greatest companies and creatives in the industry. I collaborate and art direct across all disciplines with creative, marketing, production and development teams to take a project from initial concept to end product. I love to mentor, inspire and share, and I love to continually learn and be inspired. I strive to create the unexpected with a solution that tells a story - that makes you smile. I’m passionate about design and the process that goes into creating it.
I work with clients and agencies that include Landor and Fitch, Bath & Body Works, Victoria’s Secret and C.O. Bigelow Apothecaries. When C.O. Bigelow’s apothecary line became a part of Bath & Body Works, I worked with their internal creative and marketing team to create the identity and branding for the expanded line, bringing Bigelow’s heritage forward and merging it with a contemporary edge. I have continued to serve as caretaker for the brand, overseeing the launch of dozens of products and winning numerous awards along the way.
My approach to all projects is simple: listening to the client, understanding the brief, researching the brand and its competitors (contemporarily, historically and visually), and finally creating concepts and ideas based on a solid understanding of the project, using craft-based and hands-on attention to detail. | https://www.coroflot.com/whitedot/WhiteDotPortfolio
India is suffering from a loneliness epidemic, and some Indian youths are confining themselves away from the concrete clutter of our urban agglomerates… away from society’s eyes.
Extreme forms of loneliness can manifest as hikikomori, which is also associated with a number of other mental health issues such as depression, personality disorders, insomnia and Alzheimer’s disease, among others.
Therefore, it is important for us to have early detection mechanisms in place and also intervene at the right moment to prevent acute cases of hikikomori from spreading.
In India, however, the subject of chronic loneliness as a mental health issue is still not receiving the attention it deserves. Loneliness is a common experience around the world, with 80% of people below 18 years of age and 40% above 65 years of age, reporting feeling lonely at least sometimes in their life.
Indian loneliness is generally associated with ageing and older people. Hence, a number of studies have been conducted in that regard. But, a considerable amount of youth and adolescents are reporting feeling chronic levels of loneliness, especially with the pandemic as a backdrop.
In January 2018, the British prime minister, Theresa May, announced a separate “Minister for Loneliness“, to concentrate on a condition that affects nearly 14% of the UK’s population. In Japan, it has been a recognised condition affecting generations since it was first identified in the early 1980s.
In India the conversation around mental health is only getting started, especially regarding youth mental wellness.
At the beginning of the year I talked to Karthik*, a 25-year-old recent college graduate residing in Kolkata. According to his parents, he was always a shy and introverted child, but the drastic changes in his demeanor started to become prominent in 2017.
He started to drop calls from his family, avoid social gatherings with friends and was holed up in his hostel room all day long. He came back home after his third semester break and refused to go back to the university. His parents assumed that the stress of not getting placed at a reputable firm hurt him.
Even though his parents are eager to take him to a psychologist, he refuses to do so as he feels this is something that cannot be helped clinically. He says that he feels content staying at home.
When asked about his “future” aspirations, a look of uncertainty was cast over his face and he looked unsure of what to say. He replied with, “All my past planning was futile. I don’t plan anymore. If nothing goes well, I will become a Twitch creator.”
Twitch is a social live-streaming platform where creators livestream their daily lives or videos of themselves playing video games. Globally, it has emerged as a main source of income for a sizeable section of the online community. When asked what his favorite video games were, he named cult favorites like the “Resident Evil” series, “Outcast”, the viral hit “Among Us” and a few more.
He even showed us a YouTube livestream he did a few months ago, playing Among Us with online streamers. When asked about his plans to start interacting with his neighbors or the people outside his house, he showed trepidation.
For him, it’s easier to play games online as he does not have to directly interact with anyone… Calling out a couple of responses to his fellow streamers is enough.
Yet, Karthik does recognise his problem and wishes to overcome it one day, just not today or in the near future. The pandemic and the social lockdowns have also fueled his desire to avoid people.
“Even if I wanted to talk to a few of my closest childhood friends, I can’t because of this lockdown. And, my willingness to go out and meet people may soon wither away. I feel helpless yet relieved.”
I asked him if he knew about the term hikikomori and he replied with a short “Yes!” He explained that he doesn’t feel that he is one. He says he does speak with his parents and occasionally interact with online strangers when he has to. But, he isn’t particularly offended by the term either.
He knows his situation and has made peace with it. He says that he knows a couple of people, whom he met online, who are exactly like him, if not more intense in their seclusion. With keen eyes and an excitable demeanor, he assured us one more time that he was thriving in spite of being cooped up, when we were bidding him goodbye.
He likes being within the four walls of his home as this helps him avoid societal gatherings and he feels much better this way.
Why is it that the Indian youth feels such intense seclusion and loneliness? We can attribute this to a few major factors like:
1. Failure to conform to society’s standards and expectations, or prolonged cases of “ijime” i.e., bullying.
2. Certain people and their personality traits are also more prone to or sensitive to changes in their environment and society.
3. A strong dependency on family members may trigger this phenomenon. People feel too dependent on their family to overcome challenges and hence, they might not be able to respond well to stress.
In a professional environment, intense competition among peers, job losses and a depressed economy may push someone to take drastic steps in life.
4. Depression or anxiety within a person can trigger their social fear and lead them to lose motivation. They end up becoming passive-aggressive. They develop social anxiety, choose to stay at home and rarely leave their rooms.
They often reject interactions with cohabiting persons too.
In this age of globalization and the IT revolution, the means of indirect communication are spreading and people are connected all over the world through the Internet.
The need for direct social interactions is diminishing and people prefer to connect via the Internet, which further strengthens seclusion among the youth of our nation.
As Sandip Chattopadhyay and Hans Dembowski wrote in their article “Unacknowledged Suffering“:
India’s mental-health scenario is worse than many think. For 2015-2016, the national institute of mental health and neurosciences (NIMHANS) reported that one in seven Indians suffers some form of mental disorder in their lifetime. Ten percent of Indians were said to require immediate intervention. Due to stigma and insufficient health infrastructure, however, only about one fifth of the people concerned were getting the needed medical support within a year of falling ill.
A study conducted by Prabha Vig and Deepak Gill, titled “Assessment of Loneliness: A study of Chandigarh adolescents“, suggests that 62% of adolescents reported feeling lonely. The Press Trust of India (PTI) cited a WHO (World Health Organization) article stating that 1 in 4 children in India, between the ages of 13 and 15 years, faces loneliness.
In 2004, the NSSO (National Sample Survey Office) reported that 4.91 million people in India lived alone, away from their families, and were suffering from isolation as well as seclusion.
Recently, the National Mental Health Survey of India (2015-2016) stated that high suicidal risk is emerging to be a major concern for India; and that the youth and adolescents are especially vulnerable to mental disorders.
Nearly 10% of the Indian population is affected by anxiety and depression. In 2016, the Centre for the Study of Developing Societies, in collaboration with Konrad Adenauer Stiftung, conducted a survey on the attitudes and anxieties of India’s youth and adolescent population (aged 15-34 years).
The final reports revealed that 12% of them reported feeling depressed and anxious often. Almost 8% reported feeling lonely and secluded quite frequently.
It is quite important to understand the underlying causes of such rampant loneliness among the young Indian population and come up with possible solutions to help them cope with the mental stress.
In India, mental health discussions are rare and conversations are only just beginning. Not only is there a lack of discussion regarding mental health in India, but the cost of availing treatment is also through the ceiling. Hence, people prefer to stay at home where the condition goes unchecked and worsens over time.
In India, conversations around loneliness and mental health need to be developed at a national level. A deeper understanding of the probable conditions, perhaps born out of conflicts in gender identity, or isolation even within the structure of family and friends, needs to be acknowledged.
According to prominent psychiatrists, treatment for hikikomori starts at home. Family is the basic unit from where an individual can expect moral help and support to overcome difficult hurdles in life. Once they feel like they have their familial support, they can then seek medical attention and professional help.
Mental ailment is no different from physical ones. It is important that people have access to professional help who can then provide guidance, assistance, support and eventual healing from this problem. It is important to acknowledge the problem rather than just brushing it under the carpet.
Loneliness is rarely understood, oftentimes criticised along with anxiety and depression. But, it is our new reality with the Covid-19 pandemic affecting our lives more than we could have thought. The impact of loneliness on our physical and mental health has been acute.
Yet, in our society, we are trained to remain under a false sense of “happiness” and “connectivity” and hence, reject the idea of medical help, which in turn causes severe psychological problems like hikikomori.
Families and peers should be aware of the symptoms associated with chronic loneliness and be encouraged to check social media usage by young adults. Unfortunately, these cases are considered to bring “shame” to the families and are often times hidden.
This is where the administrative bodies and social welfare services should step in, to educate them and avoid such cases of “child social neglect”. But, now, as we are slowly progressing as a nation and a society, it is important that we use all our resources available to us.
Our local support communities, social support systems and the government aided programs, family and peers, even controlled use of social media to some extent, can help in reducing the sense of emptiness that accompanies loneliness.
To conclude, I would like to quote my favorite author Olivia Laing who in her book “The Lonely City: Adventures in the Art of Being Alone” writes:
So much of the pain of loneliness is to do with concealment, with feeling compelled to hide vulnerability, to tuck ugliness away, to cover up scars as if they are literally repulsive. But why hide? What’s so shameful about wanting, about desire, about having failed to achieve satisfaction, about experiencing unhappiness?
And, to anyone suffering from intense thoughts on loneliness or self-harm: please talk about your hardships and seek professional help. To the peers and family of those affected, educate yourself on the matters of mental health to provide a better life to your loved ones. | https://www.youthkiawaaz.com/2021/09/the-looming-pandemic-of-loneliness-and-hikikomori-in-indian-youth/ |
This Study assesses how public development banks (PDBs)–of different sizes and geographies–have interpreted and are including sustainable development priorities in their day-to-day discussions, processes and operations. Some PDBs have so far implemented innovative practices which should now be shared among all PDBs with a view to harmonisation and coherence, as a crucial prerequisite to scaled-up alignment. As for governments, shareholders and other stakeholders, they should also contribute to this alignment endeavour through enhanced political backing and support to PDBs.
Key Messages
- Public development banks’ strategies should lead to a complete, comprehensive and systemic integration of the SDGs, percolating through all of their activities, instead of classifying existing projects by individual SDGs. In order to integrate SDGs in all actions and processes, the 2030 Agenda needs to be solidly anchored within PDBs’ organisational culture, backed by adequate incentives and capacity building. PDBs should also place a much stronger focus on early-stage project preparation support, facilitating the structuring of SDG-bankable projects.
- PDBs need to re-envision the way they finance development, relying on their ability to partner and work side-by-side with other stakeholders and private financial actors–underpinned by strategic partnerships, blended finance or other financial mechanisms at their disposal like guarantees or green/SDG bonds–to play a larger, and potentially transformational role in scaling up finance for achieving the SDGs.
- PDBs’ actions need to be upheld by a clear SDG national policy and budget–through an Integrated National Financing Framework (INFF) for instance–and tailor-made regulations that increase their appetite to take risks and invest in non-traditional sectors or poor/fragile settings. Establishing an “SDG Credit Score” would also be a major step to encourage and support PDBs to drive sustainable development transformations.
- Although there is no “model bank”, PDBs need to harmonise their practices and develop common norms and standards on the way they align with the SDGs. PDBs should actively engage in discussions with other international organisations, commercial banks, private investors and businesses who are part of sustainable investing and SDG alignment initiatives. | https://www.iddri.org/en/publications-and-events/study/scaling-public-development-banks-transformative-alignment-2030-agenda |
As we explore the complexity of socionatures, we universally find that values, traditions, needs, narratives, perceptions, norms, priorities, and policies are always evolving and often in conflict. Thus, it is critical to investigate the mental models of individuals and communities in order to deepen understandings of behavior and decision-making. However, mental cognition is non-linear, complex, and systemic, and we argue that the suite of systems thinking tools for eliciting mental models can be expanded for qualitative research. We demonstrate the advantages of this approach through the lens of socionatural conflict for Chilean smallholder farming, including how it further enriches narrative storytelling through improved contextualization and pluralization. Smallholder agriculture is a major contributor to the export-based economy of Chile. However, the combination of broad socioeconomic and environmental change has put such strain on smallholder farmers in the south-central region, that they are being forced into selling off land parcels for residential homes. Given the specific historical, political, and cultural context of Chile and the Biobío Region, typical adaptation approaches that may be suggested in academic or professional literature are not necessarily viable for Chilean smallholder farmers. Thus, deeper and more holistic understandings of the multi-layered socionatural conflict are herein developed. | https://www.scirea.org/journal/PaperInformation?PaperID=5821 |
ABT Incubator, a two-week choreographic program directed by ABT Principal Dancer David Hallberg, provides a focused lab to generate and inspire ideas for the creation of new work. Choreographers will be provided resources, including studio space, a stipend, collaborators, panel discussions and mentorships, to create new work on dancers from American Ballet Theatre and the ABT Studio Company.
The inaugural workshop took place October 31 – November 10, 2018 at ABT’s 890 Broadway studios in New York City. ABT Incubator, open to ABT dancers and freelance choreographers, culminated with a studio showing of the works created (each no longer than 15 minutes in length).
Participants were chosen through an audition process with selections made by a panel of choreographers, directors and artists in ballet and dance, including ABT Artist in Residence Alexei Ratmansky, ABT Principal Dancer David Hallberg, ABT Artistic Director Kevin McKenzie, Danspace Director Judy Hussie-Taylor and choreographers Jessica Lang and Lar Lubovitch.
2018 ABT Incubator Choreographers
Kelsey Grills is a dancer-choreographer based in New York City. She earned a BFA in Dance at Florida State University and has interned at the Maggie Allesee National Center for Choreography. She has choreographed work for PORTER Magazine and clothing brand Madewell, and has assisted on the choreography of films for Skype, Samsung and Derek Lam.
Sung Woo Han joined American Ballet Theatre’s corps de ballet in 2013. Han was born in Seoul, South Korea and has won numerous awards including International Ballet Competition-Varna, Youth America Grand Prix and second prize at the Prix de Lausanne in 2011.
Gabrielle Lamb is the director of Pigeonwing Dance, a contemporary dance company based in New York City. Lamb is the winner of a Princess Grace Award for Choreography and a New York City Center Choreography Fellowship, as well as choreographic competitions at Hubbard Street Dance Chicago, Milwaukee Ballet and Western Michigan University. Her work has been presented by companies such as the Royal Winnipeg Ballet, Sacramento Ballet, Jacob’s Pillow and Dance Theatre of Harlem.
Duncan Lyle, originally from Melbourne, Australia, graduated from the Royal Ballet School in 2010 and joined the corps de ballet of Boston Ballet the same year. He joined American Ballet Theatre as a member of the corps de ballet in 2012.
James Whiteside, born in Fairfield, Connecticut, joined the corps de ballet of Boston Ballet in 2002 and rose through the ranks to become principal dancer in 2009. He joined ABT as a Soloist in 2012 and was named a Principal Dancer in 2013. | https://www.abt.org/the-company/opportunities-at-abt/abt-incubator/ |
The person-centered therapy was initially developed in the 1940s by Carl Rogers and represents a constantly developing approach to human growth and change. Its central hypothesis states that the potential of any individual for growth tends to unfold in relationships in which the one who assists experiences and expresses authenticity, reality, care, and deep and accurate understanding. Its uniqueness is that, while being focused on process, it derives hypotheses from the direct data of therapeutic experience and from recorded and filmed conversations, and it steadily checks all hypotheses in corresponding research. It is useful in any sphere of human endeavour where the purpose is the psychological growth of the individual.
Basic Concepts
The basic concept of the person-centered therapy can be expressed in the form of an "if-then" hypothesis: if the therapist's attitudes include certain conditions, namely congruence, positive regard and empathic understanding, then the person who is the client makes changes towards growth. Theoretically this hypothesis holds for any relationship in which one person shows congruence, empathy and a positive attitude, and the other person receives and perceives them.
The hypothesis is based on the deep understanding of human nature. The person-centered theory postulates the tendency of the person to self-actualization, or “the instinct for self-preservation and the organismic striving for self-actualization" (Rogers, 1959b).
In this sense the tendency to self-actualization is part of the organismic nature of the human being. Rogers quotes Lancelot Whyte: "Crystals, plants and animals grow without any conscious fuss, and the strangeness of our own history disappears once we assume that the same kind of natural ordering process that guides their growth, also guided the development of man and of his mind and does so still" (Whyte, 1960).
The self-actualizing forces of the infant and child encounter conditions established in life by significant others. These are called "value conditions" (conditions of worth): the child is worthy of love and acceptance only when behaving according to the established standards. The child eventually assimilates some of these conditions into his or her own self-concept. Then, according to Rogers, "it is to experience that I must return again and again, to discover a closer approximation to truth as it is in the process of becoming in me" (1959b).
Despite these external restrictions, the organismic promptings of the child continue to be experienced internally. This leads to incongruence between the organismic forces towards self-actualization and the ability to realize them in action.
The person-centered theory aspires to answer the following question: how can the individual re-establish lost contact with the impulses of self-actualization and recognize their wisdom? Broadly understood, psychotherapy is the "releasing of an already existing capacity in a potentially competent individual" (Rogers, 1959b).
In the presence of certain conditions, the tendency towards self-actualization gradually leads the individual to overcome those restrictions which were internalized as value conditions. These conditions are therapeutic relations which are perceived by the individual as genuineness or congruence, accurate empathic understanding and unconditional positive regard.
These three conditions are not separate conditions between which the expert therapist intuitively makes a choice. They are interdependent and logically connected with each other.
First of all, the therapist should achieve deep and accurate empathy. However, such deep sensitivity to the immediate "being" of another person and to the smallest changes in his or her condition requires that the therapist has first accepted and, to some degree, prized the other person. In other words, sufficiently deep empathy is not possible without unconditional positive regard. At the same time, these conditions can become a significant interpersonal event only when they are real. During a session the meeting with the therapist should be complete and genuine. "Therefore it seems to me that authenticity, or congruence is the most important of three conditions" (Rogers, 1959a).
Authenticity, or congruence, is the basic ability of the therapist to read his or her own internal experiences and to show them plainly in the therapeutic relationship. It does not allow the therapist to play a role or to present a facade. The therapist's words are consistent with his or her experiences. He or she follows the varying stream of his or her own feelings and shows it. The therapist is transparent. With the client, the therapist tries to be himself or herself.
The concepts of authenticity and accurate empathic understanding are closely connected with each other. The therapist tries to immerse himself or herself in the world of the client's feelings and to experience this world within. The therapist's understanding starts with his or her own internal experience of the client's feelings and his or her own internal processes of comprehension. He or she actively experiences the feelings of the client and also understands his or her own internal reactions to these feelings. During this process the therapist quite often comes to grasp feelings that are not directly put into words and that lie on the verge of the client's awareness.
At the heart of empathy for the client lies a non-possessive caring, or acceptance of his or her individuality, which is called unconditional positive regard. Such an attitude arises from the therapist's belief in the internal wisdom of the client's processes of self-actualization and the belief that the client will discover the resources and directions that his or her personal growth requires. The therapist's care certainly does not take the form of advice or instructions. The therapist emphasizes the value of the client's individuality, sometimes directly, but more often through understanding and a sincere response.
Some research shows that the client's achievements during therapy are significantly connected with the presence of congruence, accurate empathy and positive regard.
Halkides (1958) found a positive correlation between the presence of these three qualities and the success rate of clients. Studies by Godfrey and Barrett-Lennard (1959; 1962) tested this hypothesis and established that the productivity of therapy depends on how far clients perceive these three qualities to be present in their therapists.
Later research included a new theoretical parameter of the person-centered therapy, namely the process conception of change in the client's personality. The theory asserts that change occurs along a continuum, with rigid, static, repetitive behavior at one end and, at the other, behavior which is modified in the course of internal experiencing. A study of patients hospitalized with a diagnosis of schizophrenia established that patients whose therapists scored highest on the triad of therapeutic conditions were the most successful. Likewise, patients who engaged in therapy at a higher level showed greater gains than those whose behavior remained static and rigid (Rogers, 1967b).
To sum up, positive change of the individual in therapeutic relations becomes more visible when the client perceives the authenticity, empathy and care of the therapist. The direction of personal change is towards increasing awareness of internal experience, the ability to allow internal experiences to surface, and behavior which is congruent with that internal experience.
Directive Techniques
Historically, those connected with the person-centered therapy have firmly opposed the belief that therapists should be directive with their clients. This is reflected in the initial name of the approach - "non-directive therapy".
Directive therapy is any practice in which the therapist is considered to be the expert who, proceeding from knowledge of the internal processes of human beings, establishes diagnoses and treats those who come for help.
From the very conception of the person-centered point of view, the major idea has been the belief that the individual is capable of defining the direction of his or her own development.
Years of experience with clients and numerous psychological studies have confirmed this belief. Additionally, the accumulated knowledge has developed it to such a degree that today the therapist's intrusion upon the client's concentration on his or her internal process of experiencing is considered unproductive. The person-centered therapist is there to help uncover the inner resources of the client. Any posing or manipulation, for example the use of esoteric language, professional terminology or diagnostic testing, is excluded. Such strategies are considered to lead to a situation in which the therapist takes and monopolizes control over the therapy process, thereby transferring the locus of control from the hands of the client to the hands of the therapist and undermining the client's faith in his or her own abilities, which is needed to find a way to growth. Any techniques of the type of psychodrama, methods of Gestalt therapy or bioenergetics put the therapist in the role of the expert and lower the client's ability to rely on his or her own internal processes. Psychotherapy, Rogers insists, is not the manipulation of a more or less passive person by an expert (Rogers, 1959b).
Three Forces in Psychology
Rogers identifies with the so-called third force in psychology, humanistic psychology, a rather varied group of people united by a general idea. His identification with humanistic psychology is based on its defence of the dignity of the individual person and of the value of the person's search for growth. There is also Rogers's interest in a psychology of development which treats the dignity and value of the person as primary. Rogers summarizes the basic distinction between psychoanalysis and the person-centered theory as follows: he has little sympathy for the rather widespread view that the person is basically irrational and that his or her impulses, if not controlled, will lead to the destruction of others and of the self; human behavior, on the contrary, is exquisitely rational, moving with subtle and ordered complexity towards the goals the organism is striving to achieve (Rogers, 1961a).
The author considers that defensiveness interferes with awareness of the organismic processes which direct the individual towards positive growth. The person who is free from defensive distortions lives in the stream of his or her internal experience, turning to the nuances of the organismic flow for guidance for behavior. Contrary to the psychoanalytic point of view, Rogers regards the natural impulses of the person, as expressed in his or her internal organismic experience, as constructive and conducive to health and fulfilment.
The psychoanalytic theory asserts that by concentrating on the past and understanding it through the analyst's interpretations, the patient gains insight into present behavior. The person-centered theory is focused on the client's current experience, believing that the restoration of awareness of, and trust in, personal abilities provides the resources for change and growth. In psychoanalysis, contrary to the person-centered view, the analyst aims at interpreting connections between the patient's past and present. In person-centered therapy the therapist acts as a facilitator, helping the client to find the meanings of current internal experiences.
By focusing on insight, interpretive activity and the encouragement of transference relations between patient and therapist based on the patient's neurosis, the psychoanalyst occupies the role of a teacher. In person-centered therapy the therapist communicates as honestly and openly as possible and tries to establish a connection in which he or she is a person who cares for, and listens to, another individual.
Though rudiments of transference occur in person-centered therapy, such relations do not reach full bloom (Rogers, 1951). Rogers expressed the opinion that transference relations develop in an evaluative atmosphere, in which the client feels that the therapist knows more about him or her than he or she knows about himself or herself, with the result that the client becomes dependent. The person-centered therapist needs to avoid any evaluative statements. He or she does not convey values to the client through interpretations, does not ask questions in a probing manner, does not reassure, does not criticize, does not praise and does not describe the client. The person-centered therapy does not consider the transference relation a necessary part of the client's change in the direction of growth.
Distinctions between the person-centered conception and the views of behaviorism can be seen in the attitude of both approaches to science and to behavior change. Science, from the point of view of behaviorism, is the observation, recording and manipulation of observable phenomena, so behaviorist scientists try to apply to behavior research the rules accepted in the natural sciences. The internal experience of the person, however, is not a subject of study, as its direct observation and repetition under controlled conditions is impossible. Thus there is an exact set of criteria of scientific knowledge which defines what kind of behavior can be investigated and how it can be understood, predicted and controlled. Rogers asserts that there are certain limits to the study of the world of experience by scientific methods, but that to ignore internal experience and its influence on behavior completely would be a tragic mistake (Hart and Tomlinson, 1970). According to Rogers, a science of the person should try to understand people in all their manifestations.
According to the ideas of behaviorism, behavior change occurs through external control of stimulus and reward. From the point of view of the person-centered theory, behavior change arises from within the individual. The purpose of behavioral therapy is symptom elimination; the relations between therapist and client, like the internal experiences connected with the symptom, are not especially important. Thus the behavior therapist aspires to eliminate symptoms as soon as possible, using the principles of learning theory. This point of view is completely opposite to that of the person-centered therapy, which believes that the "fully functioning person" relies on internal experience in determining his or her behavior.
The Theory of the Person
Working out a theory of the person was never a priority for the person-centered theorists. Rogers notes that although the theory of the person arose from the experience of client-centered therapy, it is not the main focus of those connected with this approach; at the center of their interest is, rather, how change in the human person comes about, and questions about the process of personal change seem more important and more tractable than questions about the causes of a person's existing characteristics (Rogers, 1959b).
The person-centered theory of the person has grown from the experience of client-centered therapy, from research, and from the theory of personal change (Holstock and Rogers, 1983).
Since the theoretical concepts in this case follow from experience as a process, it is a field theory rather than a genetic theory such as psychoanalysis. The significant factors are the immediate relations, as in an electric field. The person-centered theory is first of all a theory of the conditions thanks to which changes occur.
The Developing Baby
The person-centered theory of therapy begins with certain postulates concerning the person at birth. The world of the baby is the world of his or her own experiences; these form his or her unique reality. In the world of the organism the baby has one basic motivational force: the tendency towards self-actualization. Along with this basic motivation, the child possesses the capacity to evaluate positively those experiences which he or she perceives as strengthening the organism, and to evaluate negatively those experiences which seem to contradict its actualizing tendency. This organismic valuing process "directs its behavior to self-actualization".
The Self-Concept
As the child grows and develops, he or she begins to differentiate experience, recognizing what is part of his or her own existence and functioning and attributing other experience to other people and things in the environment. As comprehension of his or her own existence and functioning develops, the child acquires a sense of self, from which the self-concept develops. The development of the self-concept depends in many respects on how the individual perceives his or her experience in the environment, which is shaped by the need for positive regard - a universal and persistent need of the human being (Rogers, 1959b). On the other hand, out of the whole complex of experiences of satisfaction of this need there is formed the individual's need for a positive self-evaluation, or self-regard, an acquired sense based on the perceived evaluations of others.
Self-regard becomes a deep construct, influencing the behavior of the organism as a whole, and it acquires a certain independence from evaluations from outside and from other people. This occurs because the individual introjects conditions of value.
Value Conditions
The child's need to keep the love of his or her parents inevitably conflicts with the needs of his or her organism. The values which the child senses in his or her own organism sometimes contradict the values of the parents. Behavior prompted by personal needs and desires sometimes contradicts the behavior the parents consider acceptable. Under the influence of this experience, the child starts to reconstruct his or her own system of self-regard, distinguishing between experiences that receive a positive and a negative evaluation from significant others. The child starts to avoid or completely deny organismic experiences which, although genuinely his or her own, do not elicit positive regard from the significant people in his or her world. These introjected value conditions become a part of the child's system of self-regard.
The child feels positive regard for himself or herself when his or her experience corresponds to experience which has received a positive evaluation from significant others. Self-regard decreases when this external positive evaluation is absent. So self-regard comes to depend on the conditions of value acquired in interaction with significant others in the child's world. What happens to the actualizing tendency as conditions of value become part of the system of self-regard? It nevertheless remains the individual's basic motivation. However, a conflict arises between organismic needs and the need for self-regard, which is now tied to conditions of value. The individual, as a result, must choose between aspirations and actions that accord with organismic sensing and their censorship on the basis of the acquired conditions of value. In order to keep self-regard, the feeling of worth and the experience of self-actualization, the child prefers to act according to the conditions of value. In other words, the need for self-regard gets the better of the needs of the organism. At the moment of choice the child may believe that his or her organismic needs "are bad", that they contradict being a "good" person and therefore hinder self-actualization. Rogers writes that the person's alienation from directing organismic processes is not an indispensable part of human nature; it is learned, and is especially characteristic of Western civilization. The satisfaction of the actualizing tendency becomes split, leading to the formation of incompatible behavioral systems, and such dissociation is the basis of psychological pathology (Rogers, 1963).
Fortunately, organismic promptings do not cease to exist when they are denied access to consciousness. Their persistence becomes a problem for the individual. He or she starts to perceive experience selectively, according to whether or not it confirms the self-concept, which is now to a substantial degree defined by conditions of value. Every time perception is distorted or experience is denied, a discrepancy arises between the self-concept and experience, and with it psychological maladjustment and vulnerability (Rogers, 1959b). Experiences which are not consistent with the individual's self-concept are perceived as a threat: if they were correctly symbolized in consciousness, they could disrupt the organization of the self-concept, since they would contradict the incorporated conditions of value. Therefore these experiences cause anxiety and activate defense mechanisms, which distort or deny them so as to preserve, as far as possible, the stability of the individual's perception. Needing, thus, protection from accurate perception of experience that contradicts its conditions of value, the individual develops rigidity of perception in the corresponding areas.
Psychotherapy and Personal Change
Therapy is an intervention in the discrepancy, or incongruence, which has formed between the individual's organismic experience and the self-concept. In the therapeutic relationship the client can take the risk of admitting into awareness experiences that were previously distorted or denied. In an atmosphere of understanding, the previously denied organismic aspirations can become part of the self-concept. In the course of therapy the individual exchanges his or her conditions of value for trust in the wisdom of the developing organism in its entirety.
Topic 7 - Sampling Distributions, Central Limit Theorem (CLT), Confidence Intervals and Sample Size
This article is a topic within the subject Business & Economic Statistics.
Contents
Required Reading
Gerald Keller (2011), Statistics for Management and Economics (Abbreviated), 9th Edition.
Sampling Distribution of a Sample Mean
There are 2 methods to find the sampling distribution of the sample mean.
- Draw samples of the same size from the population and calculate the statistic of interest
- Rely on Probability, E(X) & V(X)
Bank Teller Example
In this example, the sample size is 2 (there are 2 bank tellers), and each teller can make 0, 1 or 2 errors (X = number of errors). From the probability distribution of X (given in the original lecture slide, not reproduced here), the mean and variance of X are calculated.
Now, we create a new random variable, X bar - representing the sample mean. For example, if Teller 1 makes 2 mistakes and Teller 2 makes 0 mistakes, X bar = 1 with probability 1/5 * 3/5 (from the original probability distribution) = 3/25.
Listing every combination of teller errors in this way gives the full sampling distribution of X bar.
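As a minimal sketch of the first (enumeration) method, the sampling distribution of X bar for the two tellers can be listed exhaustively. The single-teller probabilities used below, P(X=0) = 3/5, P(X=1) = 1/5 and P(X=2) = 1/5, are an assumption consistent with the 1/5 * 3/5 term quoted above; the original slide's table is not reproduced here.

```python
from itertools import product
from fractions import Fraction

# Assumed single-teller error distribution (consistent with the 1/5 * 3/5
# term above; the lecture slide's exact table is not reproduced here).
p_x = {0: Fraction(3, 5), 1: Fraction(1, 5), 2: Fraction(1, 5)}

n = 2                # sample size: two tellers
sampling_dist = {}   # maps each value of X bar to its probability

# Enumerate every (Teller 1, Teller 2) combination of errors.
for outcome in product(p_x, repeat=n):
    x_bar = Fraction(sum(outcome), n)
    prob = Fraction(1)
    for x in outcome:
        prob *= p_x[x]
    sampling_dist[x_bar] = sampling_dist.get(x_bar, Fraction(0)) + prob

for x_bar in sorted(sampling_dist):
    print(f"P(X bar = {x_bar}) = {sampling_dist[x_bar]}")

# Checks: E(X bar) = E(X) and V(X bar) = V(X)/n.
mean = sum(x * p for x, p in sampling_dist.items())
var = sum((x - mean) ** 2 * p for x, p in sampling_dist.items())
print("E(X bar) =", mean, "V(X bar) =", var)
```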
Sampling from a Normal Distribution - Example
Let X = the amount of money customers owe a firm, with mean μ = 40 and standard deviation σ = 10, i.e. X ~ N(40, 100). With a sample size of n = 25, X bar ~ N(40, 4), since σ²/n = 100/25 = 4.
If an auditor took a random sample of 25 accounts, what is the probability that the average amount outstanding is less than 36?
P(X bar < 36) = P(Z < (36-40)/2) = P(Z < -2) = P(Z > 2) [By Symmetry] = 0.5 - P(0 < Z < 2) = 0.0228
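The same probability can be checked numerically. A sketch using scipy, with the μ = 40, σ = 10, n = 25 figures above:

```python
from math import sqrt
from scipy.stats import norm

mu, sigma, n = 40, 10, 25
se = sigma / sqrt(n)                     # standard error of X bar = 2

# P(X bar < 36) using X bar ~ N(40, se^2)
print(norm.cdf(36, loc=mu, scale=se))    # ~0.0228

# Equivalent z-score form: P(Z < (36 - 40)/2) = P(Z < -2)
print(norm.cdf((36 - mu) / se))          # ~0.0228
```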
Central Limit Theorem
The sampling distribution of the mean of a random sample will be approximately normal for a large sample size (typically n > 30). How large is "large enough" depends on how non-normal the population's distribution is.
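The CLT is easy to see by simulation. The sketch below draws repeated samples from a strongly skewed exponential population (a hypothetical choice, used only because it is clearly non-normal) and shows the mean and standard deviation of the sample means approaching the values the CLT predicts as n grows.

```python
import numpy as np

rng = np.random.default_rng(0)
pop_mean, reps = 5.0, 10_000      # exponential population with mean 5 (assumed)

for n in (2, 5, 30):
    # reps samples of size n, each reduced to its sample mean
    means = rng.exponential(scale=pop_mean, size=(reps, n)).mean(axis=1)
    print(f"n={n:>2}: mean of X bar = {means.mean():.3f} (theory {pop_mean}), "
          f"sd of X bar = {means.std(ddof=1):.3f} "
          f"(theory {pop_mean / np.sqrt(n):.3f})")

# As n grows the distribution of the sample means tightens around the
# population mean and its histogram approaches the normal curve.
```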
Claire Auditing Example (Lecture Notes) – Sampling From a Non-Normal Distribution
Interval Estimation
Produce a range of values with a degree of confidence attached - Confidence Interval.
Interpretation: if we repeatedly draw samples of size n, then (1 − α)×100% of the values of X bar will be such that μ lies within the confidence interval X bar ± z(α/2) × σ/√n.
Example – if 100 such samples were drawn at the 95% confidence level, we would expect about 95 of the resulting intervals to include the population mean. The confidence level determines the multiple of standard errors used for the end points.
Claire example: using an interval of +/- 4, a 95% confidence level and a sample size of 250 (a worked sketch with illustrative numbers follows below).
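A sketch of the interval X bar ± z(α/2)·σ/√n and of the sample-size calculation for a chosen interval width. The sample mean and population standard deviation plugged in are illustrative assumptions, not the figures from the Claire lecture example.

```python
from math import sqrt, ceil
from scipy.stats import norm

conf = 0.95
z = norm.ppf(1 - (1 - conf) / 2)          # z_(alpha/2), about 1.96

# Illustrative numbers only: assumed sample mean 370 and known sigma 32.
x_bar, sigma, n = 370.0, 32.0, 250
half_width = z * sigma / sqrt(n)          # about 4 with these inputs
print(f"95% CI: ({x_bar - half_width:.2f}, {x_bar + half_width:.2f})")

# Sample size needed for a chosen half-width (interval of +/- 4):
target = 4.0
n_needed = ceil((z * sigma / target) ** 2)
print("n needed for +/-", target, "=", n_needed)
```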
Sampling Distribution of a Sample Proportion & Estimation
- Sample proportion: p̂ = X/n
- If n = 10 and p = 0.4, the probability P(p̂ ≤ 0.5) = P(X ≤ 5) = 0.8338 (exact binomial)
- If we instead assume a normal distribution, the approximation below can be used (see Requirements & Rules)
- Estimator: A Statistic whose purpose is to estimate a parameter
- Point Estimator: A formula for combining Sample information to estimate a parameter (single value)
Requirements & Rules
- np ≥ 5 and n(1-p) ≥ 5
- E(p̂) = p
- V(p̂) = σ²(p̂) = p(1-p)/n
- Standard deviation/error: σ(p̂) = √(p(1-p)/n)
- p̂ = X/n (X = number of successes, n = sample size)
Textbook Example (Page 326)
- n = 300, p = 0.52
- For X: E(X) = np = 156, SD = √(np(1-p)) ≈ 8.65, so P(X > 150) = P(Z > (150 - 156)/8.65) = P(Z > -0.69) = 0.7549
- OR
- For p̂: P(p̂ > 0.50) = P(Z > (0.50 - 0.52)/√(0.52 × 0.48/300)) = P(Z > -0.02/0.0288) = P(Z > -0.69) = 0.7549 (verified numerically in the sketch below)
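The textbook figures can be reproduced directly. A sketch comparing the normal approximation with the exact binomial answer for n = 300, p = 0.52:

```python
from math import sqrt
from scipy.stats import norm, binom

n, p = 300, 0.52
se = sqrt(p * (1 - p) / n)                 # about 0.0288

# Normal approximation: P(p_hat > 0.50) = P(Z > (0.50 - 0.52)/se)
z = (0.50 - p) / se
print("normal approximation:", 1 - norm.cdf(z))   # about 0.7549

# Exact binomial check: P(X > 150) with X ~ Bin(300, 0.52)
print("exact binomial:", binom.sf(150, n, p))     # close to the approximation
```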
End
This is the end of this topic.
References
Textbook refers to Gerald Keller (2011), Statistics for Management and Economics (Abbreviated), 9th Edition.
Armenia, Australia, Austria, Belarus, Belgium, Brazil, Bulgaria, Canada, Czech Republic, France, Germany, Hungary, India, Ireland, Italy, Japan, Moldova, Mongolia, Poland, Romania, Russia, Serbia, Slovakia, Slovenia, Spain, Switzerland, Taiwan, Ukraine, USA, Uzbekistan, Vietnam.
Effects of strong electron correlations in high-temperature superconductors, colossal magneto-resistance compounds (manganites), heavy-fermion systems, low-dimensional quantum magnets with strong spin-orbit interaction, topological insulators, etc. will be investigated based on a variety of underlying many-band electronic models, including the extended Hubbard model, the Anderson model, and superexchange spin-orbital models of transition-metal oxides with strong relativistic spin-orbit coupling. The electronic band structure, spectral properties of charge-carrier quasiparticles, magnetic and charge collective excitations, metal-insulator and magnetic phase transitions, Cu- and Fe-based high-Tc superconductivity, and charge and spin-orbital ordering will be studied. The obtained results will be used to support neutron scattering experiments performed at FLNP, JINR.
Investigations in the field of nanostructures and nanoscaled phenomena will be addressed to a study of physical characteristics of nanomaterials promising for various applications in modern nanotechnologies. The electronic, thermal and transport properties of carbon nanostructures will be investigated. It is planned to study the problem of quantum transport in molecular devices. Spin dynamics of magnetic nanoclusters will be investigated. The analysis of resonance tunneling phenomena in the layered superconductors and superconducting nanostructures in the external fields will be performed. Numerical modeling of resonance, radiative and chaotic properties of intrinsic Josephson junctions in high temperature superconductors is planned to be carried out.
Мodels in condensed matter physics will be studied by using methods of equilibrium and non-equilibrium statistical mechanics with the aim of revealing general properties of many-particle systems based on the ideas of self-similarity and universality. Mathematical mechanisms, underlying the kinetic and stationary behavior of model systems, as well as possible links between different models, will be investigated. The study of two-dimensional lattice models by the transfer matrix method will be focused on confirming the predictions of the logarithmic conformal field theory. The theory of integrable systems will be developed in the aspect of finding new integrable boundary conditions for two-dimensional spin systems and the solution of the corresponding Yang-Baxter equations. The universal behavior of correlation functions in non-equilibrium systems will be studied as well. The research in the structure theory and the theory of representations of quantum groups and matrix algebras will be directed to further applications in the theory of integrable models in quantum mechanics and statistical physics. Applications of the elliptic hypergeometric integrals, defining the most general solutions of the Yang-Baxter equation and most complicated known exactly computable path integrals in four-dimensional quantum field theory, to two-dimensional spin systems will be studied.
Theoretical study of electronic and magnetic properties of strongly correlated systems including newly synthesized oxides 3d, 4d and 5d transition metals.
Theoretical study of electronic, transport and optical properties of hybrid perovskites for the third generation of solar cells.
Calculation of the spin-wave excitation spectrum, magnetization, susceptibility and the Neel temperature for the quasi-two-dimensional Kitaev-Heisenberg model on the honeycomb lattice proposed for iridates in the antiferromagnetic and paramagnetic states.
Development of the superconducting theory for the quasi-two-dimensional compass-Heisenberg model and the Kitaev-Heisenberg model on the honeycomb lattice.
Investigation of the one-dimensional Bose gas in external potentials. Description of short-range correlations in the Bose gas in the regime of strong interactions.
Development of theoretical models for small-angle scattering from mass and surface fractals. Theoretical study of small-angle scattering from multifractals.
Investigation of generation of magnetic precession by Josephson current in the presence of spin-orbital coupling in the external electromagnetic field.
The classification of the appearing superconducting spintronics effects.
Study of the hopping mechanism of the vibron excitation transport in macromolecular chains in the framework of non-adiabatic polaron theory depending on the quantum state of the macromolecular structural elements, such as squeezed and chaotic squeezed states.
Theoretical investigation of the electron conductivity in polycrystalline graphene. Calculation of a relaxation time due to a finite charged dislocation wall of a different type (grain boundary scattering), and the corresponding contribution to the tensor of conductivity as a function of temperature and carriers concentration.
Application of the extended self-consistent Hückel method, along with the non-equilibrium Green's function method, and calculation of the current-voltage characteristics of some graphene nanostructures. Investigation of the influence of the phonon-phonon interaction on the thermal conductance of graphene nanoribbons of different widths.
Development of the new concepts of electronic nano-devices based on graphene and graphene ribbons. Investigation of different aspects of electron transport in the electronic devices based on the graphene edge states.
Within the dual model of strongly correlated electrons, the spin correlation function is to be computed for the lightly doped cuprates beyond the mean-field approximation.
Construction of superconformal indices for quiver supersymmetric gauge theories and description of their relation to partition functions of lattice spin systems.
Consideration of a traffic model where a condition of irreversible aggregation is introduced along with the standard excluded-volume conditions. Irreversible aggregation means that a cluster of particles, once it has emerged, will not be destroyed later, and all particles of the cluster move synchronously. The model is defined on a finite interval with given ingoing and outgoing probabilities at the ends of the interval. The phase diagram of the stationary state will be constructed and its peculiarities in four different sectors will be explained.
Detailed investigation of the rotor-router aggregation model on infinite graphs.
Investigation of separation of variables in three-body elliptic Calogero-Moser systems.
Computation of large-deviation probabilities in the spherical model, generalized spherical model and in the models of the Boson gas.
Investigation of the structure of quantum matrix algebras of orthogonal and symplectic types, classification of irreducible representations of the braid group B3 in low dimensions.
Solving of spectral problems in systems of mixed dimensionality. Derivation of new characteristics of the equiangular tight frames.
Description of the transition from the Kardar-Parisi-Zhang regime to the deterministic aggregation regime in the non-stationary fluctuations of particle current in the model of generalized asymmetric simple exclusion process. Calculation of the universal finite size corrections to the large deviation function of particle current in the model of random walks in a random environment.
Development of the theory of Bose-condensed systems with dipolar interaction potentials.
Formulation of an approach for describing non-equilibrium networks of complex quantum systems.
Study of the time evolution and stationary states of open many-particle interacting systems by means of a special procedure of a reduced description. The method of maximum information entropy will be analyzed in this context.
Construction of solutions of the Yang-Mills equations on conical spaces with Lorentzian metric in various dimensions. Study of the particle-vortex duality. | http://wwwinfo.jinr.ru/plan/ptp-2017/a731115.htm |
The invention relates to a puzzle cube displaying a combination of different symbols on each face of the cube depending on the configuration of the puzzle cube. The invention also relates to a system and method for accessing a database either using the puzzle cube, a set of symbol dice or an array of symbols.
Puzzle cubes such as the familiar Rubik's cube (RTM) are well known. Such puzzle cubes consist of several smaller cubes or cubelets, which are attached to each other to form the puzzle cube so that each square face of cubelets can be rotated about an axis perpendicular to the face relative to the rest of the puzzle cube. Typically, puzzle cubes consist of a 3 by 3 by 3 arrangement of cubelets, meaning that each face of the puzzle cube is a 3 by 3 square of 9 cubelets and there are 26 cubelets in total in the puzzle cube. The cubelets are all attached to a central rotation mechanism that is hidden from view. The faces of the cubelets are different colours and the aim of the puzzle is to match the colours of the cubelets on each face of the puzzle cube by rotating the faces so that each face of the puzzle cube is a uniform colour.
Conventional dice are also well known, which have the numbers 1 to 6 displayed on their faces.
According to an aspect of the present invention, there is provided a puzzle cube including a plurality of cubelets forming rotatable faces of the puzzle cube, wherein a plurality of different symbols are respectively provided on a plurality of faces of the cubelets.
Since different symbols are attached to different cubelet faces, rotating the faces of the puzzle cube changes the combination and configuration of symbols displayed on each face of the puzzle cube. The vast number of possible symbol configurations on a given face of the cube can be used as labels in the same way as conventional barcodes for example.
Preferably, the puzzle cube further comprises a near field communication chip.
Preferably, each face of the cube comprises a 3 by 3 array of cubelets. In another embodiment, each face of the cube comprises a 2 by 2 array of cubelets.
According to another aspect of the present invention, there is provided a system for accessing a database comprising: a puzzle cube as described above; an image capturing device for capturing an image of a face of the puzzle cube; an image recognition component communicably connected to the image capturing device and adapted to determine an arrangement of the symbols contained in the captured image; a database containing a plurality of database entries, each database entry being indexed by an arrangement of the symbols; and a database access component communicably connected to the image recognition component and to the database, the database access component being adapted to access the database and retrieve the database entry indexed by the arrangement of symbols determined by the image recognition component.
Once the puzzle cube has been configured as desired by the user, the image displayed on a given face of the puzzle cube in the configuration is scanned and the image recognition component automatically determines the configuration of symbols on the face of the cube based on the image. The database access component then accesses the database using the symbol configuration as a search query. Since the database entries are indexed by possible symbol configurations, a corresponding database entry is returned provided that the particular symbol configuration used is included in the database.
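One way to realise the database access component is to serialise the recognised face configuration into a canonical key and use that key as the index. The sketch below is illustrative only: the symbol names, the row-by-row key format and the dictionary standing in for the database are assumptions, not part of the claimed system.

```python
# Hypothetical symbol labels produced by the image recognition component
# for one 3x3 face of the puzzle cube, read row by row.
face = [["star", "moon", "star"],
        ["sun",  "moon", "heart"],
        ["star", "cloud", "sun"]]

def arrangement_key(grid):
    """Serialise a detected symbol grid into a canonical index string."""
    return "|".join(",".join(row) for row in grid)

# Dictionary standing in for the database; each entry is indexed by an
# arrangement of symbols (here mapping to a website address, as one
# possible kind of database entry).
database = {
    arrangement_key(face): "https://example.com/landing-page",
}

def retrieve_entry(grid, db):
    """Database access component: fetch the entry indexed by a scanned face."""
    return db.get(arrangement_key(grid))

print(retrieve_entry(face, database))  # entry indexed by this arrangement
```

Adding a new entry is the mirror image of retrieval: compute the key for the scanned face and store the entry under it, which corresponds to the indexing step of the method for adding information described below.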
According to another aspect of the present invention, there is provided a system for accessing a database comprising: one or more dice having a plurality of symbols respectively provided on faces of each die; an image capturing device for capturing an image of the faces of the dice; an image recognition component communicably connected to the image capturing device and adapted to determine an arrangement of the symbols contained in the captured image; a database containing a plurality of database entries, each database entry being indexed by an arrangement of the symbols; and a database access component communicably connected to the image recognition component and to the database, the database access component being adapted to access the database and retrieve the database entry indexed by the arrangement of symbols determined by the image recognition component.
By using the dice face symbol configurations in the same way as the puzzle cube face symbol configurations above, the same effects can be provided. In addition, the dice embodiment provides a built in way to generate a random symbol configuration, which makes this embodiment particularly suitable for a game environment.
Preferably, the image recognition component is adapted to determine the arrangement of the symbols on the upturned faces of the dice.
According to another aspect of the present invention, there is provided a system for accessing a database comprising: a recording medium or electronic display having an array of symbols displayed thereon; an image capturing device for capturing an image of the array of symbols; an image recognition component communicably connected to the image capturing device and adapted to determine an arrangement of the symbols contained in the captured image; a database containing a plurality of database entries, each database entry being indexed by an arrangement of the symbols; and a database access component communicably connected to the image recognition component and to the database, the database access component being adapted to access the database and retrieve the database entry indexed by the arrangement of symbols determined by the image recognition component.
The array of symbols has the advantage that it is relatively easy for the image recognition component to detect each individual symbol and their arrangement reliably, so that the overall combination of symbols can be reliably determined. It is also possible for users of the system to recognise and remember particular combinations of symbols that are familiar to them, which is not possible with a barcode for example.
Preferably, the array of symbols is a two-dimensional array. Suitably, the array of symbols includes at least two different symbols.
Preferably, the database entries are user profiles; the system further comprises a login component communicably connected to the database access component; and the login component is adapted to allow a user of the system to access the user profile indexed by the arrangement of symbols determined by the image recognition component.
According to another aspect of the present invention, there is provided a method for retrieving information from a database comprising: manipulating a puzzle cube as described above to produce an arrangement of symbols on a face of the puzzle cube; capturing an image of a face of the puzzle cube; automatically determining an arrangement of the symbols contained in the captured image; and accessing a database containing a plurality of database entries, each database entry being indexed by an arrangement of the symbols, using the determined arrangement of the symbols so as to retrieve the database entry associated with the determined arrangement of symbols.
According to this method, information can be retrieved from a database simply by scanning the puzzle cube to generate a search query. The method may be applied to an array of dice or an array of symbols described above as well as to a puzzle cube.
According to another aspect of the present invention, there is provided a method for adding information to a database comprising: manipulating a puzzle cube as described above to produce an arrangement of symbols on a face of the puzzle cube; capturing an image of a face of the puzzle cube; automatically determining an arrangement of the symbols contained in the captured image; adding a database entry to the database; and indexing the database entry by the determined arrangement of the symbols.
According to this method, information can be added to a database simply by scanning the puzzle cube to generate an index or label for the new database entry. The method may be applied to an array of dice or an array of symbols described above as well as to a puzzle cube.
Preferably, the database entries are website addresses. Alternatively, the database entries are user profiles.
Suitably, the database entries are lottery ticket numbers and the method further comprises: randomly selecting a winning one of the lottery ticket numbers in the database; and comparing the winning lottery ticket number with the lottery ticket number retrieved in the step of accessing the database to determine whether the arrangement of symbols is a winning arrangement.
As shown in FIGS. 1 and 2, the puzzle cube 10 according to an embodiment of the present invention consists of a plurality of cubelets 20. The puzzle cube 10 has a plurality of different symbols 22 displayed on different faces of the cubelets 20 respectively. In this embodiment the cube 10 includes 26 cubelets 20 forming a 3 by 3 array of 9 cubelet faces on each face of the puzzle cube 10. However, the invention can also be applied to puzzle cubes consisting of any number of cubelets per side, for example puzzle cubes made up of 2 cubelets per side in a 2 by 2 by 2 arrangement. Similarly, the invention can be applied to puzzle cubes having 4 or more cubelets per side.
The number of different symbols 22 attached to the cubelet faces of the cube is not limited, as long as there are at least two different symbols 22 so a variety of cube face configurations can be produced by rotating the cube faces. In one embodiment, six different symbols 22 are distributed across the cubelet faces in equal numbers so that rotating the cube faces can result in all of the cubelet faces making up each face of the cube displaying the same symbol.
In another preferred embodiment, seven different symbols 22 are used with the additional symbol being provided on six cubelet faces and the remaining cubelet faces being divided equally between the other six symbols 22. In this example, the cube can be rotated so that each face has one additional symbol and the remaining cubelet faces of the cube face all display the same one of the other six symbols 22. An example of a face of the cube in such an embodiment is shown in FIG. 3.
Alternatively, a different symbol may be provided on each cubelet face of the cube. This arrangement maximises the number of distinct cubelet configurations that the cube can be put into.
The type of symbols 22 shown on the cubelet faces is also not limited. For example, the symbols 22 may represent numbers, letters, well known signs such as “play”, “pause” or “rewind” symbols, cartoon characters, game characters and so on. The symbols 22 may have rotational symmetry, in which case they always appear correctly oriented no matter how the cube is configured. Alternatively, the symbols 22 may lack rotational symmetry, in which case more configurations are possible due to the different possible combinations of orientations of the symbols 22. In an alternative embodiment, blocks of different colours may be provided on the cubelet faces rather than symbols 22.
The system according to an embodiment of the invention is shown in FIG. 4. The system comprises the puzzle cube 10, a camera 30, a controller 40 including an image recognition component 50 and a database access component 60, and a storage device 80 storing a symbol configuration database 90. In a preferred embodiment, the camera 30 and the controller 40 form part of a mobile telephone 1000 and the image database is stored remotely and accessed via an Internet or other network connection 70, but the invention is not limited to this. The controller 40 may also be part of a laptop, tablet, head mounted display, other mobile computing device or desktop computer for example.
The camera 30 transmits image data captured by the camera 30 to the controller 40. The image recognition component 50 of the controller 40 is capable of recognising known features in the captured image, such as the cubelet symbols 22 or cubelet colours of the invention. The database access component 60 is capable of generating a search query for the database 90 based on the output of the image recognition component 50 and searching the database using the generated query. The controller 40 also has an internet connection 70, via which it can communicate with other devices such as the storage device 80.
The storage device 80 may be a server having a hard disk or other mass storage device, as is conventional. In an alternative embodiment, the symbol configuration database 90 may be contained in a storage device 80 local to the controller 40. For example, the symbol configuration database 90 could be stored in the flash memory of a mobile telephone 1000 having the controller 40. In this case, no internet connection is required to access the symbol configuration database 90.
In an alternative embodiment, the image recognition component 50 may be provided remotely from the camera 30 and the controller 40. In this case, the controller 40 passes image data captured by the camera 30 to the image recognition component 50 for processing.
The method according to an embodiment of the invention is shown in FIG. 5. The first step is for the user to configure the puzzle cube 10 by rotating its faces, so that a combination of symbols 22 or colours is displayed on each face. The image capture device such as a camera 30 is then used to capture an image of one of the faces of the configured cube. In the preferred embodiment, the system of the invention includes an app running on a mobile telephone 1000, i.e. a smartphone. The app allows the user to capture the image of the cube face using the camera 30 of the smartphone 1000 and the image is then stored by the app in the smartphone memory. The app contains an image recognition component 50, which is capable of recognising the combination of symbols 22 on the cube face captured in the image using known image recognition algorithms to perform the step of scanning the image for symbols 22. This can be implemented in a similar way to existing QR code scanner apps on mobile telephones.
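By way of illustration only, the output of this recognition step could be reduced to a simple textual key for the subsequent database query, as in the following sketch; the per-cell classifier `classify_cell`, the symbol names and the key format are assumptions made for the example rather than part of the described embodiment.

```python
# Illustrative sketch: reduce a captured 3 x 3 cube face to a configuration key.
# `classify_cell` is a hypothetical classifier (template matching, a small CNN,
# etc.) returning one symbol label per cubelet face; `image` is a NumPy-style
# pixel array.

SYMBOLS = ["star", "moon", "sun", "heart", "clover", "bolt"]  # example labels only

def face_configuration(image, classify_cell):
    """Split the face image into a 3 x 3 grid and classify each cell."""
    height, width = image.shape[:2]
    labels = []
    for row in range(3):
        for col in range(3):
            cell = image[row * height // 3:(row + 1) * height // 3,
                         col * width // 3:(col + 1) * width // 3]
            labels.append(classify_cell(cell))
    # A stable textual key such as "star|moon|moon|sun|sun|sun|bolt|heart|clover"
    return "|".join(labels)
```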
Once the app has determined the combination of symbols 22 shown on the cube face from the captured image, the combination of symbols 22 is used as a query to search a stored database indexed by possible configurations of symbols 22 on a cube face. The database is stored in a storage device 80. In this embodiment, the storage device 80 is provided in a remote server which performs the database search using the query. The smartphone accesses the remote server via its Internet connection and sends the symbol configuration to the remote server for use as the query.
If a match for the symbol configuration is found in the database then the remote server returns the database entry stored in the database in association with the configuration of symbols 22 shown on the cube face. The smartphone 1000 can optionally then display the database entry to a user or perform another action based on the database entry returned from the database.
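The database side of this lookup can be as simple as a key–value query. The following sketch stands in for the remote symbol configuration database 90 with an in-memory dictionary; the example key and the returned link are invented for illustration.

```python
# Illustrative sketch: retrieve the entry indexed by a face configuration key.
symbol_database = {
    "star|moon|moon|sun|sun|sun|bolt|heart|clover": "https://example.com/content",
}

def lookup_entry(config_key, database=symbol_database):
    """Return the entry stored for this configuration, or None if there is no match."""
    return database.get(config_key)
```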
Various types of returned database entry and corresponding actions may be used in the invention. In a preferred embodiment, the database entry may be a link to content provided on a website or on a remote server accessed by the app via a network connection 70. When the link is returned by the remote server, the smartphone 1000 will display the content accessed via the link to the user. Since so many permutations of symbols 22 are possible on a face in a typical embodiment of the puzzle cube 10, e.g. a 3 by 3 by 3 cube having a different symbol on each cubelet face, it would theoretically be possible to link to every website in existence via unique configurations of the cube faces.
Alternatively, the database entry may link to a profile of a user either within the app or on a social network. Users may store their profile in the symbol configuration database 90 in association with a unique cube face symbol configuration using the app.
A method of storing a database entry in association with a captured cube face configuration is illustrated in FIG. 6. This method can be used to store a user profile as discussed above for example. The method is similar to that used when accessing a database entry using a captured cube face configuration, except that a new database entry is stored in association with the captured configuration rather than an existing entry being accessed.
First, the database entry to be added to the database is supplied. This can be any type of data but is information on a user profile in this embodiment. Next the cube is configured, an image of the face is captured and the image is scanned for symbols 22 as above. A check is then performed to determine whether the symbol configuration on the cube face is already present in the database. If so then an error message is returned and the user will have to reconfigure the cube. Otherwise, the new database entry, i.e. the user profile, is added to the database indexed by the symbol configuration corresponding to the puzzle cube face.
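A minimal sketch of this add-entry flow, assuming the same key–value store as in the earlier lookup sketch; the duplicate check mirrors the error case described above, and the exception name is an invention of the example.

```python
# Illustrative sketch of adding a new database entry indexed by a face configuration.
class DuplicateConfiguration(Exception):
    """Raised when the chosen face configuration is already taken."""

def add_entry(config_key, entry, database):
    if config_key in database:
        # The user must reconfigure the cube and try again.
        raise DuplicateConfiguration("This face configuration is already in use.")
    database[config_key] = entry  # e.g. a user profile record
```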
Subsequently, when the unique cube face symbol configuration is captured and compared to the database, the associated user profile is returned. In this way, a user can adopt a cube face configuration as a unique identifier that could be presented on a puzzle cube 10 or printed on a business card for example, allowing the user or others to access the user's profile automatically using the app.
Other types of database entry are possible. For example, the puzzle cube 10 could be used to play a lotto type game by allowing users to pick a cube face symbol configuration as an entry or “lottery ticket”. One or more winning cube face symbol configurations would then be chosen and the winning user could be notified.
Since the number of possible combinations of symbols 22 on one face of a 3 by 3 by 3 or larger puzzle cube 10 is vast, even if only six symbols 22 are used, there is no risk of the app running out of unique configurations to assign to users, content or game entries.
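A rough upper bound on that number, counting only which of six (or seven) symbols appears in each of the nine cells and ignoring both symbol orientation and which patterns the cube mechanics can actually reach:

```python
# Upper bound on distinct 3 x 3 face patterns (symbol identity per cell only).
cells = 9
print(6 ** cells)  # 10,077,696 patterns with six symbols
print(7 ** cells)  # 40,353,607 patterns with seven symbols
```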
When a user first loads the app on the smartphone 1000 they will be invited to enter sufficient basic personal details to identify them as a user. These details are then used by the app to create a user profile, which is stored on a central server accessed by the app via the Internet. On subsequent uses of the app, the app can then identify the user who is capturing a particular cube face configuration or performing other actions within the app.
An optional feature of the puzzle cube 10 is a Near Field Communication (NFC) chip that can be embedded within the cube. The NFC chip may contain information uniquely identifying the particular cube it is attached to. With this feature, the app can communicate with the NFC chip using NFC communication hardware on the smartphone 1000 or other device on which it is run, and identify the cube being scanned. A cube's identifying information can be associated with a particular user profile in the app so that the user can be signed in to their profile automatically when the cube is scanned by the app.
In addition, the app may recognise movements of the cube as well as recognising the symbols 22 on a face of the cube. The camera 30 of the smartphone 1000 or other device captures the movement of the cube in this embodiment and a gesture recognition component is included in the app. The gesture recognition component interprets the movement captured by the camera 30 as one of a number of preset control gestures. Once the gesture made by the user with the cube has been interpreted, the app may respond in various ways. For example, the app could provide the user with extra content in response to the gesture or could activate or unlock a feature within the app.
In an alternative embodiment, the system of the invention uses a virtual cube in place of the physical puzzle cube 10. The virtual cube is a 3D simulation of the physical puzzle cube 10 running within the app. The user can manipulate the faces of the virtual cube in the same way as the faces of a physical cube by using the touchscreen of the smartphone 1000 or other input device.
In this embodiment, the same operations described above can be performed by rotating the faces of the virtual cube within the app to display a given combination of symbols 22. There is no need to capture an image in this embodiment; the combination of images on a selected face of the virtual cube is already known to the app and can be used to access the symbol configuration database 90 as described above.
The virtual cube embodiment has the advantage that a user can customise the symbols 22 attached to their virtual cube and the associated database entries in the symbol configuration database 90. The symbols 22 and database entries can be unique to the individual user and are stored by the app as part of the user profile.
An alternative to the puzzle cube 10 is using one or more individual symbol cubes or dice 210 to generate the combinations of symbols 22 used to access the database. Each die 220 is a cube or other polyhedron having a symbol 222 on each face. One or more of the dice 220 are rolled by the user and the upward facing faces of the dice 220 are captured by a camera 30 in the same way as described above for the puzzle cube face. An example of a symbol configuration resulting from such a dice roll is shown in FIG. 7. The combination of symbols 222 on the upward faces resulting from the dice roll can be used to access a symbol configuration database 90 and deliver content in all the same ways as described above for the puzzle cube 10.
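One practical difference from the cube face, not spelled out above, is that the spatial arrangement of rolled dice is usually irrelevant; a reasonable (assumed) design is therefore to canonicalise the recognised upturned symbols, for example by sorting them, before using them as a database key.

```python
# Illustrative sketch: build an order-independent key from a dice roll.
def dice_configuration(upturned_symbols):
    return "|".join(sorted(upturned_symbols))

print(dice_configuration(["bolt", "star", "star"]))  # "bolt|star|star"
print(dice_configuration(["star", "bolt", "star"]))  # same key, same database entry
```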
Another alternative embodiment uses an array of symbols 22 or icons to access the database, without requiring a puzzle cube 10 or a set of dice. The array of symbols 22 may be one or two-dimensional and may consist of any number of individual symbols 22. For example, a 3 by 3 array of symbols 22 corresponding to a face of the puzzle cube 10 described above could be used, as shown in FIG. 3. The array of symbols 22 may be printed on a sheet of paper or other recording medium, or may be shown on an electronic display. The types of symbols 22 described above in relation to the puzzle cube 10 may also be used in this embodiment.
In this embodiment, the array of symbols 22 is scanned by an image capture device and the combination of symbols 22 in the array is detected by an image recognition component 50 as described above. The combination of symbols 22 is then compared to the database and can be used to retrieve a particular entry from the database or to store a new entry indexed by the combination of symbols 22, as also described above in relation to the puzzle cube 10 embodiment.
FIG. 8 illustrates an exemplary embodiment of a computer system 100 in which the system and method of the present invention may be realised. Both the controller 40 and the storage device 80 may be implemented in such hardware.
The computer system 100 may interface to external systems through a fixed wire or wireless connection 102 or any other network interface such as analog or ISDN modems, cable modems (ADSL/DSL), Ethernet or fibre optic interfaces, cellular or HSDS services and satellite transmission interfaces. As shown in FIG. 8, the computer system 100 includes a processing unit 104, which may be a conventional microprocessor, such as an Intel Core microprocessor or an ARM Cortex microprocessor, which are known to one of ordinary skill in the computer art.
System memory 106 is coupled to the processing unit 104 by a system bus 108. System memory 106 may be a DRAM, RAM, static RAM (SRAM) or any combination thereof. Bus 108 couples processing unit 104 to system memory 106, to non-volatile storage 110, to graphics subsystem 112 and to input/output (I/O) controller 114. Graphics subsystem 112 controls a display device 116, for example a liquid crystal display, which may be part of the graphics subsystem 112. The I/O devices may include one or more of a keyboard, tablet, stylus, disk drives, printers, a mouse, a touch screen or gesture driven interface and the like as known to one of ordinary skill in the computer art. The I/O devices may also include the camera 30.
The non-volatile storage 110 may be a magnetic hard disk, a flash memory or another form of storage for large amounts of data. Some of this data is often written by a direct memory access process into the system memory 106 during execution of the software in the computer system 100. The non-volatile storage 110 may contain the database 90.
The foregoing description has been given by way of example only and it will be appreciated by a person skilled in the art that modifications can be made without departing from the scope of the present invention.
Embodiments of the present invention will now be described by way of further example only and with reference to the accompanying drawings, in which:
FIG. 1 is a perspective view of a puzzle cube according to an embodiment of the invention;
FIG. 2 shows the cubelets making up a puzzle cube according to an embodiment of the invention;
FIG. 3 shows an example of a face of a puzzle cube according to an embodiment of the invention in one configuration and also illustrates an array of symbols according to another embodiment of the invention;
FIG. 4 is a block diagram of a communication system according to an embodiment of the invention;
FIG. 5 is a flow diagram of a communication method according to an embodiment of the invention;
FIG. 6 is a flow diagram of another communication method according to an embodiment of the invention;
FIG. 7 shows a dice roll used in an alternative embodiment of the invention; and
FIG. 8 shows an example of hardware in which an embodiment of the invention can be implemented.
Transportation, Airport, and Parking Services is the department responsible for the Posting of Information policy on campus.
Source Document: PPM 310-27 Posting of Information
Posting is allowed under the following regulations which are intended to prevent interference with the free flow of persons and traffic and with the regular activities of the University.
General Guidelines
The following information pertains to all posting on campus, both indoors and outdoors.
- Only one notice per event/activity per bulletin board is allowed.
- No 3-dimensional materials may be posted on any public University Bulletin Boards (materials must lay flat on the board).
- All posted materials must clearly indicate the name of the sponsoring department, organization, or person.
- No poster, handbill, or any other form of announcement or statement may be placed on, attached to, hung from, propped against, or written on any structure or natural feature of the campus such as walls, doors of buildings (either inside or outside), windows, restrooms, building or directional signboards, the surface of walkways or roads, fountains, posts, columns, waste receptacles, or trees. The cost of labor associated with enforcement, removal, or restoration may be billed by Grounds for most violations to the department, organization, or person(s) responsible for policy violation.
- Organizations or persons posting or exhibiting materials in a language other than English must file a translated copy of the materials with Center for Student Involvement.
- The painting of signs, posters, and banners in the Memorial and Silo Unions and Lower Freeborn Hallways is not permitted.
- Chalking is not permitted on campus.
INDOOR POSTING
Public University Bulletin Boards (Only one per bulletin board of the following materials may be posted.)
- Announcements of activities sponsored by campus organizations or departments: size limit is 11″ x 17″.
- Off-campus events and commercial materials: size limit 8 1/2″ x 11″.
- Personal ads of students, faculty and staff: size limit 8 1/2″ x 11″.
Departmental Bulletin Boards
- Posting on departmental bulletin boards requires the permission of the department.
- Posting in residence halls requires the permission of Student Housing. More information can be found at Reaching the Residents in Student Housing.
- No commercial materials may be posted.
OUTDOOR POSTING
- Only campus organizations such as departments, registered student organizations, sport clubs, constituent organizations (e.g., ASUCD, GSA), and campus interest groups are permitted to place temporary signs, banners and posters at outdoor campus locations. Content is limited to sponsored events and student government elections and must include the name of sponsor, date, time, and location of event.
- Signs, banners, or posters attached to stakes may only be placed on decomposed granite so long as they do not obstruct the free-flow of campus traffic, damage lawns or grounds, or create a safety hazard, or interfere with a scheduled event sponsored by another organization. Signs, banners, or posters may be staked on the Quad lawn only in association with a reserved Quad event.
- A-frame signs may be placed only on decomposed granite areas near side walks. They are prohibited on sidewalks and patios, in streets, in all bike circles, and on all lawn areas of the campus. A-frames that do not advertise a specific event with date, time, and location will be removed.
- Signs, banners, or posters cannot be propped against, hung from trees, or attached to buildings, balconies, waste receptacles, columns, or campus directional signboards.
- Only wooden posts or stakes of no more than 2” x 2” thickness may be used to support any signs, banners, or posters (no metal or plastic pipes).
- Posts or stakes are to be hammered into the ground. No digging is permitted.
- Size limits for signs, A-frames, banners, and posters are as follows:
- Wooden signs, lightweight plastic board (“coroplast” material), and A-frames are limited to dimensions of 2 1/2’ x 4’ (30” x 48”)
- A-frames must be constructed of sturdy materials to withstand strong winds and weather conditions.
- Signs and banners made of paper, cloth, and plastic sheeting do not have specific size limits as long as good judgment is used.
- Signs, banners, and posters attached to stakes may not be posted in the same location for more than one week at a time. However, ASUCD or GSA posting material used for elections may remain for the duration of the campaign period.
- Sponsors are responsible for removing all signs and materials within 24 hours of the conclusion of the event or they will be discarded. Grounds reserves the right to remove signage as part of their normal maintenance schedule.
Groups may contact Grounds at (530) 752-1655 to retrieve removed A-frames and stakes.
Commercial Advertising
DISTRIBUTION
University regulations prohibit the distribution of commercial advertisements on campus.
POSTING POLICY
Posting of one commercial advertisement per event/activity per University bulletin board is permitted (size limit 8 1/2” x 11”). Posting of commercial advertising on department, individual faculty member, classroom notice, and Student Housing bulletin boards is not permitted.
MAIL POLICY
Commercial mail cannot be delivered by hand to student organizations and residence hall mailboxes. It must be distributed through the United States Postal Service. Mail must be individually addressed. When mailing to registered student organizations use the following address: University of California, Davis, Center for Student Involvement, Name of Student Organization, Box #___, One Shields Avenue, Davis, CA 95616-8706.
ADVERTISEMENT IN CAMPUS NEWSPAPERS
Contact individual newspapers directly for advertising and insertion rates. The Associated Students newspaper, “The California Aggie,” is located on campus in 25 Lower Freeborn Hall, (530) 752-8660. | https://csi.ucdavis.edu/policies/posting-policy/ |
Guest Blog: You Can’t Get There from Here: Protecting Irreplaceable Wildlife Corridors
Scott Comings is the Associate State Director of The Nature Conservancy in Rhode Island. In his role at The Nature Conservancy, he has worked with and facilitated hundreds of scientific research projects in the state, participated in multiple statewide habitat assessments and overseen stewardship of all of the Conservancy’s Rhode Island lands since 2009. Previous to joining The Nature Conservancy staff in 1997, Scott worked as a field ornithologist for Brown University, University of California Davis, Smithsonian Migratory Bird Center, and Louisiana State University. He lives on Block Island.
One of the hallmarks of The Nature Conservancy is its pursuit of non-confrontational, pragmatic solutions to conservation challenges. For this reason, the Conservancy rarely takes a public position on a specific development project. However, Invenergy’s proposed Clear River Energy Center would do such harm to Rhode Island’s ecology, its biodiversity, and its resilience to climate change that we are compelled to oppose this new power plant.
In my testimony to the Energy Facility Siting Board on behalf of the Conservation Law Foundation, I describe the unique habitat of the Rhode Island Borderlands (extending to the northwest corner of the state), the role that habitat corridors play in allowing wildlife to adapt to climate change, and the consequences of cutting off critical pathways of connectivity.
Invenergy’s permit application and wetlands addendum correctly note the impact that the power plant would have on the immediate site. But they omit the important, regional context. The relationship between the proposed location and the surrounding natural areas is fundamental to assessing the impact of development. Some locations provide more essential ecological services than others.
A few maps tell the story. NASA’s satellite image of lights visible at night (shown above) powerfully illustrates how distinctly the Borderlands stand out in the coastal corridor from Washington, D.C., to Boston. The area of dark green running along the Rhode Island–Connecticut border contains tens of thousands of acres of relatively unfragmented forest. There are very few places like it on the East Coast.
The area includes Rhode Island’s northwest corner, which The Nature Conservancy called out as biologically significant and worthy of conservation 20 years ago. Forest fragmentation is the major threat to its integrity.
The second set of maps focuses on connectivity – the degree to which animals can move freely across the landscape. Such movement is critical in maintaining genetic diversity in a population of plants or animals. Additionally, wildlife may need to escape a natural disaster, avoid encroaching human development, or adapt to seasonal changes in the availability of food, water, and shelter. Animals require reliable pathways to find those resources. When habitat connectivity is cut off, species that are unable to migrate or adapt to local conditions will not survive.
Connectivity becomes even more important as our climate changes. As temperature and precipitation patterns shift, many species will need to adjust their range to find suitable habitat. Warming temperatures are already driving many plants and animals to higher altitudes and higher latitudes.
The Nature Conservancy has mapped habitat connectivity corridors for the eastern United States to help identify priority areas for land protection. On the maps below, the green areas show where animals are able to move about freely. The blue areas show areas of concentrated flow. These blue bottlenecks or pinch points are especially important for conservation because of their irreplaceable nature. They are often the only pathway for a species to find new, suitable habitat.
The Clear River Energy Center would be built right on top of an essential pinch point for wildlife habitat connectivity. Invenergy’s proposal for a new power plant – and the associated pavement, light and noise pollution, wetland destruction and deforestation – at this pivotal junction would irreversibly disrupt one of the region’s healthiest ecosystems.
The state statute that created the Energy Facility Siting Board provides that the Board may only grant a license when an applicant has shown that “the proposed facility will not cause unacceptable harm to the environment.”
Our research and that of others indicates that the proposed power plant would cut off the ability of plants and animals to adapt to a changing climate, and undermine important ecological functions across an area that stretches far beyond the physical footprint of the facility. In my opinion, that is an unacceptable harm to the environment. | https://www.clf.org/blog/invenergy-protecting-wildlife-corridors/ |
IEEE International Conference on Intelligence and Safety for Robotics
Humans and robots coexist with each other in various domains. Robots assist humans in both public and private spaces for transportation and service purposes, where the environments, including the humans in them, need to be continually recognized for autonomous and safe operation. Workers perform collaborative operations with industrial robots in factories, where autonomy is a major concern for saving manpower while preserving workers' safety. These examples show that intelligence and safety are indispensable for human–robot coexistence and that they need to be carefully integrated. The IEEE ISR2021 brings together an international community of experts to discuss a way forward in robotics for the future of safety and security with intelligence, through the presentation of new research results, perspectives on future developments, and achievements in social implementation.
ISR is a biennial conference, and it was originally planned to be held as ISR2020. Due to COVID-19, the conference was postponed to the following spring and held as ISR2021 in Nagoya, Japan, on 4-6 March 2021.
Upcoming Conferences
Past Conferences
*Links may no longer be active*
- 2021 Nagoya, Japan (Virtual Conference) (Website)
- 2018 Shenyang, China (Proceedings)
Easy Links
RAS is a volunteer-driven society with over 13,000 members worldwide.
Students are the future of robotics and automation.
I am delighted to be joining the Hearing the Voice team as a post-doctoral researcher based in Cambridge, where I will continue my work as a neuroscientist with Dr Jon Simons in the Memory Lab within the Department of Psychology. The focus of my research is on understanding the mechanisms of reality discrimination failure in hallucinations – how is it that information generated by the brain can be perceived to come from an external source? A key study from my PhD, carried out together with HtV researchers in Durham, found differences in a fold in the cortex of the brain known as the paracingulate sulcus in patients with schizophrenia with hallucinations compared to those without. This is an area of the brain that is involved in reality monitoring, which is the cognitive ability to know whether something is real or imagined.
My initial project under the HtV grant will be a study with Professor Paul Allen at the University of Roehampton to investigate this further. We plan to use real-time functional magnetic resonance imaging in a novel neurofeedback paradigm to observe whether healthy participants improve their reality monitoring ability after learning to increase brain activity in the region of the paracingulate sulcus. If successful we hope to extend this research to investigate its possible use in the treatment of hallucinations.
Garrison, J.R., Fernyhough, C., McCarthy-Jones, S., Haggard, M., The Australian Schizophrenia Research Bank & Simons, J.S. (2015). Paracingulate sulcus morphology is associated with hallucinations in the human brain. Nature Communications, 6, 8956. | https://hearingthevoice.org/2016/09/08/welcoming-dr-jane-garrison-to-hearing-the-voice/ |
Magnetic Resonance Imaging (MRI) is primarily a medical imaging technique commonly used in radiology to visualize the internal structure and function of a body. An MRI system includes a magnet, such as a ‘C’-shaped permanent magnet, a resistive electromagnet or a cylindrical superconducting electromagnet. Such a magnet generates a powerful magnetic field to align the nuclear magnetization of hydrogen atoms in water in the body. A radio frequency (RF) field is used to systematically alter the alignment of this magnetization, causing the hydrogen nuclei to produce rotating magnetic field signals detectable by a scanner or a detector. These signals can be manipulated by additional magnetic fields to build up enough information to construct an image of the body. Because clinical magnets generally have a high field strength, typically in the range of 0.2-3.0 tesla (T), and a large Swiss-roll shape, a conventional MRI system is neither portable nor cost effective.
Factors influencing the analytical balance
Analytical balances are the standard tools of quantitative analysis. They are used to accurately weigh samples as well as precipitates. These balances can provide accurate measurements to 4 decimal places, for example 0.0001 grams. Because of the very sensitive nature of these instruments, there are numerous factors that can cause them to give incorrect readings.
For an analytical balance to provide a precise reading, the instrument must be calibrated. Calibration is very important because it defines the accuracy and quality of the measurements recorded by the balance. To ensure the integrity of the measurement results, there has to be an ongoing process of maintaining the instrument and its calibration throughout its lifetime. In this way, reliable, precise, and repeatable measurements will always be achieved.
Below are some aspects that can impact the accuracy of analytical balances:
Temperature
The smallest change in room temperature can cause noticeable changes in the weight of the sample. Strict temperature controls are therefore needed to obtain exact readings on the analytical balance. Here is an example of how temperature affects the sample: if the room temperature is too high, the sample can lose some of its "water weight" due to evaporation. If the temperature is too low, the sample can gain weight through condensation of water in the sample's container. Both factors can affect the accuracy of the measurement on the analytical balance.
Vibrations
Vibrations from refrigerators, ventilation systems, and other equipment can affect the precision of an analytical balance. Since the sample size is very small, the slightest vibration can rearrange, displace, or spill the sample, thereby affecting the amount of material available for weighing as well as its distribution on the balance. Small vibrations can also disturb the delicate mechanism of the analytical balance. These disturbances may call for recalibration of the balance, which can mean lost time and money for the related research efforts.
Chemical Reactions
Samples can also be very sensitive to slight changes in temperature and air pressure. For example, if you expose a piece of white phosphorus to open air, it will burst into flames. Exposure of such unstable samples to those conditions can lead to chemical reactions that are not only hazardous but can also alter the state of the sample. That is why users must take precautions to ensure that the sample remains chemically inert throughout the weighing process.
Air Currents
Air currents can affect the delicate mechanism of the analytical balance just as temperature and vibrations can alter the measurement of a small sample. Changes in air pressure from ceiling fans, air conditioning systems, and open doors can also cause sensitive equipment to give incorrect measurements.
Calibration
Calibrating an analytical balance ensures that it gives an accurate reading. Even though some balances have an internal calibration function, many labs perform their own calibration tests on new instruments with certified calibration weights that help users determine the calibration settings for their specific lab environment. It is recommended that users test their balances every few months to make sure that the calibration settings are still accurate.
User Error
In many cases, faulty measurements are the result of user error. A laboratory worker may mistakenly leave a sample on the table, exposing it to reactions with atmospheric components; or a laboratory worker might calibrate the machine incorrectly, which can affect the precision of the balance. That is why most labs have strict procedures for maintaining atmospheric standards to ensure accurate readings and to reduce instances of user error.
Cluttered Work Space
The accuracy of analytical balances depends on how clean the work area is. A cluttered work space will affect the accuracy of results. Make sure that nothing comes in contact with the analytical balance. If anything touches or rubs against the balance, it will cause discrepancies in the readings.
Magnets
Some balances use magnets as part of the weighing system. As a result, placing the balance near magnetic equipment or weighing a magnetic sample can cause erroneous readings.
Slope
The scale or balance must be positioned on a level surface. Precision balances weigh items on the assumption that the load is applied parallel to the force of gravity and perpendicular to the weighing platform.
Incorrect Grounding
Make sure that the AC source is properly grounded to avoid the build-up of static electricity. Secondly, make sure that the chassis is grounded to avoid electrostatic discharge.
Plastic or Glass Weigh Containers
Unlike metal containers, plastic and glass weigh containers can hold an electrical charge. Static charges can cause non-repeatable measurements or drifting readings. Even an accurate balance can give inaccurate readings in such situations.
Not Handling the Sample Appropriately
Laboratory workers need to handle samples with care. For example, hot or warm samples need to be cooled first. Hygroscopic samples need to be weighed promptly with the balance doors closed to prevent absorption of moisture. Not following these steps will affect the measurements. Place the sample in the centre of the weighing pan for the most accurate results. | https://www.weighinginstru.com/factors-influencing-the-analytical-balance-245.html
Peer review and evaluation are fundamental principles of scientific publication. Authors are required, for all documents submitted, to participate in a peer review process and to follow publication guidelines.
Composition and role of the editorial committee
The editorial committee makes the decisions about all publication projects submitted to the École française d’Athènes. Made up of the director of the institution, the president of the Scientific Council, the two directors of studies (ancient and Byzantine, and modern and contemporary), the Representative of the National Hellenic Research Foundation (Εθνικό Ίδρυμα Ερευνών) and the publications manager, this committee meets regularly throughout the year.
It currently includes:
- Madame Véronique Chankowski, director of the École française d’Athènes
- Monsieur Didier Viviers, president of the Scientific Council of the École française d’Athènes
- Madame Ourania Poycandrioti, representative of the National Hellenic Research Foundation (Εθνικό Ίδρυμα Ερευνών)
- Madame Laurianne Martinez-Sève, director of Antique and Byzantine studies
- Monsieur Gilles de Rapper, director of modern and contemporary studies
- Monsieur Bertrand Grandsagne, publications manager
The editorial committee decides the editorial direction of the collections and journals published by the École française d’Athènes and examines all the contributions. It carries out an initial examination where publications that are not appropriate are rejected. The editorial board are sent texts that are likely to be accepted and either review them themselves or send them out to experts who will present a report.
Peer review
All texts submitted for publication (members' theses, excavations and projects included in the institution's programs, co-publications, etc.) are subject to peer review. The editorial committee forwards the text to assessors, experts on the subject matter, who are part of its editorial board (who can, if necessary, call on external people). The number of reviewers is, at a minimum, fixed at two, decided by a third if their opinions differ. Their mission is to judge the scientific quality of the article and the methodological validity of the analysis. They then issue a report that establishes whether the article is worth publishing and send their criticisms and correction proposals to the editorial committee, who forward them to the author. The modified text is again subject to review until its final acceptance or rejection.
The reports are anonymous.
Review of texts submitted for publication in the BCH
The Bulletin de correspondance hellénique renewed its editorial board in 2020. An essential body for any scientific journal, it has guaranteed over the years the good reputation of the BCH and the scientific and editorial quality of the texts published.
The BCH is evolving. Since 2017 and the transfer of archaeological reports to another electronic publication, the Bulletin archéologique des Écoles françaises à l’étranger, it has almost doubled its page capacity. The journal is a reflection of the School's scientific activities, but it also continues to open up to other research linked to field work and the study of sources, and welcomes written contributions in languages other than French. With an online publication which is added to the paper publication, the journal is able to broaden its readership and international distribution.
In this respect, the central role of the editorial board is reaffirmed and developed. Ensuring the best possible representation of the School's disciplinary fields, it brings together experts who are invited to quickly assess the articles submitted for publication.
It includes: | https://journals.openedition.org/bch/587 |
Postulating a supernatural being does not really help explain reality since then we only displace the question of the origins of reality to explaining the existence of the supernatural being.
We create our reality through our understanding of the Universe and our reality is what is possible based on everything we know.
Deduction without any principles is what John Wheeler called a "law without law". If we can explain laws of physics without invoking any a priori laws of physics, then we would be in a good position to explain everything. It is this view that is the common scientific take on "creation out of nothing", creation ex nihilo.
Information is far more fundamental than matter or energy because it can be successfully applied to both macroscopic interactions, such as economic and social phenomena, and information can also be used to explain the origin and behaviour of microscopic interactions such as energy and matter.
Information, in contrast to matter and energy, is the only concept that we currently have that can explain its own origin.
As we compress and find all-encompassing principles describing our reality, it is these principles that then indicate how much more information there is in our Universe to find.
We compress information into laws from which we construct our reality, and this reality then tells us how to further compress information.
Information reflects the degree of uncertainty in our knowledge of a system.
Part One
Information has to be inversely proportional to probability, i.e. events with smaller probability carry more information.
The formula for information must be a function such that the information of the product of two probabilities is the sum of the information contained in the individual events. The information content of an event is proportional to the log of its inverse probability of occurrence.
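A quick check of these two requirements in Python (my own illustration, not from the book): taking information as the log of inverse probability makes rarer events more informative and makes information add up over independent events.

```python
import math

def information(p):
    """Shannon information content (surprisal) of an event of probability p, in bits."""
    return math.log2(1 / p)

print(information(0.5))    # 1.0 bit
print(information(1 / 8))  # 3.0 bits - the rarer event carries more information
p, q = 0.5, 1 / 8
print(math.isclose(information(p * q), information(p) + information(q)))  # True
```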
We only need the presence of two conditions to be able to talk about information. One is the existence of events (something needs to be happening), and two is being able to calculate the probabilities of events happening.
The general principle that Shannon deduced is that the less likely messages need to be encoded into longer strings and more likely messages into shorter strings of bits.
The basic unit of information is the bit, a digit whose value is either zero or one.
Why did Nature choose digital rather than any analogue (non-digital) encoding? There are two reasons in favour of digital encoding: one is the reduced energy overhead to process information, and the other is the increased stability of information processing.
Meaningful information necessarily emerges only as an interplay between random events and deterministic selection.
Any self-replicating entity needs to have the following components: a universal constructing machine, M, a controller, C, a copier, X, and the set of instructions required to construct these three, I.
The Second Law of thermodynamics tells us that in physical terms, a system reaches its death when it reaches its maximum disorder (i.e. it contains as much information as it can handle).
Entropy is a quantity that measures the disorder of a system and can be applied to any situation in which there are multiple possibilities.
The entropy of a closed system always increases.
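In Shannon's form this entropy is just the average information over all possibilities; a small illustration (mine, not the book's) showing that it peaks when every outcome is equally likely, i.e. at maximum disorder.

```python
import math

def entropy(probs):
    """Shannon entropy of a discrete distribution, in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0 bits - maximal for four outcomes
print(entropy([0.97, 0.01, 0.01, 0.01]))  # ~0.24 bits - a highly ordered system
```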
The First Law says that energy cannot be created out of nothing. It can only be transformed from one form to another.
The Second Law tells us that when we convert one form of energy into another we cannot do this with perfect efficiency (i.e. the entropy, the degree of disorder in the process, has to increase).
Life maintains itself on low entropy through increasing the entropy of its environment.
Computer processes information as it runs and any information processing must lead to wasting of heat.
When we "delete" information all we actually do is displace this unwanted information to the environment, i.e. we create disorder in the environment.
Information, rather than being an abstract notion, is entirely a physical quantity. In this sense it is at least on an equal footing with work and energy.
Information gain is very large when something unlikely happens.
There is a general law in finance that in an efficient market there is no financial gain without risk. Anything worth doing must, according to this law, have a (significant) probability of failure associated with it. If something is a sure thing, you can bet that the reward is going to be negligible.
In order to produce some useful work, you must be prepared to waste some heat - this is the Second Law of thermodynamics.
The Third Law of thermodynamics prohibits us from reaching absolute zero.
The more profitable life becomes the less profitable its environment.
As the environment increases in entropy, this makes it more and more difficult for life to propagate.
The increase of complexity of life with time is now seen to be a direct consequence of evolution: random mutations and natural selection.
Mutual information is the formal word used to describe the situation when two (or more) events share information about one another.
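Formally, mutual information is the part of the entropy that two variables share; a small worked example (my own) computed from a joint probability table.

```python
import math

def mutual_information(joint):
    """I(X;Y) in bits, from a joint probability table joint[x][y]."""
    px = [sum(row) for row in joint]
    py = [sum(col) for col in zip(*joint)]
    total = 0.0
    for i, row in enumerate(joint):
        for j, pxy in enumerate(row):
            if pxy > 0:
                total += pxy * math.log2(pxy / (px[i] * py[j]))
    return total

print(mutual_information([[0.25, 0.25], [0.25, 0.25]]))  # 0.0 - independent coins share nothing
print(mutual_information([[0.5, 0.0], [0.0, 0.5]]))      # 1.0 - perfectly correlated bits
```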
Globalization is the increasing interconnectedness of disparate societies.
Phase transitions occur in a system when the information shared between the individual constituents becomes large.
A high degree of mutual information often leads to a fundamentally different behaviour, although the individual constituents are still the same. As a group they exhibit entirely different behaviour.
In the initial state, which was completely disordered, there is very little mutual information.
Mutual information is maximal in a maximally segregated society.
Wealth doesn’t just add to wealth, it multiplies. Those that have more will get proportionally more, and so the gap between the haves and have-nots increases, conforming to a power law.
In a more interconnected society we are more susceptible to sudden changes. Mutual information simply increases very rapidly and if we want to make good decisions we need to ensure that our own information processing keeps pace.
Part Two
With quantum theory the notion of a deterministic Universe fails, events always occur with probabilities regardless of how much information you have.
At the heart of quantum physics is the concept of indeterminism. Indeterminism is linked to the fact that an object can indeed be in more than one state at any one time. This is also known as quantum superposition.
Measurements affect and change the state of the system being measured and through measurements we force the system to adopt one of its many possible states that existed prior to measurement. If we need to know the exact value of some property of an object (e.g. spatial location, momentum, energy), then we have to destroy the quantumness to obtain it – otherwise we can leave the quantumness intact.
The entropy of the whole system must (classically speaking) be at least as large as the entropy of any of its parts.
The problem with Shannon’s information is that it always tells us that there is at least as much information in a whole as there is in any of its parts. This is not true for quantum systems.
A qubit is a quantum system that, unlike a bit, can exist in any combination of the two states, zero and one.
Quantum physics applies to all matter in the Universe. It’s just that its predictions are much less distinct from conventional physics at this level.
Two of the most important features of quantum theory are:
Qubits can exist in a variety of different states at the same time
When we measure a qubit we reduce it to a classical result, i.e. we get a definitive outcome.
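Both features can be mimicked in a few lines of NumPy (an illustration of the idea, not a real quantum system): the state holds two amplitudes at once, yet every measurement returns a single definite outcome with the corresponding probability.

```python
import numpy as np

rng = np.random.default_rng()

# A qubit in an equal superposition of 0 and 1: two amplitudes, not a hidden choice.
state = np.array([1.0, 1.0]) / np.sqrt(2)

def measure(state):
    """Return a definite 0 or 1 with Born-rule probabilities |amplitude|^2."""
    probabilities = np.abs(state) ** 2
    return rng.choice([0, 1], p=probabilities / probabilities.sum())

outcomes = [measure(state) for _ in range(10_000)]
print(sum(outcomes) / len(outcomes))  # close to 0.5, but each run is a definite 0 or 1
```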
Quantum cryptography is one of the areas where quantum physics has demonstrated a new order of information processing. This is not just a theoretical construct; it has been successfully implemented over vast distances.
A computer, at its most basic level, is any object that can take instructions, and perform computations based on those instructions.
Quantum physics helps with problems because unlike a conventional computer which checks each possibility one at a time, quantum physics allows us to check multiple possibilities simultaneously. The main limitation of quantum computation geared towards solving classical problems is that we ultimately have to make a measurement in order to extract the answer, given that the question we are asking requires a definite answer. It is an intrinsically probabilistic process and there is always a finite probability that our answer may be wrong. A far more serious inefficiency is the effect of environmental noise which is, in practice, very difficult to control.
Thinking of computation as a process that maximizes mutual information between the output and the input – i.e. the question being asked, we can think of the speed of computation as the rate of establishing mutual information, i.e. the rate of build up of correlations between the output and the input. The fact that qubits offer a higher degree of mutual information than is possible with bits, directly translates into the quantum speed-up.
We need large systems to be in many different states at the same time in order that they demonstrate quantum behaviour. But, the larger the system, the more ways there are for the particular information about the state to leak out into the environment. The more atoms there are in a superposition, the harder it is to stop one of them decohering to the environment. The solution is redundancy.
The lower the overall entropy of an arbitrary physical system the higher the chances that its constituent atoms may be entangled.
There is continuing evidence that more and more natural processes must be based on quantum principles in order to function as they do.
Living beings are like thermodynamical engines. They must battle the natural tendency to increase disorder. Life does that by absorbing highly disordered energy coming from the Sun and converting it to a more ordered and useful form.
Let us define free will as the capacity for persons to control their actions in a manner not imposed by previous events, i.e. as containing some element of randomness as well as some element of determinism. Free will lies somewhere between randomness and determinism which seem to be at the opposite extremes in reality. Neither pure randomness or pure determinism would leave any room for free will.
Every quantum event is fundamentally random, yet we find that large objects behave deterministically. Sometimes when we combine many random things, a more predictable outcome can emerge.
One of the most fundamental and defining features of quantum theory is that even when we have all information about a system, the outcome is still probabilistic.
The quantity that tells us by how much orderly things can be compressed to shorter programs is known as Kolmogorov’s complexity.
A theory is only genuine if there is a way of falsifying it.
Kolmogorov view of randomness: when the rule is as complicated as the outcome it needs to produce, then the outcome must be seen as complex or, in other words, random.
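Kolmogorov complexity itself is uncomputable, but compressed length gives a crude, computable stand-in for the idea (my illustration): a string generated by a short rule compresses enormously, while a random string barely compresses at all.

```python
import os
import zlib

orderly = b"01" * 5000            # produced by a very short rule
random_bytes = os.urandom(10000)  # no rule shorter than the data itself

print(len(zlib.compress(orderly)))       # a few dozen bytes
print(len(zlib.compress(random_bytes)))  # roughly 10,000 bytes - essentially incompressible
```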
Part Three
The Second Law already tells us that the physical entropy in the Universe is always increasing. As such, the information content of the Universe can only ever increase.
We currently view the information content of reality in terms of quantum physics and gravity – which are our shortest programs used to describe reality.
Gravity is quite distinct from quantum theory. Gravity dominates large bodies (e.g. planets) and becomes less influential for microscopic bodies.
The modern view of gravity, through Einstein’s general relativity, is to see it as a curvature of space and time.
The higher the entropy of a system the more information it carries.
Entropy is actually proportional to the total number of atoms on the surface, not the volume of an object.
Quantum mutual information is a form of super-correlation between different objects, and this super-correlation is fundamental to the difference between quantum and classical information processing.
Quantum mutual information is not at all a property of the molecule, it can only be referenced as a joint property between the molecule and the rest of the Universe. It is proportional to the surface area of the molecule.
The information content of anything does not reside in the object itself, but is a relational property of the object in connection with the rest of the Universe.
The very act of partitioning, dividing, and pigeonholing necessarily increases information, as you cut through any parts that may be correlated.
Optical holography shows that two dimensions are sufficient to store all the information about three dimensions. When you look at a hologram, you see the standard two-dimensional image, but you are also seeing light reflected back to you at slightly different times, and this is what gives you the perception of a three-dimensional image.
Einstein’s equation in general relativity describes the effect of energy-mass on the geometrical structure of four-dimensional space-time. His equation says that matter tells space-time how to curve, while space-time instructs matter how to move.
In thermodynamics the entropy of a system multiplied by its temperature is the same as the energy of that system.
All quantum information is ultimately context dependent.
Reality is created through your observations and is therefore not independent of us.
Science is constructed in such a way that it tells us more about what the Universe is "not like" than about what it is like.
The laws of physics are the compression of reality which, when run on a universal quantum computer, produce reality.
The anthropic principle states that the laws of the Universe are the way they are, because if they were different, we would not be here to talk about them.
It is tempting to say that things and events have no meaning in themselves, but that only the shared (mutual) information between them is real. All properties of physical objects are only encoded in the relationships between them and hence in the information they share with other physical objects. This philosophy goes under the general name of "relationalism".
What emptiness means in Buddhism is that "things" do not exist in themselves, but are only possible in relation to other "things".
The whole of our reality emerges by first using the conjectures and refutations to compress observations and then from this compression we deduce what is and isn’t possible.
Epilogue
There is no prior information required in order for information to exist. Information can be created from emptiness.
Outside of our reality there is no additional description of the Universe that we can understand, there is just emptiness. There is no scope for the ultimate law or supernatural being – given that both of these would exist outside of our reality and in the darkness.
The laws of Nature are information about information and outside of it there is just darkness.
These notes were taken from Vlatko's book. | http://cedricjoyce.com/research/article?page=vlatkovedral |
Hello everyone, I am trying to get some help regarding some Parking Charge Notices issued by UKPC on private land.
I live in a block of flats and own my apartment and my designated parking space in our underground car park. Several years ago UKPC started to patrol our car park to stop residents and their friends from parking in someone else's parking spots. My particular parking space lies in part just underneath a ventilation grille which connects the underground car park with the outside basement of the flats. We happened to have some water infiltration into the underground car park from a poorly insulated basement, so the building company decided to undertake extensive repair works in order to re-insulate the basement of the building. At times they had to drill through concrete, and all of a sudden a lot of debris and small stones were falling from the grille directly onto the bonnet of my car, so in order to avoid damage I parked in my space, only slightly further back so that the bonnet would be protected. As a result the back end of my car ended up just outside the designated lines.
Now, given the peculiarities of my space, I could have parked more than one full car length outside it without interfering in any way with my neighbours' manoeuvring. On four occasions I had to do so, and on those occasions the warden promptly issued me with a Parking Charge Notice. I talked to our building managing agency, who understood the issue and assured me they would write to UKPC to explain the facts and ask for the tickets to be removed. UKPC refused. This happened in 2014. Many people suggested I simply disregard those PCNs as UKPC would have no legal authority on private land. After several years of harassment from them through DRP (Debt Recovery Plus), during which they even called my private number shouting and demanding payment, they kept quiet for a year, but recently, through SCS Law in London, they sent me a Letter before Claim pursuant to the Pre-Action Protocol. I wasn't able to reply promptly, and after 30 days I finally received by post a Claim Form dated 15-06-2018.
The UKPC plaque in the car park states you should park your car in your designated parking space and display the relevant permit. I could have owned a much longer car, like an SUV or a van, and, as happens with some of my neighbours, the car would stick out of the standard designated lines. Could a defence along those lines be a viable one? Also, I have been made aware that under special circumstances the ticket can be removed. Could mine be considered one of those cases, as I have absolutely no other interest in parking outside my designated lines? I have a very limited amount of time to reply to the Claim Form and would like to know if I can have some help in finding the best way to succeed. Thank you all in advance.
Acknowledge the claim now using the details and password on the form. Do not put anything into the defence. This gives you 33 days from the date of issue to get a defence to the court. Get out your lease and see what it says about parking etc. You probably have a right to park without having to agree to a contract from a third party who is a stranger to your lease, so there cannot be a contract, therefore there can be no breach of any alleged contract, and therefore there can be no charge for that non-breach.
Can you get a copy of the letter that the managing agents sent to UKPC? This is the principal asking their agent to act, and the agent should not ignore it. See if the agent will send a command to UKPC to cancel now.
As for the letter my managing company sent to UKPC requesting that my charges be cancelled, so far I have been able to trace the email history, but they have not shown me the actual cancellation request. They have only forwarded me the UKPC reply, in which UKPC decided not to cancel the charges but to reduce the amount. (By the way, on another occasion my car was parked in my space but I had forgotten to display the permit, and UKPC agreed to cancel the charge just because my managing company asked for it.)
Also, I happen to have a copy of the lease here at home and have had a look at the relevant sections regarding the use of the parking space. Would it be enough? There is no mention of any agreement with third-party strangers. At the time the lease was created, though, UKPC was not yet operating on our premises.
Your lease cannot be varied by a third party, only by agreement between you and the lessor.
Whilst I cannot help you Timmyt, I am very ashamed to say it, but I know someone who works as a warden for UKPC Ltd.
UKPC is a very dodgy private parking company. Firstly, their telephone line 03332201030 is fake; it is only a recorded information line and exists only for show. UKPC likes to hide behind anonymous PO boxes, and when they interview prospective wardens it is never ever done in their head office, only in hotel coffee areas. They claim to prospective warden staff that they will pay 8% of every ticket, but they do not; actually it is only 0.5% or 1%. Most of the wardens are very scared when issuing tickets because they work on private land like retail parks or leisure parks and face the constant threat of being assaulted on site or public abuse. People who do this job are uneducated and cannot really find jobs elsewhere. They are only doing the job because of the desperation of the job market and the need to pay some bills.
This is a template letter for guidance. You need to add your details and where appropriate change the letter to suit your particular circumstances.
Once you’ve made changes, always print it out and read through to check it makes sense to the recipient.
I was not driving the vehicle at the time and am therefore not liable for any costs.
I was not driving my vehicle at the time of the alleged contravention because my vehicle had been stolen. Please see the enclosed correspondence from the police as evidence.
Quite simply, the parking attendant got it wrong and I was not parked inappropriately at the time the ticket was issued. This is due to the fact that [insert reason here]. Please see attached evidence, [explain your evidence here & include and refer to documentary evidence if you have any], as proof of my appeal.
I was unable to determine what the relevant parking restrictions were because there was no clear signage to explain what they were. Please see attached evidence, [explain your evidence here & include and refer to documentary evidence if you have any], as proof of my appeal.
There are mitigating circumstances to explain why I parked where I did so I am requesting that the fine be waived for this reason. Please see attached evidence, [explain your evidence here & include and refer to documentary evidence if you have any], as proof of my appeal.
I am a law abiding citizen and I made an honest mistake but I cannot afford the fine. Please see attached proof of my financial position as evidence.
[Enter any further reasons of your own to support your appeal and include as much detail as you can].
Hi Ostell, I've been in touch with my managing company and asked for the full email history between them and UKPC. They couldn't provide me with the original request to get the ticket cancelled. They could only retrieve the answer from UKPC, in which UKPC offered a discount on the fines instead.
I'm sure there must be a UKPC contract between them and the leaseholders, but is that really binding on me even if I have never signed anything?
Ignore post #7, probably written by UKPC !!
You may find that the contract is with the managing agents. If so they are not the landholders and therefore I believe the contract cannot be enforced. Ask the managing agents directly who is the signatory to the parking contract.
You have not got much time left to present your defence to the court. Have you done any of it?
Managed to join pepipoo....eventually.... but trying to post a new topic is beyond my abilities.
You mean here, the private parking section, with the New Topic button towards the left ?
Yes, I have acknowledged the claim form and the court has now sent me an acknowledgement of receipt of my defence. I acknowledged the claim form and appealed through the "appealaparkingticket" company, who have drawn up a typical defence.
What on earth did you put in your Defence? You have until 18th July to submit a defence! So if the court has acknowledged your defence, did you send it with your acknowledgement, which it was suggested that you DIDN'T do?
Now that a defence has been submitted, there is no point in appealmyparkingticket (?) sending another defence. It will not be accepted without the payment of the £255 fee.
I am the defendant in this matter. As an unrepresented litigant-in-person I seek the Court's permission to amend and supplement this defence as may be required upon disclosure of the claimant's case.
b) There was an agreement to pay a parking charge.
c) That in addition to the Parking charge there was an agreement to pay additional, unspecified sums.
d) The claimant company fully complied with their obligations within the terms of Schedule 4 of the Protection of Freedoms Act 2012.
e) The claimant company fully complied with their obligations within the International Parking Community / British Parking Association Code of Practice of which they were member at the time.
f) That I am liable for the purported debt. It is further denied that I owe any debt to the claimant or that any debt is in fact owed or that any debt exists or could ever exist or has ever existed. That in any event the claimant has failed to comply with the requirements of the Civil Procedure Rules and that their claim is both unfounded and vexatious.
g) The claimant is put to the strictest proof of their assertions.
a) Lack of Standing by Claimant: The Claimant is not the landowner of the car park, and has no proprietary interest in it. This means that the Claimant, as a matter of law, has no locus standi to litigate in their own name. Any consideration is provided by the landholder, and only they can sue for damages or trespass.
land on which they may operate, so that the boundaries of the land can be clearly defined; f) any conditions or restrictions on parking control and enforcement operations; g) any restrictions on hours of operation; h) any conditions or restrictions on the types of vehicles that may, or may not, be subject to parking control and enforcement; i) who has the responsibility for putting up and maintaining signs; and j) the definition of the services provided by each party to the agreement.
c) I can confirm that at the date and time of these alleged offences, I was parked in my own parking space and I was the owner of my flat and the associated parking space which is outlined in my lease agreement. I have not given the Claimant the authority to issue parking charges on the private land that I own nor does my lease agreement allow for a third party to issue parking charges on my own private land.
d) I have the reasonable belief that the Claimant has attempted to claim an expense that was not incurred. Specifically, the £160 charged for each parking charge has been inflated by at least £60, as this figure is not in line with recognised industry standards for a parking charge.
f) No contract with the Claimant. Any contract must have offer, acceptance and consideration both ways. There is no consideration from the Claimant to the motorist; the gift of parking is the landowner's, not the Claimant's. There is no consideration from the motorist to the Claimant. Any fees for parking are due to the landowner and not the Claimant. With regard to this point it is trite law.
I deny that I am liable to the Claimant for the sums claimed, or any amount at all. I invite the Court to strike out the claim as being without merit, and with no realistic prospect of success.
All evidence referred to will be provided at least 14 days before any hearing date.
| https://legalbeagles.info/forums/forum/legal-forums/motoring-parking/ppc-s-parking-charge-notices/parking-live-court-claims/1409715-scs-law-s-claim-form-received-ukpc-private-parking
Biological And Ethical Ideas In Never Let Me Go And The Handmaid’s Tale
The restriction of self-expression, colour and language in 'The Handmaid's Tale' could be linked to Kathy's interest in art and self-expression in her youthful years, which contrasts with her later loss of identity in 'Never Let Me Go'. Ishiguro's 'Never Let Me Go' is narrated by Kathy H., a former student at Hailsham, who is now a "carer" who helps "donors" recuperate after they give away their organs. In the novel, Kathy has been a carer for almost twelve years at the time of narration, and she often reminisces about her time spent at Hailsham, attempting to come to terms with her tragic fate. 'Never Let Me Go' is written in three parts, with a before, during and after structure, depicting life before the disclosure of the students' fate at Hailsham school, during the acknowledgement of it, and following the loss of Kathy's friends as a result of the demand for their vital organs. This basic structure parallels the structure of 'The Handmaid's Tale', where Atwood follows a non-chronological time sequence, narrating Offred's austere everyday experiences and tasks. Like the fundamental structure tracing Offred's restriction, entrapment and loss of identity in the strict totalitarian regime, Ishiguro's three-part structure highlights the monochrome, limited lives and lack of identity of Kathy and the children at Hailsham. Arguably, Ishiguro's basic structure lends comprehensibility to the shape of the narrator's life: Kathy's life at Hailsham, followed by life at the Cottages and then Kingsfield hospital, where Kathy's narration dissipates, implies the fleeting, minuscule and seasonal nature of the donors' lives.
It is possible to draw a similarity between the construction of the handmaids' names in the Gileadean regime and Kathy's emphasis that a "possible" is the terminology for a potential "clone parent" of one of the clones – the potential model from which the cloned DNA was originally taken. In 'The Handmaid's Tale', the possessive preposition 'of' combined with the name of the Commander lends a chilling, destructive aspect to the regime, used in order to strip the handmaids of their past and former identity. Similarly, the tentative language used to describe a clone's model, known only as a "possible", could convey the clones' lack of agency over their futures, emphasising the ignorance surrounding clones in contemporary society and suggesting their suppressed identities and absence of individuality. Alternatively, it could also embody society's lack of familiarity with and expertise about the clones themselves, and the difficulty of concluding that a person is a clone's model, however similar the resemblance. Therefore it highlights the lack of identity, freedom and individuality of the clones, as their lives are too brief to be fulfilled.
In the late 1990s, scientists in the western world began work on cloning, with the first mammal cloned from an adult cell being a sheep named Dolly, alongside the development of "stem cell research". These developments provoked a great deal of discussion among the general public, in government, and at universities regarding humankind's moral obligation to cellular life. 'Never Let Me Go' presumes a more complex and widespread system of organ-farming, with the clones being human, but with lives existing solely to create and provide organs for "real" humans. Ishiguro allows these biological and ethical ideas to simmer in the background, while a human plot of love, loss, and maturation occupies the foreground. | https://literatureessaysamples.com/biological-and-ethical-ideas-in-never-let-me-go-and-the-handmaids-tale/
The authors of this volume have answered critical questions about the experiences of diverse women and girls in the U.S. justice system and highlighted the complex interactions of gender, race, and class and their impact on legal decisions and interventions in family court, drug court, law enforcement, community corrections, and detention facilities. The chapters of this book have drawn attention to a number of themes that are specific to justice-involved women and girls and relate to their distinct social and psychological experiences and concerns.
The Relationship between Systemic Processes and Women’s and Girls’ Entanglement with the Justice System
First, women’s and girls’ presence in various justice arenas is a multisystemic issue: Micro- and macro-level social processes are linked to individual behaviors that bring women and girls into contact with justice officials. These processes occur both outside and inside a justice system that is generally oblivious to or not equipped to take into consideration human differences and contextual factors that influence individual actions. In particular, the lack of awareness about gender, race, and class produces adverse consequences for women and girls: Social biases and disadvantages go unchallenged and complicate women’s and girls’ ability to resume independent living outside the purview of the legal system; they also contribute to women’s and girls’ further victimization while in the hands of the law.
The chapters have identified several contextual factors that influence girls’ and women’s contact with the justice system. Gender violence is a pervasive theme and a primary risk for women’s and girls’ involvement in various legal arenas. Specifically, family and intimate partner violence, parent-child conflict, as well as neglect and abuse, sexual and physical, increase the likelihood that women and girls will engage in “survival crimes.” For example, running away from abusive homes increases girls’ vulnerability to homelessness and the likelihood that they will engage in criminal activities—theft and prostitution—to provide for their basic needs. Similarly, women who have experienced intimate partner violence may reach out to the justice system for assistance and protection. Unfortunately, women often come face to face with legal officials’ social biases and victim-blaming practices that further traumatize them and put them and their children at risk for continued exposure to family violence. Blaming is also apparent in the social discourses that describe justice-involved women as individuals who prefer to depend on public resources and as “bad mothers” who make poor choices. Blaming is a mechanism that deflects attention from the socio-structural inequities that contribute to women’s entanglement with the justice system.
Economic deprivation is an additional contextual factor that accounts for women’s participation in crime and their subsequent contact with the justice system. In particular, poverty has a significant influence on women’s and girls’ experiences in the justice system: It limits their access to strong legal counsel, their ability to seek health- and mental health—care, and their ability to avoid incarceration.
Recent legal policies and guidelines have also played a critical role in women’s justice involvement: They have redefined women’s and girls’ attempts to cope with violence and social disadvantage as delinquent or criminal behaviors worthy of legal punishment. School-based fights, parent-child conflict, breaking curfew, sex trafficking, and addictions have become the target of law enforcement and offender rehabilitation, and have increased women’s and girls’ vulnerability to arrest, prosecution, and sentencing regardless of the circumstances that have led to their participation in illegal activities.
Since the 1980s, federal initiatives to address the problem of drug abuse have resulted in record-high incarceration rates, with Hispanic and African American women being the most impacted by the far-reaching and get-tough approach of the United States’ war on drugs. Upon returning home, these women face additional legal challenges—state laws that bar individuals with a felony conviction from housing and financial assistance, and thus perpetuate their economic disadvantage and diminish their chance of success in the community. In sum, laws, justice policies, and guidelines have largely contributed to the social disenfranchisement and legal entanglement of diverse women and girls, and contributed to the reproduction of institutionalized discrimination.
Institutionalized discrimination operates through gender, class, and racial biases that influence legal decisions and interventions. In the criminal justice system, these biases shape the perception that non-gender-conforming behaviors are suspicious and unlawful, and lead to the profiling and arrest of law-abiding citizens. They also explain why girls and women who do not conform to gender role expectations receive longer jail and prison sentences; in this case, incarceration represents an attempt to punish and control behaviors that deviate from gender norms. The intersections of racial, ethnic, and class biases with gender stereotypes determines justice officials’ assessment of female offenders and the level of monitoring and intervention aimed at diverse women and girls under correctional supervision. For example, Black girls are frequently perceived as more aggressive and crime prone than White girls, particularly when they act in ways that are not gender conforming, and are thus subject to harsher punishment.
Social prejudices and myths pervade the unsubstantiated theories and assumptions that often guide the behaviors and decisions of justice officials and mental health professionals. These include age-old notions that women are hysterical, emotionally unstable, manipulative, and somehow deserving of abuse. Many of these myths perpetuate victim blaming and give rise to harmful decisions such as reversing custody and sending children to live with an abusive parent, as well as arresting minor-aged girls for prostitution and women domestic violence victims. These stereotypes, biases, and myths justify legal practices that further expose women and girls to violence and victimization while they are under legal supervision. They also restrict justice officials’ ability to consider contextual variables and integrate information about gender, race, class, and culture into justice decisions and interventions.
Institutionalized Discrimination Exacerbates the Concerns of Justice-Involved Women and Girls.
In addition to gender-based myths and stereotypes, the authors of this book have described how the justice system often involves a “one size fits all” approach to law enforcement and criminal offending and have shown that gender “blindness,” or the expectation that treatment for men can be generalized to women, remains a challenge. In particular, women involved in the criminal justice system are offered services originally designed for men with violent offenses. These services do not match the nature and severity of women’s and girls’ criminal behaviors (e.g., nonviolent, property, and drug-related crimes); they marginalize women’s and girls’ unique social, medical, and psychological concerns. For example, women’s primary caregiving role and its implications for rehabilitation in the community as well as in prison have yet to receive full consideration. Correctional institutions provide limited opportunities for ongoing and health-promoting parent-child contact, and imprisonment creates significant distress and difficulties for mothers behind bars, including psychological pain associated with prolonged separation from children and the termination of their parental rights based on the Adoption and Safe Families Act enacted by Congress in 1997. In turn, this distress exacerbates their mental health problems and diminishes their ability to comply with the rules and policies of correctional settings, hence resulting in technical violations that prolong their involvement with the justice system.
When legal practices do not take into account the interconnected systemic factors that constitute women’s pathways into the justice system, they often reproduce the abuse dynamics that many girls and women experience in their families and communities. Instead of finding protection and restitution in the legal system, girls and women are often punished and held accountable for the consequences of actions that are not fully under their control. For example, women in contested custody disputes with violent ex-partners find the same abuse dynamics perpetuated in family court where the behaviors and decisions of justice officials and mental health professionals are influenced by dual relationships, economic gain, and limited expertise in intimate partner violence. Likewise, incarcerated girls in juvenile detention facilities are often subject to the same violence they experienced at home: Being yelled at, demeaned, threatened, and physically restrained by staff further exacerbates the trauma-related impairment of girls and women exposed to interpersonal violence outside the justice system.
Lastly, the authors of this book note that justice involvement is associated with social stigma and that this stigma is particularly significant for girls and women whose contact with justice officials is perceived as evidence of “bad” and “un-ladylike” behavior or as a violation of gender norms. Shame intensifies psychological distress and increases social marginalization in ways that reduce girls’ and women’s chance of success in the community. Together with legal punishment and marginalization, it is a social mechanism that serves to enforce gender conformity.
There Is a Dire Need for Psychological Research, Interdisciplinary Partnerships, and Evidence-Based, Gender- and Culturally Responsive Legal Interventions.
The authors of this volume have drawn attention to the lack of research on gender- and culturally responsive programming for diverse women and girls in various arenas of the justice system. A number of obstacles to conducting such research must be acknowledged. A primary obstacle concerns the fact that gender- and culturally responsive interventions are not necessarily standard practice. Moreover, such research requires consideration of context, including the micro- and macro-level dynamics described in the chapters (Ancis 2004). Such variables are not always measured due to methodological challenges or lack of consideration on the part of the researchers. Careful attention to the process and outcome of interventions as relates to multiple aspects of psychosocial functioning is also indicated, requiring careful and time-intensive longitudinal observations.
The paucity of empirical knowledge as it applies to legal policies, procedures, and treatment interventions perpetuates mythology, stereotyping, and discriminatory practices that harm girls and women. This book’s authors have outlined a number of areas requiring further investigation. For example, there is a critical need for data on the concerns and needs of lesbian, bisexual, queer, gender-nonconforming, and transgender girls, who, for the most part, have remained invisible in the juvenile justice system. Transnational research on both sides of the U.S./Mexico border is also necessary to document mental health trends and resources for immigrant Mexican women and their children. Such research will require the development of interdisciplinary partnerships to gain a deeper understanding of women’s and girls’ involvement in and interactions with the legal system, using an ecological, social justice approach to the study of legal practices and their outcomes for female populations.
In addition to the need for research, it is important that existing empirical evidence be accessible and applicable to various justice settings. The authors’ analyses of girls’ and women’s experiences in various justice settings highlight the need for attention to guidelines that promote sensitivity and responsiveness to gender and cultural diversity. Guidelines, such as those developed by various APA task forces, include recommendations for further research as well as principles for best practice with diverse women and girls. For example, there is now a substantial body of scientific knowledge detailing domestic abuse and its impact, including the ways in which abuse dynamics manifest in family court. Yet, mental health practitioners, attorneys, and judges continue to rely on discredited theories such as parental alienation. The tendency to blame women and girls or hold them accountable for conditions for which they have no or limited control makes attention to their experiences of victimization and, relatedly, gender-responsive practice less likely. Increased awareness and application of guidelines is warranted.
Ensuring that clinicians remain informed of the current literature and translate it into evidence-based recommendations for justice officials is essential to promoting gender and cultural responsivity, equity, and fairness for diverse justice-involved women and girls. It is also imperative that mental health professionals work with judges, attorneys, correctional personnel, and other legal officials to implement an intersectional approach to the treatment of justice-involved girls and women. At a minimum, they should be prepared to educate justice staff about existing guidelines for the treatment of diverse populations, including the American Psychological Association’s (APA) guidelines for psychological practice with girls and women (2007); the APA guidelines for multicultural education, training, research, practice, and organizational change (2003); the APA guidelines for psychological practice with lesbian, gay, and bisexual clients (2012); the APA guidelines for psychological practice with older adults (2014); and the APA guidelines for psychological practice with transgender and gender-nonconforming people (2015). These guidelines may serve to create gender- and culturally affirming environments in various justice systems, with a focus on empowering diverse girls and women and enabling them to participate in treatment decisions, rather than defining them as the object of mental health and legal interventions.
Translating evidence-based psychological guidelines into recommendations for legal practice is a critical next step: This will involve promoting a relational approach to justice interventions that interrupts the dynamics of abuse in girls’ and women’s family and social history; advancing a comprehensive approach to treatment that combines psychological and support services in order to address girls’ and women’s various social and mental health concerns; and using culturally sensitive assessment strategies to identify and target the contextual factors that influence girls’ and women’s mental health and legal outcomes. Strategies that harness girls’ and women’s strengths rather than stress punishment and self-reformation are needed.
When psychological evidence serves as a foundation for legal policies, practices, and interventions, it is possible to shift attention from girls’ and women’s intrapersonal variables to the conditions of their involvement in the justice system, including the structural inequities and pervasive trauma that shape their lives, from their criminal behaviors to men’s role in female offending, and from retribution to prevention at individual, family, community, and social levels. Systemic change through advocacy and prevention is necessary to improve the psychological and social outcomes of justice-involved women and girls. This may involve, for example, the development and implementation of antidiscrimination policies that promote the rights of diverse girls and women—the right to safety, equal opportunities in employment, education, health care, and housing.
It is equally important that psychologists and other mental health clinicians be better prepared to work with justice-involved women and girls during and after their graduate training. APA-accredited programs in psychology are required to provide evidence that their curriculum addresses issues related to individual and cultural diversity in ways that promote the development of multicultural competencies. However, these programs often adopt a single-course approach to multicultural education that has limited capacity to increase gender and cultural responsiveness in clinical practice (Pieterse et al. 2009). The lack of a unifying framework for multicultural training is also a concern and an obstacle to advanced multicultural competencies (Ancis and Rasheed Ali 2005). The chapters of this book provide specific recommendations for training that support the development of the knowledge and skills necessary to work with diverse women and girls in the U.S. justice system, including knowledge of the pathways that lead to women's involvement in the justice system, their diverse experiences of legal interventions, as well as awareness of the gender, sexual, racial, and class biases that permeate legal decisions and interventions.
Conclusion
The authors and editors of this book propose a contextualized and evidence-based approach to the treatment of women and girls in the justice system. This approach takes into consideration the ecological processes that influence justice policies and practices and that have a primary impact on girls’ and women’s legal, social, and psychological outcomes. Gender- and culturally responsive justice depends on the translation of sound psychological research into new and revised legal and therapeutic frameworks for working with women and girls in various arenas of the justice system.
References
American Psychological Association. 2015. “Guidelines for Psychological Practice with Transgender and Gender-Nonconforming People.” American Psychologist 70 (9): 832—64. doi: 10.1037/a0039906.
American Psychological Association. 2014. “Guidelines for Psychological Practice with Older Adults.” American Psychologist 69 (1): 34—65. doi: 10.1037/a0035063.
American Psychological Association. 2012. “Guidelines for Psychological Practice with Lesbian, Gay, and Bisexual Clients.” American Psychologist 67 (1): 10—42. doi: 10.1037/a0024659.
American Psychological Association. 2007. “Guidelines for Psychological Practice with Girls and Women.” American Psychologist 62 (9): 949—75.
American Psychological Association. 2003. “Guidelines on Multicultural Education, Training, Research, Practice, and Organizational Change for Psychologists.” American Psychologist 58 (5): 377.
Ancis, Julie, ed. 2004. Culturally Responsive Interventions: Innovative Approaches to Working with Diverse Populations. New York: Brunner-Routledge.
Ancis, Julie R., and Saba Rasheed Ali. 2005. “Multicultural Counseling Training Approaches: Implications for Pedagogy.” In Teaching and Social Justice: Integrating Multicultural and Feminist Theories in the Classroom. Edited by Carolyne Z. Enns and Ada L. Sinacore, 85—97. Washington, DC: American Psychological Association.
Pieterse, Alex L., Sarah A. Evans, Amelia Risner-Butner, Noah M. Collins, and Laura Beth Mason. 2009. “Multicultural Competence and Social Justice Training in Counseling Psychology and Counselor Education.” Counseling Psychologist 37 (1): 93—115. doi: 10.1177/0011000008319986. | https://psychologic.science/gender/justice/13.html |
Beaver Dam Analogues (BDAs) are wooden structures that both mimic the benefits of beaver dams and encourage beaver activity. Functioning much like beaver dams, BDAs are constructed by placing a channel-spanning line or multiple lines of vertical pilings in the streambed, with live cuttings woven across. In OHA's BDAs, streambank plantings are also woven across the line of pilings. This creates a live dam, increasing stability as the plantings become established. BDAs back up and slow down the water, which raises the water table, helping restore incised channels, and creating the moist soil conditions needed for riparian vegetation to flourish. BDAs can also help build up incised streambeds by facilitating the deposition of sediment. Water that moves slower drops more sediment. This technique provides multiple benefits at a relatively low installation cost, making it a great solution for Myers Creek and other incised streams.
The purpose of the Stream Habitat Restoration Guidelines (SHRG) is to promote process based natural stream restoration, rehabilitating aquatic and riparian ecosystems. These guidelines advance a watershed scale assessment of the stream system, establishing goals, objectives and design for restoring optimum sustainable native biodiversity, using principles of landscape ecology and integrated aquatic ecosystem restoration.
This literature review addresses the following issues: design and ecological considerations for new channels, habitat restoration and mitigation, channel relocation and realignment, channel modification for habitat and stability, placement of large woody debris (including removal and relocation), placement of boulders (including smaller rocks and substrate), off-channel ponds (rearing and other), off-channel channels (new floodplains, high flow by-pass), gradient control structures, habitat enhancement activities and structures.
Proper functioning condition (PFC) is a qualitative method for assessing the condition of riparian-wetland areas. The term PFC is used to describe both the assessment process and a defined, on-the-ground condition of a riparian-wetland area. The PFC assessment refers to a consistent approach for considering hydrology, vegetation, and erosion/deposition (soils) attributes and processes to assess the condition of riparian-wetland areas. A checklist is used for the PFC assessment (Appendix A), which synthesizes information that is foundational to determining the overall health of a riparian-wetland system.
An EPA guide for the public containing background on wetlands and restoration; information on project planning, implementation, and monitoring; and lists of resources, contacts, and funding sources.
An overview of wetland functions and Ecology's role in protecting, restoring and managing wetlands, with many links to wetland resources on the left sidebar.
This EPA document provides background information on classifying wetlands, selecting criteria variables, designing monitoring programs, building a database analyzing nutrient and algal data, deriving regional criteria, and implementing management practices. The wetlands modules (at the bottom of the website) provide "state-of-the-science" information to help develop biological assessment methods to evaluate both the overall ecological condition of wetlands and nutrient enrichment (one of the primary stressors on many wetlands).
PowerPoint lectures that teach about stream morphology, hydrology, physical factors, organisms, ecosystem processes and more.
This rating system was designed to differentiate between wetlands in eastern Washington based on their sensitivity to disturbance, their significance, their rarity, our ability to replace them, and the functions they provide. The rating system, however, does not replace a full assessment of wetland functions that may be necessary to plan and monitor a project of compensatory mitigation.
Hruby, T. 2004. Washington State wetland rating system for eastern Washington – Revised. Washington State Department of Ecology Publication # 04-06-15. | http://okanoganhighlands.org/restoration/resources/ |
Brain May Play Key Role in Blood Sugar Metabolism and Development of T2 Diabetes
Monday, December 09, 2013
A growing body of evidence suggests that the brain plays a key role in glucose regulation and the development of type 2 diabetes, researchers recently wrote in the journal Nature. If the hypothesis is correct, it may open the door to entirely new ways to prevent and treat this disease, which is projected to affect one in three adults in the United States by 2050.
In the paper, lead author Dr. Michael W. Schwartz, director of the Diabetes and Obesity Center of Excellence at the University of Washington in Seattle, and his colleagues from the Universities of Cincinnati, Michigan, and Munich, note that the brain was originally thought to play an important role in maintaining normal glucose metabolism. With the discovery of insulin in the 1920s, the focus of research and diabetes care shifted almost exclusively to insulin. Today, almost all treatments for diabetes seek to either increase insulin levels or increase the body's sensitivity to insulin.
"These drugs," the researchers write, "enjoy wide use and are effective in controlling hyperglycemia [high blood sugar levels], the hallmark of type 2 diabetes, but they address the consequence of diabetes more than the underlying causes, and thus control rather than cure the disease."
New research, they write, suggests that normal glucose regulation depends on a partnership between the insulin-producing cells of the pancreas, the pancreatic islet cells, and neuronal circuits in the hypothalamus and other brain areas that are intimately involved in maintaining normal glucose levels. The development of diabetes type 2, the authors argue, requires a failure of both the islet-cell system and this brain-centered system for regulating blood sugar levels.
In their paper, the researchers review both animal and human studies that indicate the powerful effect this brain-centered regulatory system has on blood glucose levels independent of the action of insulin. One such mechanism by which the system promotes glucose uptake by tissues is by stimulating what is called "glucose effectiveness." As this process accounts for almost 50 percent of normal glucose uptake, it rivals the impact of insulin-dependent mechanisms driven by the islet cells in the pancreas.
The findings lead the researchers to propose a two-system model of regulating blood sugar levels, composed of the islet-cell system, which responds to a rise in glucose levels primarily by releasing insulin, and the brain-centered system, which enhances insulin-mediated glucose metabolism while also stimulating glucose effectiveness.
The development of type 2 diabetes appears to involve the failure of both systems, the researchers say. Impairment of the brain-centered system is common, and it places an increased burden on the islet-centered system. For a time, the islet-centered system can compensate, but if it begins to fail, the brain-centered system may decompensate further, causing a vicious cycle that ends in diabetes.
Boosting insulin levels alone will lower glucose levels, but only addresses half the problem. To restore normal glucose regulation requires addressing the failures of the brain-centered system as well. Approaches that target both systems may not only achieve better blood glucose control, but could actually cause diabetes to go into remission, they write. | http://www.diabetescare.net/article/title/brain-may-play-key-role-in-blood-sugar-metabolism-and-development-of-t2-diabetes |
How Covid May Have People Ordering Vegetables for Home Delivery Permanently
The COVID-19 pandemic may have permanently changed the way that we buy food. Home delivery services for fruits, vegetables and other grocery items have been around for decades. Many people with limited mobility, such as the elderly or differently-abled, have been taking advantage of these services for a long time. However, during the pandemic, these services became more widely used, since people had to stay home as much as possible to prevent the spread of the COVID-19 virus. This article will discuss how some habits around ordering fruits and vegetables for home delivery are changing due to COVID-19.
Beginning of the Pandemic
When the numbers of COVID-19 cases first began to rise, grocery stores had to adjust to a sharp rise in demand for home delivery services of vegetables, fruit and other food items. A service that had previously been very underused was suddenly necessary for a much higher percentage of the population to live out their lives safely. To meet this new need, many food delivery services and grocery stores had to evolve and expand their business in new directions.
For some shops, this problem was handled with existing staff by sectioning out their deliveries by area or rotating the times when people in certain neighbourhoods could get deliveries. This could work during the pandemic because everyone was working from home, and people could accept home delivery of vegetables almost any time of the day, any day of the week.
Establishing Structures
Because the pandemic lasted for so long, more permanent home delivery structures for fruits and vegetables were put into place. Systems were incorporated into the structure of many grocery stores to handle a large number of online orders and provide enough human resources to pick out and bag the items purchased.
These new systems made some permanent changes to the structure of shops that distribute produce. Many of them were able to hire extra workers and develop organisational systems that boost efficiency for home deliveries. Even as COVID-19 lockdown restrictions are easing in many places, these systems remain in place today.
The Post-COVID World
While we have yet to arrive in a fully post-COVID world, we are reaching a stage where our lives are returning to a more normal state. As this transition occurs, people do not necessarily need to order home delivery to get their fruits and vegetables. However, many providers and shops will continue offering services as long as demand remains high.
The pandemic may have been just the nudge people needed to start using existing services. It's possible that following the pandemic, these services will remain more available and widely used because people are now aware of them. It will be interesting to see what kinds of permanent effects the pandemic has had on how we buy groceries. | https://www.abcmoney.co.uk/2022/04/20/is-home-delivery-of-vegetables-here-to-stay/
The natural crystalline lens of the eye plays a primary role in focusing light onto the retina for proper vision. However, vision through the natural lens may become impaired because of injury, or due to the formation of a cataract caused by aging or disease. To restore vision, the natural lens is typically replaced with an artificial lens. An artificial lens may also be implanted as a replacement or a supplement to the natural lens in order to make a refractive or other vision correction.
A natural lens is generally removed through the use of a slender implement which is inserted through a small incision in the eye. The implement includes a cutting tool that is ultrasonically vibrated to emulsify the lens. The emulsified fragments of the lens are aspirated out of the eye through a passage provided in the cutting tool. The slender nature of the implement enables extraction of the lens through a small incision in the eye. The use of a small incision over other procedures requiring a large incision can lessen the trauma and complications experienced during surgery and postoperatively.
The artificial lens is composed of a flexible material so that the lens can be folded and/or compressed to a smaller cross-sectional size, and thus avoid enlargement of the incision during implantation of the lens. To this end, inserters ordinarily include a lens reducing structure which functions to reduce the cross-sectional size of the lens, and a cannula with a lumen to direct the lens into the eye. The lens reducing structure has taken many different forms including, for example, hinged sections which close about a lens and tapering lumens which compress the lens as it is advanced toward the eye. The cannula is a slender, thin-walled tube at its distal end that guides the lens through the incision and into the eye. The lumen along the distal portion of the cannula generally has a substantially uniform configuration and size (i.e., with only a slight taper for molding purposes) to avoid additional high forces needed to further compress the lens. By maintaining a substantially uniform lumen, the risk of rupturing the thin walls is alleviated.
While there is great interest in making the distal end of the inserter as narrow as possible, there are practical considerations which have limited the extent to which the size of the cannula can be reduced. For instance, as mentioned above, large inwardly directed forces are needed to further reduce a lens which is already tightly compressed. As a result, merely reducing the diameter of the lumen at its distal end to achieve a smaller cannula will at some point increase the inwardly directed forces so as to impede the advance of the lens or cause rupture of the walls. Also, further thinning of the walls to reduce the cannula without narrowing the lumen will also at some point lead to rupture of the cannula walls during use.
Future value nominal interest rate formula
The nominal interest rate is the stated annual rate, quoted without taking into account compounding within the year. Given an effective annual rate E and m compounding periods per year, the nominal rate can be calculated as r = m × [(1 + E)^(1/m) − 1]. Conversely, the effective annual rate implied by a nominal rate r compounded m times per year is E = (1 + r/m)^m − 1; the effective rate is, in this sense, the counterpart of the nominal rate. The future value of a present sum P invested for t years at a nominal rate r compounded m times per year is F = P × (1 + r/m)^(m×t). As the interest rate (discount rate) and the number of periods increase, the future value increases while the present value of a fixed future sum decreases. Discounting is the process of finding the present value of a known future amount; the same formulas can also be rearranged to calculate the nominal interest rate for a known initial investment that grows to a known future value over a specified period of time.
Future value is the value of an asset at a specific date. It measures the nominal future sum of money that a given sum of money is "worth" at a specified time in the future assuming a certain interest rate, or more generally, rate of return; it is the present value multiplied by the accumulation function. Nominal Interest Rate Formula is used to calculate the rate of interest on the debt which is obtained without considering the effect of inflation and according to formula the nominal interest rate is calculated by adding the real interest rate with the inflation rate. Nominal interest rate is the interest rate which includes the effect of inflation. It approximately equals the sum of real interest rate and inflation rate. Loans and investments mostly quote a nominal interest rate because it is the rate which is applied to the principal balance to arrive at interest expense. It is important when using the formula for the future value factor to match the rate per period with the number of periods. The number of periods should also match how often an investment is compounded. For example, assume that the nominal interest rate is 12% per year compounded monthly. | https://bestftxediries.netlify.app/sebree8059sih/future-value-nominal-interest-rate-formula-242.html |
In "Bug Reports" I have listed a query about a possible issue connected with running surveys. Full details set out here - https://e-voice.org.uk/help/forums/message-view?message_id=43605037
Has anyone else had any issues in running surveys where emails go to everyone with "Admin Rights" rather than just the author of the survey (as implied in the online manual)?
This is the first time I have used this functionality, but it did not go exactly to plan. I have a meeting of the Management Committee next Wednesday and I would like to report back.
I don't think notifications go just to the survey creator, or to all admins. I believe it's configurable per-admin and per-survey. If you go to the survey admin page, Email and Notifications tab, you should be able to choose whether or not you get notifications from that survey.
Thanks
Joe - Voice Admin
Hi Joe,
I'm familiar with the email tab referred to in your reply. I have tried checking and unchecking the tick box and it made no difference. The manual is quite clear that email alerts should go to the person running the Survey not to all those with local site administration profiles. The fact that we have 8 site administrators is another issue which I'm about to change. Either there is a bug on the system and email alerts are going to all administrators rather than the survey creator or the system parameters have been changed and the online manual has not been updated.
Secondly, I have had difficulty in changing the interval period away from instant.
Finally there is the matter of the missing send encrypted email facility that again is referenced in the online manual, but missing from this tab.
This is a copy of the original text I raised as a potential bug on the system.
This is an extract from the online manual relating to Surveys (http://cambridgeopensystems.com/documentation/applications/surveys/) the text in blue is from the website -
Email and Notifications
This tab of the administration interface offers management tools to support your survey.
Firstly it allows the author of the survey to configure notification emails for themselves whenever a visitor responds to the survey. The "Interval" drop down box specifies how often the system should send the emails to you, such that if hourly or daily is selected, emails will be grouped into batches sent with that frequency.
Secondly the author is able to create and send out two types of emails regarding the survey.
- Send Bulk Mail - This button will send the mail automatically to all registered users who have responded to the survey.
- Send Authenticated Link - This button will send the mail to all registered members of the website. The mail will include a link that will automatically log the recipient in and take them to the first page of the survey.
I manage the ICT for a small Choir. We have, for the first time, used the Survey functionality on the system to create a very simple questionnaire. Reading the manual, it would imply that the person who is the "author" of the survey will be the person to whom emails are sent as and when the survey is completed. However, it transpires that other colleagues who also have Admin rights in the website receive email notifications each and every time a survey is completed. The manual implies that the author of the survey is able to configure notification emails for themselves whenever a visitor responds to the survey - there is no reference to other Website Administrators receiving emails. I have tried ticking and unticking the "Notify Me" box in the Emails & Notifications tab, but that doesn't seem to stop the emails going to the other Administrators. Other website Administrators do not want to be bothered by such emails.
The fact that all Administrators are receiving email notifications each and every time a survey is completed seems to imply that (a) there has been a change to the system and the online manual has not been updated; or (b) there is a bug on the system.
Also under this tab there is supposed to be a Send Authenticated Link button that seems to be missing. Again is this a change to the system or a bug? | https://e-voice.org.uk/help/forums/message-view?message_id=43832893 |
The images that appear on our website have been obtained from various methods including: photos taken by members in some of our events, purchasing licensing rights from services like Shutterstock, and using Bing's search for public domain or free to use and share images.
Whenever appropriate, we have given proper credit to the source of the images and, to the best of our knowledge, have not used any images without permission or licensing rights. If you believe any image that appears on our site belongs to you and has been used without your permission or licensing, please contact us here.
Copyright law protects original creative works, such as software, video games, books, music, images, and videos. Copyright law varies by country. Copyright owners generally have the right to control certain unauthorized uses of their work (including the right to sue people who use their copyrighted work without permission). As a result, certain images and other copyrighted content may require permissions or licenses, especially if you use the work in a commercial setting. For example, even if you have permission to use an image, you may need additional permission to use what is in the image (e.g., a photo of a sculpture, a person, or a logo) because someone else's copyright, trademark, or publicity rights might also be involved. You are responsible for obtaining all of the permissions and licenses necessary to use the content in your specific context.
However, even copyright-protected works can be lawfully used without permission from the copyright holder in certain circumstances. The Wikipedia entry on copyright law contains a useful overview of copyright law, including fair use and other exceptions to copyright law.
Are all creative works protected by copyright?
No. Not all creative works are protected by copyright. There are many exceptions to and limits on copyright protection. For example, copyright only protects creative works for limited periods of time. After the period of protection expires, the copyrighted work enters the public domain. If a work is in the public domain, the work may be freely used without permission from the creator of the work. However, just because a work is available online does not mean it's in the public domain or free to use. You can read more about the public domain on Wikipedia.
The copyright laws of many countries have specific exceptions and limitations to copyright protection. For example, in the United States, "fair use" allows you to use a copyrighted work without permission in certain circumstances (e.g., a book review that includes some of the book being reviewed). Wikipedia and the Electronic Frontier Foundation have useful descriptions of fair use.
Some content available online, such as public domain content, is free to use because it is not subject to copyright protection. Other content might be subject to copyright but the copyright holder licenses content with certain restrictions, such as under the Creative Commons license. Bing's image search lets you limit results only to Creative Commons-licensed images (after running an image search, click "license"). Other copyrighted content may be used without permission because a limitation or exception to copyright applies (see above discussion of fair use).
Of course, some online content is not free to use, is not licensed by the copyright holder, and your use will not qualify as fair use. Unfortunately, we can't provide specific guidance regarding the use of particular content, so be careful to select the works you use carefully. | https://concernedcitizensforchange.org/disclaimer/ |
Perhaps you are unable to reach behind to scratch that itch on your back. Or you feel a sharp shoulder pain as you try reaching for something on the upper shelf. If either of these feels familiar, you may be suffering from what is called shoulder impingement.
What is Shoulder Impingement?
This is a condition affecting the shoulder in which pain may occur after an injury, poor posture, or repetitive use that puts pressure on the tendons and muscles in and around the shoulder joint.
Usually, the rotator cuff muscles weaken and don't work properly to move the shoulder, which causes further pain and compression. If you don't seek help from a shoulder surgeon, this may result in restriction and worsening pain in the rotator cuff.
You can feel pain in the shoulder’s back, side, or front, especially with raising the arms overhead. It might feel weak and prevent you from everyday tasks at work, school, or home.
The Common Causes
Shoulder impingement is also called subacromial impingement because the bursa, ligaments, and tendons under the acromion may become compressed or pinched.
Shoulder impingement may occur when microtrauma and compression harm the tendons. It can also be caused by the following:
- Thickening of ligaments
- Osteoarthritis around the shoulder region
- Repetitive overhead movements, like golfing
- Injuries like a fall
- Thickening of bursa
- Bony abnormalities of acromion
Diagnosis
Physical therapists can perform evaluations and ask questions about the kind of pain you feel as well as other symptoms. A physical therapist may conduct motion and strength tests on the shoulder, assess your posture, check for muscle weaknesses/imbalances, and ask about your job duties.
Special tests that involve gentle shoulder and arm movements can be carried out to determine which tendons are involved. An X-ray may also be taken to identify other conditions that may contribute to the discomfort, such as arthritis or bony abnormalities/spurs.
How Physical Therapists May Help
It is vital to get treatment for the condition as soon as possible. If it is not treated, secondary conditions may result from it. This may include rotator-cuff tears/tendinitis or irritation of a bursa.
A physical therapist may help to treat the condition successfully. The expert will work with you to devise a suitable treatment plan specific to your goals and condition. The treatment plan may include pain management, functional training, muscle strengthening, manual therapy, range-of-motion exercises, and patient education.
Exercises Suitable for Shoulder Impingement
Shoulder impingement is very commonly seen in athletes and people who do a lot of overhead arm movements. The activity irritates the rotator cuff tendon as it rubs against the acromion repeatedly. Unless you take part in different forms of exercises, you won’t be able to relieve symptoms associated with shoulder impingement syndrome. Some of these exercises include the following:
- Scapula squeeze
- Lying external rotation
- Bank shoulder stretch
- Front shoulder stretch
- Chest stretch
Concluding Remarks!
The shoulder is one of the body's most complex pieces of machinery. Its elegant design gives the shoulder a wide range of motion. The shoulder will move painlessly and freely if every part is in good working condition. But if it doesn't move freely because of pain, you might want to talk to a doctor to know the way forward.
Vizcaya Museum and Gardens will present the museum’s work funded by a Knight Foundation Museums and Technology grant. Through this project Vizcaya created a model for adapting 3D documentation technologies to interactive experiences that expand the community’s access to our collections and increase opportunities for discovery. By bridging established preservation technologies with interpretive digital technologies, we created an innovative approach to conservation, accessibility and interpretation.
Related to conservation, 3D documentation provides a permanent archive for predictive modeling, digital restoration, and other conservation efforts. Moving forward, this data can be used to recreate, reimagine, visualize, and even reconstruct objects and architectural elements for visitor engagement and research purposes. For accessibility, 3D documentation and printing not only allows the original architectural element or object to be preserved and safe from close contact, but also enables visitors to be more active and engaged participants in exploring these elements. Moreover, the touchable 3D replicas will transform the experience for visitors with vision impairments or other related disabilities. For interpretation, using 3D documentation and interactive technology allows visitors to freely explore and learn about aspects of the museum. As visitors virtually explore parts of Vizcaya, they learn about its history and narratives along with ideas related to conservation and sustainability.
The presenter will discuss challenges, share practical and technical constraints, and present examples of 3D digital engagement, aiming to inspire colleagues to effectively apply these innovative digital technologies in their institutions. | https://mw18.mwconf.org/proposal/3d-documentation-enhancing-conservation-interpretation-and-accessibility/ |
1. Introduction {#s1}
===============
Research on picture recognition has concentrated on one stimulus more than others---the human face (Calder and Young [@R2]; Farah et al [@R4]; Johnston and Edmonds [@R8]; Kanwisher [@R9]; Tsao and Livingstone [@R21]). Moreover, the characteristic pattern of eye movements when viewing pictures of faces, established by Yarbus ([@R29]), has been replicated many times and has become an accepted canon in face research (see Tatler et al [@R20]). Despite this, the eye movement dimension has been somewhat neglected (but see Bindemann et al [@R1]; Williams and Henderson [@R26]). In almost all studies of face perception the pictures are readily recognisable as belonging to that class of objects (faces), and often the observer\'s task is to provide the identity of the face, or to discriminate between the equivalence of pairs of pictures. We report two experiments in which the objects represented were not immediately recognisable, as they were embedded in geometrical carrier designs. That is, the pictorial images were partially hidden in geometrical patterns and it took some seconds before they could be recognised. Moreover, two classes of objects were pictorially embedded in concentric circular designs---faces and cars. When viewed frontally, pictures of faces and cars share symmetry about a vertical axis and both have prominent paired parts, like eyes or headlights. Indeed, comparisons of both processing and scanning frontally viewed upright faces and cars display similarities (Windhager et al [@R27], [@R28]). Introducing uncertainty about the class of objects embedded (faces or cars) in the geometrical carriers, rather than restricting all the pictorial stimuli to a single class, enables more detailed determination of factors that might be differentially associated with processing one of them (faces). Because the pictorial images take some seconds to be recognised, more subtle issues of eye movements can be addressed. This technique allows us to consider inspection behaviour before and after the observer recognises the category of object that they are viewing. At first, the viewer is unable to determine the class of object embedded in the geometric pattern and it is several seconds before this awareness emerges. As such, these stimuli offer a unique opportunity to study inspection behaviour for face stimuli when perception is dissociated from awareness.
In general, pictures of objects are recognised more readily in an upright orientation. This applies particularly to pictures of faces (Köhler [@R10]; Rock [@R13]; Rossion [@R14]; Yin [@R30]) and it has been an aspect of artistic manipulation for centuries (Wade [@R22]; Wade and Nekes [@R23]; Wade et al [@R24]). When discussing the optical inversion experiments of Stratton ([@R18]), Wolfgang Köhler described the effect clearly: "For this experiment I select a picture, or outline-drawing of an object, which shows a conspicuous change in appearance when it is upside down. This is the case, for instance, with photographs of known or unknown persons. They change so much that what we call facial expression disappears almost entirely in the abnormal orientation" ([@R10], pages 25--26). As Goffaux and Rossion ([@R6]) remarked, "inverting a face stimulus has become one of the most widely used stimulus transformations to prevent the processing of facial configuration" (page 995). Face inversion has been used as an index of the cognitive processing style for face perception. It is widely accepted that face processing involves two distinct processing styles: configural or holistic processing of the global structure and arrangement of features in the face, and featural or piecemeal processing of the individual components (eg Collishaw and Hole [@R3]; Lui et al [@R11]; Schwaninger et al [@R17]; Watier et al [@R25]). When a picture of a face is inverted, the normal arrangement of features is disrupted (the eyes now appear below the mouth etc) so disrupting the configural information in the face stimulus. In contrast the individual features are not as disrupted by this inversion. As such, face inversion disrupts the configural information in a face, but spares featural information, thus specifically hindering the possibility to employ a configural processing style on the face stimulus. Evidence for this specificity of effect comes from comparing the ability of participants to discriminate face pairs in which very subtle manipulations of either featural or configural information are introduced (Freire et al [@R5]). For upright face pairs participants are highly sensitive to both types of subtle manipulation. For inverted faces participants remain very sensitive to featural differences between the pairs, but not to configural differences. The common finding that inversion has a less detrimental effect on recognition for nonface pictures has been used to argue that a dominance of configural processing of objects is specific to face stimuli, with a greater reliance on featural processing for nonface stimuli (eg Yin [@R30]).
The specificity of inversion for disrupting configural but not featural facial information has been used to explore the neural mechanisms that might underlie these two processing styles (Itier et al [@R7]; Yovel and Kanwisher [@R31]). This approach has been used to argue that the right fusiform gyrus is intimately involved in the configural processing of faces, although the face specificity of this region is controversial (see McKone et al [@R12]).
While much attention has been paid in the literature to upright and inverted faces, fewer studies have examined intermediate orientations. Schwaninger and Mast ([@R16]) found that performance around the horizontal was inferior to that for both upright and inverted faces. Thus, in addition to upright (0°) and inverted (180°) pictorial images, those inclined at orientations of 90° and 270° were examined. Using pictures of faces and cars, Rossion and Curran ([@R15]) found that inversion effects were greater for the former (measured in terms of accuracy), even amongst car experts. Examining pictorial images of faces and cars inclined at 90° and 270° to the normally upright orientation breaks down the symmetry around the vertical axis. That is, comparison of performance at these orientations with those at 0° and 180° will provide an indication of the importance of the axis of symmetry.
We have previously reported a study using pictures of embedded faces as stimuli (Tatler et al [@R19]): comparisons were made between the geometrical carrier pattern alone, and with the embedded face in both an upright and inverted orientation. Each stimulus was presented for 10 s, and the time taken to recognise whether a face was present or not was measured, as were observers\' eye movements throughout the observation period. Twenty such combinations of pattern alone, pattern with upright face, and pattern with inverted face were presented. Participants took around 4 s to respond and found it much easier to report the absence of an embedded face than its presence. For the latter, participants took longer to report the presence of an inverted face than an upright one. Moreover, observers were faster at detecting the presence of an upright face than the absence of a face in the pattern. The characteristics of eye movements also differed between the three configurations: fixation times were longer and saccade amplitude was smaller when an upright face was present than for either of the other conditions. Interestingly, differences in inspection behaviour depended crucially on the perceptual experience of the observer: differences in inspection behaviour between upright and inverted faces arose only when the face was correctly detected by the observer; there was no difference in inspection behaviour for upright and inverted embedded faces when they were not detected by the observer. As such, while the physical presence of an embedded face influenced inspection behaviour irrespective of the viewer\'s percept, the orientation of the face had an effect only when the viewer was aware that a face was present.
In summary, our previous study suggested three main findings. First, the presence of an embedded stimulus can exert influences on inspection even when an observer was unaware of the embedded image. Second, the results pointed to the influence of inversion on viewing behaviour and its contingency upon the emergence of a conscious percept of the embedded face. Third, for these embedded faces, the orientation influenced inspection behaviour. However, our previous work did not allow us to look at face-specific perceptual effects on inspection behaviour because only faces were used as the embedded images. In addition, we used a variety of patterns into which we embedded faces and inspection behaviour was influenced by the nature of the carrier pattern.
The geometrical carrier patterns were different for each embedded face, half being rectilinear and the other half being curvilinear. Faces embedded in rectilinear patterns were easier to detect (they were detected more accurately and faster) than faces embedded in curvilinear patterns, in which the pattern contains a strongly defined centre. Not only did the two types of pattern change the ability of the observer to detect the hidden face, but they also gave rise to different eye movement strategies. Longer fixations, separated by smaller saccadic relocations, occurred when viewing curvilinear patterns. In order to remove differences based upon the geometry of the carrier patterns, the same (concentric circular) pattern was used for all embedded images in the present experiments.
Two experiments aim to compare detection and inspection for face and nonface (car) objects all embedded in the same carrier pattern. Moreover we will compare detection and inspection for objects embedded at four orientations: with rotations of 0°, 90°, 180°, and 270° clockwise from upright (see [figure 1](#F1){ref-type="fig"}). The experiments involve different tasks for the observers which differentially emphasise the relevance of the embedded object class (experiment 1) or the orientation of the embedded object (experiment 2). As such, we are able to see whether rotation and object-specific detection and inspection effects depend upon the viewer\'s task.
These experiments allow us to address four key questions. First, are inspection patterns different for symmetrical pictures drawn from different categories (faces/cars)? Second, do embedded faces and cars show any evidence for inversion effects, and do these differ between the two classes of object? Third, does the task of the viewer influence the recognition and inspection behaviour of the viewers, and does this depend upon the class of embedded object? Fourth, are there effects of the class of the embedded object that are expressed before recognition by the observer?
2. Experiments {#s2}
==============
Two experiments are reported which examined recognition and oculomotor behaviour when viewing patterns containing embedded images. All the geometrical carrier patterns were the same---concentric circles. In addition, uncertainty initially existed between the nature of the objects represented---faces or cars---as well as their initial visibility, examples of which are shown in [figure 1](#F1){ref-type="fig"}. All the pictures of faces and cars were derived from photographs of frontal views and they were approximately centred symmetrically about the carrier pattern. The same stimuli were used for both experiments, but the task required of the observer differed. In experiment 1 the task was to respond differentially when a car or a face was detected, with no requirement to signal orientation. In experiment 2 the task was to report the orientation of the embedded image, with no requirement to signal the nature of the embedded figure.
[Figure 1. Examples of the embedded designs at the four orientations (0°, 90°, 180°, and 270° clockwise from upright).]{#F1}
2.1. Method {#s2-1}
-----------
### 2.1.1. Participants. {#s2-1-1}
Each experiment involved sixteen different participants who were naive to the purposes of the study and took part on a voluntary unpaid basis. All had normal or corrected-to-normal vision.
### 2.1.2. Stimuli. {#s2-1-2}
Ten perceptual portraits were created by embedding pictures of faces (viewed from the front) in a pattern of concentric circles (the carrier pattern). This was done by varying the local structure of elements in the carrier pattern, and as such the pictorial image was not defined by visible outline contours. The same procedure was applied to ten pictures of cars which had been photographed from the front. The concentric circles were drawn, photographed on lith (high-contrast) film, and then scanned. Like the concentric carrier pattern, the pictures of faces and cars were reduced to black and white and all background detail was removed. Negatives of the high-contrast images were superimposed on negatives of the concentric circles so that the objects were represented by arcs of circles alone. Both the concentric circles and arc components were rendered positive so that when the latter was exactly superimposed on the former the arcs were invisible. The arc components were then shifted slightly relative to the concentric circles so that the faces/cars were minimally visible. Essentially, the embedded objects are defined by minor variations in the thickness of the carrier lines corresponding to the features of the pictorial image. The photographs of faces and cars were taken by us, and, since they are not standard images, all the embedded designs used in the experiment are shown (in an upright orientation) in [figure 2](#F2){ref-type="fig"}. The face/car image was then contained in the low spatial frequency content of the design whereas the carrier pattern consisted of high spatial frequencies. For the present experiment they were scanned (at 200 dpi) and presented on the monitors as described below. (A general strategy for seeing the embedded faces in the figures printed in figures [1](#F1){ref-type="fig"} and [2](#F2){ref-type="fig"}, if they are initially difficult to detect, is to view them from several metres rather than from reading distance.)
[Figure 2. The embedded face and car designs used in the experiments, shown in the upright orientation.]{#F2}
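The original designs were produced photographically, as described above. Purely to illustrate the general idea (an object carried by small, low spatial frequency distortions of a concentric-circle pattern rather than by visible contours), a rough digital analogue might look like the sketch below; the function, its parameters, and the phase-shifting trick are assumptions of this sketch, not the authors' procedure.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def embedded_design(image, size=600, n_rings=40, shift=1.2):
    """Rough digital analogue of an embedded design (illustrative only; the
    original stimuli were made photographically on lith film).
    `image` is a (size, size) array in [0, 1], with 1 marking the object."""
    y, x = np.mgrid[0:size, 0:size] - size / 2.0
    r = np.hypot(x, y)
    phase = 2 * np.pi * n_rings * r / (size / 2.0)        # concentric-circle carrier
    low_sf = gaussian_filter(image.astype(float), sigma=size / 30.0)  # keep low spatial frequencies
    # Shift the rings slightly wherever the low-SF object is present, so the
    # figure appears only as small local distortions of the carrier pattern.
    rings = np.cos(phase + shift * low_sf) > 0.0
    return rings.astype(float)                            # binary black/white design
```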
Each pictorial image could be presented in one of four orientations (see [figure 1](#F1){ref-type="fig"}) and over the course of the experiment each participant viewed all four orientations of each embedded image, in random order. Each participant therefore viewed eighty experimental stimuli. The size of the display is crucial to the ease of detecting the embedded face/car: the smaller the overall pictorial image the more easily a face or car can be detected. Pilot data suggested that embedded images were detectable but only after several seconds for designs subtending approximately 15°. While the pictorial images subtended 15°, they were displayed on a computer screen with an area subtending 40° horizontally and 30° vertically and had a resolution of 1600 × 1200 pixels. The position of the pictorial image within the display area varied randomly on each trial.
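To relate the quoted visual angles to screen coordinates, a small helper is sketched below; the physical size and viewing distance in the example are invented for illustration, since the paper reports the stimuli directly in degrees.

```python
import math

def visual_angle_deg(size_cm, distance_cm):
    """Visual angle (deg) subtended by a stimulus of a given physical size
    at a given viewing distance; both arguments in cm (illustrative values)."""
    return math.degrees(2 * math.atan((size_cm / 2) / distance_cm))

# Pixels per degree for the display described above: 40 deg wide, 1600 px wide
px_per_deg = 1600 / 40.0             # 40 px per degree
embedded_width_px = 15 * px_per_deg  # a 15 deg design spans about 600 px
```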
### 2.1.3. Eye movement recording. {#s2-1-3}
Eye movements were recorded using an SR Research Ltd EyeLink II eye tracker, sampling pupil position at 500 Hz. The spatial accuracy of the tracker was calibrated using a nine-point target grid and assessed using a further nine-point grid. The eye tracker was recalibrated if the second nine-point grid revealed a spatial accuracy worse than ±0.5 deg. Eye position data were collected for the eye that produced the better spatial accuracy as determined using the calibration. Saccades and fixations were defined using the saccade detection algorithm supplied by SR Research: saccades were identified by deflections in eye position in excess of 0.1 deg, with a minimum velocity of 30 deg s^−1^ and a minimum acceleration of 8000 deg s^−2^, maintained for at least 4 ms. The eye tracker is head mounted and no chin rest was employed; participants were asked to keep the head relatively still.
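The detection thresholds quoted above (0.1 deg minimum deflection, 30 deg s^−1^ velocity, 8000 deg s^−2^ acceleration, maintained for at least 4 ms, sampled at 500 Hz) can be illustrated with a toy parser. The sketch below is not SR Research's algorithm; the interface and the run-length logic are simplifications made for this example.

```python
import numpy as np

def label_saccades(x, y, fs=500.0, vel_thresh=30.0, acc_thresh=8000.0,
                   min_deflection=0.1, min_duration_ms=4.0):
    """Toy velocity/acceleration saccade detector (not the EyeLink parser).
    x, y: gaze position in degrees (numpy arrays) sampled at fs Hz.
    Returns a boolean array marking samples that belong to saccades."""
    dt = 1.0 / fs
    vx, vy = np.gradient(x, dt), np.gradient(y, dt)
    speed = np.hypot(vx, vy)                      # deg/s
    accel = np.abs(np.gradient(speed, dt))        # deg/s^2
    candidate = (speed > vel_thresh) | (accel > acc_thresh)

    is_saccade = np.zeros_like(candidate)
    min_samples = int(np.ceil(min_duration_ms / 1000.0 * fs))
    start = None
    for i, c in enumerate(np.append(candidate, False)):  # trailing False closes any open run
        if c and start is None:
            start = i
        elif not c and start is not None:
            end = i
            deflection = np.hypot(x[end - 1] - x[start], y[end - 1] - y[start])
            if end - start >= min_samples and deflection > min_deflection:
                is_saccade[start:end] = True
            start = None
    return is_saccade
```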
3. Experiment 1 {#s3}
===============
The task for the sixteen participants in experiment 1 was to distinguish between the class of pictorial images presented---whether they were pictures of faces or cars. No instruction concerning the orientation in which the embedded images were presented was given.
3.1. Procedure {#s3-1}
--------------
Following calibration of the eye tracker, participants were told that they would see a series of pictorial images that could appear at any location on the screen. Each trial was preceded by a fixation marker located randomly within a radius of 10 deg from the centre of the screen. The particular pictorial image was then presented for 10 s. Participants were instructed to indicate whether a face or a car was being displayed on the screen by pressing the appropriate button on a Microsoft gamepad controller: button A to indicate a car, or button B to indicate a face. Subjects were asked to respond as soon as they recognised whether it was a picture of a face or a car. Presentation time of the stimuli did not vary according to participant response time. The eighty images were displayed in a different random sequence for each participant.
3.2. Results {#s3-2}
------------
### 3.2.1. Recognition. {#s3-2-1}
The behavioural data for comparisons between faces and cars will be presented before the patterns of eye movements are considered. While the participants\' task was simply to decide whether the embedded object was a face or a car, and did not involve any judgment of orientation, we will analyse performance and inspection data in terms of both the object class and its orientation. Overall, the embedded images were recognised correctly on around 87% of trials, with the best performance (for upright faces) being 93% and embedded pictures of faces being recognised more accurately than those of cars in all orientations ([table 1](#T1){ref-type="table"}). A two (object class: face, car) by four (orientation: 0°, 90°, 180°, 270°) repeated measures ANOVA confirmed that the proportion of embedded faces that were recognised was significantly higher than that for cars: *F*(1, 15) = 5.27, *p* = 0.037, partial η^2^ = 0.260. However, there was no main effect of orientation, *F*(3, 45) = 0.60, *p* = 0.622, partial η^2^ = 0.038, suggesting that the orientation of the embedded object did not influence the accuracy of deciding whether the embedded object was a car or face. There was no interaction between object class and orientation: *F*(3, 45) = 0.49, *p* = 0.691, partial η^2^ = 0.032.
###### Proportion of correct responses for embedded images of faces and cars at each orientation.
| Stimulus type | Orientation (°) | Mean | Standard deviation |
|---------------|-----------------|------|--------------------|
| Face          | 0               | 0.93 | 0.12               |
| Face          | 90              | 0.88 | 0.16               |
| Face          | 180             | 0.88 | 0.17               |
| Face          | 270             | 0.90 | 0.15               |
| Car           | 0               | 0.83 | 0.22               |
| Car           | 90              | 0.86 | 0.19               |
| Car           | 180             | 0.81 | 0.23               |
| Car           | 270             | 0.83 | 0.11               |
A similar pattern of results was found for the time taken to respond to the upright faces, but not to cars ([table 2](#T2){ref-type="table"}). Embedded faces were detected not only more accurately than cars but also considerably faster: *F*(1, 15) = 33.31, *p* \< 0.001, partial η^2^ = 0.690. On average it required 3294 ms to respond to the face stimuli whereas the value for cars was 4142 ms. There was a main effect of orientation on response time: *F*(3, 45) = 5.55, *p* = 0.003, partial η^2^ = 0.270; and a significant interaction between object class and orientation: *F*(3, 45) = 12.34, *p* \< 0.001, partial η^2^ = 0.451. This interaction is evident in [table 2](#T2){ref-type="table"}: for faces, response time was shortest for upright faces and longest for inverted (180° rotation) faces; for cars, response time was longer for upright cars than for cars embedded at any other orientation. Post hoc Bonferroni-corrected t-tests confirmed that responses were faster for upright faces than for faces embedded at 90° (*p* = 0.024), 180° (*p* = 0.001), or 270° (*p* = 0.024). Faces embedded upside down were responded to more slowly than faces embedded at 270° (*p* = 0.019). No other contrasts were significant for embedded faces. For embedded cars, post hoc t-tests revealed no differences in response times between any of the embedded orientations.
###### Response times (in ms) for embedded images of faces and cars at each orientation.
| Stimulus type | Orientation (°) | Mean | Standard deviation |
|---------------|-----------------|------|--------------------|
| Face          | 0               | 2439 | 1386               |
| Face          | 90              | 3515 | 1995               |
| Face          | 180             | 4041 | 2087               |
| Face          | 270             | 3182 | 1978               |
| Car           | 0               | 4337 | 2113               |
| Car           | 90              | 3975 | 2026               |
| Car           | 180             | 4140 | 1942               |
| Car           | 270             | 4116 | 2134               |
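The two-way repeated measures ANOVAs reported in this section can be reproduced with standard statistics software. The sketch below uses statsmodels' AnovaRM purely as an illustration; the file name and column names are hypothetical, and the paper does not state which package the authors used.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format data: one accuracy value per participant per cell
# (AnovaRM requires a balanced design), with columns: participant, object_class
# ('face'/'car'), orientation (0/90/180/270), accuracy.
df = pd.read_csv("accuracy_long.csv")  # illustrative file name

aov = AnovaRM(df, depvar="accuracy", subject="participant",
              within=["object_class", "orientation"]).fit()
print(aov)  # F, degrees of freedom, and p for both main effects and the interaction
```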
### 3.2.2. Eye movements. {#s3-2-2}
[Figure 3](#F3){ref-type="fig"} shows that at least qualitatively there were clear differences in how observers looked at the patterns in the eight different conditions of the first experiment. Distributions of gaze time are superimposed as 'heat maps' on the stimuli for which these data were collected. Data are plotted for one example face stimulus set (top row) and one example car stimulus set (bottom row) for each of the four orientations; data were combined across all sixteen observers. The distributions of gaze time were constructed from the fixation events in the eye tracker data, adding a Gaussian with half-width-at-half-maximum of 1 deg and a magnitude equal to the duration of the fixation; that is the distributions plot gaze time, and not the number of fixations. We present these data as a demonstration that viewing behaviour did vary according to the object class and orientation. However, we do not present quantitative analysis of gaze locations due to the heterogeneity of the embedded faces: while these were similar, the precise placement of features within the patterns showed sufficient variability and so conclusions from these data would be hard to make.
[Figure 3. Gaze-time 'heat maps' for one example face stimulus set (top row) and one example car stimulus set (bottom row) at each orientation in experiment 1, combined across observers.]{#F3}
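As an illustration of how such gaze-time distributions can be built, the sketch below adds a duration-weighted Gaussian (half-width-at-half-maximum of 1 deg, as described above) at each fixation location. The pixels-per-degree scaling and the function interface are assumptions, not values reported in the paper.

```python
import numpy as np

def gaze_time_map(fixations, width, height, hwhm_deg=1.0, px_per_deg=30.0):
    """Duration-weighted gaze map. `fixations` is a list of (x_px, y_px, duration_ms);
    px_per_deg is an assumed display scaling. Returns a (height, width) array of
    spatially smoothed gaze time in ms."""
    sigma = hwhm_deg * px_per_deg / np.sqrt(2.0 * np.log(2.0))  # convert HWHM to sigma
    yy, xx = np.mgrid[0:height, 0:width]
    heat = np.zeros((height, width))
    for x, y, dur in fixations:
        heat += dur * np.exp(-((xx - x) ** 2 + (yy - y) ** 2) / (2.0 * sigma ** 2))
    return heat
```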
Following our previous work (Tatler et al [@R19]) we describe eye movements in terms of the fixation durations and saccade sizes. We also present data for the periods before and after responses to the embedded face or car ([figure 4](#F4){ref-type="fig"}). A two (object class: face, car) by four (orientation: 0°, 90°, 180°, 270°) by two (decision state: before, after response) repeated measures ANOVA showed that fixation durations were longer on stimuli in which faces were embedded than on stimuli containing embedded cars: *F*(1, 15) = 31.99, *p* \< 0.001, partial η^2^ = 0.681. However, there were no main effects of orientation or decision state, and no significant interactions, suggesting that only the type of object embedded in the pattern influenced fixation durations during inspection. It should be noted that mean fixation durations on this task were quite a bit longer than the typical 300 ms found in picture viewing. This is in line with our previous work (Tatler et al [@R19]) for circular patterns and presumably reflects a combination of the difficulty of the task and the difficulty associated with the circular nature of the patterns.
[Figure 4. Fixation durations before and after the response in experiment 1.]{#F4}
Saccade sizes showed a slightly different pattern of results ([figure 5](#F5){ref-type="fig"}). Saccades were smaller when viewing patterns with embedded faces than when viewing patterns with embedded cars: *F*(1, 15) = 8.22, *p* = 0.012, partial η^2^ = 0.354. There was also a main effect of decision state, *F*(1, 15) = 4.37, *p* = 0.054, partial η^2^ = 0.225, with smaller saccades after making a response than before. There was no main effect of the orientation of the embedded object. There was a significant three-way interaction: *F*(3, 45) = 3.99, *p* = 0.013, partial η^2^ = 0.210.
[Figure 5. Saccade amplitudes before and after the response in experiment 1.]{#F5}
3.3. Discussion {#s3-3}
---------------
The data from experiment 1 suggest that, when viewing patterns with embedded faces and cars, the type and orientation of the embedded object influence inspection behaviour and detection performance. For the response time data we found a typical inversion effect: with pictures of faces, detection times when they were upright were faster than when the pictures were embedded at any other orientation. However, for cars, detection times were unaffected by the orientation at which they were embedded. When considering inspection behaviour, we found a three-way interaction between object class, orientation, and decision state for saccade amplitudes. This interaction suggests that the amplitude of saccades during viewing is sensitive to aspects of the stimulus (the object class and orientation) and to the perceptual experience of the observer (whether the observer has identified the embedded object). Sensitivity of eye movement behaviour to both the physical image and perceptual experience is consistent with findings from our previous study (Tatler et al [@R19]). However, we found that fixation durations were influenced significantly only by whether the embedded object was a face or a car. The absence of orientation effects and whether or not the observer had responded is in contrast to our previous findings for fixation duration. This may be due to the different nature of the task in the two studies: here, participants were simply asked to judge the class of object, ignoring the orientation at which it was embedded. However, in our previous study the task was to judge the orientation of the embedded object. Experiment 2 in the present study allows us to determine whether the differences between experiment 1 and our previous work can be attributed to this difference in task instructions for the observers.
4. Experiment 2 {#s4}
===============
The task for the sixteen participants in experiment 2 was to report the orientation of the embedded pictorial image---whether they were inclined at 0°, 90°, 180°, or 270°---irrespective of whether the embedded object was a car or a face. Participants were instructed to indicate the orientation of the embedded image displayed on the screen; this task was performed by pressing the appropriate button on the Microsoft gamepad controller with the buttons orientated to reflect the four possible rotations (ie buttons were arranged in a diamond shape). To disambiguate the horizontally rotated (90° and 270°) objects, participants were asked to press the button corresponding to where the top of the embedded object (ie the top of the head or roof of the car) was located. Participants were asked to respond as soon as they recognised the orientation of the embedded image. Otherwise the stimuli and procedure were the same as for experiment 1.
4.1. Results {#s4-1}
------------
### 4.1.1. Recognition. {#s4-1-1}
When observers were reporting the orientation of an embedded image alone ([table 3](#T3){ref-type="table"}), their performance was poorer than when distinguishing between faces and cars (experiment 1, [table 1](#T1){ref-type="table"}). This may be expected since there were four possible responses in this as compared with two in the first experiment. Nonetheless, embedded faces were correctly detected on 67% of trials which exceeded those for cars (52%); this difference was confirmed by a two-way repeated measures ANOVA which revealed a main effect of object class: *F*(1, 15) = 11.38, *p* = 0.004, partial η^2^ = 0.413. There was no main effect of orientation, but there was a significant interaction between object class and orientation: *F*(3, 45) = 3.68, *p* = 0.019, partial η^2^ = 0.197. Post hoc Bonferroni-corrected t-tests were used to break down this interaction. Performance for correctly identifying the orientation of upright faces was better than that for faces embedded at 90° (*p* = 0.009). No other comparisons were significant.
###### Proportion of correct responses for embedded images of faces and cars at each orientation.
| Stimulus type | Orientation (°) | Mean | Standard deviation |
|---------------|-----------------|------|--------------------|
| Face          | 0               | 0.78 | 0.17               |
| Face          | 90              | 0.56 | 0.29               |
| Face          | 180             | 0.65 | 0.32               |
| Face          | 270             | 0.69 | 0.29               |
| Car           | 0               | 0.52 | 0.22               |
| Car           | 90              | 0.55 | 0.22               |
| Car           | 180             | 0.48 | 0.29               |
| Car           | 270             | 0.53 | 0.26               |
A similar numerical pattern can be seen for response times ([table 4](#T4){ref-type="table"}). Even when the response was in terms of orientation, embedded pictures of faces were detected more rapidly than those for cars (2799 ms as compared with 3795 ms): *F*(1, 15) = 36.72, *p* \< 0.001, partial η^2^ = 0.710. However, for response times there was no interaction between object class and orientation, nor was there a main effect of orientation.
###### Response times (in ms) for embedded images of faces and cars at each orientation.
| Stimulus type | Orientation (°) | Mean | Standard deviation |
|---------------|-----------------|------|--------------------|
| Face          | 0               | 2483 | 738                |
| Face          | 90              | 3132 | 1176               |
| Face          | 180             | 2851 | 1009               |
| Face          | 270             | 2736 | 873                |
| Car           | 0               | 3789 | 1217               |
| Car           | 90              | 3868 | 1158               |
| Car           | 180             | 3709 | 1403               |
| Car           | 270             | 3811 | 2134               |
### 4.1.2. Eye movements. {#s4-1-2}
[Figure 6](#F6){ref-type="fig"} plots cumulative gaze time distributions for the same face and car stimuli that were shown in [figure 3](#F3){ref-type="fig"} above. As before, clear qualitative differences are apparent according to the class and orientation of the embedded object.
[Figure 6. Cumulative gaze-time distributions for the same example face and car stimuli in experiment 2.]{#F6}
As in experiment 1, the eye movements were recorded before and after a response was made in order to determine whether there were differences prior to recognition of the orientation of the embedded images. For fixation durations, there was a significant main effect of object class, *F*(1, 15) = 29.51, *p* \< 0.001, partial η^2^ = 0.663, with longer fixations on faces than on cars ([figure 7](#F7){ref-type="fig"}). No other main effects or interactions were significant, although the interaction between object class, orientation, and decision state approached significance: *F*(3, 45) = 2.57, *p* = 0.066, partial η^2^ = 0.146. Fixations in experiment 2 were longer than in experiment 1 (compare with [figure 4](#F4){ref-type="fig"}). We assume that the longer fixation durations in this experiment reflect the added difficulty of this task (which was also evident in the lower accuracies of responses in [table 4](#T4){ref-type="table"}).
[Figure 7. Fixation durations before and after the response in experiment 2.]{#F7}
For saccade amplitudes the pattern of results was largely similar to that found in experiment 1 ([figure 8](#F8){ref-type="fig"}). There was a main effect of object class, *F*(1, 15) = 9.53, *p* = 0.008, partial η^2^ = 0.389, with smaller saccades made to patterns with embedded faces than those with embedded cars. There was a main effect of decision state, *F*(1, 15) = 7.55, *p* = 0.015, partial η^2^ = 0.335, with smaller saccades after the decision than before. There was no main effect of orientation on saccade amplitude, but there was an interaction between the orientation of the embedded object and the decision state: *F*(3, 45) = 3.29, *p* = 0.029, partial η^2^ = 0.180. No other interactions were significant.
[Figure 8. Saccade amplitudes before and after the response in experiment 2.]{#F8}
The interaction between object orientation and decision state was broken down using post hoc Bonferroni-corrected t-tests. Saccades were significantly shorter after the decision for upright (*p* = 0.001) and inverted (*p* = 0.026) objects, but not for objects embedded at 90° or 270°. In contrast to the data for fixation durations, there were no clear differences between the saccade amplitudes recorded in experiments 1 and 2. We assume that this suggests that saccade amplitudes are insensitive to the difficulty of the task and are more strongly governed by the nature of the stimuli, whereas fixation durations are sensitive to task difficulty.
4.2. Discussion {#s4-2}
---------------
Experiment 2 showed that, when judging the orientation of an object embedded in a geometric pattern, the class and orientation of the embedded object influenced both performance and inspection behaviour. In general, the effects were in line with those found in experiment 1, when participants judged the type of object embedded rather than its orientation. Accuracy data showed that people were better able to judge the orientation of faces than cars and that the best performance was for upright faces. However, in contrast to experiment 1, response time data for experiment 2 showed no inversion effect: faces were responded to quicker than cars, but the orientation of the face or car had no effect on response time.
In terms of inspection behaviour, fixation durations were shorter for embedded faces than cars, but were unaffected by the orientation of the object or whether the observer had made their decision about the orientation of the embedded object. This pattern of results for fixation duration is the same as was found in experiment 1. For saccade amplitudes, as in experiment 1, evidence was found for an effect of inversion, but this was not specific to faces. Saccades were generally shorter for patterns with embedded faces, but the inversion effect was manifest only in a two-way interaction between orientation and decision state. This interaction arose because saccades became smaller after the observer\'s response for upright and inverted objects, but not for objects embedded horizontally. The lack of three-way interaction in this experiment suggests that this effect of inversion on saccade amplitudes was not sensitive to whether the embedded object was a face or a car.
The data from experiment 2, therefore, suggest general differences in the ability and time for observers to make judgments about the patterns and the inspection behaviour when viewing patterns with embedded faces or cars, even when the class of object is not relevant to the observer\'s task. When comparing it with our previous study (Tatler et al [@R19]), we find a similar pattern of results for saccade amplitude: for both upright and inverted objects, saccade amplitudes decrease following the observer\'s judgment of orientation. However, our present results extend these previous findings in two ways. First, we can show that this effect is not specific to faces, but generalises to upright and inverted cars. Second, we can show that this decrease in saccade amplitude following a judgment of orientation is specific to upright and inverted objects and is not found when the object is embedded at 90° or 270° from upright.
5. Discussion {#s5}
=============
It is clear from both experiments that there are differences in the facility with which partially hidden pictures of faces are recognised and scanned relative to partially hidden cars. Restricting initial consideration to faces/cars in the normal upright orientation, faces are recognised more accurately and quickly and fixation durations on the patterns containing them are longer both before and after recognition. Saccade sizes, on the other hand, are similar. These results obtained whether the task required participants to report on the class of object embedded (face/car) or the orientation in which they were presented. The results contrast to those of Windhager et al ([@R28]), who found few differences in eye movement patterns when viewing pictures of clearly visible frontal faces and cars.
Upright faces were recognised more quickly than those in other orientations in both experiments and they were generally fixated for longer. These distinctions did not apply to cars: they were recognised as reliably and rapidly in all orientations and they did not display different patterns of oculomotor behaviour. Thus, the inversion effect is strong for pictures of faces but not for cars, a result consistent with a wealth of previous literature on the consequences of inverting faces and other objects (eg Yin [@R30]). These findings also demonstrate that the same pattern of inversion effects that has been found for instantly recognisable face and nonface stimuli can be found for face and nonface stimuli that are initially hidden from the viewer\'s awareness. The existence of face-specific behavioural responses to the pictures used in the present study shows that it is a suitable set of stimuli for studying face-specific processes, while at the same time offering the advantage of being able to study face viewing prior to the emergence of a conscious percept of the image of the face.
Despite the attention that has been devoted to inverted faces (see Rossion [@R14]), less concern has been given to other orientations and so the effects for 90° and 270° rotations in our experiments are of interest. Differences between 90°, 180°, and 270° rotations of the embedded images did not produce such consistent effects. This suggests that symmetry around the vertical axis is not the factor defining the superiority of upright faces, and nor is the orientation of the component features. Our results contrast to those found by Schwaninger and Mast ([@R16]) who found inferior performance for horizontally oriented faces than for either upright or inverted faces. In our data there is little to distinguish horizontally rotated and inverted faces in terms of detection performance, reaction time, or inspection behaviour. However, we do find some evidence of an inspection effect specific to upright and inverted faces. In experiment 2 saccade amplitudes decreased after the face was recognised, but only for faces embedded in the upright or inverted orientations. There was no such decrease in saccade amplitude after recognition for horizontally rotated faces. This result may imply some differentiation between horizontal and vertical rotations of the embedded faces, but it is hard to draw any conclusions about this from the present data.
By comparing the two experiments of the present study we are able to consider the influence of the viewers\' task not only on inspection behaviour, but also on face-specific processes such as the inversion effect. We found clear evidence for a face-specific inversion effect on response time in both experiments. As such, even when the task required no judgment about the type of embedded figure, there was a clear advantage present for making orientation judgments about upright faces. However, despite these response time similarities between the two experiments there were differences in the superiority of upright faces in the two experiments. When the task was to detect whether faces or cars were embedded, orientation did not differentiate between the proportion of correct responses; when the task involved discriminating orientation, then upright faces were detected more accurately. In terms of inspection behaviour, very similar patterns were found between the two experiments. In both tasks fixation durations were longer upon patterns containing faces than upon those containing cars, and it is notable that this was the case even when the task did not require recognition of the class of embedded object (experiment 2). For saccade sizes there was broad similarity between the two experiments, with smaller saccades when viewing patterns with embedded faces than when viewing patterns with embedded cars. The point of departure between the two experiments emerged in the context of interactions between object class and orientation. When the task was to discriminate between faces and cars irrespective of orientation, then both object class and orientation were involved in an interaction (specifically, a three-way interaction with decision state). Thus, in this experiment the orientation and class of the embedded figure both influenced the amplitudes of saccades during inspection. However, when the task was to recognise the orientation of the object irrespective of its class, there was no interaction that involved both object class and orientation. These findings demonstrate that in general inspection behaviour differed between faces and cars and the differences were the same irrespective of the task of the observer. However, more subtle aspects of inspection behaviour were sensitive to the task demands.
Our stimuli offer a rare opportunity to consider whether face-specific effects such as that produced by inversion are contingent on the conscious awareness of a face or can be observed in the absence of recognising that the stimulus is a face. Any interactions found in our analyses that involved the decision state of the observer would indicate that inspection depended upon what the viewer was aware of in the stimulus under inspection. Such interactions were found in both experiments for saccade amplitudes. In experiments 1 and 2 saccades were smaller following the observer\'s decision than prior to the decision. This suggests that the conscious experience of the observer can play a role in the inspection behaviour. In experiment 2 there was an interaction between the orientation of the embedded face and the decision state of the observer. This arose because of a specific decrease in saccade amplitude following a decision when viewing an upright or inverted object. No such decrease in saccade amplitude was found following a decision about a horizontally embedded object. One possible interpretation of the three-way interaction between object class, orientation, and decision state found for saccade sizes in experiment 1 might be that the face-specific viewing effects that we have described above are dependent upon the awareness of the observer. However, there is insufficient power in our data to dissect this interaction fully. In both experiments the finding that the observer\'s decision state influenced saccade sizes is suggestive of an importance of awareness upon inspection and thus might suggest that face-specific viewing behaviour is contingent upon recognition of the embedded image. In contrast to the data for saccade sizes, in both experiments fixation duration was found to depend only upon the object class and there were no main effects or interactions involving the observer\'s decision state. As a result this implies that, while there were differences between how faces and cars were viewed, these differences were present both before and after recognition. These results therefore raise the possibility that face-specific viewing effects upon fixation duration exist even prior to the recognition of the face. It is also notable that in the first experiment there is a numerical trend toward an influence of orientation on fixation duration. Both before and after recognition, there is a trend toward longer fixations for patterns with upright faces embedded than for faces embedded at any other orientation. No such numerical trend is evident for embedded cars. As such, there is an indication that the orientation of the embedded face can exert some influence on viewing prior to recognition of the face.
One of the factors that has been implicated in configural as opposed to featural processing of faces is the spatial frequency spectrum of the pictures (Watier et al [@R25]). The partially hidden faces used in our experiments are of interest in this context because they are defined in terms of the low spatial frequencies in the stimuli. Performance, in terms of both recognition and oculomotor behaviour, distinguishes between pictures of faces and cars. However, unlike most other studies of pictorial face perception, the task of observers was not to identify particular faces nor to discriminate between them but to determine whether a particular low spatial frequency target belonged to the category of faces. Under these circumstances, low spatial frequency content is adequate to distinguish pictures of faces from cars and to influence the durations for which each category is fixated.
6. Conclusion {#s6}
=============
Experiments on picture recognition of faces typically involve identifying particular instances of the category rather than the category itself. In our experiments uncertainty existed regarding the category of objects because faces or cars were partially hidden in carrier patterns of concentric circles. Under these conditions the inversion effect was strong for pictures of hidden faces but not for cars; this is consistent with previous literature on the consequences of inverting faces and other objects. Upright pictures of faces were processed more rapidly and accurately than those in other orientations, and there were few differences between horizontal and inverted faces; variations in oculomotor behaviour were less clear cut. The effects were obtained in both experiments even though the tasks were different---responding to categories (face/car) or to orientations. We suggest that the use of partially hidden figures can pose questions for both recognition and inspection behaviour that would be otherwise difficult to address.
We are grateful to Niamh Malins and Karen Johnston for assistance with collecting the data for the experiments.
 **Nick Wade** received his degree in psychology from the University of Edinburgh and his PhD from Monash University, Australia. This was followed by a postdoctoral fellowship from the Alexander von Humboldt Stiftung, at Max-Planck-Institute for Behavioural Physiology, Germany. His subsequent academic career has been at Dundee University, where he is now Emeritus Professor. His research interests are in the history of vision research, binocular and motion perception, and the interplay between visual science and art. For more information visit [www.dundee.ac.uk/psychology/people/academics/njwade/](http://www.dundee.ac.uk/psychology/people/academics/njwade/).
 **Ben Tatler** read Natural Sciences at the University of Cambridge, where he worked with Professor Simon Laughlin FRS on fly vision, before moving to the University of Sussex for his PhD, where he worked with Professor Mike Land FRS. Under Land\'s supervision, he developed an interest in human eye movement behaviour during natural tasks. After spending three years as a postdoc in Sussex, Ben moved to the University of Dundee in 2004, where he is now a senior lecturer. His main research interest is in how we use our eyes to support our everyday activities, including issues of eye guidance, scene representation, and object memory. For more information visit [www.activevisionlab.org](http://www.activevisionlab.org).
Two sites of inflammation
Typically, inflammation is associated with five symptoms: the affected parts of the body heat up, become reddened, swell up and hurt. Loss of function is the fifth symptom.
In an acute inflammation, the body responds to infections, tissue injuries or harmful substances such as chemicals. When the immune defence detects, for example, the presence of microorganisms, proteins and white blood cells leave the blood vessels and migrate towards the afflicted tissue site. Once in place, the neutrophils, a sub-group of white blood cells, kill the microorganisms. For this purpose, they release substances, which can, however, not distinguish between foreign invaders and the body's own cells and therefore damage the surrounding tissue as well. In the next step, scavenger cells of the immune system dispose of the killed bacteria and the damaged tissue is rebuilt. This shows that inflammation is a desirable process that removes the causative stimulus and re-establishes homeostasis–the state of equilibrium in the tissue.
As useful as this reaction is, it comes at a price, because inflammation can become chronic and cause massive tissue damage. This happens, for example, when the immune system fails to eliminate the invader. In addition, there are numerous chronic inflammations that are less well understood and that are related to chronic diseases in humans.
One trigger might be an autoimmune disease, such as rheumatoid arthritis or multiple sclerosis. Here, the immune system erroneously identifies the body's own structures as foreign and attacks them. Since the stimulus is permanently present, the immune cells cannot remove it and so end up damaging the body itself. Cancer, Alzheimer's disease and various cardiovascular and lung diseases are also related to inflammation.
Scientists at the HZI are interested in this topic for several reasons. They examine, for example, how the signalling cascades leading to chronic inflammation can be interrupted therapeutically, and they investigate how errors in the regulation of the complex immune system can lead to inflammation. The influence of infections and the associated inflammatory reactions on the development of neurodegenerative diseases is also a focus of a research group at the HZI. This group mainly studies how inflammatory processes contribute to the development of Alzheimer's disease, the most common type of dementia.
(Dr Birgit Manno)
Audio Podcast
- Chronic inflammation tackled at its signalling root: a new strategy against rheumatoid arthritis. The joints are hot, swollen and inflamed, and every movement is agony. More than 100 million people in Europe live with rheumatoid arthritis, and the drugs currently on the market help only some of these patients. Gerhard Gross and Virginia Seiffart have developed a strategy that cuts inflammation off at its roots, and does so more effectively than any previous medication. Come along... | https://www.helmholtz-hzi.de/en/info-centre/topics/our-immune-system/inflammation/ |
Minute and second of arc
A minute of arc, arcminute (arcmin), arc minute, or minute arc is a unit of angular measurement equal to 1/60 of one degree. Since one degree is 1/360 of a turn (or complete rotation), one minute of arc is 1/21600 of a turn. The nautical mile was originally defined as a minute of latitude on a hypothetical spherical Earth, so the actual Earth circumference is very near 21,600 nautical miles. A minute of arc is π/10800 of a radian.
|Arcminute|
An illustration of the size of an arcminute (not to scale). A standard association football (soccer) ball (with a diameter of 22 cm or 8.7 in) subtends an angle of 1 arcminute at a distance of approximately 775 m (848 yd).
|General information|
|Unit system||Non-SI units mentioned in the SI|
|Unit of||Angle|
|Symbol||′ or arcmin|
|In units||Dimensionless, with an arc length of approximately 0.2908/1000 of the radius, i.e. 0.2908 mm/m|
|Conversions|
|1 ′ in ...||... is equal to ...|
|degrees||1/60° ≈ 0.0167°|
|arcseconds||60″|
|radians||π/10800 ≈ 0.000290888 rad|
|milliradians||≈ 0.2908 mil|
|gons||1/54g ≈ 0.0185g|
|turns||1/21600|
A second of arc, arcsecond (arcsec), or arc second is 1/60 of an arcminute, 1/3600 of a degree, 1/1296000 of a turn, and π/648000 (about 1/206265) of a radian.
These units originated in Babylonian astronomy as sexagesimal subdivisions of the degree; they are used in fields that involve very small angles, such as astronomy, optometry, ophthalmology, optics, navigation, land surveying, and marksmanship.
To express even smaller angles, standard SI prefixes can be employed; the milliarcsecond (mas) and microarcsecond (μas), for instance, are commonly used in astronomy.
The number of square arcminutes in a complete sphere is approximately 148510660 square arcminutes: the solid angle of the full sphere (4π steradians) divided by the solid angle subtended by one square arcminute, so the result is a dimensionless number.
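These relationships are straightforward to check numerically. The short Python sketch below is illustrative only; the constant names are our own rather than from any standard library, and it simply encodes the conversions and reproduces the square-arcminute count quoted above.

```python
import math

# Basic relationships: 1 turn = 360 degrees, 1 degree = 60 arcminutes, 1 arcminute = 60 arcseconds.
ARCMIN_PER_TURN = 360 * 60              # 21,600 arcminutes in one turn
RAD_PER_ARCMIN = math.pi / 10800        # one arcminute in radians
RAD_PER_ARCSEC = math.pi / 648000       # one arcsecond in radians

print(ARCMIN_PER_TURN)                  # 21600
print(RAD_PER_ARCMIN)                   # ~0.000290888
print(RAD_PER_ARCSEC)                   # ~4.8481368e-06

# Square arcminutes covering the whole sphere: the solid angle of the sphere
# (4*pi steradians) divided by the solid angle of one square arcminute.
square_arcmin_on_sphere = 4 * math.pi / RAD_PER_ARCMIN ** 2
print(round(square_arcmin_on_sphere))   # 148510660
```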
Minutes and seconds of arc are units of angle, not the identically named units of time; in both cases the names reflect the ancient Babylonian number system, based on the number 60.
Symbols and abbreviations
The standard symbol for marking the arcminute is the prime (′) (U+2032), though a single quote (') (U+0027) is commonly used where only ASCII characters are permitted. One arcminute is thus written 1′. It is also abbreviated as arcmin or amin or, less commonly, the prime with a circumflex over it.
The standard symbol for the arcsecond is the double prime (″) (U+2033), though a double quote (") (U+0022) is commonly used where only ASCII characters are permitted. One arcsecond is thus written 1″. It is also abbreviated as arcsec or asec.
|Unit||Value||Symbol||Abbreviations||In radians, approx.|
|Degree||1/360 turn||° (degree)||deg||17.4532925 mrad|
|Arcminute||1/60 degree||′ (prime)||arcmin, amin, am, MOA||290.8882087 μrad|
|Arcsecond||1/60 arcminute = 1/3600 degree||″ (double prime)||arcsec, asec, as||4.8481368 μrad|
|Milliarcsecond||0.001 arcsecond = 1/3600000 degree||mas||4.8481368 nrad|
|Microarcsecond||0.001 mas = 0.000001 arcsecond||μas||4.8481368 prad|
In celestial navigation, seconds of arc are rarely used in calculations, the preference usually being for degrees, minutes and decimals of a minute, for example, written as 42° 25.32′ or 42° 25.322′. This notation has been carried over into marine GPS receivers, which normally display latitude and longitude in the latter format by default.
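As an illustration, the degrees-and-decimal-minutes form can be produced with a few lines of code. This is a minimal sketch with our own function name, not the output routine of any particular GPS receiver.

```python
def to_degrees_decimal_minutes(angle_deg: float) -> str:
    """Format an angle in decimal degrees as degrees and decimal minutes, e.g. 42.42203 -> 42° 25.322′."""
    sign = "-" if angle_deg < 0 else ""
    magnitude = abs(angle_deg)
    whole_degrees = int(magnitude)
    minutes = (magnitude - whole_degrees) * 60
    return f"{sign}{whole_degrees}° {minutes:.3f}′"

print(to_degrees_decimal_minutes(42.42203))   # 42° 25.322′
```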
Common examples
The full moon's average apparent size is about 31 arcminutes (or 0.52°).
An arcminute is approximately the resolution of the human eye.
An arcsecond is approximately the angle subtended by a U.S. dime coin (18 mm) at a distance of 4 kilometres (about 2.5 mi). An arcsecond is also the angle subtended by
- an object of diameter 725.27 km at a distance of one astronomical unit,
- an object of diameter 45866916 km at one light-year,
- an object of diameter one astronomical unit (149597870.7 km) at a distance of one parsec, by definition.
A milliarcsecond is about the size of a dime atop the Eiffel Tower as seen from New York City.
A microarcsecond is about the size of a period at the end of a sentence in the Apollo mission manuals left on the Moon as seen from Earth.
A nanoarcsecond is about the size of a penny on Neptune's moon Triton as observed from Earth.
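All of these comparisons follow from the small-angle relation (size ≈ angle in radians × distance). The sketch below reworks a few of them; the sizes and distances are the approximate values quoted above, so the results are rough checks rather than precise figures.

```python
import math

RAD_PER_ARCSEC = math.pi / 648000       # one arcsecond in radians
AU_KM = 149_597_870.7                   # astronomical unit in kilometres

def subtended_arcsec(size, distance):
    """Angle in arcseconds subtended by `size` at `distance` (any consistent length units)."""
    return (size / distance) / RAD_PER_ARCSEC

print(round(subtended_arcsec(0.018, 4000), 2))     # US dime (18 mm) at 4 km -> ~0.93 arcsec
print(round(subtended_arcsec(725.27, AU_KM), 4))   # 725.27 km at 1 AU       -> ~1.0 arcsec
print(round(subtended_arcsec(AU_KM, AU_KM / RAD_PER_ARCSEC), 6))   # 1 AU at 1 parsec -> 1.0 arcsec
```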
Other notable examples of angular size in arcseconds are:
- The Hubble Space Telescope has a calculated resolution of 0.05 arcseconds and an actual resolution of almost 0.1 arcseconds, which is close to the diffraction limit.
- crescent Venus measures between 60.2 and 66 seconds of arc.
Uses
Astronomy
Since antiquity the arcminute and arcsecond have been used in astronomy. In the ecliptic coordinate system, latitude (β) and longitude (λ); in the horizon system, altitude (Alt) and azimuth (Az); and in the equatorial coordinate system, declination (δ), are all measured in degrees, arcminutes and arcseconds. The principal exception is right ascension (RA) in equatorial coordinates, which is measured in time units of hours, minutes, and seconds.
The arcsecond is also often used to describe small astronomical angles such as the angular diameters of planets (e.g. the angular diameter of Venus which varies between 10″ and 60″), the proper motion of stars, the separation of components of binary star systems, and parallax, the small change of position of a star in the course of a year or of a solar system body as the Earth rotates. These small angles may also be written in milliarcseconds (mas), or thousandths of an arcsecond. The unit of distance, the parsec, named from the parallax of one arc second, was developed for such parallax measurements. It is the distance at which the mean radius of the Earth's orbit would subtend an angle of one arcsecond.
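The definition gives the familiar reciprocal rule: distance in parsecs equals 1 divided by the parallax in arcseconds. The sketch below is a simplification that ignores measurement uncertainty, so it is not how survey catalogues actually derive distances.

```python
def distance_parsecs(parallax_arcsec: float) -> float:
    """Distance in parsecs from an annual parallax measured in arcseconds (d = 1/p)."""
    return 1.0 / parallax_arcsec

print(round(distance_parsecs(0.768), 2))   # Proxima Centauri, parallax ~0.768 arcsec -> ~1.3 pc
print(distance_parsecs(0.001))             # a 1 mas parallax corresponds to 1000 pc
```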
The ESA astrometric satellite Gaia, launched in 2013, can measure star positions to a precision of about 7 microarcseconds (µas).
Apart from the Sun, the star with the largest angular diameter from Earth is R Doradus, a red giant with a diameter of 0.05 arcsecond.[note 1] Because of the effects of atmospheric seeing, ground-based telescopes will smear the image of a star to an angular diameter of about 0.5 arcsecond; in poor seeing conditions this increases to 1.5 arcseconds or even more. The dwarf planet Pluto has proven difficult to resolve because its angular diameter is about 0.1 arcsecond.
Space telescopes are not affected by the Earth's atmosphere but are diffraction limited. For example, the Hubble Space Telescope can resolve angular sizes down to about 0.1″. Techniques exist for improving seeing on the ground. Adaptive optics, for example, can produce images with a resolution of around 0.05 arcseconds on a 10 m class telescope.
Cartography
Minutes (′) and seconds (″) of arc are also used in cartography and navigation. At sea level one minute of arc along the equator equals exactly one geographical mile; along a meridian, or any other great circle, it is approximately one nautical mile (1,852 metres; 1.151 miles). A second of arc, one sixtieth of this amount, is roughly 30 metres (98 feet). The exact distance varies along meridian arcs because the figure of the Earth is slightly oblate (it bulges a third of a percent at the equator).
Positions are traditionally given using degrees, minutes, and seconds of arcs for latitude, the arc north or south of the equator, and for longitude, the arc east or west of the Prime Meridian. Any position on or above the Earth's reference ellipsoid can be precisely given with this method. However, when it is inconvenient to use base-60 for minutes and seconds, positions are frequently expressed as decimal fractional degrees to an equal amount of precision. Degrees given to three decimal places (1/1000 of a degree) have about 1/4 the precision of degrees-minutes-seconds (1/3600 of a degree) and specify locations within about 120 metres (390 feet).
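The ground distances quoted in this section can be approximated with a spherical-Earth model. The sketch below assumes a mean radius of 6 371 km, so the figures are indicative rather than geodetically exact.

```python
import math

MEAN_EARTH_RADIUS_M = 6_371_000   # spherical approximation; the true figure varies with latitude

def ground_distance_m(angle_degrees: float) -> float:
    """Arc length at the Earth's surface subtended by an angle at the Earth's centre."""
    return math.radians(angle_degrees) * MEAN_EARTH_RADIUS_M

print(round(ground_distance_m(1 / 60)))     # one arcminute -> ~1853 m, roughly one nautical mile
print(round(ground_distance_m(1 / 3600)))   # one arcsecond -> ~31 m
print(round(ground_distance_m(0.001)))      # 0.001 degree  -> ~111 m
```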
Property cadastral surveying
Related to cartography, property boundary surveying using the metes and bounds system relies on fractions of a degree to describe property lines' angles in reference to cardinal directions. A boundary "mete" is described with a beginning reference point, the cardinal direction North or South followed by an angle less than 90 degrees and a second cardinal direction, and a linear distance. The boundary runs the specified linear distance from the beginning point, the direction of the distance being determined by rotating the first cardinal direction the specified angle toward the second cardinal direction. For example, North 65° 39′ 18″ West 85.69 feet would describe a line running from the starting point 85.69 feet in a direction 65° 39′ 18″ (or 65.655°) away from north toward the west.
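For illustration, such a call can be converted into a plane displacement with basic trigonometry. This is a schematic sketch with our own function name and flat-plane assumptions, not a land-surveying tool.

```python
import math

def metes_and_bounds_offsets(ns, degrees, minutes, seconds, ew, distance_ft):
    """Return (northing, easting) in feet for a call such as N 65°39′18″ W, 85.69 ft."""
    angle_rad = math.radians(degrees + minutes / 60 + seconds / 3600)
    northing = math.cos(angle_rad) * distance_ft
    easting = math.sin(angle_rad) * distance_ft
    if ns.upper() == "S":
        northing = -northing
    if ew.upper() == "W":
        easting = -easting
    return northing, easting

north, east = metes_and_bounds_offsets("N", 65, 39, 18, "W", 85.69)
print(round(north, 2), round(east, 2))   # ~35.32 ft north, ~-78.07 ft (i.e. 78.07 ft west)
```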
Firearms
The arcminute is commonly found in the firearms industry and literature, particularly concerning the accuracy of rifles, though the industry refers to it as minute of angle (MOA). It is especially popular with shooters familiar with the imperial measurement system because 1 MOA subtends a circle with a diameter of 1.047 inches at 100 yards (2.908 cm at 100 m), a traditional distance on U.S. target ranges. The subtension is linear with the distance; for example, at 500 yards 1 MOA subtends a circle with a diameter of 5.235 inches, and at 1000 yards it subtends a circle with a diameter of 10.47 inches. Since many modern telescopic sights are adjustable in half (1/2), quarter (1/4), or eighth (1/8) MOA increments, also known as clicks, zeroing and adjustments are made by counting 2, 4 and 8 clicks per MOA respectively.
For example, if the point of impact is 3 inches high and 1.5 inches left of the point of aim at 100 yards (which for instance could be measured by using a spotting scope with a calibrated reticle), the scope needs to be adjusted 3 MOA down, and 1.5 MOA right. Such adjustments are trivial when the scope's adjustment dials have a MOA scale printed on them, and even figuring the right number of clicks is relatively easy on scopes that click in fractions of MOA. This makes zeroing and adjustments much easier:
- To adjust a 1⁄2 MOA scope 3 MOA down and 1.5 MOA right, the scope needs to be adjusted 3 × 2 = 6 clicks down and 1.5 × 2 = 3 clicks right
- To adjust a 1⁄4 MOA scope 3 MOA down and 1.5 MOA right, the scope needs to be adjusted 3 × 4 = 12 clicks down and 1.5 × 4 = 6 clicks right
- To adjust a 1⁄8 MOA scope 3 MOA down and 1.5 MOA right, the scope needs to be adjusted 3 × 8 = 24 clicks down and 1.5 × 8 = 12 clicks right
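The same click arithmetic can be wrapped in a small helper. This is a minimal sketch, assuming an idealised turret that tracks perfectly; the function name is ours.

```python
def clicks_for_correction(correction_moa: float, clicks_per_moa: int) -> int:
    """Number of turret clicks needed to dial a correction, given the scope's clicks per MOA."""
    return round(correction_moa * clicks_per_moa)

# 3 MOA down and 1.5 MOA right on 1/2, 1/4 and 1/8 MOA turrets:
for clicks_per_moa in (2, 4, 8):
    down = clicks_for_correction(3.0, clicks_per_moa)
    right = clicks_for_correction(1.5, clicks_per_moa)
    print(f"1/{clicks_per_moa} MOA turret: {down} clicks down, {right} clicks right")
# 1/2 MOA turret: 6 clicks down, 3 clicks right
# 1/4 MOA turret: 12 clicks down, 6 clicks right
# 1/8 MOA turret: 24 clicks down, 12 clicks right
```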
Another common system of measurement in firearm scopes is the milliradian, or mil. Zeroing a mil-based scope is easy for users familiar with base-ten systems. The most common adjustment increment in mil-based scopes is 1/10 mil (which approximates 1⁄3 MOA).
- To adjust a 1/10 mil scope 0.9 mil down and 0.4 mil right, the scope needs to be adjusted 9 clicks down and 4 clicks right (which equals approximately 3 and 1.5 MOA respectively).
One thing to be aware of is that some MOA scopes, including some higher-end models, are calibrated such that an adjustment of 1 MOA on the scope knobs corresponds to exactly 1 inch of impact adjustment on a target at 100 yards, rather than the mathematically correct 1.047". This is commonly known as the Shooter's MOA (SMOA) or Inches Per Hundred Yards (IPHY). While the difference between one true MOA and one SMOA is less than half of an inch even at 1000 yards, this error compounds significantly on longer range shots that may require adjustment upwards of 20–30 MOA to compensate for the bullet drop. If a shot requires an adjustment of 20 MOA or more, the difference between true MOA and SMOA will add up to 1 inch or more. In competitive target shooting, this might mean the difference between a hit and a miss.
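The size of that compounding error is easy to quantify. The sketch below uses the 1.047 in and 1 in per 100 yards figures quoted above; it is our own arithmetic, not a ballistic calculation.

```python
TRUE_MOA_IN_PER_100YD = 1.047   # true minute of angle at 100 yards
SMOA_IN_PER_100YD = 1.000       # "shooter's MOA" / inches per hundred yards

def poi_difference_in(adjustment_moa: float, range_yd: float) -> float:
    """Difference in point of impact (inches) between true-MOA and SMOA calibrated adjustments."""
    hundreds_of_yards = range_yd / 100
    return adjustment_moa * hundreds_of_yards * (TRUE_MOA_IN_PER_100YD - SMOA_IN_PER_100YD)

print(round(poi_difference_in(1, 1000), 2))    # 1 MOA at 1000 yd  -> ~0.47 in (under half an inch)
print(round(poi_difference_in(20, 100), 2))    # 20 MOA at 100 yd  -> ~0.94 in (about an inch)
print(round(poi_difference_in(20, 1000), 1))   # 20 MOA at 1000 yd -> ~9.4 in
```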
The physical group size equivalent to m minutes of arc can be calculated as follows: group size = tan(m/60°) × distance, where the angle m/60 is in degrees. In the example previously given, for 1 minute of arc and substituting 3,600 inches for 100 yards, 3,600 × tan(1/60°) ≈ 1.047 inches. In metric units, 1 MOA at 100 metres ≈ 2.908 centimetres.
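The formula translates directly into code. This is a sketch; note that the angle m/60 is in degrees, so it must be converted to radians before calling the tangent function.

```python
import math

def group_size(minutes_of_arc: float, distance: float) -> float:
    """Linear size subtended by `minutes_of_arc` at `distance`; result is in the units of `distance`."""
    return math.tan(math.radians(minutes_of_arc / 60)) * distance

print(round(group_size(1, 3600), 3))    # 1 MOA at 100 yards (3,600 in)   -> ~1.047 in
print(round(group_size(1, 10000), 3))   # 1 MOA at 100 metres (10,000 cm) -> ~2.909 cm
```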
Sometimes, a precision firearm's accuracy will be measured in MOA. This simply means that under ideal conditions (i.e. no wind, match-grade ammunition, a clean barrel, and a vise or benchrest used to eliminate shooter error), the gun is capable of producing groups of shots whose center points (measured center-to-center) fit into a circle whose average diameter, taken over several groups, is subtended by that amount of arc. For example, a 1 MOA rifle should be capable, under ideal conditions, of shooting 1-inch groups on average at 100 yards. Most higher-end rifles are warrantied by their manufacturer to shoot under a given MOA threshold (typically 1 MOA or better) with specific ammunition and no error on the shooter's part. For example, Remington's M24 Sniper Weapon System is required to shoot 0.8 MOA or better, or be rejected.
Rifle manufacturers and gun magazines often refer to this capability as sub-MOA, meaning the rifle shoots under 1 MOA. This means that a single group of 3 to 5 shots at 100 yards, or the average of several groups, will measure less than 1 MOA between the two furthest shots in the group, i.e. all shots fall within 1 MOA. If larger samples are taken (i.e., more shots per group), group size typically increases; however, this ultimately averages out. If a rifle were truly a 1 MOA rifle, it would be just as likely that two consecutive shots land exactly on top of each other as that they land 1 MOA apart. For 5-shot groups, based on 95% confidence, a rifle that normally shoots 1 MOA can be expected to shoot groups between 0.58 MOA and 1.47 MOA, although the majority of these groups will be under 1 MOA. What this means in practice is that if a rifle that shoots 1-inch groups on average at 100 yards shoots a group measuring 0.7 inches followed by a group that is 1.3 inches, this is not statistically abnormal.
The metric counterpart of the MOA is the milliradian, or mil, equal to one thousandth of the target range, laid out on a circle that has the observer at its centre and the target range as its radius. The number of mils on such a full circle is therefore always 2 × π × 1000, regardless of the target range; hence 1 MOA ≈ 0.2908 mil. This means that an object which spans 1 mil on the reticle is at a range in metres equal to the object's size in millimetres (e.g. an object of 100 mm @ 1 mrad is 100 metres away), so no conversion factor is required, in contrast to the MOA system. A reticle with markings (hashes or dots) spaced one mil apart, or a fraction of a mil, is called a mil reticle. If the markings are round they are called mil-dots.
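That millimetres-at-metres property makes mil-based ranging a one-line calculation. The sketch below assumes the target's true size is known exactly, which in practice is the main source of error.

```python
def range_metres(target_size_mm: float, reticle_reading_mil: float) -> float:
    """Distance in metres to a target of known size (mm) spanning `reticle_reading_mil` milliradians."""
    return target_size_mm / reticle_reading_mil

print(range_metres(100, 1.0))    # a 100 mm object spanning 1 mil    -> 100.0 m
print(range_metres(500, 0.25))   # a 500 mm object spanning 0.25 mil -> 2000.0 m
```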
In the table below conversions from mil to metric values are exact (e.g. 0.1 mil equals exactly 1 cm at 100 metres), while conversions of minutes of arc to both metric and imperial values are approximate.
|Angle adjustment per click||Minutes of arc||Milliradians||At 100 m (mm)||At 100 m (cm)||At 100 m (in)||At 100 yd (in)|
|1⁄12′||0.083′||0.024 mrad||2.42 mm||0.242 cm||0.0958 in||0.087 in|
|0.25⁄10 mrad||0.086′||0.025 mrad||2.5 mm||0.25 cm||0.0985 in||0.09 in|
|1⁄8′||0.125′||0.036 mrad||3.64 mm||0.36 cm||0.144 in||0.131 in|
|1⁄6′||0.167′||0.0485 mrad||4.85 mm||0.485 cm||0.192 in||0.175 in|
|0.5⁄10 mrad||0.172′||0.05 mrad||5 mm||0.5 cm||0.197 in||0.18 in|
|1⁄4′||0.25′||0.073 mrad||7.27 mm||0.73 cm||0.29 in||0.26 in|
|1⁄10 mrad||0.344′||0.1 mrad||10 mm||1 cm||0.39 in||0.36 in|
|1⁄2′||0.5′||0.145 mrad||14.54 mm||1.45 cm||0.57 in||0.52 in|
|1.5⁄10 mrad||0.516′||0.15 mrad||15 mm||1.5 cm||0.59 in||0.54 in|
|2⁄10 mrad||0.688′||0.2 mrad||20 mm||2 cm||0.79 in||0.72 in|
|1′||1.0′||0.291 mrad||29.1 mm||2.91 cm||1.15 in||1.047 in|
|1 mrad||3.438′||1 mrad||100 mm||10 cm||3.9 in||3.6 in|
- 1′ at 100 yards equals (2π × 3,600 in)/21,600 = π/3 in ≈ 1.047 inches
- 1′ ≈ 0.291 mil (or 2.91 cm at 100 m, approximately 30 mm at 100 m)
- 1 mil ≈ 3.44′, so 1/10 mil ≈ 1/3′
- 0.1 mil equals exactly 1 cm at 100 m, or approximately 0.36 inches at 100 yards
Human vision
In humans, 20/20 vision is the ability to resolve a spatial pattern whose elements are separated by a visual angle of one minute of arc. A 20/20 letter subtends 5 minutes of arc in total.
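As a quick worked example (our own arithmetic), a 20/20 optotype viewed at the standard 20 feet subtends 5 minutes of arc, which corresponds to a letter just under 9 mm tall:

```python
import math

viewing_distance_mm = 20 * 12 * 25.4                            # 20 feet expressed in millimetres
letter_height_mm = viewing_distance_mm * math.radians(5 / 60)   # small-angle approximation
print(round(letter_height_mm, 1))                               # ~8.9 mm
```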
Materials
The deviation from parallelism between two surfaces, for instance in optical engineering, is usually measured in arcminutes or arcseconds. In addition, arcseconds are sometimes used in rocking-curve (ω-scan) X-ray diffraction measurements of high-quality epitaxial thin films.
Manufacturing
Some measurement devices make use of arcminutes and arcseconds to measure angles when the object being measured is too small for direct visual inspection. For instance, a toolmaker's optical comparator will often include an option to measure in "minutes and seconds".
Notes
- Some studies have shown a larger angular diameter for Betelgeuse. Various studies have produced figures of between 0.042 and 0.069 arcseconds for the star's diameter. The variability of Betelgeuse and difficulties in producing a precise reading for its angular diameter make any definitive figure conjectural.
| https://zims-en.kiwix.campusafrica.gos.orange.com/wikipedia_en_all_nopic/A/Minute_and_second_of_arc |
SPLM-N agrees to resume talks after suspension
Peace talks between Sudan’s government and a key rebel faction are set to resume on Friday morning, after being suspended on Wednesday.
The Sudan People’s Liberation Movement-North (SPLM-N) faction led by Gen. Abdel-Aziz al-Hilu, a rebel group in Blue Nile and South Kordofan States, suspended direct talks in protest at a government assault.
It accused the transitional government of occupying new areas in the Nuba Mountains region and violating the agreed ceasefire.
But the government delegation denied the accusations and said it was ready to investigate the incident to ensure peace negotiations continue uninterrupted.
In a press conference in Juba on Thursday evening, SPLM-N chief negotiator Ammar Amoun said that his group decided to return to the negotiating table after the transitional government started to take positive steps in an attempt to address some concerns.
The Sudanese rebel group had called for the immediate release of prisoners of war, the withdrawal of government troops from areas they have captured and a halt to hostilities.
“We have talked to the mediation team to engage the government so that our demands are met, but this cannot stop us from returning to the negotiating table. So we are now ready to negotiate with the Khartoum government,” Ammar said.
Tut Gatluak, the chief mediator in the peace talks, said the stalled negotiations would resume on Friday morning.
“The SPLM-N team has accepted to resume peace talks with the government so that the parties can go ahead. I can announce that the direct talks would resume on Friday morning,” he said.
South Sudan’s President Salva Kiir is hosting the talks in the capital, Juba, where the Sudanese government and armed opposition groups from several areas signed a roadmap for the talks last month.
Fighting between the Sudanese army and rebels in the Kordofan and Blue Nile regions broke out in 2011, and conflict in Darfur began in 2003. | http://wqw.radiotamazuj.org/en/news/article/splm-n-agrees-to-resume-talks-after-suspension |