CROSS-REFERENCE TO RELATED APPLICATIONS
BACKGROUND OF THE INVENTION
SUMMARY OF THE INVENTION
DETAILED DESCRIPTION OF THE DRAWINGS
METHOD OF USE
This application is a Continuation of U.S. Ser. No. 14/362,306 filed on Jun. 2, 2014, entitled TEAR DUCT RESISTANCE MEASURING SYSTEM (Attorney Docket No. 2926.005), published as US 2014/0358039 A1, on Dec. 4, 2014, which is a 371 National Phase filing of International Application No. PCT/GB2012/052964, filed on Nov. 30, 2012, and published in English on Jun. 6, 2013, as WO 2013/079959, which claims the priority of GB application No. 1120771.9 filed on Dec. 2, 2011, the entire disclosures of which are incorporated herein by reference.
The present invention relates to a system and to a method for measuring the flow properties of a tear duct, to ascertain the flow resistance of the tear duct.
Watering from the eyes due to narrowing or occlusion of the tear ducts, that is to say the tear drainage ducts, is a common problem. In a healthy individual the tear drainage system collects the tears from the inner corner of the eye through a small opening (punctum) in the margin of the eyelid, there being one punctum in each of the upper and lower eyelids. Each punctum leads to a canaliculus which passes horizontally through the medial end of the eyelid towards the nose, the canaliculi usually joining to form a single common canaliculus as they reach the lacrimal sac. Here the tear duct changes to an inferior direction passing downward to become the lacrimal duct and finally exiting into the lower part of the nose.
Narrowing or occlusion of the tear duct can occur at any point in its course from the eye to the nose. Typically, tear duct obstruction evolves as a progressive narrowing from an initially fully open state, in some cases proceeding to complete occlusion. The consequent reduction in tear drainage leads to troublesome watering from the eye, soreness of the eyelids, and sometimes infections.
Well-established techniques of tear duct surgery are available to improve drainage. Surgery is usually of value where complete obstruction exists, and is often indicated before the system becomes completely obstructed, since troublesome watering can still be corrected at that stage. It is known that many patients with eye watering do not have completely occluded tear systems. In these cases it can often be difficult to ensure that the tear duct is the cause of watering from the eye, to monitor the process of narrowing, to decide when to intervene, and to assess the response to treatment. In general, the greater the degree of narrowing, the higher the likelihood of a successful outcome from surgery. A test that could accurately measure the degree of narrowing in a simple and safe way would be very useful.
Several clinical tests can be used to help decide how narrow the tear duct is. For example, basic information can be derived by examining the tear film height and estimating the speed of clearance of a drop of fluorescein colouring in the tear film. Jones tests, which rely on identifying passage of fluorescein to the nose, have been advocated for assessing watering where the tear system is at least partly open, but are known to have high levels of inaccuracy. Radiological tests looking at the anatomy and physiological function of the tear system are also known, but can be expensive and time-consuming, and are subjective and prone to errors of administration or interpretation.
In practice the mainstay of clinical examination is to use a lacrimal cannula inserted into the punctum and connected to a syringe to irrigate fluid down the tear system. The syringe and cannula are hand-held, fluid is irrigated under pressure and the passage of fluid to the nose or regurgitation back from the same or, because they are connected, the opposite punctum, is identified. With experience a subjective estimate can be made of the level of resistance to fluid flow.
Tucker et al (Ophthalmology, Vol. 102, No. 11 (November 1995) p. 1639) have described a more objective measure of lacrimal resistance, where resistance=pressure/flow. By sealing an irrigating cannula tip at the punctum, irrigating with water at a known flow rate, and recording the pressure generated, figures for resistance were derived in normal subjects and in those with open tear ducts following successful lacrimal surgery. However, the research equipment used has a number of drawbacks which would prevent its application in a clinical environment and its use in patients in whom tear duct narrowing or occlusion is present, as is usually the case.
Accordingly the present invention provides a system for measuring the flow properties of a tear duct, to ascertain the flow resistance of the tear duct, the system comprising: a means to generate a flow of liquid, communicating with a cannula to supply liquid to a punctum of an eye, the cannula defining a tip and being able to seal to the punctum; a motor to actuate the flow-generating means; a pressure sensor to monitor the pressure of the liquid supplied to the punctum; a monitoring circuit to which signals from the pressure sensor are provided, arranged to provide an indication of the flow resistance from those signals; and a feedback circuit to control the motor in accordance with signals from the pressure sensor, either to maintain a preset liquid pressure, or to ensure that the liquid pressure does not exceed a preset threshold.
The system may be portable, or may be mounted on a microscope, or indeed may be usable in either way. The system may be battery-powered. Preferably the system also includes means to close the other punctum of the eye, for example a clip or a plug. The closure means ensure that the liquid introduced into the punctum must flow through the tear duct. Without such a closure means, leakage of injected liquid might otherwise occur through the other punctum, giving misleadingly low values for fluid pressure. If the tear duct is completely obstructed there can be no through flow, and the feedback circuit ensures the liquid pressure does not exceed the preset threshold. This ensures the patient is not subjected to pain or damage to the tear duct, as could otherwise occur.
Thus for safely and accurately testing the tear duct where narrowing or occlusion are suspected the system provides feedback control of the irrigation and ensures that the lacrimal system is closed apart from the nasal exit point of the tear duct. Further features ensure that the system is practical for clinical use.
In one embodiment the flow-generating means is a syringe. The motor is arranged to actuate the syringe.
In some cases it may be realistic to assume the flow rate has a predetermined value, for example as determined by the voltage applied to a motor. Hence, for a predetermined voltage, the monitored pressure is indicative of the flow resistance of the tear duct, and may be used as a parameter representing the flow resistance. However, more accurate measurements may be obtained if the flow rate is also monitored, as this will enable the flow resistance to be calculated, as explained below. Hence the system may also comprise means to monitor the flow rate of the liquid supplied to the punctum. Signals from the flow rate monitor may then be supplied to the monitoring circuit.
The system is capable of irrigating the tear duct, while monitoring both the pressure applied and the flow rate through the lacrimal system. By eliminating leakage except at the nasal end of the tear duct the lacrimal system acts as a closed conduit such that, with pressure (P) and flow (F) both known, resistance (R) can be calculated as R=P/F. The system uses a syringe driver so that the rate of flow of the liquid is controlled electronically in response to continuous pressure recordings from a pressure transducer in the fluid delivery system to the tear duct. A certain motor speed on the syringe driver will propel the plunger of the syringe at a known linear rate from which the rate of delivery of fluid from any particular size or type of syringe can be ascertained. The flow rate may therefore be monitored by monitoring the movement of the syringe plunger, or by monitoring the motor which drives the syringe plunger. Alternatively the liquid flow may be directly monitored. (As mentioned above, in some cases the liquid flow rate need not be monitored.)
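The two calculations described above — deriving the flow rate from the plunger's linear speed and the syringe barrel geometry, and forming the resistance R=P/F — can be sketched as follows. This is an illustrative sketch only; the function names and the choice of units (kPa, ml/min) are assumptions, not part of the disclosure.

```python
import math

def syringe_flow_rate(plunger_speed_mm_s, barrel_diameter_mm):
    """Volumetric flow (ml/min) delivered by a plunger advancing at a known
    linear speed: flow = speed x barrel cross-sectional area (1 ml = 1000 mm^3)."""
    area_mm2 = math.pi * (barrel_diameter_mm / 2.0) ** 2
    return plunger_speed_mm_s * area_mm2 / 1000.0 * 60.0

def flow_resistance(pressure_kpa, flow_ml_min):
    """Resistance R = P / F in kPa.s/ml; flow converted from ml/min to ml/s.
    Zero flow (total occlusion) has no finite resistance."""
    if flow_ml_min <= 0:
        raise ValueError("flow must be positive; zero flow suggests total occlusion")
    return pressure_kpa / (flow_ml_min / 60.0)
```

For example, a pressure of 1.5 kPa at a flow of 9 ml/min corresponds to a resistance of 10 kPa·s/ml.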
The syringe communicates with the cannula either directly, or through one or more components that define a flow path, for example through a flexible tube. The pressure sensor may be within the syringe, or within another part of the flow path, in order to monitor the pressure of the liquid supplied to the punctum. In one embodiment the flow path is defined in part by a short tubular element to which the cannula is attached, and this short tubular element is preferably rigid. The short tubular element may be less than 150 mm long, more preferably less than 100 mm long, but preferably at least 5 mm long, and more preferably at least 20 mm long; it therefore provides a convenient way for the operator to manipulate the cannula, for example with his fingertips. The cannula may be readily detachable from the short tubular element, so it can be replaced by a cannula of a different shape or size. Alternatively the cannula may be integral with the short tubular element. The short tubular element may include means to activate the system, such as a touch-sensitive switch. As a preferred option, the pressure sensor is within the short tubular element, which may be referred to as a transducer module.
In a second aspect, the invention therefore provides a tubular element which may be used in such a flow resistance measuring system, the tubular element being adapted to communicate at one end with a cannula to supply liquid to a punctum of an eye, the cannula defining a tip and being able to seal to the punctum, and the tubular element being adapted to communicate at the other end with a source of liquid; the tubular element being adapted to be handheld; the tubular element comprising a pressure sensor to monitor the pressure of the liquid supplied to the punctum; and comprising a switch to activate the liquid supply source.
Measurements are preferably only made when the pressure and flow rate are stable, if only for a few seconds, as measurements made in a non-steady-state condition may give inaccurate results. With two variables it would be possible to keep either one constant and measure the other. Thus if there is a constant rate of flow the pressure could be recorded, and the system is arranged to reduce or cut off the flow if the pressure becomes excessive. A preferred alternative is to specify the preset pressure at which the system will irrigate the tear duct and to vary the speed of the syringe driver, and so the flow rate, to provide this. This is closer to the natural physiological process of tear drainage, especially if the selected pressure is relatively low; and it avoids the risk of leakage, pain for the patient, or damage to the tear duct or syringe driver system that may occur if the irrigation pressure were allowed to rise to a high level.
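The preferred scheme — holding the irrigation pressure at a preset value by varying the syringe-driver speed — is, in control terms, a feedback loop. A minimal proportional-control sketch follows; the gain, flow limit and units (kPa, ml/min) are hypothetical, as the patent does not specify a control law.

```python
def pressure_feedback_step(current_pressure, preset_pressure, flow_rate,
                           gain=0.5, max_flow=10.0):
    """One iteration of a proportional controller that varies the flow rate
    to hold the irrigation pressure at a preset value."""
    error = preset_pressure - current_pressure   # positive -> pressure too low
    new_flow = flow_rate + gain * error          # raise flow to raise pressure
    return min(max(new_flow, 0.0), max_flow)     # never reverse or exceed limit
```

Each iteration nudges the flow toward whatever rate sustains the preset pressure; the clamp prevents the driver from reversing or exceeding a safe maximum flow.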
A substantial advantage of the system arises where it is portable, so it can be used hand-held, but where it can also be mounted on the slit-lamp biomicroscope used for eye examination. Irrigating the tear ducts with the patient seated at the microscope is currently very difficult in view of the limited space available to work in. A further problem is the necessarily long length of a filled syringe attached to a currently-available lacrimal irrigating cannula. Such length makes positioning the tip of the cannula in the punctum and applying pressure to the syringe very awkward and there is potential for damage to the tear duct, eyelid or eye. Typically, syringing of the tear duct therefore takes place away from the microscope, often requiring transfer to a couch.
There are clear advantages in being able to perform this test at the microscope. Not only is it simpler in not requiring the patient to be moved, but also the illumination and magnification provided by the microscope allow ready visualization of the punctum, simple placement of the tip of the cannula in to the required position and the ability to check for leaks when irrigation commences as well as accurate placement of the closure means on the opposite punctum. The system of the invention allows for testing at the microscope by incorporating a number of features. The syringe driver is portable and of a compact size. It is designed to fit within the limited available space on the microscope, and to be removably mounted on the base plate of the microscope used for other ophthalmic work. A length of flexible tubing connects the tip of the syringe to the short tubular element, to which is attached a short cannula. The combination of the short tubular element and the cannula is sufficiently compact to allow ready manipulation in the narrow confines of the microscope and simple placement of the cannula. Achieving this however requires the use of both of the operator's hands, one to hold the eyelid stable, the other to hold the cannula, to insert it and hold it within the punctum to generate a seal. To allow control of the irrigation, the system may therefore include a switch in the vicinity of the cannula for convenient use by the operator; such a switch may be provided in the short tubular element and designed to be operated by the fingers holding the short tubular element without causing movement of the tip of the cannula.
Such a switch may be arranged to initiate operation of the flow-generating means; it may also open a valve to allow flow to occur; it may additionally activate the pressure sensor.
Under some circumstances, for example when the patient is unable to sit at the microscope or when the system is being used in an operating theatre, it will be necessary to ensure it can also be used hand-held. To achieve this, the system may be used without the connecting flexible tubing. The short tubular element is attached directly to the tip of the syringe and the irrigation is again controlled by the switch component of the short tubular element. Advantageously, the syringe driver unit can be designed to be capable of being held like a pen, the optimum position to achieve the stability needed for safe positioning of the cannula, thereby avoiding the inherent difficulties of holding a syringe carefully in position whilst simultaneously pressing the plunger. To assist in identifying the punctum and checking for fluid leakage the syringe driver unit or the short tubular element can incorporate a light directed at the tip of the cannula.
Referring to FIG. 1, in healthy individuals, tear fluid (that is, "lacrimal" fluid) is normally supplied continuously to their eyes 10 (only one is shown) from lacrimal glands. The lacrimal fluid subsequently washes the cornea and conjunctival components of the eye 10. Under healthy conditions, excess lacrimal fluid that cannot be retained by the eye and conjunctiva tends to be drained from the inner-canthus 11, at the corner of the eye, to the nasal passages 17. The fluid passes through a network of passages, starting at puncta 12, 13, which are at the center of small papillae adjacent to the inner-canthus 11, at the margin of the eyelid. The puncta 12, 13 communicate via canaliculi 14, 15 with the lacrimal sac 16, and the tear fluid then drains through the nasolacrimal duct 18 into the nasal passage 17.
If there is a partial or total blockage of one or more of the drainage channels 14, 15, 16, 18, excess lacrimal fluid can no longer drain away in the usual fashion. Such a blockage may result from congenital anomalies, accidents, inflammation, and so forth, and will tend to cause the eye 10 to continuously brim over with tears, with concomitant discomfort to the individual, and with a potential risk of infection. Surgical treatment can correct this problem, but it is desirable to be able to check accurately the degree of blockage.
Referring now to FIG. 2, there is shown an ophthalmic microscope system 20. This consists of an illumination system 22 including a short focus projector to project an image of an illuminated slit onto a patient's eye. The eye is observed through a binocular microscope 23. In normal use the focal position of the microscope 23 is at the same position as the focal position of the illumination system 22. In front of the microscope 23 is a support frame 24 with a curved rest 25a for a patient's forehead, and a chin rest 25b for the patient's chin. In use the patient places his head resting against the curved rest 25a and the chin rest 25b; the height of the chin rest 25b can be adjusted so that the patient's eyes are at the level of the microscope 23. Hence the surgeon can view the patient's eye through the microscope 23 and can ensure that the eye is satisfactorily illuminated. The microscope 23 is supported on an L-shaped bracket 26, and the illumination system 22 is supported on a shorter L-shaped bracket 27, both the L-shaped brackets 26 and 27 being mounted on a support 28 and being rotatable about a vertical axis. This enables the surgeon to adjust the relative orientations of the illumination and of the microscope 23. Immediately above the lower portion of the L-shaped bracket 27 is a plate or platform 29 onto which may be mounted an irrigation system 30 of the invention.
Referring now to FIG. 3, an irrigation system 30 of the invention comprises a syringe 32 with a plunger 33. The plunger 33 can be driven by a linear actuator 34 which is powered by a battery 36. The liquid outlet of the syringe 32 is connected by a flexible tube 38 to a transducer unit 40. The transducer module 40 defines a flow channel within which is a pressure sensor 42 and a one-way valve 44 adjacent to an outlet 45. The outlet 45 is connected to a cannula 46 which, in this example, tapers to a tip 47. The transducer module 40 also includes a push-button switch 48. In a modification, the one-way valve 44 is omitted.
The irrigation system 30 also includes a microprocessor 50 connected to a display module 51 and to a loudspeaker 52. The push-button switch 48 provides on and off signals for operation of the irrigation system 30, and these are provided to the microprocessor 50 through a wire 57. The microprocessor 50 is also provided with pressure-indicating signals from the pressure sensor 42, through a wire 56, and is provided with flow-rate-indicating signals from the linear actuator 34. The microprocessor 50 provides control signals to actuate the linear actuator 34. In a modification, the microprocessor 50 may be connected to a light display instead of, or in addition to, the loudspeaker 52. (The electrical connections are shown schematically.)
If the surgeon (or other medical professional) presses the push-button switch 48 to provide an "on" signal, the microprocessor 50 initiates movement of the linear actuator 34. The microprocessor 50 monitors both the flow rate and the fluid pressure. In a first mode of operation the pressure rises to a preset value P1, the microprocessor 50 then controls the linear actuator 34 to maintain the pressure at that value P1, and the flow rate F1 is measured for that preset value of pressure. The microprocessor can consequently calculate the resistance as R1=P1/F1, and the value of this resistance R1 is displayed on the display module 51.
In an alternative mode of operation, the flow rate and pressure gradually increase until a preset flow rate F2 is obtained, the microprocessor 50 then controls the linear actuator 34 to maintain the flow rate at this value F2, and the corresponding pressure P2 is then measured. The microprocessor can consequently calculate the resistance as R2=P2/F2, and the value of resistance R2 can be displayed on the display module 51. When operating in this mode the microprocessor 50 must also monitor the pressure, to ensure that the pressure does not exceed a threshold P3 at which the patient may experience pain or damage to the tear duct.
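The constant-flow mode with its safety threshold P3 can be sketched as follows; the sample-based interface and the units (kPa, ml/min) are illustrative assumptions rather than details from the disclosure.

```python
def run_constant_flow_test(pressure_samples_kpa, preset_flow_ml_min, p3_kpa):
    """Hold the flow at F2 while watching each pressure sample; abort (return
    None) if any sample reaches the threshold P3, otherwise take the final,
    steady pressure as P2 and return R2 = P2 / F2 in kPa.s/ml."""
    for p in pressure_samples_kpa:
        if p >= p3_kpa:
            return None  # cut off: risk of pain or damage to the tear duct
    p2 = pressure_samples_kpa[-1]
    return p2 / (preset_flow_ml_min / 60.0)
```

A run that settles at 1.5 kPa while irrigating at 9 ml/min would report R2 = 10 kPa·s/ml, whereas any sample reaching P3 aborts the test with no result.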
In another alternative mode of operation, the flow rate is not measured. Instead the flow is set to a predetermined value, for example by supplying a preset voltage to the linear actuator 34. Without monitoring the linear actuator 34 and without measuring the flow rate, the pressure P can be measured. This pressure may be taken as indicative of the flow resistance. The medical professional can readily distinguish between normal values of flow resistance and abnormal values. If the pressure becomes excessive, the flow may be reduced or cut off.
Considering the components of the irrigation system 30 in more detail, the syringe 32 may be a standard syringe, for example of capacity 5 ml or 10 ml. In some cases the microscope system 20 may provide sufficient space above the plate 29 that a 10 ml syringe can be used. However, with some microscope systems 20 a filled 10 ml syringe may be too long, obstructing the view of the eye when fitted vertically on the plate 29. In a modification the syringe may extend at least partly below the plate 29, for example being inclined from the vertical. In another modification the tubing 38 is connected to the syringe 32 through a 90 degree connector, reducing the overall height. The tube 38 must be sufficiently long to connect the outlet of the syringe 32 to the transducer unit 40, with the cannula 46 able to reach the punctum 12 or 13 when the patient is positioned adjacent to the curved rest 25a and the chin rest 25b; it must be flexible but non-kinking. The tube 38 would typically be of silicone tubing. If the irrigation system 30 is to be used hand-held, the tube 38 may be omitted, the transducer unit 40 being attached directly to the end of the syringe 32.
The cannula 46 needs to be able to seal at the punctum 12 or 13; to be short (to allow easy positioning in the narrow confines around the ophthalmic microscope system 20); to have the maximum possible lumen diameter (to ensure resistance to flow is largely due to the tear duct rather than the cannula 46); to have a short length of narrow diameter (to minimize pressure drop within the cannula 46); and to have the minimum possible outside diameter (to minimize or avoid the need for dilation of the punctum 12 or 13 to allow insertion, avoiding the need for an additional step with patient discomfort and/or risk of damage to the punctum 12 or 13), implying a thin wall. Consequently the cannula 46 preferably has a broad diameter lumen tapering smoothly to the narrower tip 47 of external diameter no more than 2 mm, for example approximately 0.6 mm, with a constant wall thickness throughout, and a total length of approximately 5-10 mm. Alternative designs are possible, for example the provision of a cone or ball at the outer surface towards the tip 47 of the cannula 46, to help it seal to the punctum 12 or 13. Other designs of cannula are described below in relation to FIGS. 5a to 5c. The cannula would typically be of stainless steel.
Preferably the tip 47 is sufficiently narrow that no preliminary dilation of the punctum 12 or 13 is required.
The switch 48 may be activated either fully on or fully off: on whilst finger pressure is applied, off when the pressure is released. The switch 48 may also constitute a valve which, when in the "on" position, opens the passage to fluid flow; as described above, its major role is to provide a signal to the microprocessor 50 to initiate the flow of liquid.
The transducer module 40 includes the pressure sensor 42, which may for example use a piezo-electric transducer, which must be of appropriate sensitivity to provide continuous readouts within the anticipated range of pressures. The pressure sensor 42 may be directly exposed to the lumen within the transducer module 40 and so to the liquid flowing through it. As shown in FIG. 3, the cannula 46 is attached to the distal end of the transducer module 40, while the flexible tube 38 is attached to the proximal end (when being used in conjunction with the ophthalmic microscope system 20). The transducer module 40 must be sufficiently small that it can be easily held in the surgeon's hand, and manipulated in the restricted space between the microscope 22 and the patient's eye, and the switch 48 must be sufficiently sensitive that it can be activated without causing movement of the tip 47 engaged in the punctum 12 or 13. Clearly the transducer module 40 must not restrict the surgeon's ability to place the tip 47 into the punctum 12 or 13. Typically the transducer module 40 would be of a length between 10 mm and 100 mm, for example 15 mm, 20 mm or 25 mm.
The mechanism to activate the plunger 33 may be a linear actuator 34 as described above, acting directly as a syringe driver, but other systems to generate liquid flow are possible. For example an electric motor may drive liquid from a reservoir using a pump. Where a syringe 32 with a plunger 33 is used, a screw thread may propel a bracket arranged to move the plunger 33. The mechanism may be powered by a battery 36, which may be provided with a recharging circuit (not shown) and means to warn when recharging is required; as an alternative the irrigation system 30 may instead be powered from the mains. The actuator 34 may include other sensors, such as a motor overload detector. The sensing of flow rate may be based on the movement of the plunger 33, or on the speed of the actuator 34, for example using optical sensors, or from measurements on the motor itself, such as armature voltage.
The linear actuator 34, the battery 36, the microprocessor 50 and the display module 52 are housed within a casing 55 (indicated in broken lines in FIG. 3), which would typically be of a moulded thermoplastic. Electric wires 56 and 57 from the pressure sensor 42 and the switch 48 may therefore be in the form of a flexible lead which connects to the casing 55 with a plug. The casing 55 incorporates means to mount the syringe 32, and from which the syringe 32 can be removed. The casing 55 may be ergonomically shaped for hand-held operation, preferably with a pen-like grip, and may include a light source 53 to illuminate the punctum 12 or 13 during hand-held operation. In addition the irrigation system 30 includes a bracket for connecting the casing 55 on to the plate 29 of the ophthalmic microscope system 20 in the orientation shown in FIG. 3, preferably with the outlet of the syringe 32 at the top.
The irrigation system 30 also includes a punctal occluder, that is to say a clip or plug that can be applied to one punctum 12 or 13 and that simply, reliably, safely, painlessly and reversibly closes off that punctum 12 or 13, before liquid is injected into the other punctum 12 or 13. Referring to FIG. 4a, the punctal occluder may be a tapered plug 60 with a larger head 61 at one end; the operator would hold the head 61 and insert the tapered plug 60 into the punctum 12 or 13 to prevent any liquid flow. As shown in FIG. 4b, the punctal occluder may be a plug 62 with a narrow shaft with a bulbous portion 63, and with a larger head 64 at one end; the operator would hold the head 64, and insert the plug 62 until the bulbous portion 63 had blocked the punctum 12 or 13.
As an alternative and more convenient approach to plugging the punctum, the punctum or adjacent canaliculus may be closed by externally applied pressure with a clip. This may be achieved by squeezing the canaliculus 14 or 15 between two opposed jaws. The jaws may be brought together using a screw thread, but preferably such jaws are mounted resiliently. This squeezing approach may be applied directly to the punctum 12 or 13 itself. Suitable occluders are shown in FIGS. 4c, 4d and 4e. In each case the jaws would typically be of a plastic material, whereas the spring would be of a metal such as stainless steel.
As shown in FIG. 4c, a punctal occluder 65 may comprise a pair of opposed jaws 66a and 66b, one attached to the end of a rod 67 and the other projecting from a sleeve 68 that can slide along the rod 67. To the opposite end of the rod 67 is fixed a projecting finger-plate 69, and a second finger-plate 70 projects from the sleeve 68; a compression spring 71 urges the sleeve 68 along the rod 67 so as to urge the jaws 66a and 66b together. The operator would squeeze the finger-plates 69 and 70 together to separate the jaws 66a and 66b; place the jaws 66a and 66b inside and outside the eyelid on the medial side of the punctum 12 or 13; and then release the finger-plates 69 and 70, so the jaws 66a and 66b squeeze the corresponding canaliculus 14 or 15 closed.
Alternatively, as shown in FIG. 4d, a punctal occluder 73 may comprise a pair of opposed jaws 74a and 74b at the ends of two pivoted arms 75a and 75b which define finger plates 76a and 76b at their other ends, linked by a pivot pin 77, and with a compression spring 78 arranged to urge the finger plates 76a and 76b apart. This occluder 73 resembles a small-scale pair of spring-loaded tongs or scissors. It is used in a similar way to the occluder 65, in that the operator would squeeze the finger plates 76a and 76b together; place the jaws 74a and 74b inside and outside the eyelid on the medial side of the punctum 12 or 13; and then release the finger-plates 76a and 76b so the jaws 74a and 74b squeeze the corresponding canaliculus 14 or 15 closed.
Alternatively, as shown in FIG. 4e, a punctal occluder 80 may comprise a pair of opposed jaws 81a and 81b integral with finger plates 82a and 82b, held together by a part-cylindrical spring 83. This occluder 80 resembles a small-scale bulldog clip. It is used in a similar way to the occluders 65 and 73 described above.
The irrigation system 30 is prepared by filling the syringe 32 with a suitable liquid, typically water or saline. The syringe 32 is then fixed on to the casing 55, and the casing 55 mounted on the plate 29; the tube 38 and the transducer module 40 are connected to the syringe 32; the wires 56 and 57 from the transducer module 40 are plugged into the casing 55; and the cannula 46 is connected to the end of the transducer module 40.
The casing 55 may also be provided with an on/off switch and a priming/calibration button (not shown). In this case the switch would be switched on, and the system primed and checked. This ensures that air is expelled from the system and that the pressure sensor 42 is responding as anticipated. When the priming function is activated, the operator may for example be allowed a few seconds to press the switch 48 to the "on" position. Fluid flow is then initiated at a defined rate or rates, and the output of the pressure sensor 42 is monitored; the pressure rises due to the small diameter of the tip 47. If the pressure or pressures reach values within required limits, the priming and testing is considered satisfactory. This may be indicated by a sound from the speaker 52 (or by a coloured light). The resistance so recorded represents the inherent resistance of the system when not irrigating the tear duct, and the system can therefore be calibrated such that this level of resistance represents free flow.
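The free-flow calibration described above amounts to measuring the inherent resistance of the cannula and tubing, which can later be subtracted from tear-duct readings. A hypothetical sketch (units kPa and ml/min; neither helper name comes from the patent):

```python
def inherent_resistance(pressure_kpa, flow_ml_min):
    """Resistance measured while irrigating to open air: the baseline
    contribution of the cannula and tubing alone, in kPa.s/ml."""
    return pressure_kpa / (flow_ml_min / 60.0)

def duct_resistance(measured_r, baseline_r):
    """Resistance attributable to the tear duct after subtracting the
    baseline; clamped so free flow reads as zero, never negative."""
    return max(measured_r - baseline_r, 0.0)
```

Calibrating this way lets the instrument report the baseline value as "free flow" and attribute only the excess resistance to the tear duct itself.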
The operator then uses a punctal occluder to block one punctum of the patient's eye 10, for example the upper punctum 12. While viewing the eye through the microscope 22, the operator with one hand holds the patient's eyelid, and with the other hand holds the transducer module 40, and carefully engages the cannula tip 47 with the other punctum, in this case the lower punctum 13. The operator can then initiate fluid flow by holding down the switch 48, which actuates the linear actuator 34 as described above.
In a suitable mode of operation, the microprocessor 50 initially assumes that the resistance will have a normal value, and initiates liquid flow at a preset rate. The pressure is monitored using the signals from the sensor 42, and if the pressure is too low the fluid flow is increased, while if the pressure is too high the fluid flow is reduced. Hence the microprocessor 50, using the feedback of pressure values, brings the pressure to a value in a pre-determined range and the fluid flow to a steady state. Damping circuitry in the microprocessor 50 is arranged to avoid wide swings or overshoots, so as to rapidly reach a steady state and thereby minimize the amount of fluid irrigation required. This ensures a more comfortable test for the patient and less need to replenish the fluid in the syringe. If the values are steady for a sufficient time, preferably at least 1 second and more preferably at least 2 seconds, then the values of pressure and flow are measured, and the microprocessor can deduce the resistance of the patient's tear duct (between the punctum, in this case the lower punctum 13, and the nasal channel 17). The flow rate is typically in the range from 5 to 10 ml/min; and the injection pressure might for example be in the range from 10 to 20 cm water, that is 1 to 2 kPa. For a normal, healthy person the resistance of the tear duct is around 6.7 kPas/ml, although there can be a wide variation between individuals, typically between about 4.4 kPas/ml and 9.0 kPas/ml. The feedback control system however ensures that the irrigation system 30 is able to be used safely and accurately in individuals where the resistance is normal, significantly greater than normal, or where the drainage system is completely obstructed.
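The requirement that readings be steady for one to two seconds before being recorded can be expressed as a sliding-window stability test. A minimal sketch, assuming samples arrive at a fixed rate (the window size and tolerance are illustrative choices, not values from the disclosure):

```python
def is_steady(samples, window, tolerance):
    """True once the last `window` samples all lie within `tolerance` of
    their mean, i.e. pressure (or flow) has settled and may be recorded."""
    if len(samples) < window:
        return False
    recent = samples[-window:]
    mean = sum(recent) / window
    return all(abs(s - mean) <= tolerance for s in recent)
```

With, say, a 10 Hz sampling rate, a window of 10 to 20 samples would correspond to the 1 to 2 second stability period mentioned above.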
Thus if the microprocessor 50 detects that the pressure has exceeded a threshold value, it reduces the flow, or switches off fluid flow completely to avoid discomfort to the patient or damage to the tear duct or to the linear actuator 34.
If, as mentioned above, a steady state is achieved for an adequate duration, then one or more measurements of flow rate and pressure are recorded by the microprocessor 50. The microprocessor 50 may then provide an audible signal through the loudspeaker 52 to indicate that the test has been successful, and switches off the linear actuator 34. The microprocessor 50 then calculates and averages results for resistance (Resistance=Pressure/Flow), and displays the result on the display unit 51, optionally in comparison with the corresponding figures for a healthy individual.
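The final averaging step can be illustrated in a few lines. The sample values here are invented for the example; only the formula Resistance = Pressure/Flow and the units (kPa, ml/min, kPa·s/ml) come from the text.

```python
# Average several recorded (pressure, flow) samples and report
# Resistance = Pressure / Flow in kPa.s/ml (flow converted to ml/s).
samples = [(1.01, 9.0), (0.99, 9.0), (1.00, 9.0)]   # (kPa, ml/min)
resistances = [p / (f / 60.0) for p, f in samples]  # kPa.s/ml
mean_resistance = sum(resistances) / len(resistances)
```

For these invented samples the mean lands close to the healthy value of about 6.7 kPa·s/ml quoted above.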
Safety in use is ensured firstly by ceasing actuation of the linear actuator 34 if the pressure sensed within the transducer module 40 rises above a pre-set limit or threshold. In addition, the provision of a motor overload detector on the linear actuator 34 guards against the flexible tube 38 being blocked, for example by kinking, as this would provide a high resistance to fluid flow without generating high pressure in the transducer module 40.
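The two independent interlocks described above can be sketched as a simple gating check. The threshold values and names are invented for illustration, not taken from the patent.

```python
PRESSURE_LIMIT_KPA = 2.5      # assumed pre-set pressure ceiling
MOTOR_CURRENT_LIMIT_A = 0.5   # assumed overload level for the actuator

def safe_to_run(pressure_kpa, motor_current_a):
    """Return False if either safety condition calls for a stop."""
    if pressure_kpa > PRESSURE_LIMIT_KPA:
        return False   # excess pressure sensed in the transducer module
    if motor_current_a > MOTOR_CURRENT_LIMIT_A:
        return False   # overload: e.g. a kinked tube upstream of the sensor
    return True
```

The point of the second check is that a kinked tube blocks flow upstream of the pressure sensor, so only the motor load, not the measured pressure, reveals the fault.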
The irrigation system 30 provides advantages both for the surgeon and for the patient. For the surgeon, the system 30 provides objective data on lacrimal resistance, and the measurements are made under conditions that are closer to the natural physiological state. The measurements are easier, as they can be made using the microscope to provide excellent visibility for inserting the occluder and for inserting the tip 47 of the cannula 46, and for checking for any leaks. The measurements are also easier when using the irrigation system 30 hand-held, without the flexible tube 38, as the system is lightweight, providing an ergonomic hand position, and a comparatively short distance from the hand to the tip 47 of the cannula 46. From the patient's perspective the measurements are safer, with less risk of damage to the canaliculi either during insertion or arising from excess pressure; and the measurements are less uncomfortable, as the flow rate and quantity of liquid are less.
The one-way valve 44 minimizes the risk of contamination reaching the flexible tube 38, or the syringe 32 if the flexible tube 38 is not provided. Consequently, after use the transducer module 40 and the cannula 46 would typically be disposable, whereas the other components can be reused without risk of transferring contamination. Alternatively, where no one-way valve 44 is provided, then the syringe 32, flexible tube 38, transducer module 40 and cannula 46 may all be disposable.
It will be appreciated that the irrigation system 30 described above may be modified in various ways. For example the cannula 46 may be replaced by a differently-shaped cannula.
Referring to FIG. 5a, an alternative cannula 90 comprises a thin-walled tube 91 provided with a hub 92 at one end, for connection to the outlet 45 of the transducer module 40, and at the other end having a smooth transition down to a short narrow tube section 93. This is used as described above, with the short section 93 being inserted into the punctum 12 or 13.
As shown in FIG. 5b, another alternative cannula 94 comprises a stainless steel tube 95 of uniform outside diameter, e.g. 0.64 mm, and of length for example 4 or 5 mm, inserted into a hub 96 that provides a rounded surface towards the tip of the cannula 94. The internal diameter may be around 0.54 mm for a 0.05 mm wall thickness. This size generally does not need dilation before insertion, and the hub 96 may provide the seal to the punctum 12 or 13. Alternatively the tube 95 might have an external diameter of 0.57 mm, again with a wall thickness of 0.05 mm. Although the tube 95 is shown as straight, it might instead have a slight curve along its length.
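The quoted dimensions can be cross-checked with the usual relation ID = OD - 2 * wall thickness; a quick sketch using only the figures stated above:

```python
# Consistency check of the tube dimensions quoted in the text.
od_mm, wall_mm = 0.64, 0.05
id_mm = od_mm - 2 * wall_mm        # 0.54 mm, as stated in the text
alt_id_mm = 0.57 - 2 * wall_mm     # 0.47 mm for the alternative tube
```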
Referring to FIG. 5c, there is shown a cannula 98 which is a modification of the cannula 94, differing in having a longer tube 99 which in this example is slightly curved. This may be around 9-10 mm long, so it would be possible to pass it along the canaliculus 14 or 15 and into the lacrimal sac 16, with the hub 96 sealing at the punctum 12 or 13. This can provide an advantage. Tear duct narrowing is known to most commonly occur in the lacrimal duct 18, and when measuring lacrimal resistance what one really would like to know is the resistance of this part. Irrigating from the punctum 12 or 13, using for example cannula 94, would indicate the total resistance beyond this point; so to identify narrowing of the lacrimal duct 18 one has to assume the canaliculus 14 or 15 is normal. The cannula 98 enables the flow resistance of the lacrimal duct 18 to be measured more precisely, and by comparison with measurements using for example the cannula 94, an objective measurement can be obtained indicating where the narrowing of the tear duct occurs.
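The comparison described above amounts to subtracting resistances of segments in series. A hypothetical worked example, with invented values (only the series-resistance reasoning comes from the text):

```python
# Subtracting the long-cannula measurement (lacrimal duct only) from
# the short-cannula measurement (whole duct) estimates the
# canalicular contribution. All values are invented.
r_total = 9.0      # kPa.s/ml, irrigating from the punctum (cannula 94)
r_lacrimal = 7.8   # kPa.s/ml, irrigating from the lacrimal sac (cannula 98)
r_canaliculus = r_total - r_lacrimal
```

Here the difference of about 1.2 kPa·s/ml would suggest the canaliculus is nearly normal and the narrowing lies mainly in the lacrimal duct.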
The transducer module 40 may also be modified. For example it may include a mechanical valve to prevent liquid flow, this being actuated by pressing the push-button switch 48 that also provides the electrical signal in the wire 57.
BRIEF DESCRIPTION OF THE DRAWINGS
The invention will now be further and more particularly described, by way of example only, and with reference to the accompanying drawings, in which:
FIG. 1 shows a schematic diagram of the anatomy of a normal and healthy tear duct system;
FIG. 2 shows a perspective view of an ophthalmic microscope system for use with an irrigation system;
FIG. 3 shows a schematic diagram of an irrigation system of the invention;
FIGS. 4a to 4e show different punctal occluders of the invention; and
FIGS. 5a to 5c show sectional views of alternative cannulas for the system of FIG. 3.
Noah Hanft is the President and CEO of the International Institute for Conflict Prevention and Resolution (CPR). He previously served as General Counsel and Chief Franchise Officer for MasterCard, where he was responsible for overseeing legal and regulatory affairs, public policy and compliance. While at MasterCard, he spearheaded the successful resolution of major litigations through mediated settlements. He currently serves on the boards of the Legal Aid Society and the Network for Teaching Entrepreneurship (NFTE) and is a member of the Council on Foreign Relations. Mr. Hanft has an LL.M. from NYU School of Law, a J.D. from Brooklyn Law School, and a B.A. from American University.
Q: How has the public perception of ADR changed over the years? Specifically, how has it changed among lawyers versus among the general public?
A: A meaningful change has occurred over time as the public’s perception of ADR has evolved. Indeed, there is a proliferation of mediation in different settings whether it be business, community, divorce, labor, etc. Today not only does the public recognize the value and importance of ADR, but lawyers do as well. What they share in common is a recognition of the importance of ADR and a desire to sustain long term client relationships. That recognition takes into account the fact that they want to be focused on what their clients care about, which is finding the best solutions in the most efficient and effective manner possible.
Q: What do you find to be the core cause of business conflicts? Is there a “best practice” for approaching this kind of conflict? If so, please explain.
A: Business conflict is inevitable. Sometimes it is caused by honest disagreement and sometimes by wrongful conduct. For the most part it is driven by the reality that differing interests will very often give rise to different perspectives, creating friction and the potential for conflict.
I have a strong belief that conflict is often caused by the failure of parties to have a clear understanding as to objectives, outcomes and processes. Clarity in a contract and a genuine meeting of the minds is critical. That is why I believe best practice calls for agreements that clearly articulate the expectations of the parties on the overarching business issues, as well as how potential business disagreements can be avoided and, if unavoidable, how they can be effectively managed. A focus on early dispute identification and resolution is increasingly becoming the hallmark of successful companies. I often say that your adversary of today (even if it is your current competitor) might be your licensee, joint venture partner or even acquisition target of tomorrow so avoiding fractious and expensive legal battles is good business.
Q: To the degree that you can disclose, please describe the most difficult conflict you have encountered. What made it so difficult? How did you approach finding a resolution? Were you successful?
A: At MasterCard I dealt with many conflicts each of which was difficult for different reasons. They included conflicts with competitors, governments, the merchant community, and occasionally customers. As you would expect, I can’t provide detail on the many confidential mediations and other negotiations I encountered, but I’m happy to discuss the commonalities between them and approaches for dealing with them in general terms.
The most difficult disputes tend to be those where either or both parties feel they were wronged and there is a matter of principle involved in addition to dollars. Just like disputes are inevitable, so is emotion. What is critical to dealing with challenging disputes is to do several things. First, it is to find the right time and process under which the parties can dialogue. Generally, I have found that the earlier in the process a dialogue is established the better. In most situations the use of a trained mediator is the best way to go, especially where both principle and emotions are at stake. Next, finding the right mediator is extremely important and, in making that decision, one must take into account the type of dispute it is, the need for subject matter expertise and many other factors. That is why organizations like CPR, that have carefully vetted highly qualified mediators and arbitrators, play such an important role. Approaches to resolution tend to be very case specific. Having said that, as a mediator I have found that regardless of the nature of the dispute, the mediation process is most effective when the parties embark on the process with openness to finding solutions and avoid locking in to litigation positions. In this respect, I think the key is for all parties (not only the mediator) to truly take the time to explore in a genuine way what the interests are of all parties (as opposed to the purely legal arguments) and ultimately to find ways to address those interests.
I believe that if we are as zealous about finding solutions as we tend to be with respect to fighting lawsuits, success rates will be astonishing.
Q: You worked with MasterCard for over 20 years in various legal counsel positions. What role does ADR play in the day-to-day work of in-house counsel? How have you seen the role of ADR change over the past 20 years?
A: ADR skills play an important role for all in-house lawyers. All lawyers handling disputes and litigation must understand and know how to effectively utilize ADR. In many mediations the most important participant is the in-house counsel, particularly where relationships between outside counsels are extremely adversarial and/or strained.
At MasterCard, I renamed the litigation department, the “Dispute Resolution” team, to emphasize the importance of a solution focus. Perhaps the most important role of in-house counsel is what I refer to as “preventative lawyering.” Skills that allow in house counsel to identify potential disputes and utilize mechanisms to address them early on are invaluable to corporations.
When I first began practicing as a corporate counsel at MasterCard (well over 20 years ago) there was little reference to ADR. Twenty years ago we were just beginning to utilize mediations, but candidly it was then more of an afterthought as opposed to a focus and objective as it is today. Today the companies that are leaders in their industries and have a focus on innovation and customer service see ADR as an important element of an integrated commercial approach.
Q: As general counsel did you encourage the use of arbitration? Why or why not?
A: As general counsel I viewed arbitration as one alternative for disputes arising from cross border transactions. My decision would take into account a number of factors including the counterparty and what jurisdiction we were doing business in. Of course, that pre-dates CPR’s new rules for administered international arbitrations which came into effect at the end of last year. Because of the benefits they bring to the international arbitration community from an efficiency, expense and integrity perspective I would have encouraged the use of CPR arbitration generally, but particularly in the case of international transactions. I also was, depending on the magnitude and importance of a matter, not always supportive of using arbitration because of the absence of a right of appeal. But now that CPR, as well as other providers, offers an appellate right that would be less of a deterrent. I was never a fan of party appointed arbitrators. The notion of arbitrators feeling some degree of loyalty to the party that chose them always gave me pause. That is another reason why I believe CPR and its screened selection process, by which parties can suggest arbitrators, but they are appointed by CPR without the arbitrator knowing which party designated them, provides a basis for encouraging utilization of arbitration.
Q: A recent article was released in the New York Times that was heavily critical of corporate arbitration. Do you think criticism of corporate arbitration, especially that related to class actions, is warranted? Why or why not?
A: I think any adjudication process is subject to abuse, but I don’t see that as a basis for condemning the practice. I have yet to find a general counsel that did not find themselves in a situation where abusive litigation practices, including lawyer-generated class actions, caused major anguish and often harm. And, of course, as we all know in so many of those cases, consumer benefit is absent from the equation and the only beneficiaries are counsel.
I did find the New York Times article to raise some significant concerns regarding abusive arbitration practices, but found the presentation somewhat slanted with both the abuses of litigation and the benefits of arbitration buried deep at the end of the articles. But perhaps most important, I think, is the obligation of providers to ensure its rules and practices are above reproach. CPR arbitrations have very clear due process requirements and it is no surprise that none of the examples provided were CPR cases.
Q: Some have said that many companies don’t imagine relationships ever souring, which is why dispute resolution clauses could often be tighter. Have you found that more companies are contemplating dispute resolution mechanisms when drafting agreements now compared to in the past?
A: An increasing number of companies, particularly CPR members, are treating disputes with a commercial mindset and as a component of doing business. As indicated earlier, progressive companies are recognizing the inevitability of disputes and, as such, a thoughtful approach to disputes incorporating contract language providing for a multi-step approach to contractual disputes is becoming the norm. These agreements generally call for a negotiation process followed by internal escalation. Failing a resolution, the next step is for a mediator to be utilized. There are many variations on the theme, but in most instances, should mediation fail, the next step is some form of adjudication, arbitration or litigation. CPR has in its toolkit an “economical litigation agreement” allowing for the parties to elect litigation, but control the process by agreeing on streamlined litigation in advance of disputes.
Q: Do you find that mediation is gaining importance in cross-border disputes? If so, how? For either answer, why do you think this is?
A: Absolutely. In Europe, the EU Mediation Directive is encouraging member states to foster the development of mediation and in jurisdictions such as Brazil, where courts are terribly congested and delays of many, many years are commonplace, mediations are proliferating. There is, unsurprisingly, an increasing number of commercial transactions extending across borders and due to the added complexity of adjudicating disputes between foreign parties, mediation provides obvious benefits. This is particularly the case where there is an interest in preserving business relationships.
Q: What do you see as the future of international arbitration?
A: As commerce between nations will undoubtedly continue to grow so too will the importance of effective dispute resolution mechanisms. We at CPR see international arbitration as a very clear growth opportunity, which is one of the reasons we went through a major effort to get leading practitioners and users to work with us in developing and implementing innovative and practical rules and protocols. The future of international arbitration is, I believe, very bright and can be that much more compelling if the concerns about arbitration are addressed. Those concerns relate to expense, efficiency and the integrity of the overall process and it’s been a priority for us to find workable solutions to those concerns, which we are confident we have done.
Q: You recently took the position as the President & CEO of the International Institute for Conflict Prevention & Resolution. How has the work with CPR differed from your work with MasterCard? How is it similar?
A: The work is similar in the sense that managing people and a budget requires the same skill set and rigor whether it is with a nonprofit entity or a public corporation. It differs in a very predictable way in that the two organizations have very different missions and objectives. MasterCard is appropriately focused on maximizing shareholder value, while CPR exists to pursue its mission, which is to continually seek better ways to prevent, manage, and resolve disputes. One ironic similarity is that CPR is all about finding better alternatives to traditional litigation, while MasterCard is focused on electronic payments as a superior alternative to cash. What I enjoy about both roles is that they call for making strategic decisions, getting involved in public policy issues and staying abreast of legal developments. I am fortunate to have worked with smart, strategic, and high integrity folks at MasterCard and have experienced the same quality of individuals in the CPR organization.
Q: I know CPR is headquartered in NY. Does its work extend to Canada and elsewhere?
A: Absolutely. CPR is a global organization seeking to fulfill its mission around the world. Many of our member organizations have major operations in Canada and we have law firm members based in Canada as well as those based elsewhere with offices in Canada. A strategic focus for CPR in 2016 is growing its presence and initiatives in Canada. To that end CPR will be holding a Regional Meeting in Toronto in June focused on how in house counsel and outside counsel can best collaborate in driving effective dispute resolution. This has been a consistent area of focus for us and of interest to our members. Of course, we will also cover developments in Canada on the ADR front. CPR has been active throughout Europe as well. A good example of our work is the newly released CPR European Mediation & ADR Guide for in house counsel which is an informative and practical resource to help corporate counsel understand and realize the benefits of ADR and the many resources CPR can provide.
Q: How have you drawn on your experience with MasterCard in your new role?
A: At MasterCard I learned the importance of both strategy and execution and of having a strong customer value proposition in order to be successful and the need to communicate it well. At CPR today we’ve expanded our member benefits in terms of tools, training and incentives and experienced a resulting dramatic increase in membership.
Q: What advice would you lend to law students looking to make ADR a significant part of their careers after they’ve finished their education?
A: My advice to law students and young lawyers regardless of their perceived interests is to be open to experiencing different types of practices. For example, getting involved in an international arbitration early on in your career is, at a minimum, great experience and might serve to stoke your interest in establishing a practice. I strongly believe that increasingly dispute resolution skills will only become more important and serve to increase the value you bring to your clients, your firm, and your individual development.
Field of the Invention
Background of the Invention
Summary of the Invention
Detailed Description of the Invention
Example A
Preparation of 4-methyl-3-decene-5-one
Example B
Incorporation of 4-methyl-3-decene-5-one into a fragrance formulation.
Example C
Preparation of 4,5-Dimethyl-3-decene-5-ol
Example D
Preparation of Alpha-[1-Methyl-1-Butenyl]-Cyclopentanemethanol
Example E
Preparation of 1-Phenyl-4-Methyl-4-Hepten-3-one
Example F
Preparation of 1-cyclohexyl-3-methyl-3-hexene-1-ol
The present invention relates to new chemical entities and the incorporation and use of the new chemical entities as fragrance materials.
There is an ongoing need in the fragrance industry to provide new chemicals to give perfumers and other persons ability to create new fragrances for perfumes, colognes and personal care products. Those with skill in the art appreciate how differences in the chemical structure of the molecule can result in significant differences in the odor, notes and characteristics of a molecule. These variations and the ongoing need to discover and use the new chemicals in the development of new fragrances allows perfumers to apply the new compounds in creating new fragrances.
The present invention provides novel chemicals, and is directed to the use of those chemicals to enhance the fragrance of perfumes, toilet waters, colognes, personal products and the like.
More specifically, the present invention is directed to the novel compounds, represented by the general structures of Formula I and Formula II set forth below:
wherein R is a hydrocarbon moiety consisting of 2 to 10 carbon atoms, including cyclopentyl, cyclohexyl, phenyl, benzyl, or phenylethyl. R1 is either methyl or ethyl.
Another embodiment of the invention is a method for enhancing a perfume by incorporating an olfactory acceptable amount of the compounds provided above.
These and other embodiments of the present invention will be apparent by reading the following specification.
In Formula I and Formula II above, R represents a hydrocarbon, a cyclic, or an aromatic group consisting of 2 to 10 carbon atoms; most preferably, R is a pentyl group. Hydrocarbon, cyclic or aromatic R groups include, but are not limited to, straight alkyl, cyclic, and aromatic chains. Suitable hydrocarbon moieties include ethyl, propyl, butyl, cyclopentyl, cyclohexyl, and the like. Suitable branched hydrocarbon moieties include isopropyl, sec-butyl, tert-butyl, 2-ethyl-propyl, and the like. Suitable hydrocarbon moieties containing double and triple bonds include ethene, propene, 1-butene, 2-butene, penta-1,3-diene, hepta-1,3,5-triene, butyne, hex-1-yne and the like. Suitable aromatic moieties include phenyl, benzyl, phenylethyl and the like. In Formula II above, R1 represents a methyl or an ethyl group. Those with skill in the art will recognize that the compound of Formula I of the present invention has a chiral center, thereby providing several isomers of the claimed compound. As used herein, the compounds described include the isomeric mixtures of the compounds as well as those isomers that may be separated using techniques known to those with skill in the art. Suitable separation techniques include chromatography, particularly gel chromatography.
The compounds of the present invention may be prepared from the following compound of Formula III:
The preparation and use of the compound of Formula III is discussed in U.S. Patent No. 4,585,662, the contents of which are incorporated herein by reference. In the Formula III, R has the same definition as set forth above.
The compound of Formula I may be prepared from the compound of Formula III by following the Oppenauer oxidation reaction procedure (see Example A). The amount of ketone recovered after the reaction is completed is from about 70% to about 95% by weight of the product mixture. We have discovered that the compounds of Formula I have green, pleasant notes that are well suited for use as a fragrance ingredient.
The compound of Formula II may be prepared by nucleophilic addition of an appropriately substituted alkyl, cyclic or aromatic Grignard reagent or alkyllithium to the compound of Formula I (see Example C). We have discovered that the compounds of Formula II have a banana fruity note with violet, soft green tones that are well suited for use as a fragrance ingredient.
The use of the compound of the present invention is widely applicable in current perfumery products, including the preparation of perfumes and colognes, the perfuming of personal care products such as soaps, shower gels, and hair care products as well as air fresheners and cosmetic preparations. The present invention can also be used to perfume cleaning agents, such as, but not limited to detergents, dishwashing materials, scrubbing compositions, window cleaners and the like.
In these preparations, the compounds of the present invention can be used alone or in combination with other perfuming compositions, solvents, adjuvants and the like. The nature and variety of the other ingredients that can also be employed are known to those with skill in the art.
Many types of fragrances can be employed in the present invention, the only limitation being the compatibility with the other components being employed. Suitable fragrances include but are not limited to fruits such as almond, apple, cherry, grape, pear, pineapple, orange, strawberry, raspberry; musk, flower scents such as lavender-like, rose-like, iris-like, carnation-like. Other pleasant scents include herbal and woodland scents derived from pine, spruce and other forest smells. Fragrances may also be derived from various oils, such as essential oils, or from plant materials such as peppermint, spearmint and the like.
A list of suitable fragrances is provided in US Pat. No. 4,534,891, the contents of which are incorporated by reference as if set forth in its entirety. Another source of suitable fragrances is found in Perfumes, Cosmetics and Soaps, Second Edition, edited by W. A. Poucher, 1959. Among the fragrances provided in this treatise are acacia, cassie, chypre, cyclamen, fern, gardenia, hawthorn, heliotrope, honeysuckle, hyacinth, jasmine, lilac, lily, magnolia, mimosa, narcissus, freshly-cut hay, orange blossom, orchid, reseda, sweet pea, trefle, tuberose, vanilla, violet, wallflower, and the like.
Olfactory effective amount is understood to mean the amount of compound in a perfume composition at which the individual component will contribute its particular olfactory characteristics; the olfactory effect of the perfume composition, however, will be the sum of the effects of each of the perfume or fragrance ingredients. Thus the compounds of the invention can be used to alter the aroma characteristics of the perfume composition, or to modify the olfactory reaction contributed by another ingredient in the composition. The amount will vary depending on many factors including the other ingredients, their relative amounts and the effect that is desired.
The level of compound of the invention employed in the perfumed article varies from about 0.005 to about 10 weight percent, preferably from about 0.5 to about 8 and most preferably from about 1 to about 7 weight percent. In addition to the compounds other agents can be used in conjunction with the fragrance. Well known materials such as surfactants, emulsifiers, polymers to encapsulate the fragrance can also be employed without departing from the scope of the present invention.
The level of the compounds of the invention may also be reported as a weight percentage of the materials added to impart the desired fragrance. On this basis the compounds of the invention can range widely from 0.005 to about 70 weight percent of the perfumed composition, preferably from about 0.1 to about 50 and most preferably from about 0.2 to about 25 weight percent. Those with skill in the art will be able to employ the desired level of the compounds of the invention to provide the desired fragrance and intensity.
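The dosing arithmetic behind these ranges is straightforward; a hedged worked example, in which the article size and use level are invented and only the preferred 0.5-8 weight-percent window comes from the text above:

```python
# Mass of fragrance compound for a perfumed article dosed inside the
# preferred 0.5-8 weight-percent window described in the text.
article_g = 100.0
use_level_pct = 1.0                              # assumed, within 0.5-8 wt%
compound_g = article_g * use_level_pct / 100.0   # grams of compound needed
```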
The following are provided as specific embodiments of the present invention. Other modifications of this invention will be readily apparent to those skilled in the art. Such modifications are understood to be within the scope of this invention. As used herein all percentages are weight percent unless otherwise noted, ppm is understood to stand for parts per million and g is understood to be grams. IFF as used in the examples is understood to mean International Flavors & Fragrances Inc.
To a dry 2 liter multi-neck round bottom flask fitted with an air stirrer, nitrogen inlet condenser and an addition funnel, 208 g of a 98% solution of aluminum isopropoxide and 400 g of acetone (obtained from Acros Organics) were added. The resulting mixture was stirred and gently heated. As the temperature of the mixture reached 64°C, 378 g of a 90% solution of 3-decene-4-methyl-5-ol was slowly added over 90 minutes. The resulting mixture was aged for 90 minutes. At this point the temperature reached 80°C and a first sample of the product was taken. Two hours later, as the temperature reached 85°C, a second sample was taken. The mixture was maintained at a constant temperature of 85°C for 35 minutes, the heating source was removed and 100 ml of acetone was added. After 25 minutes, as the mixture reached 80°C, the mixture was cooled and quenched with 1 L of 10% hydrochloric acid. The products were allowed to settle. Then the organic layer was separated from the acid layer, washed with water and neutralized with 10% NaHCO3 solution. The NMR spectrum of the 4-methyl-3-decene-5-one is as follows: 0.88 ppm (m, 3H); 0.96 ppm (m, 3H); 1.39 ppm (m, 2H); 1.51 ppm (m, 2H); 1.82 ppm (s, 3H); 2.21 ppm (m, 2H); 2.71 ppm (m, 2H); 6.27 ppm (m, 1H).
A fragrance was prepared according to the following formulation:

Material                                     Parts
TRIPLAL EXTRA                                 1.00
AMBROXAN DIST                                10.00
AMYL SALT                                     5.00
GERANIUM EGYPT SPECIAL                        4.00
METH OCTIN CARBONATE 10% DPG                  2.00
METH IONONE BETA COEUR                       10.00
TIMBEROL DRAG                                 5.00
TONALID                                      50.00
ISO E SUPER                                 100.00
IONONE BETA EXTRA                             8.00
ISO GAMMA SUPER                              40.00
LYRAL                                        50.00
MANDARIN OIL YELLOW GATTO                    30.00
POLYSANTOL (ELINCS)                           5.00
VERAMOSS                                      5.00
ANETHOLE USP                                  1.00
PATCHOULI INDONESIA MD REF A LMR              2.00
PEACH ALD COEUR SPECIAL 10% DPG               0.50
LIFFAROME "PFG" 10% DPG                       7.00
COUMARIN                                      5.00
ORANGE OIL SWEET GUINEA PECT +BHA            35.00
BERGAMOT OIL DEFUROCOUMARINIZED GATTO        42.00
FLORHYDRAL (ELINCS)                           0.50
HEXENYL ACET, CIS-3                           1.00
ETH LINALOOL HLR                             45.00
ADOXAL                                        0.50
STYRALYL ACET                                 2.00
SANJINOL                                     50.00
LAVANDIN SUPREME CHAU                         8.00
DIHYDRO MYCENOL                              60.00
ROSEMARRY FRENCH VILLECROZE                   1.00
ALLYL AMYL GLYCOLATE                          5.00
HELIONAL                                     15.00
CANTHOXAL                                    15.00
CYCLOGALBANATE                                3.00
FLORALOZOLE                                   5.00
LILIAL                                       50.00
NONADIENAL, 2-TR-6-CIS-"F+F" 0.1% DEP         6.00
RHODINOL COEUR                                9.00
GALAXOLIDE BENZ SAL 50 PCT                  250.00
SANDALWOOD RECO 2004 YC-973                  15.00
GALBASCONE 1% DPG                             5.00
MANDARINAL 32048 SAE                          3.00
DAMAROSE                                      0.50
CARVONE SPECIAL L-10% DPG                     8.00
AURANTIOL GIV 10% DPG                         6.00
SAGE CLARY FRENCH OIL REF A LMR               4.00
HEXENYL SAL, CIS-3                           15.00
AMBREINE PURE 181400/3 BROWN 1% DEP           3.00
3-DECENE-4-METHYL-5-ONE                       5.00

The above fragrance was found to be a pleasing fragrance with pleasing green notes. The above fragrance formulation is presented to demonstrate the effectiveness of the compounds of the present invention in enhancing, improving or modifying the performance of the formulations in which they are incorporated.
To a dry 5 liter multi-neck round bottom flask fitted with an air stirrer, nitrogen inlet, condenser and an addition funnel, 1,617 g of CH3Li was added and stirred. 336 g of 4-methyl-3-decene-5-one (see Example A for the preparation of 3-decene-4-methyl-5-one) was added dropwise over 105 minutes. The temperature of the reaction rose to 63°C. The reaction mixture was aged for 150 minutes and a first sample was taken at 37°C. 30 minutes later, a second sample was taken at 30°C. The mixture was quenched with acetic acid, allowed to settle and the layers were separated. The aqueous layer was washed twice with 100 ml of toluene. The toluene extracts were added to the organic layer and washed with Na2CO3.
The NMR spectrum of the 4,5-Dimethyl-3-decene-5-ol is as follows: 0.88 ppm (t, 3H); 0.94 ppm (t, 3H); 1.28 ppm (s, 3H); 1.15-1.35 ppm (m, 6H); 1.50 ppm (s, 1H); 1.55 ppm (s, 1H); 2.05 ppm (m, 2H); 5.45 ppm (m, 1H)
The IR spectrum of the 4,5-Dimethyl-3-decene-5-ol is as follows: broad OH stretch at 3416 cm-1; saturated CH stretch at 2960, 2933 and 2872 cm-1; double bond stretch at 1680 cm-1; bands at 1462 and 1372 cm-1 due to CH bending.
To a dry 2 liter multi-neck round bottom flask fitted with an air stirrer, nitrogen inlet, condenser and an addition funnel, 800 ml of 2 M cyclopentyl magnesium chloride was added and stirred. 139 g of 2-methyl-2-pentenal was added over the next 90 minutes. The reaction mixture was aged for another 90 minutes and the first sample was taken. 25 minutes later the reaction mixture was quenched with water, aged for 30 minutes, and the organic layer was separated and washed with two one-liter portions of water.
The NMR spectrum of the Alpha-[1-Methyl-1-Butenyl]-Cyclopentanemethanol is as follows: 1.00 ppm (s, 3H); 1.1-1.2 ppm (s, 1H); 1.4-1.5 ppm (s, 2H); 1.5-1.7 ppm (m, 4H); 1.8 ppm (s, 1H); 2.1 ppm (m, 3H); 3.7 ppm (d, 1H); 5.4 ppm (t, 1H)
To a dry 2 liter multi-neck round bottom flask fitted with an air stirrer, nitrogen inlet, condenser and an addition funnel, 168 g of 65% 1-Phenyl-4-Methyl-4-Hepten-3-ol, 51 g of 98% Aluminum Isopropoxide (obtained from Acros Organics), 200 g of acetone and 200 g of toluene were added and stirred. The reaction mixture was slowly heated at reflux to 85°C. Samples were collected every hour while the temperature of the reaction mixture was between 70°C and 80°C.
The NMR spectrum of the 1-Phenyl-4-Methyl-4-Hepten-3-one is as follows: 1.0 ppm (t, 3H); 1.8 ppm (s, 3H); 2.2 ppm (m, 2H); 2.9-3.0 ppm (m, 2H); 6.6 ppm (t, 1H); 7.2 ppm (m, 3H); 7.28 ppm (s, 1H); 7.3 ppm (s, 1H).
To a dry 2 liter multi-neck round bottom flask fitted with an air stirrer, nitrogen inlet, condenser and an addition funnel, 800 ml of 2 M cyclohexyl magnesium chloride was added and stirred. The flask was cooled to 10°C. 146 g of 99% 2-methyl-2-pentenal was added over the next 135 minutes. The cooling was removed. The first sample was taken 50 minutes later at 13°C. The second sample was taken 35 minutes later at 18°C. 75 minutes later the reaction mixture was quenched with 1000 ml of 20% HAc with cooling. The layers were allowed to settle and the organic layer was extracted with 100 ml of toluene.
The NMR spectrum of the 1-cyclohexyl-3-methyl-3-hexene-1-ol is as follows: 0.7-0.9 ppm (q, 1H); 0.9-1.0 ppm (t, 4H); 1.1-1.3 ppm (m, 3H); 1.6 ppm (s, 3H); 1.6-1.8 ppm (m, 4H); 2.0-2.1 ppm (m, 3H); 3.7 ppm (d, 1H); 5.4 ppm (t, 1H)
1. Introduction
================
Wireless body sensor networks (WBSNs) are a special case of wireless sensor networks (WSNs) that consist of multiple nodes to be attached to clothing, on the body or even implanted under the skin. Their main functionality is to sense, process and transmit a dataset of measured vital signals to a base station for the monitoring and healthcare of chronic patients or the tracking of professional sportsmen's performance. In general, the sensor nodes used in these networks are tiny devices characterized by limited memory and processing resources, as well as strong battery power constraints. Therefore, an efficient use of resources is mandatory for maximizing the lifetime of the sensors, while guaranteeing the good performance and high reliability of these networks.
The power consumed by sensor nodes in a WBSN depends on the particular medical application and the transmission channel. According to the applications and depending on the measured variables, data transmission in WBSNs can require different sampling and data transmission rates. Therefore, for slow biosignals, sensor nodes transmit few data, and it is possible to reduce the energy consumption of the transceiver by turning it to the sleeping mode. However, if the biosignals are continuously time varying, sensor nodes transmit a larger volume of data, and hence, the power consumption of the transceiver will be increased.
On the other hand, in a WBSN scenario, the human body plays an important role in the performance of the communication, where the on-body link can be highly dynamic due to the following reasons: (i) the human body introduces temporal variations in the quality of the wireless link, because body movements and postures can change the direction of the antennas, causing detuning and distorted radiation pattern \[[@b1-sensors-15-05914]\]; (ii) it can also obstruct the propagation of the signal, causing non-line-of-sight (NLOS) conditions in an intermittent way \[[@b2-sensors-15-05914]\]; (iii) it introduces attenuation due to the signal absorption by the tissue, dissipating heat \[[@b3-sensors-15-05914],[@b4-sensors-15-05914]\]; (iv) it causes fluctuation in the path loss that can reach 30 dB on average \[[@b5-sensors-15-05914]\].
Due to the variability of on-body links and the application characteristics, the use of a fixed transmission power can be inadequate. Data transmission at high power levels guarantees reliable links, but can result in an unnecessary energy waste. On the other hand, transmitting at low power levels provides energy savings, but at the expense of reducing the reliability and increasing retransmissions. Therefore, there is a trade-off between the energy consumption and the link reliability, and the transmission power should be adapted in an energy-efficient fashion according to the current state of the link.
The transmission power control techniques allow one to tune the power levels dynamically according to the changing conditions of the links. Therefore, these techniques allow meeting the reliability constraints, while at the same time saving energy. In this paper, we extend our previously proposed transmission power control algorithms (\[[@b6-sensors-15-05914],[@b7-sensors-15-05914]\]) to a complex and realistic scenario. For that purpose, a state-of-the-art WBSN simulator has been upgraded with our model of the biologic channel and an efficient implementation of our reactive and predictive control techniques. The obtained results validate our optimization mechanism and show how our proposals can be used efficiently in deployed WBSNs, where the movement and position of the human subjects affect the transmission properties.
Our main contributions are:

- An extensive experimental database of link quality metrics, which includes a human sample with diverse body features and different postures.
- An extensive statistical analysis, from our experimental database, of the impact on the link quality metrics and their relation with the body shape and the body composition.
- An extensive validation by simulation of the proposed transmission power control policies, thanks to an enhanced version of the Castalia simulator.
The remainder of the paper is organized as follows: In Section 2, we review the related work on transmission power control algorithms for WBSNs. In Section 3, we describe the experimental setup used for the characterization of on-body channel quality in human subjects, showing how the extensive dataset collected in this phase is used in the implementation of our algorithms. In Section 4, we present the ANFIS (Adaptive Neuro-Fuzzy Inference System) link quality estimator model, which is the fundamental unit of our predictive approach, evaluated in the final section. In Section 5, we present an overview of our reactive and predictive policies for transmission power control in WBSNs. Section 6 covers a brief description of the Castalia software used in the validation and simulation of both transmission power control algorithms. In Section 7, we analyze the simulation results. The final section summarizes the conclusions and discusses future works.
2. Related Work
================
A key issue in WBSNs is to reduce the energy consumption of the sensor nodes in order to extend battery lifetime. On average, processing data consumes less power than transmitting the data wirelessly \[[@b8-sensors-15-05914]\], and the transmission power consumption is affected by the speed and the amount of data transmitted. In this context, the use of power optimization methods at multiple layers of the communication stack is nowadays one of the most important research issues in this field; specifically, the strategies of transmission power control (TPC) are being widely researched, because they ensure an optimal trade-off between energy consumption and reliability requirements.
TPC techniques allow one to select the minimum transmission power level required to achieve good performance within a communication system. The use of these techniques, besides reducing the transmission power consumption, also allows one to reduce the interference problems and to reduce the average contention at the MAC layer.
Different TPC schemes have been proposed for different communication networks, including WBSNs. The type of TPC scheme most frequently used is the link quality-based scheme. Typically, this scheme consists of a closed execution loop between the transmitter and receiver nodes. The loop starts when the transmitter node sends a data packet; the receiving node then measures the received signal strength indicator (RSSI) value as a quality metric of the communication link. At this point, if the measured RSSI is outside the previously-defined target RSSI margin, the receiving node computes a new transmission power level using a particular TPC algorithm. Finally, the receiver node sends a control packet specifying this new power value to the transmitter node. Some of the most relevant literature in the field is analyzed below.
Xiao *et al.* \[[@b9-sensors-15-05914]\] propose two practical on-line schemes that adapt the transmit power according to the RSSI value obtained from the receiver node (piggybacked in the acknowledgment packet). Both algorithms try to maintain the RSSI at the receiver between predetermined bounds. In the conservative scheme, if the RSSI drops below the lower configured threshold, the transmit power is raised to the maximum, and if the RSSI is consistently above the configured upper threshold over the last N sample periods, the transmission level is reduced by a small fixed constant. In the aggressive scheme, the transmitter maintains a running average of the recent RSSI values using an exponential averaging computation with a pre-configured weight value. If this running average exceeds an upper threshold, the transmit level is reduced by a small constant, while if the running average is below a lower threshold, the transmit level is doubled. The algorithms were tested using MicaZ nodes with the CC2420 radio and the Toumaz Sensium Digital Plaster platform.
The results show that in a fast walking scenario, the conservative scheme preserves reliability and yet reduces energy consumption by 1.3% on average when compared to using maximum transmit power. The aggressive scheme saves 23.4% energy on average, at the expense of slightly increased loss. In the resting scenario, the energy savings under both schemes are substantial compared to using maximum power (18.6% and 25.4%). However, these schemes have some drawbacks: the power consumption cost associated with listening for the feedback packets and, on the other hand, the changes that the wireless channel can undergo between transmission and feedback and between the feedback and the next transmission.
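The conservative and aggressive loops described above can be sketched as follows. The thresholds, window length and averaging weight are illustrative placeholders rather than the values used in \[[@b9-sensors-15-05914]\], and power "levels" are treated as abstract radio indices (0 = lowest, 7 = highest), mirroring the CC2420's eight programmable settings:

```python
def conservative(level, rssi_history, lo=-85.0, hi=-70.0, n=5, max_level=7):
    """Jump to maximum power when RSSI drops below `lo`; back off by one
    step only after the last `n` samples all sit above `hi`."""
    if rssi_history[-1] < lo:
        return max_level
    if len(rssi_history) >= n and all(r > hi for r in rssi_history[-n:]):
        return max(0, level - 1)
    return level

def aggressive(level, rssi, avg, alpha=0.3, lo=-85.0, hi=-70.0, max_level=7):
    """Keep an exponential running average of RSSI; step the level down
    above `hi`, double it below `lo`."""
    avg = alpha * rssi + (1 - alpha) * avg
    if avg > hi:
        level = max(0, level - 1)
    elif avg < lo:
        level = min(max_level, max(1, level) * 2)
    return level, avg
```

In both loops the receiver would piggyback the measured RSSI on the acknowledgment, and the transmitter applies the returned level to its next packet.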
In \[[@b10-sensors-15-05914]\], the authors propose a class of adaptive power control protocols, where the period between each feedback transmission is adaptively varied between 2 and 64 s to accommodate run-time variation in the quality of each channel. The algorithm was tested using a CrossBow MicaZ platform with the CC2420 radio under two different scenarios, a subject walking and resting. The results show that the sensor node radio draws 15% and 21% less power than full-power-no-feedback in these two scenarios, respectively.
Smith *et al.* \[[@b11-sensors-15-05914]\] propose mechanisms for transmission power control based on channel prediction up to 2 s into the future. The power control methods are based on large datasets taken from ten human subjects performing every-day activities. Channel sounders with Chipcon CC2500 radios were used for collecting the dataset. The authors affirm that dynamic transmit power control using this predictor can save between 8% and 22% of the energy compared with a constant transmit power of −10 dBm.
Quwaider *et al.* \[[@b12-sensors-15-05914]\] proposed a dynamic power control mechanism, named dynamic postural position inference (DPPI), that performs adaptive body posture inference for optimal power assignments. They assume that the average RSSI values can be modeled approximately as a linear function of the transmission power. The performance of this mechanism was evaluated with the Mica2 Mote using the CC1000 radio chip. With DPPI, one of the proposed power control schemes, they can save 43%--50% of the energy for different testing persons, compared to using the maximum transmit power \[[@b13-sensors-15-05914]\]. However, in \[[@b14-sensors-15-05914]\], the authors show that the DPPI mechanism predicts the transmission power incorrectly in cases where the link state varies frequently.
Our reactive and predictive approaches for transmission power control are similar to the research of Quwaider \[[@b12-sensors-15-05914]\] in that both rely on the detection of the body position for making a decision about the optimal transmission power level; however, the proposed mechanism for posture detection is very different. In DPPI, the postural position is inferred from the RSSI measurements at the receiver during runtime, which can fail in cases where the link state varies frequently. In our approaches, the postural position detection is done through the deployed accelerometers, which are more accurate. Moreover, every expected position has been completely characterized in terms of signal reception. The idea behind our proposal is to take advantage of the hardware already being used in applications such as fall detection, abnormal movement detection and posture and human activity detection, while also providing optimization policies that optimally adjust the power transmission level.
On the other hand, from a detailed review of the literature, we can claim that, to the best of our knowledge, our predictive approach is the first to include the real influence of anthropometric and body composition parameters in the variability of RSSI over on-body channels (both factors have a big impact on the development of optimization policies, as will be shown later). Additionally, the development of our optimization policies has required the acquisition of the largest and most exhaustive collection of experimental data using human subjects in an indoor environment that has been reported so far by researchers.
3. Experimental Scenario
=========================
We followed a similar experimental methodology to \[[@b6-sensors-15-05914]\], but we extended the sample to a group of 37 people distributed between 13 women and 24 men, aged from 20 to 50 years. This large population sample presents different physical characteristics associated with gender and age. Thus, from the onset of puberty to menopause, women maintain a greater body fat mass percentage than men, despite smaller energy intake per kg lean mass \[[@b15-sensors-15-05914]\], and evidence indicates that estrogens contribute to the gender differences in fat mass and the gestational changes in body composition \[[@b16-sensors-15-05914]\]. Moreover, we should not forget that with age, body fat increases and fat-free mass decreases because of the loss of skeletal muscle; hence, the mean body fat of a 20-year-old man weighing 80 kg is 15% compared to 29% in a 75-year-old man of the same weight \[[@b17-sensors-15-05914]\].
For every person, we took anthropometric measurements and body composition values. These parameters, shown in [Figure 1](#f1-sensors-15-05914){ref-type="fig"}, were selected because they are directly related to the locations of the nodes and the links of interest. The measurements of body composition were acquired using the Tanita tetrapolar foot-to-foot bioelectrical impedance analyzer, Model BC-601 \[[@b18-sensors-15-05914]\]. These body composition parameters include: fat mass, muscle mass, bone mass, body mass index, total body water and the levels of body fat percentage and muscle mass of each segment (right/left arm and right/left leg). Body composition and anthropometric parameters allow one to describe the human sample accurately and their expected impact on the radio transmission.
In our experiments, we use the sensor nodes, Shimmer \[[@b19-sensors-15-05914]\]. The Shimmer node is equipped with an ultra-low-power 16-bit microcontroller (TI MSP430), 10 KB of RAM and 48 KB of Flash. This platform includes an IEEE 802.15.4-compliant CC2420 transceiver, which has a sensitivity threshold of −94 dBm and eight programmable power transmission levels from 0 dBm to −25 dBm with a current consumption of 17.4 mA to 8.5 mA, respectively \[[@b20-sensors-15-05914]\]. FreeRTOS has been ported as the real-time operating system in this platform.
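The power settings quoted above translate directly into radio energy per packet. A back-of-the-envelope sketch, using the CC2420 current figures from the text but assuming a 3.0 V supply and the airtime of a 100-byte frame at the 250 kbps IEEE 802.15.4 data rate (both assumptions, not measurements from the study):

```python
# Radio energy per packet at the two extreme CC2420 power settings.
# Currents (17.4 mA at 0 dBm, 8.5 mA at -25 dBm) are from the text;
# supply voltage and frame size are illustrative assumptions.
V = 3.0                     # supply voltage (V), assumed
bits = 100 * 8              # 100-byte frame, assumed
airtime = bits / 250e3      # seconds on air at 250 kbps

for dbm, ma in [(0, 17.4), (-25, 8.5)]:
    energy_uj = V * (ma * 1e-3) * airtime * 1e6
    print(f"{dbm:>4} dBm: {energy_uj:.1f} uJ per packet")
```

Even this crude estimate shows roughly a factor of two in transmit energy between the highest and lowest settings, which is the margin the TPC policies exploit.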
The nodes were placed on the subjects' bodies, describing a star topology, with the coordinator placed at the waist (just over the navel) and the sensor nodes at the right arm (Link L1) and at the right knee (Link L2), as shown in [Figure 1](#f1-sensors-15-05914){ref-type="fig"}. We took measurements of RSSI and the packet error rate (PER); these measurements were gathered for every subject and repeated at every transmission power level available in the radio. All of the results were taken in controlled indoor conditions, below ground level, to minimize the effect of electromagnetic interference (EMI) (WiFi, 3G, solar radiation, *etc*.).
We planned two experimental scenarios to investigate the temporal variations in the quality of two different links in stationary positions:

- Scenario 1: the subject sat in a chair and performed five movements of the arms (Link 1): (1) hands on thighs, denoted as L1/P1; (2) arms crossed, L1/P2; (3) arms extended forward, L1/P3; (4) arms extended up, L1/P4; and (5) arms extended to both sides, L1/P5.
- Scenario 2: the subject sat in a chair and performed four movements of the legs (Link 2): (1) leg at a 90 degree angle with the body, L2/P1; (2) right leg crossed over the left knee, L2/P2; (3) left leg crossed over the right knee, L2/P3; and (4) leg extended forward, L2/P4.
As a result, the largest and most complete database (to the authors' knowledge) of human channel characterization was built after this experimental work. A univariate analysis of the data was used in the first, descriptive stage of the research to understand in depth the nature and meaning of each of the variables separately. Subsequently, the study was continued with a bivariate analysis to understand how two or more variables relate. Finally, a multivariate analysis was used to explain the relationships between the variables and RSSI as the link quality metric.
3.1. Statistical Analysis
--------------------------
The exploratory statistical analysis was performed using the box plot tool due to its ability to provide relevant statistical information about the input dataset. In [Table 1](#t1-sensors-15-05914){ref-type="table"}, a concise summary of the quantitative variables is shown.
The dataset of the table has been examined from three main aspects: scatter, symmetry and shape of the data distribution. From the interquartile range (IQR) of the table, a large scatter of the population sample used in the experiments is observed. Specifically, the variables with higher IQR, such as the muscle mass, the body fat mass of the arm, leg and total body fat mass, showed the most dispersive behavior. On the other hand, from Fisher\'s skewness coefficient, it is possible to affirm that for some variables, such as upper arm length, the lower arm length and the mid-upper arm circumference, the skewness is close to zero, and therefore, they respond to a symmetrical distribution. Some variables, such as bone mass and muscle mass of the arm, of the leg and total muscle mass, show an asymmetrical distribution with a long tail to the left, meaning that these have a negative skew. The other variables show asymmetrical distribution with a long tail to the right, meaning that these have a positive skew. Finally, the kurtosis coefficient quantifies whether the shape of the data distribution matches the Gaussian distribution. A positive kurtosis coefficient, as is shown in [Table 1](#t1-sensors-15-05914){ref-type="table"} for all variables, indicates that the observations present a peaked distribution, which is said to be leptokurtic.
The input variable selection is an important part in the construction of any model. In order to build a simpler, reasonable and useful model, correlation tests should be applied among all of the variables of the database, with the purpose of removing the redundancy created by the correlation among variables and, at the same time, reducing the subset of inputs.
Before running this correlation test, it is important to perform a normality test to ensure that the assumptions of a parametric test are met. According to \[[@b21-sensors-15-05914]\], for large enough sample sizes (\>30 or 40), the violation of the normality assumption should not cause major problems, and therefore, parametric procedures could be used even when the data are not normally distributed \[[@b22-sensors-15-05914]\]. Nevertheless, we prefer checking the assumption, because the validity of parametric tests, such as the correlation test, depends on it. The normality tests used in this study are the Lilliefors test, the Jarque--Bera test and the Anderson--Darling test, contained in MATLAB's Statistics Toolbox. As a result of these tests, the following four variables were identified with a non-normal distribution in the overall database: visceral fat level (VFL), body mass index (BMI), muscle mass of leg (MML) and muscle mass of arm (MMA).
Due to the variety of distributions shown by the variables, we finally decided to test the overall database, using both Pearson's correlation test and Spearman's correlation test with a statistical significance of *p* \< 0.05. The results obtained from both tests were very similar. The results from Pearson's correlation test for the variables related to Link 1 are shown in [Table 2](#t2-sensors-15-05914){ref-type="table"}. From the table, it is possible to identify three main datasets: with weak correlation (0.5 ≥ *r* \> 0), with moderate correlation (0.8 \> *r* \> 0.5) and with strong correlation (*r* ≥ 0.8).
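The correlation-based grouping can be illustrated with a small numpy sketch on synthetic data; the variable names mirror the study, but the values and the seeded correlations are invented for illustration only:

```python
import numpy as np

# Synthetic stand-ins for two arm measurements and the measured RSSI (dBm).
rng = np.random.default_rng(0)
n = 37                                   # size of the human sample
muac = rng.normal(30, 3, n)              # mid-upper arm circumference (cm)
lac = 0.8 * muac + rng.normal(0, 1, n)   # lower arm circumference (cm)
rssi = -70 - 0.4 * muac + rng.normal(0, 1.5, n)

def pearson(x, y):
    return float(np.corrcoef(x, y)[0, 1])

def spearman(x, y):
    # Spearman's rho is Pearson's r computed on the ranks (no ties here).
    rank = lambda v: np.argsort(np.argsort(v))
    return pearson(rank(x), rank(y))

def strength(r):
    # Grouping used in the text: weak, moderate, strong.
    r = abs(r)
    return "strong" if r >= 0.8 else "moderate" if r > 0.5 else "weak"

r1 = pearson(lac, muac)
r2 = pearson(muac, rssi)
print(f"LAC-MUAC:  r={r1:.2f} ({strength(r1)}), rho={spearman(lac, muac):.2f}")
print(f"MUAC-RSSI: r={r2:.2f} ({strength(r2)})")
```

Variables falling into the "strong" band against each other would be merged into a single group, exactly as done for Groups 1--3 below.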
In conclusion, the following relationships are established from the standpoint of the correlation:

- The variables lower arm circumference (LAC) and mid-upper arm circumference (MUAC) show a strong correlation between them; we call these Group 1.
- The variables body fat mass of arm (BFMA), body fat mass (BFM) and total body water (TBW) show a strong correlation between them; we call these Group 2.
- The variables bone mass (BM), muscle mass (MM) and muscle mass of arm (MMA) show a strong correlation between them; we call these Group 3.
The same statistical analysis was applied for variables related to Link 2. From the results, it is possible to identify two groups of variables strongly correlated amongst each other: Group I, which includes the muscle mass (MM), bone mass (BM) and muscle mass of leg (MML), and Group II, which includes the body fat mass of leg (BFML), body fat mass (BFM) and total body water.
Finally, from the groups of correlated variables that were found, a multivariate analysis was used for explaining the relationships between these and the RSSI as the link quality metric. For this analysis, the highest correlation with the measured RSSI value was used as the selection criteria, and from these results, a set of three and five preliminary variables for each one of the link models was selected.
In summary, from this complete experimental knowledge and understanding of the behavior of the on-body channel under various scenarios, we are able to quantitatively describe the effect of several parameters on the effectiveness of two different transmission power control schemes. These transmission policies are described and extensively evaluated in the next sections. Moreover, the results related to this experimental work can be used in further research on the provision of energy-aware transmission policies.
4. Link Quality Estimator Model Based on ANFIS
================================================
Based on our experimental study of temporal variations in the quality of on-body links, described in the previous section, we have proposed a model based on ANFIS (Adaptive Neuro-Fuzzy Inference System) to predict the link quality variations in terms of RSSI. This model has been named the link quality estimator model based on ANFIS (A-LQE). The A-LQE model involves the interaction of input parameters related to the sensor node location (upper or lower body link), the transmission power levels available in the radio, as well as the movement, shape and composition of the human body. This RSSI-prediction model is subsequently used in our proactive policy for the transmission power control.
Below, we briefly describe the A-LQE model; a more detailed description and motivation of the model can be found in \[[@b7-sensors-15-05914]\]. In this paper, we further exploit the proposed model to develop two policies, reactive and proactive, for transmission power control, which have been validated in a realistic simulation scenario. Given the complexity of finding an exact analytical formula to predict the RSSI values, and in order to build a reasonable model, we have followed a three-phase approach, which is explained next.
- Phase I, feature selection: Once the experimental data have been collected, overall 12 and 13 human body variables are available as input parameters for the models of Link 1 and Link 2, respectively. From the statistical analysis described previously in the Section 3, we selected a set of three and five preliminary variables for each one of the link models.
- Phase II, choice of the A-LQE architecture: To ensure that the selected input variables are meaningful and descriptive of the output variable (RSSI), we constructed ANFIS models for various combinations of input variables and then chose the one with the best performance (lowest RMSE). The ANFIS models were trained with 1,036 vectors of input data, collected during the experimental work. Seven hundred twenty-four vectors (70%) were randomly chosen for the training set, 156 (15%) vectors for the testing set and the remaining 156 (15%) vectors for the validation set. The generalization capability of the models is assured by the proper selection of a large training set. Furthermore, 100 epochs and a training error tolerance of 0.0001 were specified for the training process to assure the achievement of the minimum error tolerance. The performance of the networks is assured for high values of the absolute fraction of variance (*R*^2^), low values of the mean absolute error (MAE) and low values of the root mean squared error (RMSE). [Table 3](#t3-sensors-15-05914){ref-type="table"} shows the best ANFIS models obtained for both links.
For Link 1, the model includes four input parameters: transmission power (Ptx), body position (BPosition), body fat mass (BFM) and mid-upper arm circumference (MUAC). This model has five membership functions of a Gaussian type for every input variable, five rules and one linear output. For Link 2, the model includes three input parameters: transmission power (Ptx), body position (BPosition) and lower leg length (LLL). This model has two membership functions of a triangular type for every input variable, eight rules and one linear output.
- Phase III, validation: For the testing dataset, the validation of the predictive accuracy of the models is analyzed through RMSE, MAE and the average percentage error (APE). From these results, our A-LQE models show satisfactory results with a low APE of 5% and 4.6% for Link 1 and Link 2, respectively.
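The Phase II split and the Phase III accuracy metrics reduce to a few lines. The measured/predicted RSSI vectors below are toy values chosen for illustration, not results from the study:

```python
import numpy as np

def rmse(y, yhat):
    return float(np.sqrt(np.mean((y - yhat) ** 2)))

def mae(y, yhat):
    return float(np.mean(np.abs(y - yhat)))

def ape(y, yhat):
    # average percentage error, in percent
    return float(np.mean(np.abs((y - yhat) / y)) * 100)

# Phase II split: 1,036 vectors at roughly 70/15/15 (the paper reports
# 724/156/156; plain rounding gives 725/155/156).
total = 1036
n_train = round(0.70 * total)
n_test = round(0.15 * total)
n_val = total - n_train - n_test

# Toy measured vs. predicted RSSI vectors (dBm), purely illustrative.
y = np.array([-72.0, -78.0, -80.0, -75.0])
yhat = np.array([-70.0, -79.0, -83.0, -74.0])
print(rmse(y, yhat), mae(y, yhat), ape(y, yhat))
```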
5. Transmission Power Control Algorithms
=========================================
In this section, we briefly describe our approaches for transmission power control (TPC approaches) for WBSNs, which were initially introduced in \[[@b6-sensors-15-05914],[@b7-sensors-15-05914]\]. We present again the main ideas of these algorithms, as the work presented here is an experimental validation of both approaches in a realistic simulation scenario, as well as the upgrade of the Castalia simulator to support both TPC policies.
Firstly, our reactive algorithm requires that each subject/patient has been previously characterized completely with respect to the RSSI and PER metrics in all scenarios (body positions) and for all transmitted power levels of the radio. Once the communication link is correctly characterized, the computation of the optimal transmission power level is done off-line, using the experimental traces and resulting in a LUT of power levels. The control of the transmission power is done on-line by using the movement detection based on acceleration with low complexity and low overhead (see [Figure 2](#f2-sensors-15-05914){ref-type="fig"}). A detailed description of this algorithm is presented in \[[@b6-sensors-15-05914]\].
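A minimal sketch of this reactive loop, assuming a hypothetical posture classifier and placeholder LUT entries (in the real algorithm the table is computed off-line from the experimental RSSI/PER traces):

```python
# Off-line LUT: (link, posture) -> minimum transmit power (dBm) that met the
# RSSI/PER targets during characterization. The postures echo the scenarios
# of Section 3; the dBm values are placeholders, not experimental results.
POWER_LUT = {
    ("L1", "hands_on_thighs"): -15,
    ("L1", "arms_crossed"): -10,
    ("L1", "arms_up"): 0,
    ("L2", "leg_90"): -15,
    ("L2", "leg_extended"): -10,
}

def classify_posture(link, accel_xyz):
    """Stand-in for the accelerometer-based movement detection; a real node
    would threshold and fuse the acceleration axes over a time window."""
    x, y, z = accel_xyz
    if link == "L1":
        if z > 0.8:
            return "arms_up"
        return "arms_crossed" if y > 0.5 else "hands_on_thighs"
    return "leg_extended" if z > 0.5 else "leg_90"

def tx_power(link, accel_xyz, default=0):
    """On-line step: detect the posture and look up the transmit power."""
    return POWER_LUT.get((link, classify_posture(link, accel_xyz)), default)
```

The on-line cost is a classification plus a dictionary lookup, which is what makes the reactive policy cheap enough for the node's MSP430-class microcontroller.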
Our RSSI prediction-based transmission power control, introduced in \[[@b7-sensors-15-05914]\], consists of two blocks (see [Figure 3](#f3-sensors-15-05914){ref-type="fig"}): an A-LQE model for the specific link, which was explained in Section 4, that allows the prediction of the RSSI variations, and a block named the TPC block, which adjusts the transmission power to the minimum value found experimentally to assure that the RSSI value does not drop below a threshold.
From the RSSI value predicted by the A-LQE model, the TPC block gives as a result an adjusted power level. This power value corresponds to a fixed value that is assigned, according to the type of policy chosen (conservative or aggressive), within a range determined from the simulation dataset. In the TPC algorithm, the range of RSSI values is divided into three zones: (1) Zone 1, RSSI values lower than the minimum simulation threshold (Tmin = −80 dBm); (2) Zone 2, RSSI values between −80 dBm and −75 dBm; (3) Zone 3, RSSI values over −75 dBm. The number of zones can be defined by the transmission power levels available in the radio; thus, for a larger number of power levels, a larger number of zones are defined, allowing a finer granularity in the output power.
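A sketch of the zone logic, using the boundaries quoted above; the per-zone output powers are illustrative choices within a plausible range for Link 1, not the experimentally-derived assignments:

```python
# Zone logic of the TPC block: boundaries (Tmin = -80 dBm, -75 dBm) are the
# ones quoted in the text; the per-zone output powers are hypothetical.
def zone_of(rssi, tmin=-80.0, tmid=-75.0):
    if rssi < tmin:
        return 1      # Zone 1: link in danger, raise power
    if rssi <= tmid:
        return 2      # Zone 2: inside the target margin
    return 3          # Zone 3: comfortable, power can be lowered

POWER_FOR_ZONE_L1 = {1: 0, 2: -5, 3: -10}   # dBm, illustrative mapping

def tpc_block(predicted_rssi):
    """Map the RSSI predicted by the A-LQE model to a transmit power."""
    return POWER_FOR_ZONE_L1[zone_of(predicted_rssi)]

print(tpc_block(-82.5), tpc_block(-77.0), tpc_block(-70.0))
```

With a radio exposing more power levels, the dictionary simply grows to one entry per zone, giving the finer granularity mentioned above.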
According to the experimental and simulation results previously obtained, we decided to implement an aggressive policy for both links, with transmission power levels from 0 dBm to −10 dBm for Link 1 and from 0 dBm to −15 dBm for Link 2. The lowest transmission power level (−25 dBm) was not selected, because some body positions are critical from the packet-loss point of view and do not admit such power values.
6. Castalia Simulator
======================
The validation of simulators against real testbeds is not very frequent in the scientific literature, due to the considerable effort and cost of running extensive experimental campaigns. The few authors who have worked on this issue agree that current simulators are unable to model many essential characteristics of the real world, because they are based on simplified assumptions; consequently, these simulators cannot produce sufficiently reliable results for real scenarios \[[@b23-sensors-15-05914]--[@b25-sensors-15-05914]\]. Some authors attribute these discrepancies to the fact that operating-system and layer-code execution delays are not taken into account in the simulation models \[[@b26-sensors-15-05914]\].
In this context, the selection of a suitable simulator is a difficult choice. We selected Castalia as the simulation platform for our work because, among other reasons, it is an open-source simulator for WSNs, body area networks (BANs) and other networks of low-power embedded devices. It was developed by NICTA (National ICT Australia) as a framework on top of the OMNeT++ \[[@b27-sensors-15-05914]\] simulator, which has gained wide acceptance in the research community. Castalia includes an average path-loss model and a temporal-variations model for body area networks, based on real on-body measurements; additionally, it supports multiple transmission power levels \[[@b28-sensors-15-05914]\], which is fundamental for the development of our policies. However, the path-loss model included in Castalia does not capture the effects of anthropometric characteristics, movement and body position on transmission. Our experimental work has shown the importance of including such information in the path-loss model to derive efficient TPC policies. Therefore, we upgrade Castalia accordingly with our experimental results.
OMNeT++ is a discrete-event simulation environment written in C++, suited to supporting frameworks for specialized research fields. The main features offered by Castalia are: an advanced wireless channel model based on empirically measured data; an advanced radio model based on real low-power radios; extended provisions for modeling sensing and the physical process; a clock-drift model; a power consumption model with state transitions for the radio and multiple transmission power levels; and MAC and routing protocols, including IEEE 802.15.4 \[[@b28-sensors-15-05914]\]. The basic structure of Castalia is composed of nodes, the wireless channel and the physical process; these modules communicate through messages sent over the wireless channel \[[@b29-sensors-15-05914]\]. Our research work enhances the simulation capabilities provided by Castalia, extending the set of scenarios available for validating our experimental work and showing the energy-saving opportunities brought by our techniques.
6.1. Wireless Channel Model
----------------------------
In the context of body area networks, the experimental data show that the instantaneous path loss may differ very significantly over time from the average path loss. To account for these variations, the current model of Castalia computes the instantaneous path loss of a link as the sum of the average path loss and the temporal signal variation at that moment. The spatial variation of the wireless channel (the average path loss) is defined during channel initialization in the pathlossMap file, which is based on real on-body measurements. The temporal variation of the wireless channel, on the other hand, is defined in a separate file named TemporalModel.txt. To obtain the temporal component of the path loss, Castalia records the last simulated value and the time elapsed since that value was computed, and from these two numbers, a probability density function (pdf) is generated \[[@b28-sensors-15-05914]\].
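The decomposition above can be sketched in a few lines; here a zero-mean Gaussian stands in for the measured temporal-variation pdf, which is an assumption for illustration only (Castalia derives the pdf from its temporal model file).

```python
import random

def instantaneous_path_loss(avg_path_loss_db, temporal_sigma_db, rng=random):
    """Sketch of Castalia's decomposition: instantaneous path loss equals
    the average (spatial) path loss plus a temporal-variation term drawn
    from a pdf. A zero-mean Gaussian is used here as a stand-in."""
    return avg_path_loss_db + rng.gauss(0.0, temporal_sigma_db)
```

The key point is the separation of concerns: the spatial term is fixed per link at initialization, while the temporal term is resampled as the simulation advances.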
The wireless channel model in Castalia has been enhanced with the average path-loss measures derived experimentally in our work. The pathlossMap files that we generated from experimental measurements implicitly integrate human body mobility and anthropometric information, since each file corresponds to a specific posture and body type. Castalia does not offer a mobility model for BANs; this upgrade therefore facilitates the validation of our energy control policies, which must take the posture into account in order to accurately describe the energy behavior of the radio channel.
6.2. Energy Consumption Model
------------------------------
The resource manager module of Castalia is responsible for calculating the energy used per operation of the node. Castalia models the energy source as an AA battery of 18,720 Joules. Energy is linearly subtracted based on the overall power drawn and the simulation time. Modules that model hardware devices send messages to the resource manager to signal their power needs \[[@b28-sensors-15-05914]\].
Power consumption in Castalia has two components: radio consumption and baseline consumption. The default baseline consumption value is 6 mW, corresponding to the consumption of a mote when the radio is off and the microcontroller is active. The power consumption of the radio modes depends on the specific radio chip. For sensor nodes based on the CC2420 transceiver, the radio draws 62 mW in the listening or receiving state and 1.4 mW in the sleeping state, while the power consumption in the transmitting state depends on the transmission power level used.
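The linear accounting described above can be sketched as follows. The state power figures are the ones quoted in the text for a CC2420-based node; the per-state times are whatever the caller supplies.

```python
# Power figures quoted in the text for a CC2420-based node (mW)
P_BASELINE = 6.0    # microcontroller active, radio off
P_RX       = 62.0   # listening / receiving state
P_SLEEP    = 1.4    # radio sleeping state

def energy_joules(p_tx_mw, t_tx_s, t_rx_s, t_sleep_s):
    """Linear energy accounting in the spirit of Castalia's resource
    manager: energy = sum over states of (state power * time in state),
    with the baseline consumption always present."""
    total_time = t_tx_s + t_rx_s + t_sleep_s
    e_mj = (P_BASELINE * total_time
            + p_tx_mw * t_tx_s + P_RX * t_rx_s + P_SLEEP * t_sleep_s)
    return e_mj / 1000.0   # mW*s -> J
```

Since the transmission-state power grows with the selected power level, lowering the level whenever the link allows is exactly where the TPC policies recover energy.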
Differences in energy occur because the radio is on for different periods of time in each node. The parameters beacon order (BO) and superframe order (SO) define the duty cycle between the active and inactive periods. In Castalia, BO equals six and SO equals four, which yields the default 25% duty cycle of the 802.15.4 MAC protocol.
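The 25% figure follows directly from the BO/SO relation in beacon-enabled 802.15.4, where the superframe (active) duration scales as 2^SO and the beacon interval as 2^BO:

```python
def duty_cycle(beacon_order, superframe_order):
    """Fraction of each beacon interval that is active:
    2^SO / 2^BO = 2^(SO - BO)."""
    return 2.0 ** (superframe_order - beacon_order)

# Castalia defaults quoted in the text: BO = 6, SO = 4 -> 2^(4-6) = 1/4
```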
In this work, the energy model included in Castalia has been used to evaluate the impact of our transmission power control policies. We focus on the results related to the transmission term, derived under the application of our TPC policies. Moreover, the path-loss model has been closely integrated with the energy model in order to relate our results to the A-LQE approach.
6.3. Simulation Setup
----------------------
We evaluated our TPC approaches in the Castalia simulation environment, using Version 3.0 of Castalia on top of Version 4.1 of OMNeT++. A body sensor network of three nodes in a star topology was defined as our basic simulation scenario in the configuration file, omnetpp.ini, resembling the experimental scenario conducted in our previous work. The sink node always operates at the maximum transmission power level (0 dBm) to ensure that the sensor nodes receive the beacon, while the sensor nodes transmit at the power level selected by the TPC algorithm. The nodes send constant-length packets of 25 bytes at a predefined rate of 20 data packets per second to the sink node. Each simulation lasts 52 s, and results are obtained by averaging 10 simulation runs.
In Castalia, the radio collision model is configured through the InterfModel parameter, which can take three different levels: Level 0, 1 and 2. In Level 0, the simulator assumes no collisions at all. In Level 1, it considers that a collision happens if more than one node is sending data at the same time and the signal is above the signal-to-interference ratio (SIR). In Level 2, it considers the strongest signal and adds all other signals as noise; then, based on the SIR, the signal is received or ignored \[[@b28-sensors-15-05914]\]. We used Level 0 in this study in order to reproduce the experimental scenario as closely as possible, since each link was tested independently there, with no inter-link interference.
We feed the Castalia simulator with the traces of path loss calculated from the RSSI measures obtained with the Shimmer nodes in our experimental scenarios. We define different path loss map files corresponding to each human posture and each transmission power level.
Due to the large number of simulations required, we made extensive use of scripting to automatically generate the parameter description files and to collect the results from the trace files, which list, for each sensor node, the temporal evolution of the signal detected at the node location.
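A scripting layer of this kind can be sketched in a few lines of Python. The file names, directory layout and the parameter written to each file are hypothetical, meant only to illustrate generating one configuration per posture combination.

```python
import itertools
import pathlib

# Five postures per link, as in the experimental scenarios of the paper
POSTURES_L1 = ["P1", "P2", "P3", "P4", "P5"]   # postures for Link 1
POSTURES_L2 = ["P1", "P2", "P3", "P4", "P5"]   # postures for Link 2

def make_config(posture_l1, posture_l2, out_dir="runs"):
    """Write one (illustrative) parameter file for a posture combination;
    the key name and file naming scheme are assumptions."""
    path = pathlib.Path(out_dir) / f"{posture_l1}_{posture_l2}.ini"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(f"pathlossMapFile = pathloss_{posture_l1}_{posture_l2}.txt\n")
    return path

def generate_all(out_dir="runs"):
    """One configuration file per combination of Link 1 and Link 2 postures."""
    return [make_config(a, b, out_dir)
            for a, b in itertools.product(POSTURES_L1, POSTURES_L2)]
```

Generating the files programmatically keeps the posture-specific path-loss maps and the simulation runs in one-to-one correspondence, which is what makes averaging over many runs practical.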
The simulation parameters for the nodes are: the CC2420 radio file with the default values provided by the simulator; the 802.15.4 MAC with two transmission attempts; the temporal-variation model, to recreate the dynamics of the path-loss fluctuations; and the no-collision model. [Table 4](#t4-sensors-15-05914){ref-type="table"} shows the threshold values and other simulation parameters.
6.4. Simulation Validation
---------------------------
As previously mentioned, the Castalia simulator was extended for our simulations with the traces of path loss calculated from the RSSI measures obtained with the Shimmer nodes in our experimental scenarios. Different path-loss map files, corresponding to each human posture and each transmission power level, were included in Castalia. Integrating the experimental database in Castalia ensures that the RSSI behavior produced by the simulator is closer to reality than when the default Castalia path-loss file is used, as shown in the case study of [Figure 4](#f4-sensors-15-05914){ref-type="fig"} (a1 and a2 in comparison with b1 and b2).
7. Simulation Results
======================
In this section, the simulation results obtained with the implementation of our TPC policies in Castalia are presented. In particular, five experimental human subjects were considered to show how our TPC policies perform. We tested all possible combinations of movements that simultaneously combine the positions of Scenarios 1 and 2 described previously in Section 3.
We use two metrics to evaluate the results obtained with both algorithms: the energy savings in data transmission and the packet error rate (PER). These metrics are recorded at the end of each simulation experiment for each node.
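Both metrics are simple ratios; a minimal sketch of how they are computed from the per-node counters:

```python
def packet_error_rate(packets_sent, packets_received):
    """PER: fraction of transmitted data packets that were lost."""
    return (packets_sent - packets_received) / packets_sent

def energy_savings(energy_tpc, energy_max_power):
    """Relative saving in transmission energy of a TPC policy with respect
    to always transmitting at the maximum power level (0 dBm)."""
    return 1.0 - energy_tpc / energy_max_power
```

For example, 944 packets received out of 1000 sent gives a PER of 5.6%, matching the order of magnitude of the results reported below.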
7.1. Reactive Algorithm
------------------------
The reactive algorithm computes the optimal transmission power levels for each posture and each subject using the previously collected experimental dataset; we will call this algorithm "optimal reactive". On the other hand, we are also interested in testing how the optimal transmission power levels chosen for one human subject, A, perform when applied to another human subject, B, that is, in analyzing the generalization capabilities of the policy; we will call this algorithm "not optimal reactive".
Both algorithms are evaluated on two cases selected from all of the simulated posture sequences: the best case and a complex case study previously presented in \[[@b7-sensors-15-05914]\]. The best case refers to the sequence that showed the greatest energy savings; typically, this sequence includes postures for which Link 2 has a direct line of sight (L2P1, L2P2) and therefore allows one to play with a wide range of power values. The case study is the combination of the following postures: L1/P1 + L2/P4, L1/P2 + L2/P3, L1/P3 + L2/P1, L1/P4 + L2/P2 and L1/P5 + L2/P1 (see the description of each one in Section 3).
[Table 5](#t5-sensors-15-05914){ref-type="table"} shows the results for the best case under the "optimal" and "not optimal" reactive policies. For the "not optimal" algorithm, data from an extra human subject, outside the five test subjects, were chosen randomly. All of the values presented in the table are averages over the five subjects during a full sequence of five postures. As can be seen, the optimal algorithm achieves energy savings of 29% with an average PER of 5.6% and 1.4% for the sensor nodes of Link 1 and Link 2, respectively. For the "not optimal" case, the average PER is similar to the previous case, 5.5% and 1.4%, respectively, but the energy savings are reduced by almost 6% with respect to the "optimal" case. These results show that the reactive policy achieves good energy savings only for previously characterized subjects; it therefore lacks generalization capabilities and requires a characterization and tuning phase per user.
7.2. Predictive Algorithm
--------------------------
[Figure 5](#f5-sensors-15-05914){ref-type="fig"} shows the results for energy savings and PER in the case study (the complex sequence of movements) for each human subject. From the energy-savings point of view, the reactive algorithm shows the best results for all subjects. However, the difference from the results given by the predictive algorithm is relatively small: 5% for the most significant case (Subject 2) and only 2% for the others. With respect to the PER, for both algorithms it is always lower than 6% for all subjects.
[Table 6](#t6-sensors-15-05914){ref-type="table"} shows the average simulation results for the five subjects previously mentioned. In summary, the reactive algorithm achieves average energy savings only 2% higher than the predictive algorithm, while the predictive algorithm shows an average PER only 1.4% lower for Link 1 and 0.1% lower for Link 2 compared to the reactive algorithm. Therefore, both algorithms provide similar benefits in terms of energy savings and PER, but with respect to generalization capabilities, the predictive algorithm can be applied to a broader set of subjects with similar performance.
7.3. Specific Case for a Human Subject
---------------------------------------
[Figure 6](#f6-sensors-15-05914){ref-type="fig"} shows the selected transmission power and the associated RSSI for a specific human subject performing a sequence of 12 postures under different power control schemes. It can be observed that both algorithms, reactive and predictive, respond appropriately by increasing the transmission power level when the RSSI value drops below the safety threshold (red line at −80 dBm) and decreasing it when the RSSI rises above the upper safety threshold (red line at −75 dBm). The range of RSSI values is divided into three zones delimited by the red-line thresholds, −80 dBm being the minimum RSSI value required to maintain link reliability. The aim of the TPC policy is to keep the RSSI within the safety region defined by these thresholds with minimal energy consumption. The choice of these thresholds is based on the results obtained in our experimental characterization.
All schemes exhibit considerable fluctuation in their transmission power, since the channel conditions vary quite rapidly. When the maximum transmission power level (0 dBm) is used, the RSSI for both links shows a large variation, whereas when the TPC schemes are used, the RSSI maintains a relatively stable level (which translates into an improvement of the QoS of the communication link). Link 1 is the most critical case, because even at the maximum transmission level, the RSSI value can fall below −85 dBm for some postures.
The reactive scheme shows 5.5% and 3.22% average packet loss and energy savings of 9% and 30.4% for Link 1 and Link 2, respectively, in comparison with the maximum transmission power level. The predictive scheme shows 5.1% and 2.5% average packet loss and energy savings of 9.5% and 30.3% for Link 1 and Link 2, respectively, in comparison with the maximum transmission power level. In conclusion, both power control schemes have a similar behavior, and both provide significant energy savings by adjusting the transmission power level to the channel requirements.
7.4. Results Discussion
------------------------
The reactive algorithm has proven to be an efficient mechanism to adapt the transmission to the changing channel requirements while saving precious energy. It computes the optimal transmission power level based on the PER values measured in an exhaustive characterization of each patient, performed for every link, every posture and every power level of the radio. Therefore, any new user of the system must undergo this characterization phase, which can be a tedious and time-consuming process for both the patient and the technical staff. The situation can be even more complicated for patients with chronic conditions or reduced mobility.
The predictive algorithm, on the other hand, has shown similar performance and energy savings in adapting the transmission to the changing channel, but with the enormous benefit of avoiding the per-user characterization phase. Although building the A-LQE model of the predictive algorithm was a very time-consuming task for us, as collecting the database used for its training required a great effort, this task was carried out only once and allowed us to set up a path-loss model for a mobile wireless BAN channel. The advantage of the predictive algorithm is that it can be applied to any patient without a new or additional characterization, as our work has shown. This extensive work was finalized by upgrading the simulation capabilities of Castalia with a new path-loss model that enables the proactive setting of the transmission power level.
Both algorithms show acceptable PER values, lower than 6%, and although the energy savings obtained by the predictive strategy are slightly lower than those of the reactive one, they still exceed 22%. These results could be further improved by tuning the safety RSSI thresholds defined for the TPC block. The results obtained are of great interest in terms of battery autonomy and user satisfaction and can be implemented at low cost and penalty, as they exploit the multiple transmission power levels provided by current radio chips.
8. Future Work
================
As future work, we plan to explore the solution space by running simulations with varying values of parameters such as the duty cycle, data rate, packet size and collision model. This analysis would allow evaluating the real impact of these parameters on the performance of the TPC policies. Furthermore, we are interested in evaluating other approaches based on expert systems for the implementation of the TPC block of the predictive scheme, with the aim of improving the adaptation of the transmission power to the channel requirements.
The authors would like to thank COLCIENCIAS (Departamento Administrativo de Ciencia, Tecnología e Innovación de Colombia) and Universidad Nacional de Colombia for their support in the development of this work. This work is partially funded by the Spanish Ministry of Economy and Competitiveness under contract TEC2012-33892.
All authors collaborated to carry out the work presented here. M.V. developed the experimental work, codified the Castalia modifications, derived the ANFIS model, analysed some results, implemented the reactive and proactive policies and wrote some sections of the article; J.R. envisioned the reactive and proactive policies and contributed to the correct functioning of the experimental set-up; J.L.A. designed the experimental work, analysed the acquired results and drafted the article, revising it critically. M.V., J.R. and J.L.A. reviewed and edited the manuscript. All authors have read and approved the manuscript.
The authors declare no conflict of interest.
{#f1-sensors-15-05914}
{#f2-sensors-15-05914}
{#f3-sensors-15-05914}
{#f4-sensors-15-05914}
{#f5-sensors-15-05914}
{#f6-sensors-15-05914}
######
Descriptive statistics of anthropometric and body composition variables.
**Variable** **Minimum** **Q1** **Q2** **Q3** **Maximum** **IQR** **Skewness** **Kurtosis**
----------------------------------------- ------------- -------- -------- -------- ------------- --------- -------------- --------------
Upper Arm Length (UAL) (cm) 28 31 33 35 38 4 0.09 2.18
Lower Arm Length (LAL) (cm) 22 25 26 27 30 2 0.06 2.67
Upper Leg Length (ULL) (cm) 44 48 53 54 63 6 0.27 2.74
Lower Leg Length (LLL) (cm) 37 40.5 43 45 51 4.5 0.33 2.27
Mid-upper Arm Circumference (MUAC) (cm) 23 28 30 32 37 4 0.04 2.62
Lower Arm Circumference (LAC) (cm) 22 25 27 28 33 3 0.42 2.98
Thigh Circumference (TC) (cm) 35 39 41 44 50 5 0.25 2.97
Muscle Mass (MM) (kg) 34 44.5 57.2 61.6 77.8 17 −0.18 2.10
Bone Mass (BM) (kg) 1.9 2.4 3 3.22 4 0.8 −0.20 2.02
Body Fat Mass (BFM) (%) 5.8 16.6 23.9 29.7 42.2 13 0.095 2.67
Body Fat Mass of Arm (BFMA) (%) 5.1 14.9 20.4 30.2 45.4 15.3 0.302 2.33
Body Fat Mass of Leg (BFML) (%) 6.2 13.4 19.6 30.6 45.3 17.1 0.434 2.12
Muscle Mass of Arm (MMA) (kg) 1.4 2.1 3.2 3.7 4.5 1.6 −0.38 1.89
Muscle Mass of Leg (MML) (kg) 6 7.6 10.2 11.1 13.6 3.5 −0.27 1.97
Body Mass Index (BMI) (kg/m^2^) 19.7 22.2 23.8 28.5 33.1 6.3 0.514 2.12
Total Body Water (TBW) (%) 43.5 51.7 55.4 59.5 67.6 7.7 0.115 2.59
Visceral Fat Level (VFL) 1 1.75 4 7 13 5.2 0.807 2.65
######
Correlations between anthropometric and body composition variables:  weak correlation;  moderate correlation;  strong correlation.
**UAL** **MUAC** **LAL** **LAC** **BFM** **BFMA** **MM** **BM** **TBW** **BMI** **VFL**
---------- --------- ---------- --------- --------- --------- ---------- -------- -------- --------- --------- ---------
**MUAC** 0.25
0.13
**LAL** 0.77 0.23
0.00 0.18
**LAC** 0.56 0.80 0.49
0.00 0.00 0.00
**BFM** −0.34 −0.00 −0.22 −0.13
0.04 1.00 0.20 0.45
**BFMA** −0.37 −0.16 −0.22 −0.24 0.96
0.02 0.34 0.20 0.16 0.00
**MM** 0.64 0.63 0.43 0.77 −0.45 −0.55
0.00 0.00 0.01 0.00 0.00 0.00
**BM** 0.63 0.63 0.42 0.77 −0.45 −0.54 1.00
0.00 0.00 0.01 0.00 0.00 0.00 0.00
**TBW** 0.23 −0.11 0.09 −0.02 0.97 −0.92 0.30 0.29
0.18 0.52 0.61 0.92 0.00 0.00 0.07 0.08
**BMI** 0.12 0.54 0.07 0.50 0.30 0.15 0.38 0.36 −0.35
0.49 0.00 0.70 0.00 0.07 0.39 0.02 0.03 0.03
**VFL** 0.02 0.69 0.06 0.54 0.40 0.26 0.45 0.45 −0.53 0.65
0.91 0.00 0.72 0.00 0.01 0.12 0.00 0.01 0.00 0.00
**MMA** 0.59 0.64 0.40 0.75 −0.54 −0.65 0.98 0.98 0.40 0.31 0.40
0.00 0.00 0.01 0.00 0.00 0.00 0.00 0.00 0.01 0.06 0.02
######
Link quality estimator models based on ANFIS (Adaptive Neuro-Fuzzy Inference System) (A-LQE) for both links with the training dataset.
**Link** **Inputs Name** **RMSE** **R2** **MAE**
---------- -------------------------- ---------- -------- ---------
L1 Ptx /BPosition /BFM/MUAC 5.38 0.84 4.21
L2 Ptx/BPosition/LLL 4.66 0.85 3.6
######
Simulation parameters.
**Parameter Name** **Simulation Time** **Radio Model** **Channel Frequency** **MAC Protocol** **Packet Size** **Number Nodes** **Sink Node** **Application**
-------------------- --------------------- ----------------- ----------------------- ------------------ ----------------- ------------------ --------------- -----------------
**Value** 52 s CC2420 2.4 GHz MAC 802.15.4 25 bytes 3 Node 0 Throughput test
######
Simulation results for the reactive algorithm in the best case. PER, packet error rate. Transmission Power Levels (TPL), node 1 (n1) and node 2 (n2).
**Reactive** **TPLn1** **TPL n2** **RSSI n1** **RSSI n2** **Energy Saving** **PER n1** **PER n2**
-------------- ----------- ------------ ------------- ------------- ------------------- ------------ ------------
Optimal −5 dBm −15 dBm −78 dBm −70 dBm 29% 5.6% 1.4%
Not Optimal −2 dBm −15 dBm −74 dBm −70 dBm 23% 5.5% 1.4%
######
Simulation results for the case study. Transmission Power Levels (TPL), node 1 (n1) and node 2 (n2).
**Algorithm** **TPL n1** **TPL n2** **RSSI n1** **RSSI n2** **Energy Saving** **PER n1** **PER n2**
------------------ ------------ ------------ ------------- ------------- ------------------- ------------ ------------
Reactive Optimal −5 dBm −10 dBm −71 dBm −74 dBm 24% 5.1% 3.3%
Predictive −3 dBm −11 dBm −71 dBm −73 dBm 22% 3.7% 3.2%
Good characterization creates sentient beings who start to have minds of their own. It’s good for the author to listen, for they may have ideas that actually make the story better, as long as these ideas work with the whole.
Early on in my novel, I had a protagonist decide he’s a musician (so music became the passion that supported the theme). Later, two very minor characters designed the colours and atmosphere of their own house, which affected their son, a major supporting character to the protagonist, and still later, this same supporting character defined his attitude toward the protagonist through a character arc that led to a better, more complete ending I never could have drafted in the beginning.
“You never have to change anything you got up in the middle of the night to write.” ~Saul Bellow
A writing colleague had two very minor characters waltz in and demand bigger roles. Their unique (and in one case, eccentric) personalities, magnetism, backstory, and purpose greatly enhanced the story and strengthened relationships with the major characters. The author was quite worried about this surprising–and forceful–push for the change. But the characters played nice with the story arc and themes (though she gave them a map), and they fit in perfectly. The results were spectacular.
“Story structure is about plight, not plot.” ~Michael Dellert
I have written on this topic before:
- Characters Don’t Listen to Us (“Hey, that’s not what I had in mind for you!”)
- Pantsing and Planning Can Be a Sliding Scale (An analysis with story examples.)
Related post from the blog series Snippets of An Author’s Life
- Annoying Things My Book Does To Me (With the short video “Tired Tom,” to which many a writer can relate!)
Today’s anecdotal post was inspired by K.M. Weiland’s recent post, Are You Being Too Much of a Control Freak About Your Characters?, which offers practical advice about this all too familiar phenomenon. | https://evablaskovic.com/2017/01/27/when-characters-become-sentient/ |
To study the secrets of a massive, bright blue star, astronomers at the University of California's Berkeley Space Sciences Laboratory dusted off an old technique -- aperture masking interferometry -- and applied it to one of the world's shiniest telescopes, the Keck 1 at Mauna Kea, Hawaii.
More than 100 years old, the technique masks a telescope's aperture to form a series of separate circular areas over the secondary mirror -- in effect, a series of separate telescopes. Although the method throws away most of the light the telescope gathers, the array of telescopes increases the number of images that can be simultaneously recorded, processed and combined into a single, detailed picture with the same resolution as the main mirror.
Peter G. Tuthill and Space Sciences colleagues John D. Monnier and William C. Danchi used Code V software from Optical Research Associates in Pasadena, Calif., to determine the geometry of their aluminum mask, which covered more than 90 percent of the mirror and created 36 areas corresponding to each of the telescope's segments. Each unmasked area on the 10-m mirror was only 35 cm across.
Once the mask was bolted into place, the astronomers pointed the telescope to Wolf-Rayet 104, a massive star 4800 light-years from Earth and beyond the limits of conventional observation. They recorded light waves reaching the mirror with a near-infrared camera, and the 36 images were added together using Fourier techniques to make a single image that eliminated much of the distortion from the Earth's atmosphere.
The resulting image, the first clear resolution of a dusty spiral surrounding the brilliant star, surprised the astronomers. They expected the star's radiant ultraviolet energy would burn up dust as it formed. Instead, the dust remained and trailed out from the star, forming a pinwheel tail. | https://www.photonics.com/Articles/Aperture_Masking_Resolves_Bright_Star/a4450 |
The liver is not only the largest organ in the human body but also one of the most critical for keeping the body healthy and in top shape. The liver relies on enzymes and stores energy in the form of glucose, as well as minerals and iron. It works hard at synthesizing proteins, in addition to metabolizing toxins, processing worn-out red blood cells, creating bile, keeping your cholesterol and hormone levels balanced, and acting as your defense team by killing off germs after they have been processed in the intestines.
As anyone can tell, the liver is a vital part of our body, and it is essential to take the right natural supplements for liver support so that it keeps functioning correctly.
We are often asked to take on projects that may be outside our comfort zone, particularly when those projects involve new technology. A substantial part of the process in any big digital project is finding tools that everyone can use together to get information, collaborate, and see the big picture as the project develops. Starting last year, Villanova University School of Law engaged in a complete website redesign while at the same time creating an institutional repository with Digital Commons, with the law school library taking charge of both projects. In this session we’ll share our experience with project management and review some of the tools we used to facilitate all phases of our digital projects.
Topics covered will include making detailed web analytics easy to read and understand, sharing information and ideas using cloud-based storage solutions, visualizing and delegating project goals with mind-mapping software, and using screencasts and other tools to create manuals and tutorials. These technologies helped us to get on the same page even when we were miles apart. In addition to these tools, we’ll discuss how we approached departments both within and outside the law school, how we balanced input and competing visions of the projects and how we built upon that information to create a roadmap that everyone could follow and contribute to, ensuring that everyone stayed on the right path to meet our goals. | http://conference.cali.org/2013/sessions/how-steer-20-hands-wheel-tech-tools-guide-and-stimulate-collaboration |
Nuclear Magnetic Resonance (NMR) spectroscopy is, together with liquid chromatography-mass spectrometry (LC-MS), the most established platform to perform metabolomics. In contrast to LC-MS however, NMR data is predominantly being processed with commercial software. Meanwhile its data processing remains tedious and dependent on user interventions. As a follow-up to speaq, a previously released workflow for NMR spectral alignment and quantitation, we present speaq 2.0. This completely revised framework to automatically analyze 1D NMR spectra uses wavelets to efficiently summarize the raw spectra with minimal information loss or user interaction. The tool offers a fast and easy workflow that starts with the common approach of peak-picking, followed by grouping, thus avoiding the binning step. This yields a matrix consisting of features, samples and peak values that can be conveniently processed either by using included multivariate statistical functions or by using many other recently developed methods for NMR data analysis. speaq 2.0 facilitates robust and high-throughput metabolomics based on 1D NMR but is also compatible with other NMR frameworks or complementary LC-MS workflows. The methods are benchmarked using a simulated dataset and two publicly available datasets. speaq 2.0 is distributed through the existing speaq R package to provide a complete solution for NMR data processing. The package and the code for the presented case studies are freely available on CRAN (https://cran.r-project.org/package=speaq) and GitHub (https://github.com/beirnaert/speaq).
Copyright: © 2018 Beirnaert et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: The validation datasets are originally available through the University of Copenhagen (http://www.models.life.ku.dk/). All datasets in R format can be found on GitHub (https://github.com/beirnaert/speaq) and (partially, due to package size preferences) on CRAN (https://cran.r-project.org/package=speaq). All code is also available in both these repositories.
Funding: This research is supported by a GOA (Geconcentreerde Onderzoeksactie) from the University of Antwerp. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
1D NMR spectroscopy has been a popular platform since the early days of metabolomics. Although less sensitive than the complementary and more common LC-MS technology, NMR has its merits. For one, it is an unparalleled technique for determining the structure of unknown metabolic compounds. Furthermore, because it is a non-destructive technique, samples can be reanalyzed later or used in a different spectroscopic analysis such as mass spectrometry. Also, an NMR spectroscopy experiment requires little sample preparation compared to LC-MS, so little unwanted extra variability is introduced. Finally, the results of an NMR analysis are less dependent on the operator and instrument used. All these factors make 1D NMR spectroscopy a technique with relatively high reproducibility and rather minimal researcher bias. There are, however, also drawbacks associated with the technique. First, the aforementioned low sensitivity is an important issue, as the dynamic range in real biological samples surpasses the NMR detection range. This is particularly problematic when the goal is to identify an unknown metabolite with a low concentration.
To get the best of both worlds, combining large-scale LC-MS analysis with NMR spectroscopy has been presented as an option to yield valuable novel insights into metabolic pathways and biomarkers [2–4]. From a data processing perspective, this combination is not trivial. The data analysis of NMR is not as automated as that of LC-MS, which can rely on open-source solutions like XCMS. Most NMR data analyses are still performed with commercial software. While the reproducibility of the NMR experimental techniques is high, the data analysis still requires a substantial degree of user intervention. This can introduce bias and lower research reproducibility, meaning that the data analysis cannot easily be replicated by others. The absence of standardized and automated NMR metabolomics workflows is the main culprit, despite recent progress. See Table 1 for an overview of freely available NMR software. Not all of these NMR analysis tools are applicable to every research setup. Some serve only specific purposes, like BATMAN, for example, which is aimed at obtaining concentration estimates for known metabolites from the raw spectra. However, many untargeted experiments search not only for known metabolites but also for unknown ones. These experiments require tools that can process large amounts of data in a scalable way.
Table 1. An overview of open source NMR data processing solutions.
A typical workflow for NMR spectral analysis consists of several pre-processing steps, such as baseline correction, raw spectral alignment, spectra summarization and grouping. This is then followed by statistical analysis. The spectra summarization step and the alignment/grouping step are the most time consuming. Spectra summarization is the transformation of all the experimental measurement points into a small number of features, which are more suited for automated analysis. Multiple spectra summarization techniques exist, each with their own advantages and drawbacks. The specific method that is chosen can result in user-introduced bias and low reproducibility. This is the case for the most commonly used summarization approach: the so-called binning or bucketing method. This method was introduced to compensate for small spectral shifts between samples. It vastly reduces the number of measurement points to a limited number of variables (the bins) in a single, automated step. There are, however, major drawbacks to this method that have a profound influence on the results. In particular, it is not straightforward to define the boundaries of the bins in crowded spectra. Automating this process may lead to splitting up small but relevant peaks. Manually checking the bins, on the other hand, is extremely tedious, and tweaking the boundaries can forfeit any attempt at reproducibility. Several methods have been proposed to tackle the bin boundary issue [21–23], but this is not the only concern. Loss of information is intrinsically linked to the binning approach, as entire bins are simply summed together.
At the end of an analysis based on the binning approach, when several bins are found to be interesting, it is still necessary to go back to the raw spectra to manually check the intervals. This is necessary to find the ppm values of the actual peaks of interest, which can then be used to query a database like HMDB. This introduces yet another point where user intervention is required, which slows down the whole process and hampers the use of an automated workflow.
In this paper, we present the speaq 2.0 method. The underlying core paradigm is to efficiently summarize spectra with little user interaction, high speed and, most importantly, little loss of information, whilst greatly reducing the dimensions of the data. The overall aim, however, is not to construct yet another all-encompassing package for NMR analysis, but rather to construct a method that can complement established tools for NMR data analysis like MetaboAnalyst by improving performance, analysis quality and reproducibility. This is achieved by improving the quality of the peak lists, which are the starting points for MetaboAnalyst or muma-R. By automating the important peak picking step in the NMR analysis workflow, less researcher bias is introduced, thereby greatly improving reproducibility. The automation potential of the package makes it suitable for the fast analysis of NMR spectra in a way that is very comparable to how LC-MS spectra are analyzed. In the future, this method could be effective for high-throughput analyses in which LC-MS and NMR data are combined to achieve better results. Nonetheless, a complete standalone analysis pipeline is presented with a focus on user-friendliness, so that non-expert users can also work with open-source tools instead of black-box proprietary software.
The basic proposition of speaq 2.0 is to use wavelets to summarize the peaks within the spectra. By working with the peak data instead of the raw spectra or binned spectra a great reduction in data size can be achieved without a large loss of information. The Mexican hat wavelet is used to mathematically represent the peaks with only a few values instead of the tens or hundreds of raw data points describing the peak in the original spectrum. Besides the data reduction, the additional advantage of using wavelets is that the need for baseline correction and smoothing is eliminated with no loss of sensitivity or increase in false positives [25, 26].
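For reference, the Mexican hat (Ricker) wavelet has a standard closed form; the wavelet coefficient C(a, b) quantifies how well the wavelet at scale a and position b matches the spectrum s(t). These are standard textbook definitions, not taken from the speaq source:

```latex
% Unit-scale Mexican hat (Ricker) wavelet:
\psi(t) = \frac{2}{\sqrt{3}\,\pi^{1/4}} \left(1 - t^2\right) e^{-t^2/2}

% Scaled and translated versions used in the continuous wavelet transform,
% and the resulting coefficient for scale a and position b:
\psi_{a,b}(t) = \frac{1}{\sqrt{a}}\, \psi\!\left(\frac{t-b}{a}\right),
\qquad
C(a,b) = \int s(t)\, \psi_{a,b}(t)\, dt
```

A peak of width comparable to a at position b yields a large C(a, b), which is why the coefficient can serve as an area-like summary of the peak.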
The NMR data analysis workflow of speaq 2.0 is depicted in Fig 1. Spectra serve as input, then peak picking with wavelets is applied to transform the spectra to peak data. These peaks are then grouped into features with the grouping algorithm and peak filling is applied to fix missing values. The resulting matrix of features and samples is then used in statistical analysis. This approach is intrinsically different from the one available in the original speaq (v1.0—1.2.3), which was centered around the concept of aligning spectra with the CluPA algorithm. The BW-quantitation function of speaq v1.0—1.2.3 allows users to find regions that are different between two groups, such as case and control. However, speaq v1.0—1.2.3 does not support the case of multiple groups or ratio variables. Also, it requires a binning step to obtain a feature matrix to be used in pattern mining or statistical approaches such as PCA. The new speaq 2.0 methods allow multiple groups and circumvent the need for binning. Overall, speaq 2.0 offers a novel way of processing NMR data, with the option to use the classic spectral alignment techniques as an intermediary step. The entire package is designed to easily and quickly build a reproducible workflow to obtain the peaks that are of interest to the experiment. Although the approach is very different to the one available in speaq v1.0—1.2.3, the choice was made to integrate the functionality of both speaq v1.0—1.2.3 and speaq 2.0 in one single package. The main benefit is that the methods are fully compatible and it allows existing speaq users to easily extend their workflows as needed. The individual steps of the speaq 2.0 approach are described in more detail in the following section.
Fig 1. Possible workflows of speaq 2.0.
The newly presented methods are standalone (full black arrows) or can be used together with the CluPA alignment algorithm and BW quantification method that were made available in the first speaq implementation (v 1.0—1.2.3) (dashed arrows). It is still possible to perform an analysis based on raw spectra alone, as per the classic speaq (v 1.0—1.2.3) analysis. With the new methods, raw data is converted to peaks, and every peak is summarized with ppm location and width, intensity and SNR. These peaks are subsequently grouped and optionally peak filled (missed peaks in samples are specifically searched for). The resulting data is converted to a feature matrix that contains intensities for each peak and sample combination. This matrix can then be used in statistical analysis with built-in or external methods.
The input to the workflow consists of spectra in the intensity (y-axis) vs ppm (x-axis) format. This means that the free induction decay (FID) signal coming from the NMR spectrometer has to be converted to spectra by using the Fourier transform. In addition, before peak picking, the spectra can be aligned with the included CluPA algorithm (the core of speaq v1.0—1.2.3). Note that it is also possible to analyse spectra that have already been aligned with other methods like icoshift . However, depending on the algorithm used, aligning raw spectra can result in the distortion of small peaks .
Peak detection: From spectra to features via wavelets.
The Mexican hat wavelet is used to perform the peak detection. It is a suitable wavelet because it resembles a peak by being symmetrical and containing only one positive maximum. This peak detection method was inspired by the CluPA alignment algorithm, where wavelets are used to find landmark peaks to aid in the alignment. The interaction with the wavelets relies on the MassSpecWavelet R package, which performs the actual peak detection as per the method outlined by Du et al. A spectral segment (an intensity vector) containing a peak is converted to wavelet space by changing the scale and position of the mother wavelet and obtaining the wavelet coefficient for each combination of scale and position. The wavelet coefficient can be seen as a metric for how well the wavelet matches the peak. The entire intensity vector is converted into a wavelet coefficient matrix. This wavelet coefficient matrix is where the actual peak detection takes place. A ridge (a line of high values) will appear in the matrix where the position of the wavelet matches the position of a peak in the spectrum. The height of the ridge is not constant, as it varies with the scale of the wavelet. At the point where the scale best matches the width of the peak in the spectrum, the ridge will feature a local maximum. The problem of peak finding is thus shifted to finding ridges and finding the maximum of each ridge. See Du et al. for more details. The result is a peak detection that is sensitive to both low and high intensity peaks and insensitive to background noise (as noise will not produce a noticeable ridge).
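speaq 2.0 itself delegates this computation to the MassSpecWavelet R package; purely as an illustration of the same CWT ridge-line idea, SciPy's `find_peaks_cwt` also slides a Mexican hat wavelet over a range of scales and reports positions where a ridge with a clear maximum appears. The synthetic spectrum below is invented for the demonstration:

```python
import numpy as np
from scipy.signal import find_peaks_cwt

# Synthetic "spectrum": two Gaussian peaks of different widths on a noisy baseline.
x = np.arange(1000)
spectrum = (
    5.0 * np.exp(-((x - 300) ** 2) / (2 * 8 ** 2))    # narrow peak near index 300
    + 3.0 * np.exp(-((x - 700) ** 2) / (2 * 20 ** 2))  # broader peak near index 700
)
rng = np.random.default_rng(0)
spectrum += rng.normal(scale=0.05, size=x.size)

# CWT-based peak picking: try Mexican hat wavelets over a range of scales and
# keep positions where a ridge line with a pronounced maximum appears.
peak_idx = find_peaks_cwt(spectrum, widths=np.arange(4, 40))
print(sorted(peak_idx))
```

Note how the same call recovers both the narrow and the broad peak without any baseline correction or smoothing, which mirrors the advantage described above.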
Although the default parameters of the peak picking approach work for most NMR experiments, different parameters can be set according to user preferences: for example, the baseline intensity threshold (to focus on higher peaks only; the default is 1000), the signal-to-noise ratio threshold and the scales to be used for the wavelets. Note that the default parameters are set up for untransformed spectra; when spectra are scaled to a maximum intensity of 100 or 1, different settings may be more appropriate. All datasets were analysed with the default speaq 2.0 values.
After the peak detection, the spectra (intensity vs ppm data) are converted to a dataset of peakIndex and peakValue values. Note that this peakValue vs peakIndex dataset has a substantially lower dimension than the original data. The peakIndex is directly linked to the ppm value. The peakValue is related to the wavelet coefficient that describes the peak. This wavelet coefficient approximates the area under the peak curve and is used throughout the analysis. Since peak height is of interest for some NMR data analysis pipelines, the option to work with peak heights has been made available in the peak picking function.
The peaks resulting from the wavelet peak detection are not perfectly aligned, since no two peaks are exactly the same and different spectra can be shifted relative to each other. These shifts can be caused by differences in sample environment (pH, solvent, etc.) or differences in experimental conditions (temperature, magnetic field homogeneity). However, the aim is to arrive at a feature dataset, whereby a feature is defined as a group of peaks with at most one peak per sample belonging to that feature. This means the peaks have to be grouped, with a single index value describing the group center. To group the NMR peaks we can make optimal use of the results of the wavelet based peak detection step. Not only ppm values but also signal-to-noise ratio and sample values can provide information to aid in the grouping. The hierarchical clustering based algorithm developed for grouping does not require a reference sample, as it divides the samples into groups based on the Gower coefficient. The merit of the Gower distance is that variables of different units (here ppm and intensity) can safely be used together. It is calculated by normalizing each variable to a value between 0 and 1. The distance between two data points is then the sum of the distances for each variable. As such, the Gower distance can be seen as a Manhattan distance on normalized data. The grouping algorithm's pseudocode is displayed in Fig 2. A more detailed description can be found in S1 Appendix.
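The Gower distance for numeric variables can be sketched in a few lines. This is a simplified, numeric-only illustration of the distance itself (speaq's actual grouping feeds this distance into hierarchical clustering, as described in S1 Appendix); the example peak values are invented:

```python
import numpy as np

def gower_distance(X):
    """Pairwise Gower distance for numeric variables.

    Each column is range-normalized to [0, 1]; the distance between two
    rows is then the mean absolute difference over the columns, i.e. a
    Manhattan distance on normalized data.
    """
    X = np.asarray(X, dtype=float)
    rng = X.max(axis=0) - X.min(axis=0)
    rng[rng == 0] = 1.0                      # avoid division by zero for constant columns
    Z = (X - X.min(axis=0)) / rng
    n = Z.shape[0]
    D = np.zeros((n, n))
    for i in range(n):
        D[i] = np.abs(Z - Z[i]).mean(axis=1)
    return D

# Peaks described by (ppm position, intensity); despite the wildly different
# units, the normalization makes both variables contribute comparably.
peaks = [[1.20, 5000.0], [1.21, 4800.0], [3.50, 120.0]]
D = gower_distance(peaks)
print(np.round(D, 3))
```

The first two peaks, close in both ppm and intensity, end up with a small mutual distance, while the third peak is far from both, which is exactly the behavior the grouping step relies on.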
Fig 2. Grouping algorithm pseudocode.
Note that this method is designed to process data that is sufficiently well aligned. If this is not the case the method will most likely underperform because of the larger overlap between peaks. Nonetheless the method even works on data with non-trivial shifts between samples as is the case in the wine benchmark dataset . Extremely shifted spectra can be aligned with existing methods such as CluPA , prior to peak detection.
The purpose of peak filling is to detect peaks that may have been missed in the first round of peak detection. To illustrate this problem, consider a scenario in which the user sets a high intensity threshold for peak detection. The feature matrix will then be composed of features that correspond to locations with high peaks. If certain samples have low peaks in this region, the peak filling step can be used to find these peaks, because peak filling works without an intensity threshold. This reduces the amount of missing data in the feature matrix. In another scenario, peaks can be deliberately deleted if the grouping algorithm detects two peaks from the same sample in the same group. If such a peak actually belonged to a different group, it can be recovered with the peak filling step. For each feature, the peak filling algorithm will specifically search the raw data for peaks of missing samples. A small section of the missing sample's spectrum is used to perform the peak detection. This section is 512 measurement points long (small) or 1024 (large), as this greatly speeds up the computation of the Fourier transform used by the MassSpecWavelet package. A more refined wavelet search is performed in this region, starting from the average group values for peak location and width. If a peak is found, it has to be within a set distance from the group. The default is 10 measurement points, which is approximately between 0.001 ppm and 0.01 ppm, depending on the NMR instrument settings. This distance is kept small, as otherwise distant peaks could be assigned to the group. If no peak is found, it is still regarded as a missing value and can be imputed later. The end result is more peaks, which in turn yields a more robust statistical analysis afterwards, as fewer missing values have to be imputed.
Following peak filling, the data can now be represented in the form of a matrix with samples for rows, features (peak groups) for columns and peak values in each matrix cell. Each of these peak values corresponds to the size of the original peaks as quantified by the wavelets. A huge number of techniques for univariate and multivariate statistics (e.g. PCA, PLS-DA) and machine learning (e.g. SVM, random forest) can be applied to this data matrix. Most of these methods are made available through different R packages which can be found on CRAN, Bioconductor or Github. The output provided by speaq is compatible with the majority of these methods, as most of these allow to submit a data matrix and a response vector (class labels) as input. A selection of methods useful for statistical analysis have been directly integrated into the speaq 2.0 framework: a tool to perform scalings, transformations and imputations and a differential analysis method.
Before statistical analysis methods like PCA can be used, the missing values in the data have to be imputed. This step is often done in tandem with the desired scaling method, since otherwise data can be artificially created. For example, imputing zeros followed by z-scaling is not the same as z-scaling followed by imputing zeros: the latter actually corresponds to imputing mean values. For all benchmark datasets, zeros (the default) were used for imputing missing peak values in the data matrices, as a zero indicates an absent peak. Other methods are also available, for example kNN imputation and random forest based imputation.
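The order-dependence of imputation and scaling is easy to verify numerically. The toy feature below is invented for the demonstration:

```python
import numpy as np

# One feature measured in four samples, with one missing peak (NaN).
x = np.array([10.0, 12.0, 14.0, np.nan])

# Option A: impute zero first, then z-scale over all four values.
a = np.where(np.isnan(x), 0.0, x)
a = (a - a.mean()) / a.std()

# Option B: z-scale over the observed values only, then impute zero.
mu, sd = np.nanmean(x), np.nanstd(x)
b = (x - mu) / sd
b = np.where(np.isnan(b), 0.0, b)

print(np.round(a, 3))  # the imputed sample becomes a strong negative outlier
print(np.round(b, 3))  # the imputed sample sits exactly at the mean (z = 0)
```

In option B the zero lands at the standardized mean, so scaling-then-imputing zeros is equivalent to mean imputation, which is exactly the pitfall described above.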
A differential analysis method based on linear models is available in speaq 2.0. This function provides a way of identifying significant features with (adjusted) p-values. Specifically, for each feature j = 1, …, K, consisting of the peak values yi,j of samples i = 1, …, N, a linear model of the form

yi,j = β0,j + βj xi + εi    (2)

is constructed, with x the response vector (N elements; for example class membership, a bioassay result, etc.), yj the jth feature vector and ε the vector of errors εi. Now for each βj we can test whether there is a significant relationship between feature yj and x by testing the hypothesis that βj = 0 (two-tailed t-test). The K p-values can be used to find peaks significantly associated with the response vector. Several multiple testing corrections are available within the speaq 2.0 framework. While the default is Benjamini-Hochberg, for the purposes of this study, the stringent Bonferroni correction was applied to all reported p-values. Note that in the case of only two classes, this method is equivalent to the t-test.
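The per-feature linear model and slope test can be illustrated with simulated data (the dataset below is invented; `scipy.stats.linregress` reports exactly this two-tailed p-value for the hypothesis that the slope is zero):

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(1)
n_samples, n_features = 20, 5
x = np.repeat([0.0, 1.0], n_samples // 2)      # response vector, e.g. control vs case

# Feature matrix: only feature 0 is truly associated with the response.
Y = rng.normal(size=(n_samples, n_features))
Y[:, 0] += 3.0 * x

# For each feature j, fit y_j = b0 + b1 * x and test H0: b1 = 0 (two-tailed t-test).
pvals = np.array([linregress(x, Y[:, j]).pvalue for j in range(n_features)])

# Bonferroni correction, as used for the reported p-values in this study.
pvals_bonf = np.minimum(pvals * n_features, 1.0)
print(np.round(pvals_bonf, 4))
```

With a binary response such as this one, the slope test reduces to a two-sample t-test, matching the equivalence noted above.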
After statistical analysis, the relevant peaks can be matched with the molecules responsible for them. Several databases with NMR metabolomics data are available. One of the more user-friendly ones is the Human Metabolome Database (HMDB), as it allows searching for compounds by providing the ppm values of the peaks of interest. To obtain the metabolites for the onion intake in mice data, the latest version of HMDB (3.6) was used. It is, however, not optimal to submit all significant peaks in a single query to this database. The reason is that HMDB works by matching the queried peaks to the database and then sorting the matched molecules according to their Jaccard index. For two sets, the Jaccard index is the number of common elements (the intersection) divided by all the elements (the union), or in this specific case the number of matched peaks divided by all peaks in the query. Thus, when submitting all peaks at once, we risk not finding the correct metabolite, as adding peaks from molecule B while trying to identify molecule A will dilute the results. To reduce this effect, a correlation analysis can be performed to indicate which peaks belong together. The underlying assumption is that NMR spectral peaks originating from the same molecule exhibit similar behavior over all samples. Therefore, peaks that correlate strongly with each other are most likely to come from the same metabolite. The speaq 2.0 output format is compatible with the R functionality for correlation analysis. The correlation matrix is visualized with the corrplot R package, which clusters the peaks according to their correlation. The number of clusters is chosen by the user, between one and the total number of peaks. The correlation within each cluster is affected by the chosen number of clusters, and the user is responsible for choosing this number and evaluating the corresponding performance.
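The dilution effect of querying mixed peak lists can be made concrete with a small sketch of the matching score described above (the ppm values and tolerance are invented; HMDB's real scoring works on its own curated peak lists):

```python
def jaccard_index(query_ppms, db_ppms, tol=0.02):
    """Fraction of query peaks matched against a reference peak list.

    A query peak counts as matched if it lies within `tol` ppm of some
    database peak; the score is matched peaks / all peaks in the query,
    mirroring the matching score described for HMDB.
    """
    matched = sum(any(abs(q - d) <= tol for d in db_ppms) for q in query_ppms)
    return matched / len(query_ppms)

# Peak list of a hypothetical metabolite A.
metabolite_a = [1.33, 2.10, 3.75]
# A pure query, and the same query diluted with two peaks from an unrelated molecule B.
query_pure  = [1.33, 2.11, 3.74]
query_mixed = [1.33, 2.11, 3.74, 5.20, 6.80]

print(jaccard_index(query_pure, metabolite_a))   # 1.0
print(jaccard_index(query_mixed, metabolite_a))  # 0.6 -- diluted by B's peaks
```

The drop from 1.0 to 0.6 is exactly why correlated peak clusters should be submitted separately rather than querying all significant peaks at once.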
After the correlation analysis step, the ppm values of peaks in a correlated cluster can be submitted directly to HMDB via a built in speaq 2.0 function (HMDBsearchR, note this will open a webpage). This produces a list of metabolites ordered by Jaccard index. It is up to the user to determine which Jaccard index is to be considered high enough.
To validate the presented approach three datasets are analyzed: one simulated dataset for which the ground truth is known and two publicly available datasets which have been analyzed in published studies.
The wine dataset by Larsen et al. consists of 1H NMR spectroscopy data of 40 table wines (red, white & rosé). The focus of Larsen et al. was not to investigate differences between wines of different colour and origin, but merely to evaluate how pre-processing methods like alignment and interval selection can aid in chemometrics and quantitative NMR analysis . Wine is a good example for evaluating alignment algorithms because of the often substantial differences in pH, which can cause large shifts in the NMR spectra. Because of this property, the wine dataset has been used to evaluate the performance of several alignment algorithms, like COW , icoshift and CluPA .
The simulated dataset is constructed by combining the 1H NMR spectra of two metabolites, namely 3-Hydroxyphenylacetic acid (HMDB0000440) and 3,4-Dihydroxybenzeneacetic acid (HMDB0001336). The NMR spectra of both metabolites can be downloaded from the Human Metabolome Database . The dataset consists of 20 simulated spectra that are combined in such a way as to include variation that is comparable to the most common between-sample variation found in NMR spectra. Most notably, there is variation in peak height, peak location, and peak composition. The variation in peak composition is caused by both metabolites having peaks at almost identical locations. This results in two sources of variation in peak location namely, the variation introduced by the random shift left or right and by the mixing factor that describes the weight of each metabolite in each spectrum. See S2 Appendix for more details about how this dataset was generated.
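The variation scheme described above (random peak heights, shifts and mixing factors over two overlapping metabolite spectra) can be sketched as follows. The peak positions and ranges here are hypothetical stand-ins; the actual generation procedure is detailed in S2 Appendix:

```python
import numpy as np

rng = np.random.default_rng(7)

def gaussian_peaks(ppm_axis, centers, heights, width=0.005):
    """Build a simple 1H spectrum as a sum of Gaussian peaks."""
    s = np.zeros_like(ppm_axis)
    for c, h in zip(centers, heights):
        s += h * np.exp(-((ppm_axis - c) ** 2) / (2 * width ** 2))
    return s

ppm = np.linspace(6.4, 7.4, 2000)
# Hypothetical peak lists standing in for the two metabolites' aromatic regions;
# note the near-coinciding peaks, which cause the peak-composition variation.
met_A = gaussian_peaks(ppm, [6.60, 6.75, 7.20], [1.0, 0.8, 0.6])
met_B = gaussian_peaks(ppm, [6.61, 6.90, 7.21], [0.7, 1.0, 0.5])

spectra = []
for _ in range(20):
    mix = rng.uniform(0.2, 0.8)        # weight of metabolite A in this sample
    scale = rng.uniform(0.8, 1.2)      # overall peak-height variation
    shift = rng.integers(-3, 4)        # small random shift, in measurement points
    s = scale * (mix * met_A + (1 - mix) * met_B)
    spectra.append(np.roll(s, shift))
spectra = np.array(spectra)
print(spectra.shape)
```

Because both the random shift and the per-sample mixing factor move the apparent peak positions, the resulting spectra exhibit the two sources of location variation mentioned above.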
The onion intake in mice dataset originates from a nutri-metabolomics study by Winning et al. The objective of the study was to search for onion intake biomarkers. The underlying idea was that if their workflow could identify biomarkers for onion intake, it could also be used to locate biomarkers in other studies. 32 rats were divided into 4 categories, each receiving a specific onion diet: control (0% onion), 3% onion residue, 7% onion extract and 10% onion by-product. Urine samples were collected during 24 hours and analyzed with proton NMR spectroscopy to characterize the metabolome of the different onion-fed animals. More details can be found in the original study.
Both the wine and onion datasets were made available by the University of Copenhagen at http://www.models.life.ku.dk/.
The first public validation dataset concerns the NMR spectra of table wines. This dataset has often been used to evaluate algorithms that align raw spectra. The new speaq 2.0 workflow, which transforms spectra to peaks that are then grouped together, can also be used to process this dataset.
The peak based approach for data reduction.
By using the speaq 2.0 peak picking method, followed by grouping and peak filling, the size of the data is greatly reduced. This happens in multiple steps. First, peak detection is applied to the raw spectra to convert the large raw measurement data matrix of 40 samples by 8712 measurements to a smaller matrix of 6768 peaks by 6 columns of values describing the peaks. The data reduction after this step does not seem overwhelmingly large. However, it is important to realize that this is only a reduction in redundant information, accompanied by little loss of information thanks to the wavelets. Furthermore, most of the correlation between consecutive measurement points in the spectral data is removed. Next, the peaks are grouped, resulting in the same dataset as the peak data, but now each peak has been assigned to a group. Such a group consists of a collection of peaks with at most one peak per sample. This grouped peak data can now be represented as a matrix, with groups as columns, samples as rows, and peak intensities as the matrix elements. The true data reduction becomes apparent now, as there are only 207 peak groups, which correspond to the features used in further analysis. The original matrix of 40 by 8712 is thus converted to a matrix of 40 by 207.
From feature matrix to locating differences between spectra.
We can locate those features that are associated with wine type. Before any analysis, the data matrix is Pareto scaled and centered. The first step in a multivariate analysis is often principal component analysis (PCA). The results show that there is a clear difference between red wines on one side and white and rosé wines on the other (S1 Fig). However, the differential analysis method incorporated into speaq 2.0 can be used to investigate the specific features that differ between the red and white wine classes. The result of the differential analysis is a series of p-values, one for every feature, which indicate how useful each feature is in building a linear model that can discriminate between the two wine classes. The p-values are displayed in Fig 3, along with the raw spectra and grouped peak data for one of the 33 significant features. When looking at the spectra that correspond to these features, the difference between red and white wines is obvious. However, manually searching the original spectra for these difference regions would be extremely tedious and time consuming. With speaq 2.0, this entire process takes about 3 minutes with 1 CPU and a mere 50 seconds with 4 CPUs (2.5 GHz machine).
Fig 3. Visualization of Bonferroni corrected p-values.
Numerous features have a corrected p-value below the significance threshold of 0.05 indicating that there is a significant difference between red and white wine. An example of a significant feature (indicated with the red diamond) is represented on the right with its raw spectra (top), the data after peak detection (middle) and the data after grouping (bottom).
Comparing peak grouping to raw spectral alignment.
The new speaq 2.0 approach differs from spectral alignment algorithms, such as CluPA and icoshift [15, 27], but the final results should correspond to each other: grouped peaks should correspond to aligned peaks in the spectrum. By comparing the results from each, we can study the cases where the peak based method performs better, equal or worse compared to the raw spectra based methods. The performance of both types of spectra processing methods (peaks vs raw spectra) is dependent on the content and specifics of the spectra. Most notably the number of peaks and the shifts between sample spectra (caused by pH differences etc.) largely dictate how well these methods will perform. Generally, if the peak shifts between samples are less than the distance between consecutive peak groups, all methods perform as expected. An example of this can be found in Fig 4. Beyond these ideal cases, several alignment or grouping mistakes can occur. The following illustrates a small portion of issues that can arise when processing 1D NMR spectra.
Many peaks lie in a small region, causing overlap, and there is no clear indication as to which peaks correspond to each other. A clear example of this situation is depicted in S2 Fig. The speaq 2.0 grouping method based on peaks performs similarly to the other methods and in some cases even provides superior grouping solutions (although there is no real way to say which of the smaller peaks belong together). The reason is that it sees all peaks and tries to group them locally. This is in contrast to the CluPA algorithm from the original speaq, which considers only the landmark peaks: it aligns the highest peaks in this crowded region, but the spectra are clearly overshifted. The icoshift algorithm provides a better solution than CluPA in this case, but the results remain suboptimal.
A single sample shows unique behavior compared to all other samples. An example of such a situation is depicted in S3 Fig. In this case no method performs as it should and every method introduces errors or artifacts. It is however important to note that such unique cases will usually not show up in the final statistical analysis since these analyses often focus on general group differences and are robust against outlier samples.
The shift between samples is larger than the difference between two adjacent peaks in a non-crowded peak region. An example is shown in S4 Fig. Both raw spectra approaches (CluPA and icoshift) align the spectra as expected. The speaq based approach initially groups some peaks incorrectly. However, this wrong grouping can be detected with a built-in function of the speaq package that calculates the silhouette values for each group (see S3 Appendix for definition and implementation). Groups that are flagged as having bad silhouette values are regrouped. After this step, the results correspond to those of the correctly aligned spectra.
Fig 4. Raw spectral alignment methods and peak based grouping methods perform equally.
When the peak shifts between samples (caused by pH differences etc.) are less than the distance between adjacent peaks, all methods perform as expected. The raw spectra based methods (CluPA from the speaq v1.0—1.2.3 and icoshift) mitigate the differences in peak shifts and the peak based method groups the peaks accordingly.
The lowest performance is obtained when simply binning the raw shifted spectra. This is to be expected as it is more likely that peaks will be split over multiple bins because of the large shifts.
By aligning the spectra before the binning step the performance increases. Although the spectral alignment methods have occasional artifacts, many peaks in the spectra are in fact aligned correctly, thereby aiding in reducing the splitting of peaks over multiple bins. Nonetheless, this approach is still hampered by the binning step.
The peak-based method of speaq 2.0 performs best on the simulated case vs control dataset. This is predominantly due to the different way of obtaining features. With the binning approach, features consist not only of the peak signal but also of the adjacent signals, which are often background. This background effectively reduces the difference between two samples, as the background signal is often similar in scale. With the peak picking approach of speaq 2.0, only the peak itself is used to describe a feature. This results in greater differences between samples, which in turn makes it easier for the statistical approaches to spot the relevant differences.
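The dilution effect of binning can be seen in a toy calculation (the numbers are invented purely for illustration): when a peak doubles in height between two samples but the feature is the sum over a whole bin, the shared background pulls the apparent fold change well below the true factor of two.

```python
# Two samples share a flat background; the peak is twice as tall in sample B.
background = [1.0] * 9
sample_a = background[:4] + [5.0] + background[4:]    # peak height 5, 10 points total
sample_b = background[:4] + [10.0] + background[4:]   # peak height 10, 10 points total

# Binned feature: sum over the whole 10-point bin (peak plus background).
bin_a, bin_b = sum(sample_a), sum(sample_b)           # 14.0 and 19.0
# Peak-based feature: the peak intensity alone.
peak_a, peak_b = max(sample_a), max(sample_b)         # 5.0 and 10.0

fold_binned = bin_b / bin_a    # about 1.36: the background dilutes the contrast
fold_peak = peak_b / peak_a    # 2.0: the true peak-level difference
```

The peak-only feature preserves the full between-sample contrast, which is the effect described above.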
Fig 5. Performance comparison workflow.
The default way of processing 1D NMR spectra is illustrated on the left. The case vs. control spectra are aligned and are then binned to produce features which can be used in statistical analysis. Note that the spectral alignment step can be skipped as the binning approach can correct for small shifts. This default processing approach is compared to our method shown on the right. The aim of both methods is to point the user to the peaks/intervals that discriminate between the two groups.
Fig 6. Performance comparison with ROC and P-R curves on a simulated dataset.
Binning raw unaligned spectra results in the worst performance. The two alignment tools (CluPA and icoshift) improve performance compared to no alignment but are still hampered by the binning step. The new speaq 2.0 workflow achieves the highest performance on both the ROC and the P-R curve.
With this validation dataset, we compare the results of the new speaq 2.0 workflow with those obtained by Winning et al. The dataset contains 4 groups of mice with increasing percentages of onion in their diet (0, 3, 7 and 10%).
Towards a small and usable data matrix.
The original analysis by Winning et al. used binning to process the spectra. Here we use the new speaq 2.0 workflow to convert the raw NMR spectra (S5 Fig) to peaks (S6 Fig). Next, the peaks are grouped, peak-filled and converted to a feature matrix. The dimensions of this feature matrix are 31 samples by 677 features, a substantial reduction from the original spectra matrix (31 samples by 29001 measurement points). This feature matrix is the input for the following statistical analysis.
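The overall data transformation (per-sample peak lists into a samples-by-features matrix) can be sketched as follows. speaq 2.0 itself performs wavelet-based peak detection and hierarchical-clustering-based grouping in R; the Python sketch below substitutes a simplified greedy 1-D grouping and uses zero as a stand-in for the peak-filling step, so it illustrates only the shape of the pipeline. All names and the tolerance value are ours.

```python
def build_feature_matrix(peak_lists, tol=0.02):
    """peak_lists maps sample -> [(ppm, intensity), ...].
    Returns group centres and a sample-by-feature intensity matrix."""
    # Pool all peaks from all samples and sort by position.
    pooled = sorted((ppm, inten, s) for s, peaks in peak_lists.items()
                    for ppm, inten in peaks)
    # Greedy 1-D grouping: start a new group when the gap exceeds `tol`.
    groups = []
    for ppm, inten, s in pooled:
        if groups and ppm - groups[-1][-1][0] <= tol:
            groups[-1].append((ppm, inten, s))
        else:
            groups.append([(ppm, inten, s)])
    # One feature per group; samples without a peak get 0.0 as a stand-in
    # for peak filling. (If a sample has several peaks in one group, the
    # last one wins here; speaq handles that case more carefully.)
    centers = [sum(p[0] for p in g) / len(g) for g in groups]
    matrix = {s: [] for s in peak_lists}
    for g in groups:
        per_sample = {p[2]: p[1] for p in g}
        for s in matrix:
            matrix[s].append(per_sample.get(s, 0.0))
    return centers, matrix
```

On a toy input with two samples and two well-separated peak positions, this yields a 2-sample by 2-feature matrix, mirroring in miniature the 31 x 677 matrix described above.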
No group trend is found by PCA.
In line with the original analysis by Winning et al., a principal component analysis (PCA) is performed. The feature data matrix is Pareto scaled and centered. The results of the PCA, presented as a score plot in Fig 7, are analogous to those of the original analysis: there are no obvious and consistent group trends that follow increases in onion intake.
Fig 7. PCA analysis of onion mice data.
The onion mice data matrix is Pareto scaled and centered. No clear trends following the onion intake percentage are present in the PCA results, which matches the results of Winning et al.
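Pareto scaling sits between no scaling and unit-variance autoscaling: each column is mean-centred and divided by the square root of its standard deviation, so large peaks are down-weighted but still retain more influence than noise. A minimal sketch of the scaling step (not the R implementation used in the analysis) is:

```python
import math

def pareto_scale(matrix):
    """Mean-centre each column and divide by the square root of its
    sample standard deviation (Pareto scaling)."""
    n = len(matrix)
    scaled = [row[:] for row in matrix]
    for j in range(len(matrix[0])):
        col = [row[j] for row in matrix]
        mean = sum(col) / n
        sd = math.sqrt(sum((x - mean) ** 2 for x in col) / (n - 1))
        for i in range(n):
            scaled[i][j] = (col[i] - mean) / math.sqrt(sd) if sd > 0 else 0.0
    return scaled
```

A column with standard deviation 1 is left at unit scale, while a column with standard deviation 10 is shrunk by a factor of sqrt(10) rather than 10, which is what distinguishes Pareto scaling from autoscaling.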
Locating possible biomarkers with ease.
From this point onwards the merit of the wavelet-based analysis becomes more obvious. Winning et al. resort to interval partial least squares (iPLS) and interval extended canonical variate analysis (iECVA). After careful cross-validation, these methods yield intervals that have to be checked manually for interesting peaks. The new speaq 2.0 workflow allows a quicker and more straightforward analysis. The constructed feature matrix is processed with the differential analysis method. In this case a numerical relationship exists between the groups (the percentage of onion in the diet), which is directly supported by the new speaq 2.0 differential analysis based on linear models. Each feature is assigned a Bonferroni-corrected p-value indicating how well it corresponds to the increasing onion concentration. The distribution of uncorrected p-values is depicted in S7 Fig. The corrected p-values are shown in Fig 8, along with an excerpt of one of the significant peaks. In total, 9 peaks were found to be significant. These 9 significant peaks can be used to search HMDB for possible biomarkers related to onion intake.
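Conceptually, each feature is regressed on the onion percentage and the slope is tested against zero, after which the p-values are Bonferroni-corrected. The sketch below stops at the t statistic, since converting it to a p-value requires a t-distribution CDF (available in R or scipy, not the Python standard library); the Bonferroni step itself is exact. Names and the toy data are ours.

```python
import math

def slope_t_statistic(x, y):
    """t statistic for the slope of a simple linear regression of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    rss = sum((yi - (intercept + slope * xi)) ** 2 for xi, yi in zip(x, y))
    se = math.sqrt(rss / (n - 2) / sxx)        # standard error of the slope
    return slope / se if se > 0 else float("inf")

def bonferroni(p_values, alpha=0.05):
    """Multiply each p-value by the number of tests (capped at 1) and
    report which features stay below alpha."""
    m = len(p_values)
    corrected = [min(1.0, p * m) for p in p_values]
    return corrected, [i for i, p in enumerate(corrected) if p <= alpha]
```

A feature whose intensity rises almost linearly with the 0, 3, 7 and 10% diets produces a large t statistic, while the Bonferroni correction guards against the hundreds of features tested simultaneously.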
Fig 8. Differential analysis results of onion intake in mice data.
(Left) the Bonferroni corrected p-values for the features resulting from the differential analysis and (right) one of the features with a significant p-value (indicated with the blue diamond on the left image): (top) raw spectra, (middle) data after peak detection and (bottom) data after grouping.
Merely submitting all peak ppm values to HMDB will not produce the correct outcome, as HMDB expects all peaks to correspond to a single metabolite. To avoid submitting peaks from multiple metabolites to an HMDB search, a correlation-based clustering step is performed on the highly significant peaks. The result from this clustering, based on peak intensity correlations, is displayed in Fig 9. The significant peaks are grouped into 5 clusters, where the minimal Pearson correlation between any two peaks in the same cluster is higher than 0.75. These peak clusters are used to search HMDB (tolerance ± 0.02), by submitting the ppm values of the peak groups within a cluster. When submitting the cluster of 4 peaks, the top hit is 3-hydroxyphenylacetic acid (HMDB00440) with a Jaccard index of 4/9. This molecule is also identified in the original paper as a biomarker for onion intake. However, in the original paper this is done only by looking at a small region around 6.8 ppm, as compared to the speaq 2.0 analysis which yields peaks in multiple ppm regions that can be used for identification. The peak with index 18662 can actually also be assigned to 3-hydroxyphenylacetic acid (raising the Jaccard index to 5/9 upon also submitting this peak to HMDB). When the cluster that only contains peak 19723, with corresponding ppm value of 3.1558, is submitted to HMDB the top hits are dimethyl sulfone and 9-methyluric acid, both with a Jaccard index of 1/1. These results match those from the original paper where dimethyl sulfone (HMDB04983) is identified as a biomarker for onion intake. Raw spectra of the main peaks for both biomarkers are shown in S8 Fig.
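The two numeric devices used here, the requirement that the minimal within-cluster Pearson correlation exceed 0.75 and the Jaccard index reported for HMDB hits, can be sketched as follows. The clustering is a greedy stand-in for the correlation-based clustering described above, and the tolerance-based Jaccard computation is our assumption of how such an index is obtained; neither reproduces the exact HMDB or speaq internals.

```python
import math

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

def cluster_peaks(intensities, r_min=0.75):
    """Greedy clustering: a peak joins a cluster only if it correlates above
    r_min with every member, so the minimal within-cluster correlation
    stays above the threshold."""
    clusters = []
    for peak, vec in intensities.items():
        for c in clusters:
            if all(pearson(vec, intensities[m]) > r_min for m in c):
                c.append(peak)
                break
        else:
            clusters.append([peak])
    return clusters

def jaccard_index(submitted, reference, tol=0.02):
    """Jaccard index between a submitted peak list and a reference
    metabolite's peak list, matching ppm values within `tol`.
    Returns (intersection size, union size)."""
    matched = sum(1 for s in submitted if any(abs(s - r) <= tol for r in reference))
    union = len(reference) + len(submitted) - matched
    return matched, union
```

Submitting 4 peaks that all match a 9-peak reference compound gives an index of 4/9, the value reported above for 3-hydroxyphenylacetic acid.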
Fig 9. Correlation analysis of significant peaks.
The significant peaks, indicated by their peakIndex value, are clustered based on their Pearson correlation. The group of four peaks corresponds to the 3-hydroxyphenylacetic acid biomarker; peak 19723 corresponds to the dimethyl sulfone biomarker. Both biomarkers are also identified in the original analysis, but with only one peak for the first biomarker.
The other peaks cannot be identified by querying HMDB. The peak with index 19510 is partly absorbed into the background. When the correlation procedure is run on the entire dataset, the peak with index 23648 ends up in a cluster with non-significant peaks that HMDB assigns to ethanol. As HMDB does not assign the 23648 peak itself to ethanol, this may indicate that it is a derivative or byproduct of ethanol. The peak with index 19752 is in fact a peak in the tail of the large peak of one of the identified biomarkers, dimethyl sulfone. Its significance is an artifact of the wavelet-based peak detection, which treats the tail of the large dimethyl sulfone peak as the baseline for the small peak. When the dimethyl sulfone peak is larger, the baseline for the small peak is also higher and the small peak therefore diminishes. This is also why this peak is anti-correlated with the dimethyl sulfone peak.
The MetaboAnalyst platform is widely used for the analysis of metabolomics data. Processing NMR data is also possible, provided the data are supplied as a peak list or as binned data. Since Winning et al. already used the binning approach, we compare the results of MetaboAnalyst when peak data are supplied. This means the grouping step is performed in MetaboAnalyst, allowing a comparison with the speaq 2.0 grouping method. See Fig 10 for a visual overview of which steps of the workflow differ. The grouping method in MetaboAnalyst uses a moving window to group peaks together. According to the documentation, the window is 0.03 ppm wide and moves in steps of 0.015 ppm. If more than one peak per sample is detected in a single group, the intensities of these peaks are summed. After pre-processing with MetaboAnalyst, the data matrix (245 features) is extracted and processed with the differential analysis function. Again, each column in the MetaboAnalyst matrix is assigned a (Benjamini-Hochberg corrected) p-value indicating how well the feature corresponds to the increasing onion diet. The results are presented in Table 2. For every highly significant feature in the MetaboAnalyst data, there is at least one highly significant feature from the speaq 2.0 analysis. The main difference between the methods is the lower resolution of MetaboAnalyst, as it sums close peaks together. This effectively removes a source of information contained in the data, since multiplets can aid in the identification of compounds.
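The documented window parameters (0.03 ppm width, 0.015 ppm step, summing of same-sample peaks) can be sketched as below. The actual MetaboAnalyst source may differ in details such as window anchoring and the handling of overlapping windows; this is simply one reading of the documented behavior, with our own names throughout.

```python
def window_group(peak_lists, width=0.03, step=0.015):
    """Group peaks with a moving window of `width` ppm advancing in `step`
    ppm increments; peaks of the same sample inside one window are summed
    into a single feature."""
    pooled = sorted((ppm, inten, s) for s, peaks in peak_lists.items()
                    for ppm, inten in peaks)
    if not pooled:
        return []
    features, used = [], set()
    pos, end = pooled[0][0], pooled[-1][0]
    while pos <= end + step:
        inside = [i for i, (ppm, _, _) in enumerate(pooled)
                  if i not in used and pos <= ppm < pos + width]
        if inside:
            per_sample = {}
            for i in inside:
                ppm, inten, s = pooled[i]
                per_sample[s] = per_sample.get(s, 0.0) + inten  # sum within window
                used.add(i)
            features.append((pos + width / 2, per_sample))      # window centre + values
        pos += step
    return features
```

Two peaks of the same sample at 1.000 and 1.010 ppm fall into one 0.03 ppm window and are summed into a single feature value, which is exactly the resolution loss discussed above: the multiplet structure is no longer visible in the feature matrix.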
Fig 10. Workflow for comparing the results of MetaboAnalyst with speaq 2.0.
Table 2. Comparison between MetaboAnalyst and speaq 2.0 for the onion intake in mice dataset.
We present an easy way of converting 1D NMR spectra (or other 1D spectra) to peak data by using wavelets for peak detection. This wavelet-based method performs better than binning or other spectrum-summarizing methods, as the dimension of the dataset is greatly reduced with little to no loss of information, while requiring no user intervention. After the wavelet-based step the peaks are grouped via a hierarchical clustering method. These groups of peaks are called features. The features can easily be analyzed with a myriad of statistical techniques or data mining approaches. Our method has been implemented in an entirely new version of the existing speaq R package, which previously offered the CluPA algorithm for aligning spectra. The package now provides a complete solution for easy 1D NMR data analysis without the need for binning. Each step in the workflow is available as a single function, so analysis pipelines can be constructed easily and with little additional user interaction, fostering improved research reproducibility and shareability.
Besides the possibility to perform a complete standalone analysis, our method can also be used in tandem with other commonly used tools that rely on summarized spectra. Specifically, it can be used to quickly and efficiently produce a high-quality peak list. Such a peak list is the starting point of an analysis with, for example, the often-used MetaboAnalyst.
The data processed in this article came in a matrix format with ppm values and intensities. Other proprietary software or open-source frameworks are thus needed if only the raw NMR free induction decay (FID) signal is available and conversion to the frequency domain is required. Optional pre-processing steps on the raw FID signal, such as zero-filling, apodization and phase-shifting, have to be performed before employing speaq 2.0, if they are desired. These pre-processing steps are on the road-map for future developments.
We expect the introduced method to be especially useful for processing NMR spectra from large cross-platform experiments that combine NMR and LC-MS. Software packages like XCMS are often used to process LC-MS data. These open-source packages employ the same standard paradigm of peak picking, grouping, etc., so the integration of data or results should be facilitated by this framework. The method also has merit on its own, as clearly demonstrated in the case of the onion intake in mice data. The analysis is fast, sensitive to both small and large peaks, and user-independent. Moreover, compared to the work presented by Winning et al., our analysis required less user interaction and yields more peaks that can be used to identify the possible biomarkers, resulting in improved confidence in the results.
The user-friendliness of speaq 2.0 should also allow people with little experience in R to use the package. Also, it can serve as an attractive option for researchers interested in switching from closed, proprietary software to open-source, especially if the goal is to speed up analysis, improve reproducibility and increase control over workflows and algorithms.
speaq 2.0 is distributed through the existing speaq R package to provide a complete solution for NMR data processing. The package and the code for the presented case studies are freely available on CRAN (https://cran.r-project.org/package=speaq) and GitHub (https://github.com/beirnaert/speaq). Future directions will aim to provide compatibility with the open source nmrML (http://www.nmrml.org) format and to improve on the identification part by combining our approach with Statistical Total Correlation Spectroscopy (STOCSY).
S1 Appendix. Grouping algorithm details.
S2 Appendix. Simulated data based comparison of speaq 2.0 vs alignment algorithms.
S3 Appendix. Silhouette values and the SilhouetR function.
S1 Fig. Wine data PCA plot.
The PCA score plot shows that Principal Component 2 clearly indicates a difference between red, white and rosé wines.
S2 Fig. Difficulties arise in crowded spectra.
When many peaks are present in a small region, it is not clear which peaks correspond to each other. The speaq 2.0 method, based on finding peaks and subsequently grouping them, performs similarly to or better than the other methods, as it sees all peaks and tries to group the closest ones together. The CluPA algorithm uses landmark peaks and therefore simply tries to align the largest peaks with each other, which is not correct in this case. Lastly, the icoshift algorithm tries to align the spectra based on correlations, but the result in this crowded region is also not satisfactory.
S3 Fig. A sample with unique behavior causes issues.
In the region around 5.43 ppm there appear to be two small peaks in all samples. A single sample of red wine has two additional large peaks around the 5.40 ppm region. Every method performs poorly in this case: both icoshift and CluPA (speaq v1-v1.2.3) align the two large peaks with the group of small peaks. The CluPA algorithm does this by shifting the entire region to the right, which shifts the two small peaks of this spectrum to the right of the group of small peaks around 5.43 ppm. The icoshift algorithm, on the other hand, introduces some strange artifacts and the two small peaks disappear altogether. The speaq 2.0 algorithm deletes one of the large peaks in the grouping step, as it often does when multiple peaks from the same sample are present in one group. This problem is usually mitigated by the peak filling step, but not in this case.
S4 Fig. Between-sample shifts that are larger than between-adjacent-peaks shifts can cause problems.
In this case both raw-spectra methods perform as expected whereas the speaq 2.0 method does not: initially, peaks are grouped incorrectly. This problem is, however, detected by the optional SilhouetR function in speaq 2.0, which calculates the silhouette values for each group. After the appropriate correction the results are as expected.
S5 Fig. Raw spectra of the onion intake in mice data.
S6 Fig. speaq 2.0 workflow applied to the onion intake in mice data.
(A) Onion intake in mice peak data after grouping and filling. The gap in the raw data is clearly visible: this data was removed by the study authors because of insufficient water suppression. (B) Excerpt of peak data pre-grouping. (C) Excerpt of peak grouped data.
S7 Fig. Distribution of uncorrected p-values.
The possible biomarker signals are clearly present on the left as an increase in frequency over the otherwise uniform distribution.
S8 Fig. NMR spectra of biomarkers identified by speaq 2.0.
Main peaks of both biomarkers from the onion intake in mice data. (Top) dimethyl sulfone and (bottom) 3-hydroxyphenylacetic acid.
S9 Fig. Spectral alignment algorithms can introduce artifacts.
The results of spectral alignment algorithms are not always optimal when dealing with severely shifted spectra. This is illustrated here for the simulated case vs control data (mi is bimodal). The algorithms can introduce artifacts, i.e. misaligned or overcorrected spectra, which affect the subsequent processing steps (e.g. binning).
Thanks go out to Aida Ligata (Mrzic) for the many fruitful discussions on figures and visualizations. We would also like to thank all users of the speaq R package for their valuable feedback and support. We dedicate this paper to our colleague, Prof. Sandra Apers, who passed away much too early on February 5th, 2017.
1. Gowda GN, Raftery D. Can NMR solve some significant challenges in metabolomics? Journal of Magnetic Resonance. 2015;260:144–160.
13. Gaude E, Chignola F, Spiliotopoulos D, Spitaleri A, Ghitti M, García-Manteiga JM, et al. muma, An R package for metabolomics univariate and multivariate statistical analysis. Current Metabolomics. 2013;1(2):180–189.
21. Sousa S, Magalhães A, Ferreira MMC. Optimized bucketing for NMR spectra: Three case studies. Chemometrics and Intelligent Laboratory Systems. 2013;122:93–102.
22. Anderson PE, Reo NV, DelRaso NJ, Doom TE, Raymer ML. Gaussian binning: a new kernel-based method for processing NMR spectroscopic data for metabolomics. Metabolomics. 2008;4(3):261–272.
29. Gower JC. A general coefficient of similarity and some of its properties. Biometrics. 1971;27(4):857–871.
35. Wei T, Wei MT. Package ‘corrplot’. Statistician. 2016;56:316–324.
36. Larsen FH, van den Berg F, Engelsen SB. An exploratory chemometric study of 1H NMR spectra of table wines. Journal of Chemometrics. 2006;20(5):198–208.
Throughout the pre-season, Post basketball reporter Eric Koreen will ask a few personal questions to members of the Toronto Raptors. Today: point guard John Lucas III.
Who was your favourite player growing up?
JL: I didn’t have a favourite player. There were so many. I was just a fan of basketball. I liked everyone in the NBA. Of course there are Michael [Jordan], Magic [Johnson], Larry [Bird]. I liked all of them. But I also like Oscar Robertson averaging a triple double. I’m coming from a basketball house. My father [former San Antonio, Philadelphia and Cleveland head coach John Lucas] always expressed the legends and who came in before he came in and who came in after him. With my size, I always respected the small guards, like Muggsy Bogues, Spud Webb, Damon Stoudamire, Allen Iverson. Even the guards today: Chris Paul.
Who was the most important coach in your development?
JL: I would have to say my father. Second to that would be a tie between Coach [Eddie] Sutton, my college coach, and Tom Thibodeau. Sutton, he was like a father to me. He made me change my act. He made me grow up. What I mean by that is when I went there, I had braids. In order for me to play there, he made me cut my hair. It’s just that mindset of you’re not a little kid anymore. I’m getting you ready for this next chapter in your book. After you leave college, you’re in the real world. They’re not going to accept the braids. Coach Thibodeau was just the guy who gave me a chance to show what I can do.
Who is your favourite teammate ever?
JL: My favourite teammate? I’ve been around so many great guys, man. That’s a tough one. My favourite teammate would have to be Joakim Noah. That guy is one of a kind. I don’t think I’ll meet anybody like him.
What’s your goal this season?
JL: As an individual, just come in and contribute. Come in and play my heart out, and leave it all out there every night. I’m not big on how many points I score, how many assists I have. I’m just big on winning ball games. As a team, I think our goal is to just get over that hump, trying to get to the playoffs, trying to get where they were with Vince Carter. We’re capable of doing it. We have the right pieces to do it.
What’s one thing people don’t know about you?
JL: I’m silly. I always joke around. I’ve got a personality about me where I’m serious on the court, but when I’m off the court, I just like to have fun. I like to live life. Life can’t be all serious every single minute, hour. When I go home, I watch cartoons. I like comedies. I crack jokes with my mom. She always gets mad at me. I think I get that from my dad.
Do you have a favourite cartoon?
JL: Mighty Mouse, Scrappy-Doo, Speedy Gonzales. I like all those little, fast characters that get away with everything.
RELATED APPLICATIONS
BACKGROUND
1. Field of Invention
2. Discussion of Related Art
SUMMARY
DETAILED DESCRIPTION
This Application claims priority under 35 U.S.C. § 119(e) to U.S. Provisional Application Ser. No. 62/411,496, entitled “SYSTEM AND METHOD FOR COLLECTING ONLINE SURVEY INFORMATION” filed on Oct. 21, 2016, which is incorporated by reference in its entirety.
The field of the invention relates generally to Internet survey systems, and more particularly, to systems and methods for collecting survey information.
There are many methods for collecting survey information, especially over distributed networks such as the Internet. Types of surveys range in the information they collect and the manners in which they are presented to users.
Qualitative research is primarily exploratory research, and is used to gain an understanding of underlying reasons, opinions, and motivations. Qualitative research provides insights into the problem or helps to develop ideas or hypotheses for potential quantitative research. Qualitative Research is also used to uncover trends in thought and opinions, and dive deeper into a particular problem. Qualitative data collection methods vary using unstructured or semi-structured techniques. Some common methods include focus groups (group discussions), individual interviews, and participation/observations. The sample size is typically small, and respondents are selected to fulfill a given quota.
Quantitative research is used to quantify the problem by way of generating numerical data or data that can be transformed into useable statistics. It is used to quantify attitudes, opinions, behaviors, and other defined variables—and generalize results from a larger sample population. Quantitative research uses measurable data to formulate facts and uncover patterns in research. Quantitative data collection methods are much more structured than qualitative data collection methods and generally involve more respondents. Quantitative data collection methods include various forms of surveys—online surveys, paper surveys, mobile surveys and kiosk surveys, face-to-face interviews, telephone interviews, longitudinal studies, website interceptors, online polls, systematic observations, among others.
According to one aspect of the present invention, it is realized that improved methods for collecting survey information are needed that are more efficient and derive the necessary information using fewer steps and less effort, particularly within systems that integrate multiple respondents, such as those that operate over the Internet. However, it is appreciated that several forms of market research are different, and are not necessarily integrated. According to one aspect of the present invention, it is appreciated that it would be beneficial to provide new methods of collecting market research information. In one implementation, a new market research methodology and process is provided that combines classical quantitative market research surveys with automated qualitative research. In another implementation, this methodology may be integrated within a social networking system.
In another aspect, an output of the study includes core structured quantitative data commissioned by an information consumer (e.g., a client) as well as unstructured social conversation about the study or related topics. According to one aspect, the number of steps and the time between performing qualitative and quantitative research are reduced, and the result may be provided to the client in a more expedient manner. When integrated with a social network, the conversation can take place concurrently with the study as well as after the study is completed on a social network platform. Also, the study may be either closed to respondents or open to respondents and other members of the social network. This social conversation is referred to herein as social user generated content (UGC).
According to another aspect, the combination of the quantitative and automated user generated qualitative data yields a much richer and more holistic dataset. The additional value could be transformational for quantitative research. Not only is the research methodology of combining quantitative surveys with social UGC new and unique, another first is the automation of qualitative user generated research derived from a quantitative study.
According to another aspect, a unique methodology of switching from quantitative to qualitative collection methods may be implemented within an online data gathering system (e.g., a survey system), and in the process, the system maximizes the likelihood of automatically creating 100% user-generated high quality discussions related to a quantitative study.
According to one implementation, the creation of social survey information and the resulting unstructured qualitative social conversation (the movement from quantitative to qualitative mode) may include three major stages within an overall process. In a first stage, the system presents an interface that asks a group of quantitative respondents to think of and submit an interesting peer-to-peer discussion topic related to the quantitative study. After the system has received a certain number of discussion topics (e.g., 3-5 discussion topics) the system moves to a second stage. For instance, in an online network of a closed group of respondents, the respondents may be presented a question by the system to submit a topic of discussion in relation to hepatitis drugs (e.g., from a group of selected physicians). The respondents may be offered an incentive in association with their participation (e.g., an honorarium). The system collects the responses and, after a predetermined threshold is reached, proceeds to another phase of data collection.
According to another implementation, in a second stage, the system presents an interface to respondents that permits them to vote on which submitted discussion topic from the first stage is best for a peer-to-peer discussion. For instance, the system may present each of the collected questions from the first stage to a number of voting respondents (who may be the same respondents as in the first stage) to determine a selected candidate voting topic. Once a discussion submission has a predetermined number of votes (e.g., three votes or more, or some other configurable threshold) the system progresses to a qualitative stage for collecting survey information (e.g., at a third stage), the survey information including qualitative commentary or conversation regarding the winning topic. It should be appreciated that one or more candidate questions may be voted through and submitted for feedback in the third stage. In one implementation, a single question of the candidate questions is selected.
In one implementation, at a third stage, the system collects comments from subsequent respondents in a discussion regarding the winning topic determined in the second stage. Once the system compiles a predetermined number of comments (e.g., 30 comments on the topic), the system transitions back to the first stage to create a new discussion with subsequent respondents. According to one implementation, the system automatically rotates through the stages, eliminating human intervention in the collection process. Further, because the process is automated, any number and type of survey can be generated (even in parallel) to produce quantitative and qualitative information.
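The three-stage rotation described above can be modeled as a small state machine. The sketch below is purely illustrative: the thresholds (3 topics, 3 votes, 30 comments) follow the examples in the text, and all class and method names are our own, not part of any claimed system.

```python
class SocialSurvey:
    """Toy model of the topic-submission / voting / commenting cycle."""

    def __init__(self, topics_needed=3, votes_needed=3, comments_needed=30):
        self.topics_needed = topics_needed
        self.votes_needed = votes_needed
        self.comments_needed = comments_needed
        self._reset()

    def _reset(self):
        self.stage = 1
        self.topics = []          # stage 1: submitted discussion topics
        self.votes = {}           # stage 2: topic -> vote count
        self.winner = None
        self.comments = []        # stage 3: discussion comments

    def submit_topic(self, topic):
        assert self.stage == 1
        self.topics.append(topic)
        if len(self.topics) >= self.topics_needed:
            self.votes = {t: 0 for t in self.topics}
            self.stage = 2        # enough topics collected: move to voting

    def vote(self, topic):
        assert self.stage == 2
        self.votes[topic] += 1
        if self.votes[topic] >= self.votes_needed:
            self.winner = topic   # first topic past the threshold wins
            self.stage = 3

    def comment(self, text):
        assert self.stage == 3
        self.comments.append(text)
        if len(self.comments) >= self.comments_needed:
            self._reset()         # rotate back to stage 1 for a new discussion
```

Each submission, vote or comment checks its threshold and advances the stage automatically, which is the sense in which the rotation eliminates human intervention.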
As an alternative implementation, the conversation can occur and be collected either off the social platform or on the social platform. When the conversation is performed and collected on the social network, non-survey respondents may be requested to participate within the conversation. For instance, a closed survey group consisting of certain targeted doctors may be requested to establish and vote for the qualitative portion in the first two stages, and the survey information may be collected from a number of respondents outside of the initial survey group. Such a survey may be posted, for example, to an open or closed social networking group.
Within a specific social networking group such as the well-known SERMO physician online community, one or more of the aspects for collecting survey information may address the limitations of traditional quantitative data collection in healthcare:
Conventional quantitative surveys are one-directional and force respondents to answer within a fixed structure.
Real-world medicine is often complicated and always nuanced.
Responses to healthcare surveys often miss the type of subtleties found in qualitative research.
According to one embodiment of the present invention, a system is provided for collecting survey information. For instance, the system may be implemented using one or more components and interfaces that allow multiple respondents to provide information which is then aggregated and provided to information consumers. According to one embodiment, the collection of candidate questions for the survey, the election of winning questions, and the collection of survey responses for the winning questions are all performed automatically by the system. According to one implementation, the system is capable of automatically generating survey questions and responses in real time responsive to provided targeted areas that are created by a client.
According to another implementation, the survey system is coupled to a social networking system, and the survey system is adapted to submit entries as posts within the social network which can be used to generate further high-value content. Further, the system may connect to other systems and provide additional functionality, such as sending emails to non-winning respondents and requesting feedback on winning entries.
Further, the system may collect a variety of information in particular topic areas, such as those areas requested by a client, and the client may be presented an interface in which to view results from surveys in progress (e.g., by a client portal). In one version of the client portal, a client may be permitted to put forth a topic as if it had been a winning topic, and may submit the topic for comments to the network. In another aspect, it is appreciated that during different forms of market research, the clients do not have access to respondents, so according to one implementation, the system may include a capability for the client to correspond or otherwise contact a respondent (e.g., anonymously) based on information within the survey results. For example, a client may want to follow-up on a certain issue within an anonymized physician network on a particular issue, and the physician may be offered an honorarium to do so. In one implementation, the system permits both the client and the respondent to be anonymous to each other within this communication.
According to one aspect, a distributed system is provided comprising a survey engine adapted to perform a plurality of survey functions in a computer system controlling a survey including one or more functions that generate a plurality of first stage interfaces, each of the plurality of first stage interfaces being generated for respective ones of a plurality of respondents; receive, via the plurality of first stage interfaces, respective topics from one or more of the respondents, each of the respective topics being authored by the respective respondent; generate a plurality of second stage interfaces, each of the plurality of second stage interfaces being generated for respective ones of the plurality of respondents; receive, via the plurality of second stage interfaces, a respective vote for one or more of the received topics; automatically determine, responsive to the received plurality of votes, at least one winning topic of the respective topics received from the respective respondents; generate a plurality of third stage interfaces, each of the plurality of third stage interfaces being associated with the plurality of respondents and having controls to accept user feedback regarding the at least one winning topic of the respective topics received from the respective respondents.
According to one embodiment, the survey engine is adapted to automatically compile the received user feedback into compiled information and display the compiled information to a client via a fourth stage interface. According to another embodiment, the survey engine is adapted to indicate, to the plurality of respondents, the at least one winning topic responsive to the determining of the at least one winning topic.
In another embodiment, the survey engine is adapted to perform the plurality of survey functions without operator intervention. According to another embodiment, the survey engine is operable to submit the compiled information as a post to a social network.
According to another embodiment, the distributed system further comprises a component adapted to collect feedback received from the social network relating to the post, and providing the feedback automatically to the client. In yet another embodiment, the distributed system further comprises a router component adapted to transition a respondent through a set of first, second and third stage interfaces.
In another embodiment, the router component is adapted to perform a validation of a user token associated with a respondent. In yet another embodiment, the router component is adapted to perform a check of a particular project survey prior to admittance of the user to the project survey. In another embodiment, the router component is adapted to perform a check of a particular member prior to admittance of the user to a project survey.
In yet another embodiment, the router component is adapted to perform a check of a particular member/project combination prior to admittance of the user to a project survey. In one embodiment, the distributed system further comprises a router component adapted to transition a respondent to a particular survey based on one or more parameters relating to a user. In another embodiment, the distributed system further comprises a router component adapted to evaluate one or more parameters relating to a state of a survey and execute a transition between the set of first, second and third stage interfaces responsive to the evaluation of the one or more parameters.
In another embodiment, the distributed system further comprises a router component adapted to evaluate a member and/or a project responsive to exclusion/inclusion logic defining whether to admit a user to a specified survey. In another embodiment, the evaluation and transition are performed in real time in parallel for a plurality of surveys.
In another embodiment, the distributed system further comprises a router component adapted to transition the user to an egress state of a browser program. In another embodiment, the router component is adapted to record a plurality of ingress and egress states associated with a particular user and project.
In another embodiment, the survey engine is adapted to permit an anonymous bidirectional communication between a client that sponsors the survey and at least one of the plurality of respondents. In another embodiment, the survey engine is adapted to award an honorarium to the at least one of the plurality of respondents for opting into the bidirectional communication.
In another embodiment, the survey engine is adapted to receive a fee from the client to initiate the anonymous bidirectional communication. In another embodiment, the survey engine is adapted to generate a client interface in which a client is permitted to submit a seed topic to which the respective topics are submitted from one or more of the respondents. In another embodiment, the survey engine is adapted to generate an input within the client interface that permits the client to submit a winning topic, and responsive to the submission of the winning topic, the survey engine is adapted to bypass the first stage and second stage receiving stages.
In another embodiment, the survey engine is adapted to generate a client interface including a plurality of ingress controls that determine which respondents are admitted to participate in the survey.
In another embodiment, the survey engine is adapted to generate a client interface including a plurality of controls that permit the client to define participation rules for conducting the survey. In another embodiment, the plurality of controls includes at least one control that limits the survey to respondents within a social network.
Still other aspects, examples, and advantages of these exemplary aspects and examples, are discussed in detail below. Moreover, it is to be understood that both the foregoing information and the following detailed description are merely illustrative examples of various aspects and examples, and are intended to provide an overview or framework for understanding the nature and character of the claimed aspects and examples. Any example disclosed herein may be combined with any other example in any manner consistent with at least one of the objects, aims, and needs disclosed herein, and references to “an example,” “some examples,” “an alternate example,” “various examples,” “one example,” “at least one example,” “this and other examples” or the like are not necessarily mutually exclusive and are intended to indicate that a particular feature, structure, or characteristic described in connection with the example may be included in at least one example. The appearances of such terms herein are not necessarily all referring to the same example.
FIG. 1 shows one embodiment including a block diagram of a system according to one embodiment of the present invention. In particular, FIG. 1 shows a distributed network 100 that includes a number of distributed computer systems coupled by one or more communication networks. In particular, one or more users (respondents) 102 interact with one or more user (respondent) interfaces 103 presented as part of a distributed survey system.
The survey system may include a survey engine 101 that functions to perform surveys with multiple users/respondents (e.g., users/respondents 102). According to one aspect, survey engine 101 is capable of automatically conducting one or more surveys (e.g., surveys 104) which include both qualitative and quantitative aspects. In one implementation, survey engine 101 performs a series of interactions with a group of respondents that permits both qualitative and quantitative results to be compiled for one or more topics. The survey engine 101 may be implemented as, for example, a software component stored in a non-transitory computer-readable medium that is executed by at least one hardware processor in the distributed network 100.
In one embodiment, topics and/or questions may be provided to the survey engine by the client 108. To this end, client 108 may be presented one or more client interfaces 111 which allow the client to integrate with the survey system. In one implementation, the client provides one or more topics of discussion to the survey system and the client receives one or more survey results including reports (e.g., survey reports 107) or any other results (e.g., results 110) through the client interface 111.
As discussed, survey engine 101 may be capable of automatically performing one or more surveys that collect both qualitative and quantitative results in a more efficient manner. According to one embodiment, such surveys may be generated on-the-fly, and their results may automatically spawn additional steps within the process, including but not limited to, automatically generating reports, identifying new questions for potential surveys, and submitting vetted topics to one or more groups of users (e.g., a social network).
FIG. 2 shows a process 200 for conducting a survey in accordance with one embodiment of the present invention. Such a process may be performed, for example, using one or more components as discussed above with reference to FIG. 1. At block 201, process 200 begins. At block 202, the system (e.g., a survey system) creates and presents a survey topic to a user group within one or more user interfaces. For instance, responsive to a seed topic area provided by a client, the system may ask for relevant topics for discussion to be posed to the group on the identified seed topic area. One or more users may respond within the interface to provide their example topics (e.g., in the form of a comment, question, or other format) for consideration among the group of users. In this way, there may be interaction among users to define and determine a more relevant topic on which to collect feedback. This survey process contrasts with typical surveys, which do not involve user-to-user coordination in that they are carefully defined off-line and then provided to users.
At block 203, the system solicits and receives candidate responses from the user group as discussed above. The candidate responses include candidate topics that are relevant to the topic area provided by the client. At block 204, the system determines whether enough responses have been received from the users. For example, the system may have defined a predetermined number of responses to receive prior to transitioning to a voting stage. In the case where not enough responses have been received, the system may continue to collect and solicit responses from the user group. In one embodiment, the system may automatically expand the user group based on the number of received responses (e.g., if the received responses are insufficient).
At block 205, the system conducts a vote on the candidate responses. For example, for any candidate response, users from the group may be solicited to vote for particular candidates. In one particular implementation, the number of candidate responses that are voted on may be limited by the system (e.g., to the top three candidates). The vote may include responses such as yes/no, up/down, or other types of inputs (e.g., other binary inputs, other quantitative inputs). At block 206, the system may determine whether one or more winning candidates are determined. For example, this may occur when a particular candidate response reaches a predetermined number of votes, receives a particular percentage of the vote, and/or any combination of parameters that could indicate a consensus among candidate responses.
If one or more winning candidates are determined at block 206, the system may send a winning candidate response to the user group and/or other users for comment. At block 208, the system may collect the results and return the results to the client. In one example implementation, the system may automatically correlate responses from multiple users and present a consolidated report to the client. According to one aspect, the report may include both qualitative and quantitative information. At block 209, process 200 ends.
FIG. 3 shows one embodiment of a router component and survey process according to various embodiments. In particular, FIG. 3 shows a router component 300 that performs one or more functions associated with an online survey process according to various embodiments. For example, router 300 may receive one or more ingress communications from one or more external processes. For instance, another process, such as one associated with the browser program executing a particular website interface, may direct the user to a router (e.g., router 300) that performs a number of real time checks and conducts surveys for appropriate users. In one example, a real-time survey system 302 passes off a particular user (e.g., user 301) to the router 300 as a possible participant within a particular survey. The router component may be implemented as, for example, a software component stored in a non-transitory computer-readable medium that is executed by at least one hardware processor in a distributed network (e.g., distributed network 100).
At an initial step 306, a token associated with the user is handed (e.g., via a communication protocol) to the router. In one particular instance, the router parses the token, checks the validity of the token, and sets a session within the router. At step 307, the router writes the session to the database, after which one or more external components may be called (e.g., via API or other interfaces). For example, at step 308, the router checks the member and parameters associated with the user (e.g., user 301). For instance, the system may maintain a separate membership database, and according to one embodiment, the system may have inclusion and exclusion logic that determines whether a particular member can participate in a particular survey. For instance, in one example criteria, a survey participant must be a valid member within a particular social network (e.g., a physician in a closed physician network). To this end, router 300 may access a membership database to obtain user information that can be used to validate the user.
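The token handling at step 306 might look like the following sketch, assuming a simple HMAC-signed token format. The patent does not specify a token scheme, so the secret, field layout, and function names here are hypothetical:

```python
import hashlib
import hmac

SECRET = b"router-secret"  # hypothetical shared secret between issuer and router

def sign(member_id: str, project_id: str) -> str:
    """Issue a token of the form member:project:signature (illustrative)."""
    msg = f"{member_id}:{project_id}".encode()
    return f"{member_id}:{project_id}:" + hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

def parse_and_validate(token: str):
    """Step 306 sketch: parse the token, check validity, return session fields."""
    member_id, project_id, digest = token.rsplit(":", 2)
    expected = hmac.new(SECRET, f"{member_id}:{project_id}".encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(digest, expected):
        return None  # invalid token -> reject at ingress
    return {"member_id": member_id, "project_id": project_id}

session = parse_and_validate(sign("user301", "proj42"))
# a valid token yields a session; a tampered token yields None
```

A real deployment would likely also carry an expiry and a nonce in the token; those are omitted here for brevity.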
At step 309, the router may check project information associated with a particular survey. For instance, depending on the survey being conducted and one or more states of where that particular survey resides, the router may determine whether a particular user is admitted into the survey. For instance, the system may maintain one or more parameters (e.g., health codes, completion percentages, country of medical practice, specialty of medical practice, languages spoken, or other parameters) that evaluate the survey and which can be used as admission criteria for particular users. In one example, a particular survey may not have enough members, may not be completed within a specific timeframe, or may fail another evaluation that would prohibit additional users from being admitted to the survey. Responsive to a failed check member at step 308 or a failed check project at step 309, the router may reject the user and return the user via an egress 304 back to another environment (e.g., such as returning the user's viewing context to a browser environment 305).
The system may also include a step 310 of checking a complete code, which may include logic for member/project combinations. In one implementation, it is appreciated that although individual member and project checks may pass, the combination of a particular member and project may not be good for the survey. For example, in one instance, a member may check out as being a possible participant in the survey, and the health of a particular project may be good; however, the number of people needed to complete the survey from a particular quota group may be exceeded, and as a result, the user may be screened out from a particular survey.
If these checks clear, however, the user may be permitted to interact with the survey, including permitting the user within the user interface to define a particular candidate topic (e.g., a topic/project), and thereafter a page may be rendered at step 312 to the user and other users within the survey group. The system may collect responses at step 313, including postings of candidate topics, votes, and/or comments submitted to one or more winning topics.
After the responses are completed, the survey system may collect and collate such responses and store them within the database. Such information may be presented to the client that commissioned a particular survey. The router may thereafter return the user to an egress 304 which puts the user into another context (e.g., a return to a previous browser environment) at step 305.
As discussed previously, the router 300 may track any number of the ingress and egress processes for particular users, along with any results of the survey. To this end, router 300 may maintain one or more databases that identify different surveys, topics, states, comments, loading results, or any other information. FIG. 4 shows a block diagram of example data structures (e.g., data structures 400) used in various embodiments to store state and variable information. For example, one or more topics 401 may be created in the system (e.g., by a client). The topic may have a unique project ID, and the system may track the state of the topic, when it was created, and when it was last updated. Further, the topic may have one or more rounds (e.g., rounds 402), each of which tracks the state, when it was created, and a link to the particular defined topics.
Within each round, there may occur one or more posts (e.g., posts 403) received from one or more users having different user IDs and having various levels of content. The system may also track events (e.g., events 404) occurring within each individual round specified by a round ID.
The system may be adapted to maintain other information relating to comments (e.g., comments 405), votes (e.g., votes 406), any internal metadata (e.g., ar_internal_metadata 407), etc., as needed for operating the survey system. The system may also track the particular ingress and egress information (e.g., ingresses 409, egresses 410) associated with how the user arrives within the survey and how they left the survey. Information that may be tracked can include information identifying particular ingress referrers, URLs, and information identifying when the ingress occurred. Further, egress information may include when the user was transferred, what destination the user was sent to, the reason for the egress, and any other egress-related information. Some or all of this information may be reported or otherwise provided to the client (e.g., within a user interface).
FIG. 5 shows another example implementation of a router component 500 in accordance with various embodiments of the present invention. As discussed above with reference to FIG. 3, a router component may be provided that transitions users from one state of the survey to another. Similar to the system discussed above with reference to FIG. 3, the router 500 may handle interface changes within a survey interface for a particular user from an ingress point (e.g., ingress 501) to an egress point (e.g., egress 502).
According to one implementation, a session-based method may be used to establish connections with the router, and to create, track, and store session data using the router. For instance, the well-known Ruby on Rails framework may be used to implement sessions within the router. In one embodiment, an ingress request (e.g., an HTTP request) may be received and handed off to the router via cross-process communication. In one embodiment, a domain cookie may be set to record a URL from which the process ingresses into the router. At point α, the router may record information relating to the ingress to a database. For example, at α0, the router may store a cookie (e.g., in a browser) associated with a user. At α1, the router may write an ingress record, and at α2, the router may transition to point β.
At point β, one or more remote lookups may be performed. For instance, a member database lookup may be performed that identifies the user within an external membership database (e.g., a social networking database). At point Δ, a topic update and/or insert may be performed within the database. At point ε, the router may perform a store, a post, a commit, a vote, or a skip action responsive to one or more parameters. At point θ, the system may transition to an egress state with one or more events θ, such as θ0, which returns the user to the URL identified by the ingress cookie, θ1, which returns the current URL, and θ2, which returns an HTTP 302 redirect. It should be appreciated that other implementations of a router component are possible.
FIG. 6 shows an example user interface 600 of a first stage of a survey function generated by a survey engine. In some embodiments, the survey engine may present the user interface 600 to a user at the first stage of the survey function. At the first stage, the user interface 600 may allow the user to submit a topic of discussion for the user and the user's peers to discuss. For example, the user interface may comprise a first input field 610 in which the user may provide a title of the topic of discussion. The user interface 600 may comprise a second input field 620 in which the user may provide more information, or a body, for the topic of discussion. The user interface may include a submit button 690 and a skip button 691. The user may press the submit button 690 to submit a discussion topic by providing input to the first input field 610 and the second input field 620. The user may press the skip button 691 to elect to not provide a discussion topic. In some embodiments, the user may receive a reward for submitting a discussion topic, which will be described in further detail in connection with FIG. 7. After one or more users have submitted discussion topics at the first stage of the survey function, a second stage of the survey function may begin. In some embodiments, the second stage of the survey function may not begin if not enough discussion topics have been submitted. For example, the survey engine may determine a value P representing a threshold of submitted discussion topics. For example, the threshold of submitted discussion topics P may be 4. If the number of submitted discussion topics at the first stage of the survey function is less than P, the survey engine may not begin the second stage of the survey function. If the number of submitted discussion topics at the first stage of the survey function is greater than or equal to P, the survey engine may begin the second stage of the survey function.
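The threshold check gating the transition from the first to the second stage can be expressed directly; the function name is illustrative:

```python
def should_begin_second_stage(submitted_topics, p=4):
    """Begin the voting stage only once the number of submitted
    discussion topics reaches the threshold P (P = 4 in the example)."""
    return len(submitted_topics) >= p

ready = should_begin_second_stage(["t1", "t2", "t3", "t4"])   # 4 >= P
not_ready = should_begin_second_stage(["t1", "t2", "t3"])     # 3 < P
```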
FIG. 7 shows an example user interface 700 of the second stage of the survey function generated by the survey engine. In some embodiments, the survey engine may present user interface 700 to the user at the second stage of the survey function. At the second stage, the user interface 700 may allow the user to vote on topics of discussion submitted by the user and the user's peers at the first stage of the survey function. The user interface may display multiple (e.g., five) discussion topics for the user to vote on, for example a first discussion topic 710 (Test post #4), a second discussion topic 720 (Test post #1), a third discussion topic 730 (Test post #0), a fourth discussion topic 740 (Test post #3), and a fifth discussion topic 750 (Test post #2). Each of the discussion topics may include buttons allowing the user to “favorite” and “dislike” discussion topics. The user interface may include a submit button 790 and a skip button 791. The user may press the submit button 790 to submit votes if the user has voted (e.g., pressed the “favorite” or “dislike” button(s)) on one or more discussion topics. The user may press the skip button 791 to elect to not submit votes if the user did not vote on any of the discussion topics. After one or more users have submitted votes on the one or more discussion topics at the second stage of the survey function, a third stage of the survey function may begin. In some embodiments, the third stage of the survey function may not begin if not enough votes have been submitted on the one or more discussion topics. For example, the survey engine may determine a value V representing a threshold number of “favorite” votes. For example, the threshold number of “favorite” votes may be 3. If none of the discussion topics have a number of “favorite” votes greater than or equal to V, the survey engine may not begin the third stage of the survey function.
However, if at least one of the discussion topics has a number of “favorite” votes greater than or equal to V, the survey engine may begin the third stage of the survey function. In some embodiments, the discussion topic(s) that receive a number of “favorite” votes greater than or equal to V may be one or more selected topics that are passed to the third stage of the survey function. In some embodiments, one or more users that submitted the one or more selected topics may be rewarded for having their discussion topics selected. For example, the one or more users that submitted the one or more selected topics may receive compensation (e.g., $20).
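The selection of winning topics by the threshold V can be sketched as:

```python
def select_winning_topics(favorite_counts, v=3):
    """Pass to the third stage every topic whose 'favorite' count
    reaches the threshold V (V = 3 in the example)."""
    return [topic for topic, favs in favorite_counts.items() if favs >= v]

counts = {"Test post #0": 4, "Test post #1": 2, "Test post #4": 1}
winners = select_winning_topics(counts)
# only "Test post #0" reaches the threshold
```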
FIG. 8 shows an example user interface 800 of the third stage of the survey function generated by the survey engine. In some embodiments, the survey engine may present user interface 800 to the user at the third stage of the survey function. At the third stage, the user interface 800 may display to the user the one or more selected topics. For example, the user interface may display a selected discussion topic 810 (Test post #0) that received a number of “favorite” votes greater than or equal to V at the second stage of the survey function. The user interface 800 may comprise an input field 820 in which the user may provide a comment relating to the selected discussion topic. In this way, the survey engine may be adapted to collect feedback relating to the selected discussion topic. The user interface may include a submit button 890 and a skip button 891. The user may press the submit button 890 to submit a comment that the user provided in input field 820. The user may press the skip button 891 if the user elected not to provide a comment in input field 820.
FIG. 9 shows another example user interface of a third stage of the survey function generated by the survey engine. User interface 900 may be largely the same as user interface 800, except that user interface 900 additionally displays comments that have already been submitted. For example, the user interface 900 may display a first comment 930a (Comment by autophys10) and a second comment 930b (Comment by autophys9). In this way, the survey engine may be adapted to automatically display collected feedback relating to the selected discussion topic.
As described, the survey engine may be adapted to generate each stage of the survey function and transition between each stage without operator intervention.
In some embodiments, the first, second, third, and fourth stages of the survey function as described may be a first round of the survey function. In some embodiments, the survey engine may generate multiple rounds of the survey function. For example, the survey engine may generate a second round of the survey function if the one or more selected discussion topics of the first round of the survey function receive a threshold number of comments Cm. For example, the threshold number of comments Cm may be 30. In such an example, if the one or more selected discussion topics in the first round of the survey function receive at least 30 comments, the survey engine may automatically generate a second round of the survey function. In another embodiment, the survey engine may generate a second round of the survey function if it is predicted that the second round of the survey function may receive a threshold number of predicted comments Cn. For example, the threshold number of predicted comments may be 20. Additionally, the survey engine may generate a second round of the survey function if the following is true:

RemainingRespondents >= β0 + Cn/β1,

where RemainingRespondents is the number of respondents remaining from the first round of the survey function, β0 is a number of respondents needed to proceed from the second stage of the survey function to the third stage of the survey function, Cn is the threshold number of predicted comments needed to generate a new round of the survey function, and β1 is a comment conversion rate, or the number of comments expected to be received from each respondent.
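The round-generation condition can be checked directly. For example, with β0 = 4 respondents, Cn = 20 predicted comments, and β1 = 2 comments per respondent, at least 4 + 20/2 = 14 remaining respondents are required:

```python
def should_start_new_round(remaining_respondents, beta0, c_n, beta1):
    """New round if RemainingRespondents >= beta0 + C_n / beta1, where
    beta0 = respondents needed to reach the third stage, C_n = predicted
    comments needed, and beta1 = expected comments per respondent."""
    return remaining_respondents >= beta0 + c_n / beta1

# 15 remaining respondents clears the 14-respondent requirement
ok = should_start_new_round(remaining_respondents=15, beta0=4, c_n=20, beta1=2)
```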
The phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting. Any references to embodiments or elements or acts of the systems and methods herein referred to in the singular may also embrace embodiments including a plurality of these elements, and any references in plural to any embodiment or element or act herein may also embrace embodiments including only a single element. References in the singular or plural form are not intended to limit the presently disclosed systems or methods, their components, acts, or elements. The use herein of “including,” “comprising,” “having,” “containing,” “involving,” and variations thereof is meant to encompass the items listed thereafter and equivalents thereof as well as additional items. References to “or” may be construed as inclusive so that any terms described using “or” may indicate any of a single, more than one, and all of the described terms. Any references to front and back, left and right, top and bottom, upper and lower, and vertical and horizontal are intended for convenience of description, not to limit the present systems and methods or their components to any one positional or spatial orientation.
Having described above several aspects associated with at least one embodiment, it is to be appreciated various alterations, modifications, and improvements will readily occur to those skilled in the art. Such alterations, modifications, and improvements are intended to be part of this disclosure and are intended to be within the scope of the invention. Accordingly, the foregoing description and drawings are by way of example only.
BRIEF DESCRIPTION OF DRAWINGS
Various aspects of at least one example are discussed below with reference to the accompanying figures, which are not intended to be drawn to scale. The figures are included to provide an illustration and a further understanding of the various aspects and examples, and are incorporated in and constitute a part of this specification, but are not intended as a definition of the limits of a particular example. The drawings, together with the remainder of the specification, serve to explain principles and operations of the described and claimed aspects and examples. In the figures, each identical or nearly identical component that is illustrated in various figures is represented by a like numeral. For purposes of clarity, not every component may be labeled in every figure. In the figures:
FIG. 1 shows a block diagram of a system according to one embodiment of the present invention;
FIG. 2 shows a process for conducting a survey in accordance with one embodiment of the present invention;
FIG. 3 shows one embodiment of a router component and survey process according to various embodiments;
FIG. 4 shows a block diagram of data structures used in various embodiments;
FIG. 5 shows another embodiment of a router component according to various embodiments;
FIG. 6 shows an example user interface for a first stage of a survey function generated by a survey engine according to various embodiments;
FIG. 7 shows an example user interface for a second stage of a survey function generated by a survey engine according to various embodiments;
FIG. 8 shows an example user interface for a third stage of a survey function generated by a survey engine according to various embodiments; and
FIG. 9 shows another example user interface for a third stage of a survey function generated by a survey engine according to various embodiments.
The CPI and PPI for March 2022 have been released and both show continued increases in inflationary pressures. Businesses and lenders are challenged to deal with 2022 actual performance and 2022 forecasts in this complicated environment. This article will discuss approaches to identify performance risk, and related impacts on loan covenant compliance.
What did the March CPI and PPI tell us?
The March CPI is 8.5%, up from 7.9% in February. The categories with the highest increases are energy at 32.0%, gasoline at 48.0%, new vehicles at 12.0%, and food at 8.8%. These are basic food, shelter, and getting to work expenses for employees, and result in financial pressures that may drive employees to move to new employers to increase income. This inflationary pressure and the resulting labor issues should be expected to continue throughout 2022.
The March PPI is at 11.2%, up from 10.0% in February. The highest increases were in foods at 16.2%, energy at 36.7%, and transportation/warehousing at 21.0%. These are basic components of the expense structure of businesses. Not all price increases have been passed from vendor to customer; therefore, increased upward costs will be occurring throughout 2022.
Why will inflationary pressures continue?
It is important to consider that the CPI and PPI are reporting inflationary levels not seen in 40 years. Most business owners and managers have not experienced this level of inflationary performance disruption. Only one-third of the US population is over 50 and lived through the prior disruption in the 1970s and 1980s. Businesses have attempted to respond to increased costs but have not yet passed along all the price increases to their customers, whether the customers are the end consumers or other businesses. Until costs stop increasing, businesses need to anticipate continued waves of inflation as prices rise along the chain from producer to manufacturer, processor, distributor and consumer.
Inflation will also be impacted by the expected increase in interest rates and by quantitative tightening, which will further pressure interest rates. Economists, bankers, and business leaders are expecting the Federal Reserve to increase interest rates throughout 2022. When the Federal Reserve increased the federal funds rate by 0.25% on March 17, 2022, the Chairman of the Federal Reserve, Jerome Powell, suggested more rate increases would be needed to combat inflation. At that time Powell expected inflation to drop below 3% by the end of 2022 and anticipated six additional federal funds rate increases during 2022. It appears clear inflation will not be down to 3% by the end of 2022; therefore, additional rate increases may occur. However, if you assume interest rates will increase by a total of 1.75% during 2022, that will increase the interest costs for businesses tied to variable rate lending or placing new term debt. Those increased interest costs will further feed expense growth for businesses and will hit businesses that rely on variable rate debt, such as asset-based lines of credit, more severely.
How do you assess performance risk?
When evaluating 2022 performance risk, consider these waves of expense structure impacts.
1) First Wave.
a) Labor cost increases.
b) Transportation cost increases.
c) Commodity price increases.
Consider if the business has been able to pass those costs already experienced along to its customers or if the business needs to further evaluate contracts and bid procedures to increase sales prices or to lock in costs.
2) Second Wave.
a) Escalator clauses, and impact dates.
b) Supplier increases to costs.
Consider lease and rent escalator clauses tied to CPI levels. Evaluate whether those increases have been considered at the current CPI levels, and the timing of those impacts. Consider any other escalator clauses for revenues or expenses.
Evaluate the level of initial supplier cost increases the business has experienced, and how many line items on the income statement have the potential for near term cost increases.
3) Third Wave.
a) Interest rate increases.
b) Further supplier increases based on PPI and CPI readings.
Evaluate the impact of interest rate increases between 0.25% and 1.75% on the financial performance of the company.
Specifically evaluate income statement line items that include energy costs, transportation and warehousing costs, and food costs. Apply the PPI March inflation factors to those line items to determine potential cost increases.
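As a rough illustration of this step, the March PPI category factors quoted above can be applied to the relevant expense line items. The line-item dollar amounts below are hypothetical placeholders; a real analysis would use the company's actual income statement:

```python
# Hypothetical annual expense line items ($); the PPI factors are the
# March 2022 year-over-year category increases cited above.
ppi_factors = {
    "energy": 0.367,
    "transportation_warehousing": 0.210,
    "food": 0.162,
}

line_items = {
    "energy": 250_000,
    "transportation_warehousing": 400_000,
    "food": 1_100_000,
}

# Potential cost increase if each category inflates at its PPI rate
potential_increase = {
    item: cost * ppi_factors[item] for item, cost in line_items.items()
}
total = sum(potential_increase.values())
print(f"Potential added annual cost: ${total:,.0f}")
```

Running the same calculation with low and high factor estimates gives a range of outcomes rather than a single point forecast.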
4) Fourth Wave.
a) Increased labor costs.
b) Further supplier increases, including costs associated with reduced supplier networks created by financial performance problems at the supplier level.
Evaluate the impact of 10% to 25% increases in labor costs. Consider the impact of labor or skill shortages on output, resulting in sales impacts.
Continue to evaluate the level of increase in each line item of the income statement. Where the expense item has not increased at the level of inflation for that category or for inflation overall, then consider the performance impact on the company when those inflationary impacts do occur.
5) Fifth Wave.
a) Commodity price changes.
When considering the financial impact of this wave of inflation, consider current commodity price changes, but also anticipate 2023 impacts. For example, 2023 fertilizer costs should certainly be expected to increase given the amount of imported fertilizer the US uses. Food costs should be expected to increase based on the Russia / Ukraine conflict’s impact on acres planted during 2022.
6) Continuing Waves.
As shocks occur after each of the waves of inflation identified above, there will be trickle-down impacts throughout the expense structure of businesses. Each of those shocks and trickle-down impacts will create an additional wave of inflation.
2022 Forecast Evaluation
When evaluating performance to plan, develop a performance risk analysis table. Use each key line item in the expense structure of the business, such as labor, energy, transportation, leases/rents, commodity purchases, and interest expense, and identify the high and the low potential impact. Also consider price increases for all other expenses at the current PPI level. By preparing this type of analysis, it will be easier to identify areas of performance risk and it will be easier to put energy into areas that will have the highest probability of positive impacts on financial performance.
Then evaluate the cushion a business has in terms of meeting its loan covenants. For example, a company with a 2022 forecast resulting in a 2.5x fixed charge coverage ratio certainly has more room to absorb performance risk than a company with a 1.2x fixed charge coverage ratio forecast.
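To make the cushion concrete, here is a small sketch with hypothetical figures; actual covenant definitions of EBITDA and fixed charges vary by loan agreement:

```python
def fixed_charge_coverage(ebitda, fixed_charges):
    """Simplified fixed charge coverage ratio: EBITDA / fixed charges."""
    return ebitda / fixed_charges

# Hypothetical company: $2.5M EBITDA against $1.0M of fixed charges
# (debt service, rent, etc.) gives the 2.5x forecast mentioned above.
ebitda, fixed = 2_500_000, 1_000_000
covenant_minimum = 1.2

ratio = fixed_charge_coverage(ebitda, fixed)
# EBITDA erosion the company can absorb before breaching a 1.2x covenant
cushion = ebitda - covenant_minimum * fixed
```

A company already forecasting 1.2x has no cushion at all: any unrecovered cost inflation produces a covenant breach.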
Overall, this can sound depressing or daunting, and to some extent it is. But this is when winners and losers will be determined. Businesses that dive into the analysis and work to evaluate ways to minimize financial impacts will be the ones that succeed. Businesses that hope this economic environment goes away, or ignore the impacts, will be the ones that lose. | https://www.focusmg.com/post/escalating-inflationary-pressures-expect-waves-of-inflationary-impacts
Tools of Engagement:
10 Actions to keep your employees engaged at work.
1. Acknowledge each employee as a valued individual contributor.
2. Continuously communicate vision/values/goals that must be accomplished.
3. Involve them in root cause analysis, gap analysis; identifying problems and solutions.
4. Provide them with the necessary resources to do their jobs effectively.
5. Be clear about performance standards, acknowledge progress and success.
6. Make one-on-one development time with each employee your top priority.
7. Set standards of tolerance, compassion and interpersonal workgroup support.
8. Identify, then remove frustrations and productivity barriers hindering progress.
9. Promote creating a positive work life – creating one’s own positive environment.
10. Tend to your own work life, satisfaction and work engagement.
Let us know how these work for you! | http://www.pswct.org/10-tools-of-engagement/ |
A code of conduct for dealing with each other with sensitivity to our differences, with confidentiality, and with mutual respect.
Volunteering sensitively
- All volunteers, enablers and helpers should treat each other with courtesy, respect and dignity
- People should be prepared to listen to each other and allow people time to express themselves regardless of language skills
- People should be aware of different cultural sensitivities – some people may find physical contact difficult; others may expect a physical form of greeting. People should respect differences without taking offence.
- When talking to someone with limited English, a calm and moderate tone should be used. Please use short simple phrases and be patient.
- When talking to someone via an interpreter, try to be sensitive to confidential issues and be discreet.
- People should not use obscene, derogatory or blasphemous language – in any language
- Everyone should be aware of child safety issues and ensure that the venue and equipment are safe.
- Volunteers, enablers and helpers should treat people equally and non-judgementally
Precautions and sensitivities
- Photography: Please be aware that people have come from difficult situations; always ask before you photograph people. Be aware that some people are escaping marital violence, and photos of them or their children may compromise their safety.
- Social media: Please do not post comments relating to asylum seekers on social media – as with photos, people can be traced. Do not post messages for people or refer to people by name. If people are trying to find family members, contact the Red Cross.
- Communications with the media: Asylum seekers who are at Urban House, (the Initial Accommodation Centre), and asylum seekers in the community are vulnerable people. If you are contacted by the media, or someone claiming to be from the media, please contact WDCofS Coordinator or the Chairperson. If you are asked to make comments about Urban House please refer the matter to Urban House. (01924 572003). Urban House tries to protect the interest of people in their care. Please be aware that the identity of people at Urban House should be kept confidential. If an agency contacts you asking about a specific person, do not acknowledge that you know the person. Even if the caller is offering help and support, refer the caller to Urban House.
- Confidentiality: Please be aware that people who are in the asylum process or have been through the process may give you some of their personal information. Information should not be shared with another person or an agency unless the person has specifically given you permission – even then take care.
- General: If a person shares with you that they wish to harm themselves, harm others or commit a crime, please contact Urban House if they are staying there, the WDCofS Coordinator or the Chairperson, the Samaritans (01924 377011), the police non-emergency line (101) or, in an emergency, the police on 999.
- Do not try to cope with matters on your own – we are here to offer a welcome and when needed to find the right support. | https://wakefield.cityofsanctuary.org/volunteers/policies/sensitivity-to-each-other |
Intelligent Transport Systems (ITS) have shown their capability of exploiting real-time and/or predicted data to enhance traffic safety, reduce the negative environmental impacts of transport systems, and increase the efficiency of transportation networks. However, performances of dynamic transportation models in reproducing network dynamics are strongly related to the quality and quantity of the available mobility demand information. As a consequence, the knowledge of the Origin-Destination (OD) demand flows has an important role to play in ensuring the successful deployment of ITS.
In a dynamic context, for simple cases such as urban intersections or motorways equipped with entrance/exit toll stations, it is possible to directly estimate/count OD flows. For more general applications, if one were to rely solely on standard methods, only samples of the OD flows can be observed. A quite classical approach is to conduct an expensive travel survey to derive time-dependent OD flows. A more recent option is to exploit new data collection methods, such as Automated Vehicle Identification (AVI), GPS, Bluetooth and GSM data. After this sample has been collected, a common and widely adopted procedure to infer the real OD flows is to estimate them through surveillance data that, contrarily to the OD data, are easily available and cheaper. This problem is commonly known as the OD Estimation problem. When this procedure is applied to public transport, additional difficulties arise from the fact that the most straightforward source of information is the ticketing system, which is inhomogeneous across different transit networks. The problem becomes even more complex when multimodal matrices are jointly estimated, since the level of information may be different for different modes of transports.
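As a minimal sketch of the OD estimation problem described above: given observed link counts and an assignment matrix mapping OD flows onto links (assumed known here from a route-choice model; the network, matrix and counts are invented for illustration), non-negative least squares recovers a consistent set of OD flows:

```python
import numpy as np
from scipy.optimize import nnls

# Toy instance: 4 OD pairs, 3 counted links.
# A[i, j] = fraction of OD pair j's flow using link i (assignment matrix).
A = np.array([
    [1.0, 0.0, 0.5, 0.0],
    [0.0, 1.0, 0.5, 1.0],
    [1.0, 1.0, 0.0, 0.0],
])
counts = np.array([120.0, 180.0, 150.0])  # observed link flows (veh/h)

# Estimate OD flows x >= 0 minimizing ||A x - counts||
x, residual = nnls(A, counts)
```

With three counts and four unknowns the system is underdetermined, which is exactly why practical estimators add a prior (seed) matrix, entropy terms or generalized least squares to choose among the many feasible solutions.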
Although the last decades have witnessed intensive research activity in this direction, a reliable estimation of the OD flows is achievable only if an enormous amount of information is available. As a consequence, only a few frameworks have been successfully implemented on real-world instances.
This proposed special session focuses on the new opportunities in the area of ITS and new technologies for developing and calibrating demand models for both private, Public Transport as well as multimodal systems. Vehicle to Vehicle communication, Crowd sensing, Smart Mobility, Shared Mobility, digitalization and Electronic ticketing are only a few examples of new trends that are shaping the mobility of the future. By leveraging on a more reliable, complete and large set of data, researchers can nowadays tackle many of the critical and systematic issues behind OD estimation, develop new models based on a better understanding of complex mobility behavior, forecast unusual patterns and introduce benchmarking practices to validate their approaches.
6. Applications on real-life networks.
A nurse or doctor obtains treatment information about you and records it in a health record.
During the course of your treatment, the clinician determines he/she will need to consult with another specialist in the area. He/she will share the information with such specialist and obtain his/her input.
I submit a request for payment to your health insurance company. The health insurance company (or other business associate helping me to obtain payment) requests information from me regarding medical care given. I will provide information to them about you and the care given.
I obtain services for my insurers or other business associates for outcome evaluation. I will share information about you with such insurers or other business associates as necessary to obtain these services.
A tool with which I can assess and continually work to improve the care that I provide and the outcomes achieved.
Obtain an accounting of disclosures of your health information as required to be maintained by law by delivering a request to my office/hospital. An accounting will not include uses and disclosures of information for treatment, payment, or operations; disclosures or uses made to you or made at your request; uses or disclosures made pursuant to an authorization signed by you; uses or disclosures made in a facility directory or to family members or friends relevant to that person’s involvement in your care or in payment for such care; or, uses or disclosures to notify family or others responsible for your care of your location, condition, or your death.
Revoke authorizations that you made previously to use or disclose information, except to the extent that action has already been taken, by delivery of a written revocation to my office/hospital.
If you want to exercise any of the above rights, please inform Nancy Arikian, Ph.D., L.P., in person or in writing. She will inform you of the steps that need to be taken to exercise your rights.
I reserve the right to change my privacy practices and to make the new provisions effective for all protected health information I maintain. If my information practices change, I will amend this Notice and mail a revised notice to the address you’ve supplied to me, or if you agree, I will email the revised notice to you. I will not use or disclose your health information without your authorization, except as described in this Notice. I will also discontinue using or disclosing your health information after I have received a written revocation of the authorization from you according to the procedures included in the authorization.
Communication with Family – Using my best judgment, I may disclose to a family member, other relative, close personal friend, or any other person you identify verbally or in writing, health information relevant to that person’s involvement in your care or in payment for such care. Under non-emergency circumstances, I will ask for your consent in writing.
Notification – Unless you object, I may use or disclose your protected health information to notify, or assist in notifying, a family member, personal representative, or other person responsible for your care, about your location, and about your general condition, or your death.
Research – I may disclose information in my own research or to researchers when their research has been approved by an institutional review board that has reviewed the research proposal and established protocols to ensure the privacy of your protected health information.
Disaster Relief – I may use and disclose your protected health information to assist in disaster relief efforts.
Workers Compensation – If you are seeking compensation through Workers Compensation, I may disclose your protected health information to the extent necessary to comply with laws relating to Workers Compensation.
Public Health – As authorized by law, I may disclose your protected health information to public health or legal authorities charged with preventing or controlling disease, injury, or disability; to report reactions to medications or problems with products; to notify people of recalls; and to notify a person who may have been exposed to a disease or who is at risk for contracting or spreading a disease or condition.
Abuse and Neglect – I may disclose your protected health information to public authorities as allowed by law to report abuse or neglect.
Employers – I may release health information about you to your employer if I provide health care services to you at the request of your employer, and the health care services are provided either to conduct an evaluation relating to surveillance of the workplace or to evaluate whether you have a work-related illness or injury. In such circumstances, I will give you written notice of such release of information to your employer. Any other disclosures to your employer will be made only if you provide a specific authorization for the release of that information to your employer.
Correctional Institutions – If you are an inmate of a correctional institution, I may disclose to the institution or its agents the protected health information necessary for your health and the health and safety of other individuals.
Law Enforcement – I may disclose your protected health information for law enforcement purposes as required by law, such as when required by a court order, or in cases involving felony prosecution.
Health Oversight – Federal law allows me to release your protected health information to appropriate health oversight agencies or for health oversight activities.
Judicial/Administrative Proceedings – I may disclose your protected health information in the course of any judicial or administrative proceeding as allowed or required by law, with your authorization, or as directed by a proper court order.
Serious Threat – To avert a serious threat to health or safety, I may disclose your protected health information consistent with applicable law to prevent or lessen a serious, imminent threat to the health or safety of a person or the public.
For Specialized Governmental Functions – I may disclose your protected health information for specialized government functions as authorized by law such as to Armed Forces personnel, for national security purposes, or to public assistance program personnel.
Marketing – I may contact you to provide appointment reminders or information about treatment alternatives or other health-related benefits and services that may be of interest to you.
Website – If I maintain a website that provides information about my practice, this Notice will be on the website. | http://nancyarikian.com/hippa-notice/ |
Abstract Recently there has been an increasing interest in wind power generation systems. Among renewable sources of energy (excluding hydro power), wind energy offers the lowest cost. It is therefore imperative that basics of wind power generation be taught in the undergraduate electrical engineering curriculum. In this paper, an experiment that emulates wind turbine systems has been developed for this purpose. It is known that the power that can be drawn from the wind in a wind turbine depends on the wind speed and the speed at which the shaft of the turbine is rotated. The objective of this project was to emulate the behavior of such a system using two DC machines. One of the DC machines was operated under torque control. The torque reference for this machine was generated using the Power vs. Shaft speed curves for wind turbines. This DC machine emulated the wind turbine and shaft. The second DC machine was operated under speed control and this machine emulated the electrical generator. Simulations were performed to design such a system. The system was implemented in real-time using Simulink and dSPACE control platform. Two 200W DC machines rated at 40VDC and 4000 rpm were used. The DC machines were controlled using a pulse width modulated (PWM) power converter. This project was part of an undergraduate research supported by NSF and the University of Minnesota Research Experiences for Undergraduates (REU) program.
I. Introduction
The objective is to develop a system that emulates a wind turbine. Previous efforts in this direction have employed separately excited DC machines [1, 2] with power ratings in the multiple horsepower range. The intended application of the system described in this paper is for undergraduate laboratory courses. Thus, a system that works at lower voltages is desired. Existing laboratory equipment such as DC motors and generators is to be used to implement the system. Since this experiment was done using two 200 W DC machines [3] rated at 42 VDC and 3600 rpm, it is well suited for educational purposes.
The kinetic energy from the wind is transferred as rotational mechanical energy to the wind turbine system. An optional gearbox can be placed depending on the generator specifications to increase the shaft speed (hence decreasing the torque). This mechanical energy is converted to electrical energy using a generator. A power electronic interface may be needed to interface the generator with the supply grid and to provide a control method for the system. | https://peer.asee.org/emulation-of-a-wind-turbine-system |
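The torque reference described in the abstract can be sketched from the turbine's Power vs. Shaft speed behavior. The rotor radius and the Cp(lambda) curve below are invented placeholders, not values from the actual experiment:

```python
import numpy as np

RHO = 1.225   # air density, kg/m^3
R = 0.5       # assumed rotor radius, m (illustrative only)

def cp(lam):
    # Toy power-coefficient curve peaking at tip-speed ratio lambda = 7;
    # a real emulator would use a measured or fitted Cp(lambda) table.
    return max(0.48 * (lam / 7.0) * (2.0 - lam / 7.0), 0.0)

def torque_reference(wind_speed, shaft_speed):
    """Torque command for the 'turbine' DC machine: T = P(v, omega) / omega."""
    if shaft_speed <= 0.0:
        return 0.0
    lam = shaft_speed * R / wind_speed                        # tip-speed ratio
    power = 0.5 * RHO * np.pi * R**2 * cp(lam) * wind_speed**3
    return power / shaft_speed

# Sweep shaft speed at 8 m/s wind to locate the peak-power operating point
speeds = np.linspace(1.0, 300.0, 600)                         # rad/s
powers = [torque_reference(8.0, w) * w for w in speeds]
best = speeds[int(np.argmax(powers))]
```

In the experiment this torque value would become the reference for the torque-controlled DC machine, while the speed-controlled machine plays the role of the generator.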
National Technical University of Athens | NTUA
National Technical University of Athens (NTUA) is the oldest and most prestigious educational institution of Greece in the field of technology, and has contributed unceasingly to the country's scientific, technical and economic development since its foundation in 1836. NTUA has extensive experience in synthesis, characterisation and processing of nanomaterials as well as of nanostructured materials that can be used in the biomedical sector. Additionally, NTUA has the ability to evaluate the health and safety footprint of each process through life cycle analysis and Safe by Design approaches. | https://www.biorima.eu/consortium/detail.php?we_objectID=48 |
Foreign minister to stay more in US after PM’s visit
National
WASHINGTON: Foreign Minister Shah Mahmood Qureshi will stay back in the US for at least four more days after Prime Minister Imran Khan’s visit is over.
Prime Minister Imran Khan will meet President Donald Trump on July 22. The White House confirmed Wednesday that both leaders will focus on strengthening cooperation between the two countries to bring peace, stability, and economic prosperity and will also discuss a range of issues, including counterterrorism, defence, energy, and trade, "with the goal of creating conditions for a peaceful South Asia and an enduring partnership between our two countries."
Clearly, the focus will be the regional situation, especially as the US desperately wants to reach a peace deal with the Taliban so it can withdraw from Afghanistan. Imran Khan is scheduled to stay for three days in Washington along with his cabinet members, including the foreign minister, the adviser on finance, and the adviser on investment and commerce.
The foreign minister as part of the delegation will accompany the prime minister but will remain here for four more days to meet officials, US dignitaries, and lawmakers and other stakeholders to discuss matters of mutual and regional concerns, sources told The News.
The embassy here has not shared any details of the meetings so far. However, the Foreign Office has released a statement highlighting that the prime minister will also be meeting prominent members of the US Congress, corporate leaders and opinion makers as well as members of the Pakistani diaspora. The visit will contribute towards building a broad-based, long-term and enduring partnership between the two countries on the basis of mutual interest and mutual benefit, according to the Foreign Office.
"During his various engagements in Washington, the prime minister will outline his vision of “Naya Pakistan” and underscore the importance Pakistan attaches to a broader and multi-faceted relationship with the United States. In the regional context, the prime minister will underscore Pakistan’s commitment to peace and stability and the importance of constructive engagement to promote a political solution in Afghanistan. He will also highlight Pakistan’s policy of “peaceful neighbourhood” aimed at resolving disputes through dialogue and promoting the vision of peace, progress and prosperity in South Asia and beyond," the Foreign Office statement said.
To enable a multidisciplinary team to build, evaluate and sustain a shared approach to supporting self-management, reflecting a shift towards successful person-centred care.
Frontline staff in health and social care staff of any grade working in the same team, service or pathway.
What are the potential benefits of our service?
Teams who work in a targeted person-centred way can improve the quality of their interactions with patients and families, which in turn can lead to improvements in measures of patient experience and satisfaction. Secondary service outcomes that can be expected include more efficient use of health and social care resources.
Patients and families who feel supported to self-manage feel greater confidence and control over their condition and daily lives. In turn this can bring about improvements in mood, functional ability and perceived quality of life.
Bridges is a two-stage training programme with a full-day (09.00 – 16.30) initial workshop and a half-day (09.00 – 13.00) follow-up workshop approximately three months later.
The time in between workshops allows the team to implement the Bridges approach to self-management support and return to the follow-up workshop with practical experiences to critically reflect on. Practitioners are required to write a case reflection in time for the follow-up workshop.
The team has a more consistent, person-centred approach to practice, with a focus on what’s important to the person and their family.
Teams in London can fund our team training through the Health Education England Portal for London and the South East via the Kingston University and St. George's University of London workforce development prospectus. Please contact your Training and Education Lead. | http://www.bridgesselfmanagement.org.uk/team-training/ |
It is no accident that many of President-Elect Biden’s first personnel announcements have been key appointments aimed at strengthening the country’s economy. Biden is sending a clear message that important “pocket book issues” will be a central part of his White House agenda. Steering the economy through the COVID pandemic, record unemployment and a long and grueling winter of climbing infections, hospitalizations and death will not be easy. Having the right people in the right positions has rarely been more critical.
Though not officially part of the President’s economic team, there is another key position that has an enormous impact on millions of Americans and their household budgets: Director of the Consumer Financial Protection Bureau (CFPB). If predatory lenders are free to rip off and cheat Americans without any effective check it will undermine our ability to climb out of this hole.
Four years ago, the CFPB was among the most effective agencies in Washington, taking steps both in enforcing consumer protection laws and in adopting regulations protecting consumers, aimed at making banks and other lenders operate in fairer and more honest ways. But the CFPB has become a shell of its former self under the direction of Kathy Kraninger and her predecessor, Mick Mulvaney. Restoring the CFPB and refocusing it on the central mission Congress intended cannot be done without a change in leadership. The President-Elect must swiftly remove Kraninger from her post and nominate a true consumer champion to take the reins of the CFPB.
In the first six years of the CFPB’s existence, consumers could count on it to enforce the laws prohibiting lenders from cheating and scamming consumers. In those first six years, the agency collected more than $12 billion from lenders who had cheated consumers, and returned most of this sum directly to ripped-off consumers. Since Kraninger took over in 2018, that number has fallen to $800 million – a dramatic drop that has an impact on consumers’ bottom line. (And as Bloomberg notes, a sizable portion of what has been collected by Kraninger was the result of investigations originally launched by her predecessor, Richard Cordray.)
Kraninger has proven herself to be a great friend to the corporations she’s tasked with overseeing, but she has utterly failed American consumers. Under her “leadership,” the CFPB has been transformed into the Corporate Fraud Protection Bureau. Her unwillingness to take on predatory lenders and crooked corporations renders her completely unfit for the job she currently holds.
In addition to reinvigorating the agency’s enforcement activities, President-Elect Biden should pick a CFPB Director who will engage in meaningful regulation. In 2017, after a Congressionally mandated study was completed, the agency took strong action against the use of forced arbitration clauses by lenders. Again and again, large banks, payday lenders and others had used these fine-print contracts to make it impossible for consumers to go to court when they were cheated. The 2017 Congress overturned this regulation under the Congressional Review Act, however, freeing lenders back up to cheat their customers with impunity. While the agency cannot adopt a “substantially similar” regulation, it should swiftly move to take other actions to limit and sharply regulate the abuses of forced arbitration. The agency’s own research, conducted under prior Director Richard Cordray, showed that forced arbitration suppresses claims — only an infinitesimal fraction of cheated consumers ever bring cases against lenders. Locking consumers out of court has saved corporations countless billions while costing consumers real money. In today’s economy, that’s not just theoretical savings; it has a direct impact on Americans’ lives.
As the pandemic-fueled economic crisis drags on, a growing number of consumers have been forced to turn to payday lenders. The problem is, people who turn to payday lenders nearly always turn out to be far worse off. When Rich Cordray was Director of the CFPB, the Bureau adopted an evidence-based regulation to prevent some of the worst abuses of payday lending. Unfortunately, under Kraninger, the agency repealed its earlier regulation, and instead pushed through a regulation that strongly favors the industry, and affirmatively encourages the growth of this predatory lending. Putting common sense limits on payday lending should not be a partisan issue (even though payday lenders make huge campaign contributions, mostly to Republicans). On Election Day 2020, the voters of Nebraska (who voted overwhelmingly for Kraninger’s boss, Donald Trump) passed a ballot initiative capping interest rates charged by payday lenders. Their vote follows a similar successful law approved by the residents of South Dakota in 2016 (when those same voters also voted overwhelmingly for Republican candidates, including President Trump).
Despite clear evidence that protecting consumers enjoys broad, bipartisan support among the electorate, Kraninger, Mulvaney and Trump forged ahead with a sweeping and vicious anti-consumer agenda that immunized lenders and left working people out in the cold. Even when she has taken action, Kraninger has mostly gone after small lenders that are unable to pay even the meager fines she has levied against them, meaning she can hardly be considered a deterrent to, or worry for, larger companies that have been given free rein to rip off consumers without any meaningful consequences.
All of this points to one obvious conclusion: The CFPB cannot right its course without new leadership. Firing Kathy Kraninger should be a priority for President-Elect Biden.
Even a new, consumer-friendly Director will have their work cut out for them. The new Director will have to overcome a powerful industry which wields enormous political influence, and will have to be especially vigilant – and determined – in developing new policies to curb the worst of these behaviors. It won’t be easy, but it can be done. Consumer groups like Public Justice stand ready – and eagerly willing – to work with new leadership at the agency and help rebuild the CFPB into the effective, dependable government agency it once was.
Those who have suffered most during the current economic crisis will benefit most from strong new leadership at the CFPB. It is imperative that President-Elect Biden choose a Director who believes in the agency’s mission and the working men and women it was meant to protect. Turning the page on Kraninger’s tenure, which has been spent dismantling the agency and dismissing wrong-doing, is a critical part of steering our economy and putting our government back on the side of the people. | https://www.publicjustice.net/firing-kathy-kraninger-must-be-part-of-president-elect-bidens-economic-agenda/ |
Social Responsibility and Social Responsibility Disclosure
Corporate governance is a critical element in driving excellence in corporate social responsibility (CSR). One of the important cornerstones of corporate governance is the board of directors. Thus, this study attempts to examine the effect of board structure on the corporate social responsibility disclosures of public listed companies in Malaysia. Data for the study was collected from secondary sources. A CSR disclosure index was developed to examine CSR disclosure across the four dimensions specified by Bursa Malaysia: environmental, community, workplace and marketplace. Multiple regression analysis was employed to analyze the data. The result shows that managerial ownership significantly and negatively influences CSR disclosure in Malaysian listed companies. The other board variables appear to have the direction expected by the hypotheses, but are not significant.
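As an illustration of the scoring-and-regression design described above, the sketch below builds an unweighted disclosure index over the four Bursa Malaysia dimensions and fits a one-variable least-squares line of the index on managerial ownership. The checklist items, the toy firm data and all variable names are invented for illustration; the study's actual instrument, sample and multi-variable model are not reproduced here.

```python
# Hypothetical sketch of a CSR disclosure index and a one-predictor OLS fit.
# All data below is invented; only the general method mirrors the abstract.

def disclosure_index(disclosed, checklist):
    """Unweighted index: share of checklist items the report discloses."""
    return sum(1 for item in checklist if item in disclosed) / len(checklist)

# The four disclosure dimensions named by Bursa Malaysia in the abstract.
CHECKLIST = ["environmental", "community", "workplace", "marketplace"]

# Toy sample: (set of dimensions disclosed, managerial ownership share).
firms = [
    ({"environmental", "community"}, 0.40),
    ({"environmental", "community", "workplace", "marketplace"}, 0.05),
    ({"community"}, 0.60),
    ({"environmental", "workplace", "marketplace"}, 0.10),
]

y = [disclosure_index(d, CHECKLIST) for d, _ in firms]   # disclosure index
x = [own for _, own in firms]                            # managerial ownership

# Closed-form simple OLS: slope = Sxy / Sxx, intercept = mean(y) - slope*mean(x).
n = len(x)
mx, my = sum(x) / n, sum(y) / n
sxx = sum((xi - mx) ** 2 for xi in x)
sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
slope = sxy / sxx
intercept = my - slope * mx
print(f"slope={slope:.3f} intercept={intercept:.3f}")  # negative slope mirrors the reported sign
```

In the study itself a multiple regression over several board variables would replace this single-predictor fit; the closed-form slope is just the simplest case of the same estimator.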
The existing literature on corporate social responsibility (CSR) disclosure in the context of Islamic banks has focussed only on the extent and nature of CSR, rather than on factors influencing the level of CSR disclosure. This study investigates the types and extent of CSR information disclosure in 132 annual reports of Islamic banks from different countries for the year 2008, using a benchmark based on Shariah principles and rules. It also examines the impact of corporate governance (CG) mechanisms, consisting of the board of directors' effectiveness (BODE), the effectiveness of the Shariah supervisory board (ESSB), the audit quality (AUDITQ), and the overall score of CG, on the CSR disclosure level based on legitimacy theory. The study uses content analysis and ordinary least squares regression to achieve the research objectives. The findings show that there is an increase in disclosure of CSR information. The ranking of CSR disclosure themes (from highest to lowest) is as follows: (1) top management, (2) products and services, (3) employees, (4) Shariah supervisory board (SSB), (5) customers, (6) late repayments and insolvent clients, (7) other aspects of community involvement, (8) vision and mission statement, (9) poverty, (10) Zakah, (11) charitable and social activities, (12) unlawful transactions, (13) Quard Hassan, and (14) the environment. As for the BOD, the results show that only the board composition has a negative impact on CSR disclosure. Regarding the SSB, cross-memberships are positively associated with CSR disclosure, whilst for the AUDITQ, the members with an accounting degree, the percentage of non-executive directors and the meetings' frequency are found to influence the CSR disclosure. The findings also show that ESSB and AUDITQ significantly influence the CSR disclosure, while BODE has no influence at all. The overall score of CG also significantly influences the CSR disclosure.
The objective of this study is to produce a longitudinal analysis of the disclosure levels of CSR reporting in Chinese listed companies that were listed in the Top 100 in 2002 and 2006.
This study aims to investigate the influence of size, profit, gearing, industry type, and dependence on government on Corporate Social Responsibility (CSR) disclosure in MESDAQ-listed companies, where awareness of CSR is low and participation in the implementation of CSR in business operations lacks recognition.
This thesis attempts to investigate three questions: to what extent does the gambling industry disclose CSR-related data, how is CSR understood in this industry and why does the gambling industry engage in CSR?
Future research should also examine the effect of assurance and its quality on firm value at different points of time and with various types of CSR disclosure.
Regression analysis using panel data is initially used to analyse the potential association between CSR disclosure and five important board diversity measures, specifically independence, tenure, gender, multiple directorships and an overall diversity measure.
Today, CSR performance is commonly measured through a company's activity report, using content analysis. This method converts qualitative information into quantitative information so that it can be used in statistical analysis. In other words, the total score obtained from the content-analysis process reflects how many disclosures the report contains. It must be underlined, however, that the CSR information disclosed is not guaranteed to describe all CSR activities that have actually been conducted; there may be a gap.
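The sketch below shows one minimal way such a content analysis can turn a qualitative report into counts per theme. The keyword scheme and the sample report are invented stand-ins for a real coding instrument, and, as the paragraph warns, a high count does not prove the report covers every activity actually carried out.

```python
# Illustrative only: a toy coding scheme mapping themes to keywords.
# A real study would use a validated instrument, not this made-up list.
CODING_SCHEME = {
    "environment": ["emission", "recycling", "energy"],
    "community": ["donation", "scholarship", "volunteer"],
    "workplace": ["safety", "training", "diversity"],
}

def code_report(text):
    """Count sentences mentioning at least one keyword of each theme."""
    sentences = [s.strip().lower() for s in text.split(".") if s.strip()]
    counts = {theme: 0 for theme in CODING_SCHEME}
    for sentence in sentences:
        for theme, keywords in CODING_SCHEME.items():
            # A sentence scores a theme at most once, however many keywords hit.
            if any(k in sentence for k in keywords):
                counts[theme] += 1
    return counts

report = (
    "The company reduced emissions by 12 percent. "
    "Employee safety training was expanded. "
    "We funded a scholarship and a volunteer day."
)
print(code_report(report))  # → {'environment': 1, 'community': 1, 'workplace': 1}
```

The per-theme totals are the quantitative scores the paragraph refers to; they describe what the report says, not what the company actually did.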
| http://paperkilledrock.com/thesis-csr-disclosure.html
"Thank you for the GREAT work you have done. Just wanted to tell that I'm very happy with my essay and will get back with more assignments soon." | http://paperkilledrock.com/thesis-csr-disclosure.html |
The areas of artistic education are articulated with different weights in the didactic process. The aim of this chapter is to theoretically substantiate reference points for the methodological integralization of artistic education. The chapter analyses the impact of the artistic education areas on general education. Starting from the hypothesis that the methodological approach to artistic education is inconsistent across schooling stages, it argues for coherence in our society's realization of the process of forming/developing the pupil's personality through and for art. Thus, in the context of methodological integration, the ideal of an integral personality along affective, attitudinal, cognitive and cultural lines belongs to the artistic education areas. This will offer new perspectives for the theoretical conceptualization, design and development of innovative educational-artistic methodologies and technologies.
The function of art in school is educational. The cultural/spiritual/intercultural challenges and the disorientation of youth in society require a reconceptualization of arts curricula. On the one hand, as noted by Howard (2014, p. 1), the dispute over education practice exists because defined knowledge within the curriculum is instilled with cultural signifiers that either perpetuate the status quo or challenge the dominant cultural narrative. On the other hand, the artistic education areas - literature, music, theatre, visual arts, choreography, etc. - are articulated with different weights in the educational process. The role of artistic education in forming young people's competences for life has been widely recognized in the twenty-first century. However, only music and visual arts are taught in school, at primary and secondary levels, in most European countries (EACEA, 2009). Each educational institution has the freedom to choose optional artistic subjects, but this does not contribute to the harmonization of education toward the artistic act. The notion of artistic education can be interpreted as “a continuous process of the personality spiritual self-realization through different forms of contact with the art” (Morari, 2005, p. 5). The areas of artistic education should focus on the formation/creation/edification of the personality/being/self of the pupil, not on the learning/acquisition of art as a subject of study, as a goal in itself. We hypothesize that the theoretical-methodological integration of the areas of artistic education may ensure the coherence and synergy of the action of the arts on every personality, by capitalizing on the arts' potential and appropriately exploring educational-artistic methodologies.
The inconsistent methodological approach to artistic education in different schools, and the low interference between the axiological fundamentals of the artistic education areas and the methodological fundamentals of the arts as curricular subjects in the contemporary school, generate the research problem: What are the theoretical-practical benchmarks of the methodological integration of the artistic education areas in general education? The general aim of the chapter is to describe theoretical resources for the methodological substantiation of the process of integrating the artistic education areas in the context of postmodernism.
The chapter comprises several key subsections, which together delineate the idea of methodological integration of the artistic education areas. The first subsection connects different artistic works on a common platform and establishes ideas about the synthesis of art and the unity of art works. The next subsection uses a critical methodological framework to describe the applicability of the principles of intellect in art and artistic education, as well as the role of existential, expressive and actional artistic phenomena in the formation of a complex vision of art. The next subsection uses a critical pedagogy framework to analyze the system of values of artistic education from the perspective of the educational system components (epistemological, teleological, technological and content), depending on the nature of the knowledge object, and from the perspective of didactic technologies. Moreover, it reveals the content and indicators of artistic education as well as the levels and awareness of artistic experience and its functions in the training and development of personality. The last subsection examines postmodernist trends in twenty-first century learning outcomes through the conceptualization of attitudes as the main element of integration of artistic experiences, not only on the behavioral level but also on the inner level – the psychological actions acquired as a result of education and personal artistic education.
Artistic Education Values: All types of values that represent the message promoted by the teleology, content, methodology and epistemology of arts education. The criteria for the classification of arts education values are: (1) from the perspective of educational system components; (2) as a function of the nature of the knowledge objects; and (3) from the perspective of didactic technologies (Paslaru, 2007, p. 11).
Artistic Process: A succession of states, steps, stages, through which the human evolves together with artistic phenomenon, characterized in two ways: as an external action, through the sequence of steps of artistic activity (creation, reception, interpretation, listening, etc.); and as the inner action in the succession of states and inner experiences of artistic activity.
Attitude toward Art: Consists of affective, cognitive and behavioral elements. Essential in the attitude toward art is the inner side (the psychic) and not the outward (behavior). Attitude toward art should be open, comprehensive, exploratory, and based on experiences and personal interpretations of values.
Artistic Education Areas: Spheres (areas or domains) of education that correspond to patterns of fine arts, represented through literature, music, choreography, theatre, and visual arts. The areas of artistic education are literary-artistic education, music education, theatre education, choreography and artistic-aesthetic education.
Artistic Experience: Whole states of emotional sensibility experienced directly in the process of artistic acts of perception, interpretation, creation, and reflection. In the process of artistic experience the personal autonomy of the student emerges through the discovery of spirituality in artwork artistic messages and through creating and fostering cultural values.
Common Nature of Arts: Points of contact and exchange between arts, the phenomenon of synthesis and symbiosis of arts in work and artistic creation. The common nature of arts can be discovered at existential level of artistic phenomenon, at the expressive level of art through intuition and at the actional level through artistic experience. The common features of all arts have aesthetic and ideological implications conditioned by the space and the time in which they appear.
Artistic Education: An individual continuous process of spiritual self-perfection of the personality through multiple forms of contact with art. In K-12 arts education there are two approaches: 1. Education for art, aimed at preparing those who receive/interpret arts for the understanding as well as for assimilation of artistic messages; 2. Education through art, which endorses the educational potential of the artwork toward the general development of the student personality. | https://www.igi-global.com/chapter/artistic-education-areas/140774 |
Effective leadership, particularly in a professional environment, entails a lot of things, including self-discipline, critical thinking and emotional intelligence. First and foremost, it entails effective communication, capable of exploring and explaining ideas in the clearest and most engaging way possible. It doesn’t matter if you’ve got the tactical prowess of Napoleon; if all that happens when you open your mouth is that those around you tune out, you’ll never conquer France.
In this article, we’ll explore some habits of the best leaders and communicators, and some tips on how to implement them in your own life.
My dad liked to say, “Measure twice, cut once.” That’s my dad for you.
In professional communications, both of these pieces of advice can be invaluable, as they can save you from the pitfalls of impulse or ego. There’s a tendency in a crowded room to want to talk for the sake of talking, to be the first person to open their mouth and get there – and it can often come at the cost of actually having anything worthwhile to say.
If shows like “How It’s Made” have anything to teach us, it’s that a passionate, assertive communicator can make even the most esoteric subject thrilling. If you care about your subject, your audience will too – and if you don’t, they won’t.
Despite what advertisers or politicians might wish were the case, generally speaking, people can smell BS a mile away. Unless you’re a KGB operative, chances are you’re not half as good at lying as you think you are, and those around you can sense when you’re not being straight with them.
That doesn’t mean you should get better at being crooked – it just means you should tell the truth to start with.
Just as people can sense when you’re not telling them the truth, they can also sense when they’re not being spoken to in a truthful fashion. There’s a marked, noticeable difference in the cadence of a used car salesman and that of a preacher, politician or teacher.
Assume that other people are exactly as perceptive as you, and exactly as familiar with different modes of communication. Then ditch every mode of communication that isn’t simple, straightforward and authentic. Be candid. Be frank. If necessary, be blunt. Do not try to spin people or handle them – because being spun is nauseating, and being handled is demeaning.
Strong communication requires strong critical thinking, and strong critical thinking requires an ability to see the big picture. Effective communication also requires the ability to be specific, to distill the big picture down into a single comic-book panel that conveys one idea in a clear, efficient way. Platitudes and generalizations are useless. Communication is the means to an end, and for a leader, that end should always either be requesting information, or inciting action.
A good military strategist knows when to cut his losses and cede a battle – but even better, he knows which battles to avoid. Likewise, a good communicator knows when to admit they’re wrong, when to concede a point and when to avoid speaking on a subject they’re not informed about.
Most importantly, a good leader knows when to confess his or her own ignorance, and even how to use a confession to force others to communicate more clearly.
If leadership is conducting the battle, then communication is relaying orders: Be brief and to the point. Get in and get out. No chatter on the mission channel. Loose lips sink ships. After you’ve listened, thought about what you wanted to say, and said it … stop talking. Let your words do the work you designed them to do, and keep your mouth shut until and unless you have something useful to say.
Always remember: Choosing not to communicate is a form of communication, too.
Responding effectively to external events is only half of effective leadership. The other half is doing things proactively: setting new goals, evaluating progress, anticipating problems and preventing them before they arise. Communication is no exception, and in a team-based setting, it can be the single greatest tool you have to accomplish those tasks in other areas.
A teacher teaches others. A cook cooks for other people. Remember at all times that communication is a service, and it’s not just for yourself or the audience: Effective leaders take the risk of speaking on behalf of others. They say the things that others are thinking, in the manner they’re not capable of saying it. They express the rumblings of the majority, but also the whispers of the minority. They contrast and synthesize ideas. They offer analysis and commentary. They help those with a voice express their thoughts more clearly, and those without a voice express their thoughts at all.
Leadership is communication, and communication is leadership. The two are inseparable ideas, and you can’t do one without doing the other.
It’s no wonder, then, that bad leaders are bad communicators. They don’t listen, they don’t think, they spin lies, they think they know everything, they don’t speak up when they should and at the same time, they don’t know when to keep their mouth shut.
In seeking to emulate the communication habits of the best leaders, the simplest solution is this: Evaluate the qualities that make them the best, then apply those qualities to communication.
This post is so rich with meaningful nuggets of leadership truth about communication. I think the advice “Choosing not to communicate is also a form of communication” helps me see things in a new perspective.
Thanks, Mariah! I think that’s one of the hardest habits to learn, but definitely one of the most important as a leader! | https://www.punchedclocks.com/10-communication-habits-of-the-best-leaders/ |
TECHNICAL FIELD
BACKGROUND
RELATED ART DOCUMENT
Patent Document
BRIEF SUMMARY
Technical Advantages
DETAILED DESCRIPTION
EXPLANATION OF REFERENCE NUMERALS
The present invention relates to an overhead console for a vehicle and a coupling method of the overhead console.
Japanese Patent Publication 2007-223355 discloses a conventional coupling method of an overhead console where after coupling a bin with a panel from a design surface side, a torsion spring is set to the panel and the bin from a side surface of the panel.
However, there is a problem with the coupling method of the conventional overhead console. More particularly, it is difficult to set the torsion spring to a space between the bin and the panel after coupling the bin with the panel. Therefore, it is desirable to improve the conventional coupling method of an overhead console.
[Document 1] Japanese Patent Publication No. 2007-223355
One object of the invention is to provide an overhead console and a coupling method of the overhead console improved in coupling work compared with a conventional coupling work.
The overhead console according to certain embodiments of the present invention for achieving the above object is as follows:
(1) An overhead console comprising:
(a) a panel including a bearing and a panel-side engaging portion;
(b) a bin including a rotatable shaft rotatably supported by the bearing and a bin-side engaging portion, the bin being rotatably coupled with the panel so as to be rotatable between an open position and a closed position; and
(c) a torsion spring including a coil portion, a panel-side arm finally engaged with the panel-side engaging portion after coupling the bin with the panel and a bin-side arm regularly engaged with the bin-side engaging portion, the torsion spring biasing the bin in an opening direction relative to the panel after coupling the bin with the panel,
wherein a temporal engaging portion is provided at the bin, the panel-side arm of the torsion spring being temporally engaged with the temporal engaging portion before coupling the bin with the panel, the panel-side arm of the torsion spring having been released from a temporal engagement with the temporal engaging portion after coupling the bin with the panel.
(2) An overhead console according to item (1) above, wherein the torsion spring is set to the bin in a state that the panel-side arm of the torsion spring is elastically deformed and is temporally engaged with the temporal engaging portion, the bin set with the torsion spring being rotatably supported by the panel, the panel-side arm being pushed by the panel-side engaging portion on a way of rotation of the bin to the closed position during coupling the bin with the panel whereby the temporal engagement of the panel-side arm with the temporal engaging portion is released, upon the temporal engagement being released the panel-side arm being elastically sprung-back toward a free state of the torsion spring whereby the panel-side arm is finally engaged with the panel-side engaging portion of the panel.
(3) An overhead console according to item (1) above, wherein the panel-side arm of the torsion spring is engaged with the temporal engaging portion in a state that the panel-side arm is elastically deformed in a winding direction and an axial direction of the coil portion from a free state of the torsion spring when the torsion spring is set to the bin.
(4) An overhead console according to item (1) above, wherein the temporal engaging portion of the bin includes a base against which the panel-side arm is pressed in a direction reverse to a winding direction of the coil portion and a protrusion protruding from the base, against which the panel-side arm is pressed in an axial direction of the coil portion, when the torsion spring is set to the bin.
(5) An overhead console according to item (4) above, wherein when the bin is rotated toward the closed position during coupling the bin with the panel and the panel-side arm of the torsion spring is brought into contact with the panel-side engaging portion of the panel and is pushed in a direction reverse to a rotation of the bin toward the closed position, the panel-side arm of the torsion spring is moved to float up from the base of the temporal engaging portion, and when the panel-side arm of the torsion spring floats up to a tip of the protrusion, the panel-side arm of the torsion spring is caused to move over the protrusion due to a component in an axial direction of the coil portion, of a spring-back force of the panel-side arm toward a free state of the torsion spring, to finally engage with the panel-side engaging portion of the panel.
(6) An overhead console according to item (1) above, wherein the panel-side engaging portion of the panel downwardly extends from an upper wall of a recess-defining wall into a space defined in a recess, and an inclined portion defined by a surface or an edge is formed at a lower end of the panel-side engaging portion, the inclined portion being inclined in a direction toward a central side of the recess in a width direction of the recess and toward a rear side of a vehicle when viewed upwardly from a lower side of the vehicle.
(7) An overhead console according to item (6) above, wherein when the bin is rotated toward the closed position during coupling the bin with the panel and the panel-side arm of the torsion spring is brought into contact with the panel-side engaging portion of the panel and is pushed in a direction reverse to a rotation of the bin toward the closed position, the inclined portion of the panel-side engaging portion pushes the panel-side arm toward the protrusion, and when the panel-side arm is moved over the protrusion of the temporal engaging portion of the bin, the inclined portion of the panel-side engaging portion pushes the panel-side arm toward a side wall of the panel to thereby finally engage the panel-side arm with the panel-side engaging portion.
(8) A coupling method of an overhead console according to item (1) above for coupling the bin and the torsion spring with the panel comprising:
a first step for setting the torsion spring to the bin by elastically deforming the panel-side arm of the torsion spring and engaging the panel-side arm with the temporal engaging portion of the bin;
a second step for inserting a portion of the bin set with the torsion spring into a recess of the panel and rotatably coupling the bin with the panel during coupling the bin with the panel; and
a third step for rotating the bin toward the closed position relative to the panel during coupling the bin with the panel, pushing the panel-side arm of the torsion spring by the panel-side engaging portion of the panel thereby releasing a temporal engagement of the panel-side arm with the temporal engaging portion of the bin, and causing the panel-side arm of the torsion spring to finally engage with the panel-side engaging portion of the panel automatically, using a spring-back of the panel-side arm to a free state of the torsion spring.
According to the overhead console according to items (1)-(7) above, since the temporal engaging portion is provided, it is possible to set the torsion spring to the bin by engaging the bin-side arm of the torsion spring with the bin-side engaging portion and temporally engaging the panel-side arm of the torsion spring with the temporal engaging portion before coupling the bin with the panel. Further, since the temporal engagement of the panel-side arm with the temporal engaging portion has been released when coupling the bin with the panel is finished, it is possible to automatically engage the panel-side arm with the panel-side engaging portion of the panel by coupling the bin with the panel. Therefore, by coupling the bin temporally set with the torsion spring with the panel, the torsion spring also can be coupled to the panel. Thus, the coupling work of the overhead console is improved compared with the conventional coupling work.
According to the overhead console according to item (3) above, the panel-side arm of the torsion spring is temporally set to the temporal engaging portion in a state that the panel-side arm has been elastically deformed in the winding direction of the coil portion from a free state of the torsion spring. Thus, compared with a case where the panel-side arm is temporally set to the temporal engaging portion in a state that the panel-side arm has not been elastically deformed in the winding direction of the coil portion, the panel-side arm is prevented from being disengaged from the temporal engaging portion to cause the torsion spring to drop from the bin. Further, the panel-side arm of the torsion spring is temporally engaged with the temporal engaging portion in a state that the panel-side arm has been elastically deformed in the axial direction of the coil portion from a free state of the torsion spring. Thus, when a force directed in the winding direction of the coil portion is added on the panel-side arm so that the panel-side arm is disengaged from the temporal engaging portion, the panel-side arm is automatically deformed toward a free state of the torsion spring due to a spring-back force and moves to a range outside the temporal engaging portion in the axial direction of the coil portion. As a result, the panel-side arm can be disengaged from the temporal engaging portion by adding a force directed in the winding direction of the coil portion onto the panel-side arm, and the panel-side arm disengaged from the temporal engaging portion is prevented from engaging with the temporal engaging portion again.
According to the coupling method of the overhead console according to item (8) above, since the method includes the first step, it is possible to set the torsion spring to the bin before the bin is coupled with the panel. Further, since the method includes the second and third steps, the panel-side arm can be disengaged from the temporal engagement with the temporal engaging portion and can be finally engaged with the panel-side engaging portion of the panel automatically, by fitting the rotatable shaft provided at the bin into the bearing formed at the panel and then rotating the bin relative to the panel toward the closed position. Therefore, by coupling the bin temporally engaged by the torsion spring with the panel, coupling the torsion spring with the panel also can be conducted. Thus, the coupling work of the overhead console is improved compared with the conventional coupling work.
An overhead console and a coupling method of parts of the overhead console (hereinafter, a coupling method of the overhead console) according to certain embodiments of the present invention will be explained with reference to drawings. In the drawings, “UP” shows an upward direction of a vehicle, and “OUT” shows an outward direction in a width direction of the vehicle.
The overhead console 10 (hereinafter, "apparatus") according to an embodiment of the present invention is provided at a ceiling of a passenger room of a vehicle. The apparatus 10 is located in a vicinity of a front end of the passenger room ceiling and at a central portion in a width direction of the vehicle. As illustrated in FIG. 5, the apparatus 10 includes a panel 20, a bin 30 (which may be called a door or a lid) and a torsion spring 40.
The panel 20 is made from, for example, synthetic resin. The panel 20 is fixed to the passenger room ceiling. As illustrated in FIGS. 1 and 2, the apparatus 10 includes a recess-defining wall 21, a bearing 22 and a panel-side engaging portion 23.
As illustrated in FIG. 9, the recess-defining wall 21 defines a recess 24 opening downward. The recess-defining wall 21 includes a side wall 21a and an upper wall 21b. At least a portion of the bin 30 can enter the recess 24 and the torsion spring 40 always enters the recess 24.
As illustrated in FIG. 1, the bearing 22 is provided at the side wall 21a of the recess-defining wall 21. The bearing 22 is provided at a portion of each of the side walls 21a disposed on opposite sides of the apparatus 10 in a width direction of the apparatus 10 (i.e., the width direction of the vehicle) and extending in a front-rear direction of the vehicle.
The panel-side engaging portion 23 is integrally provided at the recess-defining wall 21. The panel-side engaging portion 23 is connected to at least one of the upper wall 21b and the side wall 21a. The panel-side engaging portion 23 may be provided at only one of the right and left side walls 21a or may be provided at each of the right and left side walls 21a. The panel-side engaging portion 23 protrudes into the recess 24 from the recess-defining wall 21. As illustrated in FIG. 2, the panel-side engaging portion 23 includes an inclined portion 23a at least at a rear surface of a bottom of the panel-side engaging portion 23. The inclined portion 23a is a surface or an edge line and is inclined toward a central side in the width direction of the apparatus (a central side in the width direction of the recess 24) and toward a rear side in the front-rear direction of the vehicle. In other words, the inclined portion 23a is inclined in a direction away from the side wall 21a to which the panel-side engaging portion 23 is connected and toward the rear side of the vehicle.
The bin 30 is made from, for example, synthetic resin. As illustrated in FIG. 9, the bin 30 is rotatably supported about a rotational axis P. The bin 30 is rotatable relative to the panel 20 between a closed position 30a closing an entirety (including a substantial entirety) of an opening of the recess 24 and an open position 30b located downward of the closed position 30a in a rotational direction of the bin 30. The bin 30 includes a design panel 31 closing the opening of the recess 24 when the bin 30 is at the closed position 30a and a rising wall 32 rising upward from a periphery of the design panel 31 when the bin 30 is at the closed position 30a. The design panel 31 and the rising wall 32 define a space S therein capable of housing sunglasses, for example.
When the bin 30 is located at the open position 30b, a stopper surface 36 of a temporal engaging portion 35 of the bin 30 contacts a stopper receiving surface 25 (see FIG. 5) provided at the panel-side engaging portion 23 of the panel 20, thereby preventing the bin 30 from opening too much.
As illustrated in FIG. 4, the bin 30 includes a rotatable shaft 33 and a bin-side engaging portion 34. The rotational axis P of the bin 30 of FIG. 9 coincides with a rotational axis of the rotatable shaft 33.
The rotatable shaft 33 is integrally formed with the rising wall 32. The rotatable shaft 33 is provided at the rising wall 32 located at each of right and left sides of the apparatus 10 and extending in the front-rear direction of the vehicle. The rotatable shaft 33 protrudes in an outward direction in the width direction of the apparatus 10 from an outside surface of each of the right and left rising walls 32. A diameter of a tip portion 33a of the rotatable shaft 33 is smaller than a diameter of a portion (a large diameter portion 33b) of the rotatable shaft 33 except the tip portion 33a. The tip portion 33a of the rotatable shaft 33 is fitted (inserted) into the bearing 22 from a side of the recess 24. The tip portion 33a of the rotatable shaft 33 is rotatably supported by the bearing 22. By the rotatable support, the bin 30 is supported by the panel 20 so as to be rotatable about the axis P. The bin-side engaging portion 34 is integrally provided with the rising wall 32 and protrudes from the outside surface of the rising wall 32 toward the side wall 21a of the recess-defining wall 21 of the panel 20.
The torsion spring 40 is made from metal. The torsion spring 40 biases the bin 30 in an open direction relative to the panel 20. As illustrated in FIGS. 4 and 5, the torsion spring 40 includes a coil portion 41, a panel-side arm 42 for being engaged with the panel-side engaging portion 23 and a bin-side arm 43 for being engaged with the bin-side engaging portion 34. The panel-side arm 42 is connected to one end of the coil portion 41 and the bin-side arm 43 is connected to the other end of the coil portion 41. As illustrated in FIG. 4, the coil portion 41 is fitted to an outside surface of the large diameter portion 33b of the rotatable shaft 33 of the bin 30. An axis of the coil portion 41 substantially coincides with an axis of the rotatable shaft 33 of the bin 30.
The temporal engaging portion 35 is provided at the bin 30.
The temporal engaging portion 35 is provided for temporally holding (temporally engaging) the torsion spring 40 with the bin 30 before the bin 30 is coupled with the panel 20. The panel-side arm 42 is temporally engaged with the temporal engaging portion 35 before the bin 30 is coupled with the panel 20. The temporal engagement of the panel-side arm 42 with the temporal engaging portion 35 is released partway through rotation of the bin 30 toward the closed position relative to the panel 20. The temporal engagement of the panel-side arm 42 with the temporal engaging portion 35 has been released when coupling of the bin 30 with the panel 20 is finished.
The temporal engaging portion 35 is provided at a portion of the bin 30 outside the housing space S. The temporal engaging portion 35 includes a base 35a and a protrusion 35b. The base 35a is provided at the design panel 31 so as to protrude upward from an upper surface (a back surface of a design surface) of the design panel 31 when the bin 30 is at the closed position. The base 35a may be connected to an outside surface of the rising wall 32. The protrusion 35b is provided at the base 35a so as to protrude upward from a portion of an upper end of the base 35a (i.e., an end farther from the outside surface of the rising wall 32, of the upper end of the base 35a), when the bin 30 is at the closed position.
When the torsion spring 40 is temporally engaged with the bin 30, the panel-side arm 42 of the torsion spring 40 is temporally engaged with the temporal engaging portion 35, and the bin-side arm 43 of the torsion spring 40 is engaged with the bin-side engaging portion 34. The panel-side arm 42 is engaged with the temporal engaging portion 35 in a state that the panel-side arm 42 is elastically deformed from a free state of the torsion spring 40 in winding and axial directions of the coil portion 41 and in a direction where a contact pressure of the panel-side arm 42 against the temporal engaging portion 35 is increased, while the bin-side arm 43 is engaged with the bin-side engaging portion 34. When the panel-side arm 42 is engaged with the temporal engaging portion 35, the panel-side arm 42 is engaged with the base 35a of the temporal engaging portion 35 in an unwinding direction of the coil portion 41 and is engaged with the protrusion 35b in the axial direction of the coil portion 41. Since the panel-side arm 42 is engaged with the base 35a, the panel-side arm 42 engaged with the temporal engaging portion 35 can maintain the state deformed in the winding direction of the coil portion 41 from the free state of the torsion spring 40. Further, since the panel-side arm 42 is engaged with the protrusion 35b, the panel-side arm 42 engaged with the temporal engaging portion 35 can maintain the state deformed in the axial direction of the coil portion 41 from the free state of the torsion spring 40.
A coupling method of the overhead console according to the embodiment of the present invention, i.e., a method for coupling the bin 30 and the torsion spring 40 to the panel 20, will now be explained.
The coupling method of the overhead console according to the embodiment of the present invention includes the following first to third steps in an order of the first to third steps:
(i) In the first step, as illustrated in FIG. 4, the torsion spring 40 is set (temporally held) to the bin 30, by engaging the bin-side arm 43 of the torsion spring 40 with the bin-side engaging portion 34 of the bin 30 and temporally engaging the panel-side arm 42 of the torsion spring 40 with the temporal engaging portion 35 of the bin 30.
(ii) In the second step, as illustrated in (a) of FIG. 5 and (a) of FIG. 6, the bin 30 set with the torsion spring 40 is inserted into the recess 24 in a state where the bin 30 is inclined relative to the panel 20 so as to be at the open position 30b or at a position partway through rotation of the bin 30 from the open position 30b to the closed position 30a. Then, the rotatable shaft 33 is fitted into the bearing 22 whereby the bin 30 is rotatably supported by the panel 20.
(iii) In the third step, as illustrated in FIG. 2, (b) and (c) of FIG. 5, (b) and (c) of FIG. 6, (b) and (c) of FIG. 7, and (b) and (c) of FIG. 8, the bin 30 is rotated relative to the panel 20 from the above insertion position toward the closed position 30a (in direction D1 shown in (b) and (c) of FIG. 5). During rotation, the panel-side arm 42 begins to contact the panel-side engaging portion 23 of the panel 20 at position P1 shown in FIG. 2. When the bin 30 is further rotated from the contact-beginning position, the panel-side arm 42 is pushed by the panel-side engaging portion 23 in a direction reverse to rotation of the bin 30 toward the closed position, thereby being released from the temporal engagement with the temporal engaging portion 35. More particularly, when the panel-side arm 42 is pushed by the panel-side engaging portion 23 in the direction reverse to rotation of the bin 30 toward the closed position, the panel-side arm 42 is caused to float up from the base 35a of the temporal engaging portion 35. When the panel-side arm 42 floats up to a tip of the protrusion 35b, the panel-side arm 42 is moved over the protrusion 35b due to an axial component of a spring-back force of the panel-side arm 42 to the free state of the torsion spring 40. Upon being released from the temporal engagement with the temporal engaging portion 35, the panel-side arm 42 slides on the inclined portion 23a of the panel-side engaging portion 23 due to the component of the spring-back force of the panel-side arm 42 to the free state of the torsion spring 40 and is moved to a front end of the inclined portion 23a. At the front end of the inclined portion 23a, the panel-side arm 42 contacts the side wall 21a of the recess-defining wall 21 of the panel 20 to stop at position P2 shown in FIG. 2 and is finally engaged with the panel-side engaging portion 23.
The first step (the step of (i) above) is conducted before the bin 30 is coupled with the panel 20. As illustrated in FIG. 4, at the first step the coil portion 41 is fitted to the rotatable shaft 33 of the bin 30. At the first step, the panel-side arm 42 of the torsion spring 40 is engaged with the temporal engaging portion 35 in a state that the panel-side arm 42 is elastically deformed from a free state of the torsion spring 40 in winding and axial directions of the coil portion 41 and in a direction where a contact pressure of the panel-side arm 42 against the temporal engaging portion 35 is increased.
At the second step (the step of (ii) above), as illustrated in (a) of FIG. 5, the bin 30 is caused to enter the recess 24 of the panel 20. At the second step, the rotatable shaft 33 is fitted to the bearing 22 under a state that the panel-side engaging portion 23 is located between the stopper surface 36 of the bin 30 and the panel-side arm 42.
At the third step (the step of (iii) above), the panel-side arm 42 is pushed by the panel-side engaging portion 23 in the winding direction of the coil portion 41. As illustrated in (b) and (c) of FIG. 8, at the third step, the panel-side arm 42 begins to contact the inclined portion 23a of the panel-side engaging portion 23 and then slides along the inclined portion 23a. As illustrated in (b) and (c) of FIG. 6, at the third step, when the panel-side arm 42 is moved relative to the bin 30 and is released from engagement with the temporal engaging portion 35, the panel-side arm 42 is elastically deformed in the axial direction of the coil portion 41 so as to automatically return to the free state of the torsion spring 40, thereby finally engaging the panel-side engaging portion 23. At the third step, when or before the bin 30 rotated toward the closed position 30a reaches the closed position 30a, coupling of the bin 30 with the panel 20 is or has been completed.
Operation and technical advantages of the apparatus 10 according to the embodiment of the present invention will now be explained.
(A) According to the apparatus 10 according to the embodiment of the present invention, the following operation and technical advantages are obtained:
(A1) Since the temporal engaging portion 35 is provided at the bin 30, it is possible to temporally engage the torsion spring 40 with the bin 30 to thereby set the torsion spring 40 to the bin 30 before coupling the bin 30 with the panel 20, by engaging the bin-side arm 43 of the torsion spring 40 with the bin-side engaging portion 34 and temporally engaging the panel-side arm 42 of the torsion spring 40 with the temporal engaging portion 35. Further, since the temporal engagement of the panel-side arm 42 with the temporal engaging portion 35 is released during coupling the bin 30 with the panel 20, the temporal engagement of the panel-side arm 42 with the temporal engaging portion 35 has been released when coupling the bin 30 with the panel 20 is completed, and the panel-side arm 42 is finally engaged with the panel-side engaging portion 23. Thus, it is possible to automatically engage the panel-side arm 42 with the panel-side engaging portion 23 of the panel 20 by coupling the bin 30 with the panel 20. Therefore, by only coupling the bin 30 temporally set with the torsion spring 40 with the panel 20, the torsion spring 40 also can be coupled to the panel 20. Thus, the coupling work of the overhead console 10 is improved compared with the conventional coupling work of the apparatus where the torsion spring was set to the apparatus after the bin was coupled with the panel.
(A2) The panel-side arm 42 of the torsion spring 40 is temporally engaged with the temporal engaging portion 35 in a state that the panel-side arm 42 has been elastically deformed in the winding direction of the coil portion 41 from a free state of the torsion spring 40. Thus, compared with a case where the panel-side arm 42 is temporally set to the temporal engaging portion 35 in a state that the panel-side arm 42 has not been elastically deformed in the winding direction of the coil portion 41, the panel-side arm 42 is prevented from being disengaged from the temporal engaging portion 35 to cause the torsion spring 40 to drop from the bin 30.
(A3) The panel-side arm 42 of the torsion spring 40 is temporally engaged with the temporal engaging portion 35 in a state that the panel-side arm 42 has been elastically deformed in the axial direction of the coil portion 41 from a free state of the torsion spring 40. Thus, when a force directed in the winding direction of the coil portion 41 is added on the panel-side arm 42 and the panel-side arm 42 is disengaged from the temporal engaging portion 35, the panel-side arm 42 is automatically deformed toward the free state of the torsion spring 40 in the axial direction of the coil portion 41 due to the spring-back force. As a result, the panel-side arm 42 can be disengaged from the temporal engaging portion 35 by only adding a force directed in the winding direction of the coil portion 41 onto the panel-side arm 42, and the panel-side arm 42 disengaged from the temporal engaging portion 35 is prevented from engaging with the temporal engaging portion 35 again.
(A4) When the bin 30 and the torsion spring 40 are coupled with the panel 20, the panel-side arm 42 is pressure-contacted onto the inclined portion 23a of the panel-side engaging portion 23. Thus, the panel-side arm 42 can be more surely prevented from being disengaged from the panel-side engaging portion 23 than in a case where the panel-side arm 42 is not pressure-contacted onto the inclined portion 23a.
(A5) Since the stopper surface 36 for preventing the bin 30 from opening too much is provided at the temporal engaging portion 35, it simplifies the structure of the bin 30 and decreases material costs compared with a case where the stopper surface 36 is provided at a portion of the bin 30 except the temporal engaging portion 35. Further, since the stopper receiving surface 25 for preventing the bin 30 from opening too much is provided at the panel-side engaging portion 23, it simplifies the structure of the panel 20 and decreases material costs compared with a case where the stopper receiving surface 25 is provided at a portion of the panel 20 except the panel-side engaging portion 23.
(B) According to the coupling method of the apparatus 10 according to the embodiment of the present invention, the following operation and technical advantages are obtained:
(B1) Since the method includes the first step, it is possible to set the torsion spring 40 to the bin 30 before the bin 30 is coupled with the panel 20. Further, since the method includes the second and third steps, the panel-side arm 42 can be disengaged from the temporal engagement with the temporal engaging portion 35 and can be finally engaged with the panel-side engaging portion 23 of the panel 20 automatically, by fitting the rotatable shaft 33 provided at the bin 30 into the bearing 22 formed at the panel 20 and then rotating the bin 30 relative to the panel 20 toward the closed position 30a. Therefore, by coupling the bin 30 temporally engaged by the torsion spring 40 with the panel 20, coupling the torsion spring 40 with the panel 20 also can be conducted. Thus, the coupling work of the overhead console 10 is improved compared with the conventional coupling work.
(B2) At the third step, since the panel-side arm 42 contacts the inclined portion 23a of the panel-side engaging portion 23, the panel-side arm 42 once disengaged from the temporal engaging portion 35 is prevented from being engaged with the temporal engaging portion 35 again.
DESCRIPTION OF REFERENCE NUMERALS
10 apparatus (overhead console)
20 panel
21 recess-defining wall
21a side wall
21b upper wall
22 bearing
23 panel-side engaging portion
23a inclined portion
24 recess
25 stopper receiving surface
30 bin
30a closed position
30b open position
31 design panel
32 rising wall
33 rotatable shaft
34 bin-side engaging portion
35 temporal engaging portion
35a base
35b protrusion
36 stopper surface
40 torsion spring
41 coil portion
42 panel-side arm
43 bin-side arm
P rotational axis of the bin
S space
BRIEF DESCRIPTION OF THE DRAWINGS
Referring to the drawings, which form a part of this disclosure:
FIG. 1 is a bottom view of a panel of an overhead console according to an embodiment of the present invention.
FIG. 2 is an enlarged view of portion "A" of FIG. 1.
FIG. 3 is a side view of a bin and a torsion spring of the overhead console according to the embodiment of the present invention.
FIG. 4 is an enlarged perspective view of the torsion spring and the vicinity thereof of the overhead console according to the embodiment of the present invention, when the torsion spring is set to the bin before the bin is coupled with the panel;
FIG. 5 shows rough cross-sectional views of the overhead console at steps (a), (b) and (c), conducted in the order of (a), (b) and (c), when the bin set with the torsion spring is coupled with the panel, where:
(a) is a step when the bin set with the torsion spring is moved toward the panel and a rotational shaft is fitted to a bearing,
(b) is a step when the bin is rotated from the state of (a) above toward a closed position and a panel-side arm of the torsion spring is brought into contact with a panel-side engaging portion of the panel, and
(c) is a step when the bin is further rotated from the state of (b) above toward the closed position and the panel-side arm of the torsion spring is disengaged from a temporal engaging portion of the bin and is engaged with the panel-side engaging portion of the panel;
FIG. 6 shows partial perspective views of the overhead console according to the embodiment of the present invention, viewed from the bin-side arm of the torsion spring, at steps (a), (b) and (c) when the bin set with the torsion spring is coupled with the panel, where:
(a) is a step corresponding to step (a) of FIG. 5;
(b) is a step corresponding to step (b) of FIG. 5; and
(c) is a step corresponding to step (c) of FIG. 5;
FIG. 7 shows cross-sectional views of the panel-side arm of the torsion spring and the temporal engaging portion of the bin of the overhead console according to the embodiment of the present invention, illustrating a positional relationship between the panel-side arm and the temporal engaging portion at steps (a), (b) and (c) when the bin set with the torsion spring is coupled with the panel, where:
(a) is a step corresponding to step (a) of FIG. 5;
(b) is a step corresponding to step (b) of FIG. 5; and
(c) is a step corresponding to step (c) of FIG. 5;
FIG. 8 shows cross-sectional views of the panel-side arm of the torsion spring and the panel-side engaging portion of the panel of the overhead console according to the embodiment of the present invention, illustrating a positional relationship between the panel-side arm and the panel-side engaging portion at steps (a), (b) and (c) when the bin set with the torsion spring is coupled with the panel, where:
(a) is a step corresponding to step (a) of FIG. 5;
(b) is a step corresponding to step (b) of FIG. 5; and
(c) is a step corresponding to step (c) of FIG. 5; and
FIG. 9 is a rough cross-sectional view of the overhead console according to the embodiment of the present invention in a state that the bin and the spring are finally coupled with the panel.
Modulation of the yerba-mate metamer production phenology by the cultivation system and the climatic factors
- Source: Ecological Modelling 2018 v.384 pp. 188-197
- ISSN: 0304-3800
- Subject: Markov chain, autumn, climatic factors, developmental stages, drought, models, ontogeny, perennials, phenology, tree growth, trees, warm season, yerba mate
- Abstract: In rhythmic tree growth, the alternation between growth and resting phases can be either periodic or irregular, depending on climatic and endogenous factors. The aim of this study was to analyze the growth pattern of yerba-mate, a subtropical South American tree with monopodial and rhythmic growth, over a two-year period. The metamer emission rate of selected axes was followed for two years after a severe pruning in two cultivation systems contrasting in light conditions, an agroforestry system and a monoculture. A new longitudinal data modeling approach was used, relying on hidden semi-Markov chains to identify growth and resting phases and on multiple change-point models to relate these phases to climatic factors. Despite large variability among individuals, two growth phases separated by a resting phase were most often identified within each year. Contrasting situations regarding the modulation of the polycyclism were observed between the two years and cultivation systems: in the first year of growth, the pattern consisted of a single long growth phase in vigorous individuals and the cultivation system had a major effect, likely due to the induced light environment; in the second year, some individuals did not grow during the second autumn phase, likely due to the drought during the warm season. The observed differences in growth patterns between years and cultivation systems were interpreted with respect to ontogenetic and climatic effects, in interaction with endogenous factors resulting from plant reproductive phenology. This study introduces a new longitudinal data analysis approach for investigating the phenology of perennial plants over long follow-up periods, at a time scale intermediate between days and years.
The defense of the PP in the case that is being followed by the papers of Luis Bárcenas has presented a brief in the National Court in which it demands that Judge Santiago Pedraz definitively exonerate the formation, in light of the null credibility of the ex-treasurer, of the reports that undermine the veracity of its alleged off-the-books accounting and of the eleven years that have passed “without evidence.”
The brief, to which ABC had access, is recorded in connection with the closing of the investigation of this piece detached from the Gürtel case. Last July Pedraz, head of the Central Court of Instruction number 5 of the National Court, agreed that there was no place to extend the instruction any longer, but it has not yet decided if the end will come by dismissal or will continue the process on the way to trial.
For the PP, “there is an absence of evidence of criminality in the events that have been the subject of investigation for almost 11 years” in this case, which focuses on clarifying whether the donations that Bárcenas registered in his papers on behalf of various businessmen translated into public-works awards in administrations governed by the ‘populares’.
«None of the reports prepared by the experts of the General State Intervention (IGAE) that accompany these proceedings reveals the existence of any criminally reprehensible irregularity in the award of the analyzed contracts», says the letter, which considers «evident the absence of any link between the analyzed awards and the presumed donations».
“Spurious intentions”
Regarding Bárcenas’ own statements, the last one on July 16, the PP defense points out that «in addition to not having yielded any relevant information for the course of this investigation, it has once again highlighted the lack of credibility that his papers and his multiple and contradictory versions deserve».
«There is a clear lack of objective facts, testimonies and / or objective documentary evidence that give credibility to the story that Mr. Bárcenas claims in relation to his notes and, especially, to those related to the presumed donations that are reflected in his papers that are being investigated in these proceedings, “he argues .
On the contrary, he points out the existence of “objective elements” that question his credibility “and his spurious intentions towards the people who, without any justification, have been involved in his famous roles”, as already collected by various judicial decisions that reduced the veracity of as stated by the former treasurer. They add the fact that none of the witnesses cited in the case has acknowledged what appears from their papers. | https://digismak.com/the-pp-asks-for-the-file-in-the-papers-of-barcenas-for-its-null-credibility-and-11-years-without-evidence/ |
With four consecutive starts and a run of impressive performances to his name, Liam Rose is showing why he is such a valued player on the Central Coast.
In the absence of experienced Dutch midfielder Tom Hiariej, Liam Rose has shown the form over the past month that saw him awarded the Mariners Medal in 2016.
Alongside the experienced Dutch duo of Wout Brama and Tom Hiariej are Liam Rose, Jacob Melling and Adam Berry, who are all competing to be part of the A-League match-day midfield.
Competition for spots can bring out the best in players and Rose says there is a healthy competition across the board whilst the young players take what they can from their experienced counterparts.
Could we see another 18-year-old attacker from the Mariners Academy earn his @ALeague debut this weekend?
Ins & Outs: https://t.co/m4hgn4Weqn#MCYvCCM #CCMFC pic.twitter.com/s0ZYZhwVLL
— Central Coast Mariners (@CCMariners) April 5, 2018
“Tom Hiariej is a big loss and you can see that when he’s stepped out of the team,” Rose said. “Everyone who has come in has done a good job, whether that’s Melling or anyone they’ve done a job and worked hard for the team.
“Tom & Wout have played at the highest level. Wout has played for the Dutch national team and Tom has hundreds of games to his name in the Eredivisie so they have a lot of experience. I just try to take little bits out of their game and what they do as footballers which has helped me a lot this season and I am really grateful for that,” Rose said.
“Competition for places makes everyone work hard and fight for that position and it only brings out the best in the team so when there’s competition everyone’s fighting at training,” Rose said.
After today's training session, 18-year-old academy striker Jordan Smylie fronted the media for the first time and reflected on his @ALeague debut! Smylie explained how he found out and the first person he called to break the exciting news. #CCMFC #ALeague pic.twitter.com/WFSSYz01SQ
— Central Coast Mariners (@CCMariners) April 4, 2018
Days out from his 21st birthday, Rose has grabbed this opportunity with both hands, having started in every game since round 22 against Perth Glory. Despite plenty of game time, Rose says the number one focus is team performance.
“It’s great for me to be able to get some minutes under my belt,” Rose said. “But the main thing is the team performance and we haven’t had the best results in the last few games so we will be looking to change that on the weekend and hopefully bring back the three points.
“The result against Brisbane was disappointing. One moment we let ourselves down defensively and they capitalised. But there were plenty of positive signs, we created chances and were a bit unlucky not to score so we’ve been working hard this week at training to change that this weekend and get the three points,” Rose said.
This Saturday the Mariners face Melbourne City at AAMI Park which Rose says will be a tough proposition. City are coming hot off the back of back-to-back 3-0 wins over the Western Sydney Wanderers and Newcastle Jets.
After the success of our Newcastle Permanent Mighty Mariners Holiday Clinics in January, Nick Montgomery & @josh_rose3 will lead the next instalment of clinics on the Central Coast in April!
Bookings are OPEN: https://t.co/c8oilAGC7c#CCMFC #ALeague pic.twitter.com/3hDNT8tXA2
— Central Coast Mariners (@CCMariners) March 28, 2018
“It’s going to be a tough game,” Rose said. “They’ve hit some great form, they’ve got some great players like Daniel Arzani who has started to show his potential and guys like Bruno Fornaroli but that won’t change our game plan – we’re going to go in and work hard for the win.
“We’re not where we want to be but we know that we don’t deserve to be down that end of the table. There were games this year we should have won but that’s football at the end of the day and we need to keep working hard for our fans,” Rose said.
PROBLEM TO BE SOLVED: To preferably construct an earth retaining wall extending between a soft layer and a hard layer in ground that has the soft layer on the surface side and the hard layer below it.
SOLUTION: Soil mortar pillars 1 are constructed at predetermined intervals along the boundary line between the excavation side and the retained-ground side by drilling the ground to a depth deeper than the planned excavation depth (floor level). Next, the ground between adjacent soil mortar pillars 1, 1 is excavated in a groove shape to a depth shallower than the planned excavation depth to form a soil mortar wall 2. A first core material 3 is then inserted into each soil mortar pillar 1, and a second core material 4, shorter than the first core material 3, is inserted into the soil mortar wall 2 between the pillars 1, 1.
SELECTED DRAWING: Figure 3
COPYRIGHT: (C)2020,JPO&INPIT | |
The method of CO2 sequestration using gas hydrates has the advantages that (1) the CO2 is stored as a solid which reduces the risk for leakage; (2) a geological containment structure is not required; (3) it could be suitable for direct flue gas injection since the nitrogen in the flue gas would not be trapped; (4) if injected into a methane gas hydrate reservoir it could simultaneously produce methane while trapping the CO2 as a hydrate (through molecular exchange). The main goals of this research are to elucidate the fundamentals underlying crystal growth and to measure the formation kinetics of CO2 gas hydrates in sandy sediments. Experiments on formation kinetics and crystal growth were carried out using a high-pressure vessel and a Raman spectrometer. A bulk-scale experiment was also conducted to observe the behavior of CO2 hydrate formation in sandy sediments. The change in permeability and porosity in porous media was measured in the process of crystal growth.
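The abstract does not say which model relates hydrate growth to the measured permeability and porosity changes. A parameterization commonly used for hydrate-bearing sands is a power-law ("Masuda-type") reduction, k = k0 (1 − Sh)^N; the sketch below uses assumed values for the exponent N, the initial porosity, and the initial permeability:

```python
# Hedged illustration: how growing hydrate saturation S_h in pore space
# reduces effective porosity and permeability. The paper does not state
# its model; this uses a common power-law ("Masuda-type") reduction
# k = k0 * (1 - S_h)**N with an assumed exponent N and assumed k0, phi0.

def effective_porosity(phi0, s_h):
    """Porosity left open once hydrate saturation s_h fills pore space."""
    return phi0 * (1.0 - s_h)

def permeability_reduction(k0, s_h, n=5.0):
    """Power-law permeability reduction (n is an assumed exponent)."""
    return k0 * (1.0 - s_h) ** n

phi0, k0 = 0.35, 1.0e-12  # sand porosity and permeability (m^2), assumed
for s_h in (0.0, 0.2, 0.5, 0.8):
    print(f"S_h={s_h:.1f}  phi={effective_porosity(phi0, s_h):.3f}  "
          f"k={permeability_reduction(k0, s_h):.2e} m^2")
```

Even modest hydrate saturations cut permeability sharply under this model, which is why permeability tracking during crystal growth matters for injection feasibility.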
Gambit Books, paperback, English, ISBN: 978-1-911465-24-9, 320 pages, 2018.
Is chess a logical game?
What constitutes an advantage in chess?
How can we set problems and create psychologically difficult situations for the opponent?
These are big questions, and Erik Kislik tackles them and others head-on in this thought-provoking, thoroughly modern, and original work.
He answers the first of those questions with a resounding ‘yes!’. His assessments focus on concrete points: pawn-structure, material imbalance and compensation. Even though the analytical proofs may be complex, he repeatedly shows that these elements are the keys to evaluating positions and forming plans.
As the trainer of players ranging from high-level grandmasters to average club-players, Kislik is very strong on providing practical guidance on topics such as how best to use chess software, choosing hardware, getting psychologically ready for a game and preparing for specific opponents. He is always willing to boldly state his views, even when they run contrary to conventional chess wisdom.
“I was excited by this book because of the way all of the ideas are intertwined and you get very concrete advice ... Everything is applicable and it is easy to see how it applies to the real world.” – from the Foreword by GM Hjörvar Steinn Gretarsson.
Erik Kislik is an International Master originally from California who lives in Budapest. He is an expert in computer chess and one of the most in-demand chess trainers on ICC. He has coached many grandmasters and assisted a number of elite players with their opening preparation.
Download a pdf file with a sample from the book. (PDF)
“A great addition to the literature about chess improvement, because the author writes in a clear and lengthy fashion to players looking to take an active approach to improve at chess. Kislik shares his experiences and methods on how to move from amateur to International master level. The book is well written and the reader can feel the connection with the author, to the point of understanding that, regardless of skill level, a player can improve if he/she allocates resources like time and effort to identify and eliminate shortcomings. A must read this summer for any player, coach or parent interested in how chess players become better at the game” - Miguel Ararat, FLORIDA CHESS
“A highly interesting training manual written by Erik Kislik, an International Master from San Mateo, California, and, so far as I am aware, the only IM in the world who was a beginner as an adult (at age 18). Going through this book you will understand how Erik Kislik managed to become such a strong chess player. These 317 pages are highly instructive, packed with topics which I have never seen before. For example, you can learn a lot from your blitz games, although hardly anyone does; world elite blitz players have told Erik that they focus on concrete moves first in their fast games. This makes sense… seeing threats and specific ideas is very important with little time. For players who see things slowly, this is certainly something they can consider working on and applying. Gradually, one’s awareness of concrete relevant moves and tactical possibilities must improve with consistent work. Having a very tightly worked out opening repertoire is also very useful for blitz. Like Max Euwe, Erik Kislik does not see much in memorizing grandmaster games. In opening play, it is much better to focus on understanding. Try to guess the moves and understand why they were played! The whole idea of memorizing openings is usually counterproductive at lower levels. Erik Kislik also explains we mostly remember openings due to their logic and strategic ideas. This, and more, well packed in 14 highly instructive sections. A unique work!” – John Elburg, chessbooks.nl
Unfortunately, there are no reviews yet. Be the first to review this product.
CROSS-REFERENCE TO RELATED APPLICATION
BACKGROUND
SUMMARY
DETAILED DESCRIPTION
The present document is based on and claims priority to U.S. Provisional Application Serial No. 62/086,369 filed Dec. 2, 2014, which is incorporated herein by reference in its entirety.
In many hydrocarbon well applications, electric submersible pumping systems are used for pumping of fluids, e.g. hydrocarbon-based fluids. The hydrocarbon fluids are pumped from a subterranean geologic formation, referred to as a reservoir, by operating the electric submersible pumping system within a wellbore. In general, the electric submersible pumping system comprises a submersible pump powered by an electric, submersible motor which receives power via a power cable routed downhole into the wellbore. The power cable comprises three electrical conductors which supply three-phase power to the submersible motor which, in turn, powers a submersible pump. The electrical conductors are each round in cross-section and collectively enclosed within armor. However, the structure of the electrical conductors and cooperating layers of the overall power cable may be space inefficient and susceptible to damage.
In general, a system and methodology enable construction of a power cable which is internally protected by a foamed protective layer. The power cable comprises at least one electrical conductor. Each electrical conductor is insulated with an insulation layer and protected from deleterious fluids by a fluid barrier layer. Further protection is provided by a protective layer disposed around the fluid barrier layer. The protective layer is foamed to provide a cushion layer and to further protect components of the power cable. An armor layer may be disposed around the protective layer.
However, many modifications are possible without materially departing from the teachings of this disclosure. Accordingly, such modifications are intended to be included within the scope of this disclosure as defined in the claims.
In the following description, numerous details are set forth to provide an understanding of some embodiments of the present disclosure. However, it will be understood by those of ordinary skill in the art that the system and/or methodology may be practiced without these details and that numerous variations or modifications from the described embodiments may be possible.
The present disclosure generally relates to a power cable and construction of a power cable which is protected internally by a foamed protective layer. The power cable comprises at least one electrical conductor, e.g. three electrical conductors to provide three-phase power. Each electrical conductor is insulated with an insulation layer and protected from deleterious fluids by a fluid barrier layer. The insulation layer may comprise a single layer of insulation material or combined layers to provide the desired electrical insulation. Similarly, the fluid barrier layer may comprise a single layer or combined layers to protect the insulation and electrical conductor from unwanted fluids. Further protection is provided by a protective layer disposed around the fluid barrier layer. The protective layer is foamed to provide a cushion layer and to further protect components of the power cable. An armor layer may be disposed around the protective layer. The armor layer may be disposed in direct contact with the protective layer (without an additional jacket layer) to provide a protected power cable in a relatively smaller, space efficient form.
As described in greater detail below, embodiments of the cable utilize the foamed, protective layer to provide cushioning within a power cable, such as a power cable for electric submersible pumping system. The methodology facilitates construction of the cushioning, protective layer by, for example, extruding a foamed material over the fluid barrier layer. The fluid barrier layer can be formed with fluoropolymer tapes and/or extruded lead sheaths to provide chemical resistance. However, such materials may provide poor mechanical properties in a variety of applications and environments.
By disposing the foamed, protective layer over the fluid barrier material, cushioning is provided within the power cable to protect the fluid barrier materials from damage during construction, handling, and use of the power cable. The reduction or elimination of damage and/or mechanical stress on the fluid barrier layers promotes an increased life for the power cable in the downhole environment. The increased cable life, in turn, provides increased reliability and runtime for the electric submersible pumping system. Use of the foamed protective layer also enables elimination of conventional jacket layers to provide a more space efficient power cable.
Depending on the application, power cables may be rated in the range of 3-8 kV or other suitable ratings. The power cable may be structured in a generally flat or round cable construction. For example, round power cables may be used when there is sufficient room in the wellbore to accommodate the wider profile of a round cable. Flat cable constructions are useful in many well applications because they occupy less space between the well string and the surrounding wellbore wall thus mitigating clearance issues within the wellbore.
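A rough geometry sketch (dimensions invented, not taken from this disclosure) shows why the flat construction mitigates clearance: three conductors in a round (trefoil) lay-up circumscribe a circle about 2.15 times one insulated-conductor diameter, while a flat lay-up is only about one conductor diameter thick plus armor:

```python
# Rough clearance comparison for a 3-conductor cable: round (trefoil)
# lay-up versus flat lay-up. The 2.155x factor is plain geometry for
# three equal touching circles; the conductor and armor dimensions
# below are assumptions for illustration only.
import math

def trefoil_outer_diameter(d):
    """Circumscribed diameter of three equal circles of diameter d in trefoil."""
    # centers form an equilateral triangle of side d; circumradius d/sqrt(3)
    return d * (1.0 + 2.0 / math.sqrt(3.0))

def flat_profile(d, armor=0.0):
    """(thickness, width) of three conductors laid side by side under armor."""
    return d + 2 * armor, 3 * d + 2 * armor

d = 15.0  # mm, assumed diameter of one conductor with its layers
print(f"round: {trefoil_outer_diameter(d):.1f} mm across")
t, w = flat_profile(d, armor=1.0)
print(f"flat : {t:.1f} mm thick x {w:.1f} mm wide")
```

The flat cable trades width for thickness, which is the dimension that matters in the annular gap between the well string and the wellbore wall.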
In a specific example, a power cable with a round or flat cross-sectional construction may be rated up to about 5 kV. Depending on the application, the power cable may comprise various conductors, e.g. copper conductors, surrounded by various layers. By way of example, the layers surrounding the conductors may comprise insulation layers, e.g. ethylene propylene diene monomer (EPDM) rubber insulation, to provide oil and heat resistance. The layers also may comprise at least one fluid barrier layer, e.g. a lead sheath and/or fluoropolymer tape wrap barrier layer, a foamed protective layer, and an armor layer, e.g. a galvanized steel, stainless steel, or Monel™ armor layer. In some applications, the various layers may be formed from other types of materials or combinations of materials.
It should be noted that conventional cable construction often employed an additional jacket layer and/or other types of additional layers. In embodiments described herein, however, power cables, e.g. flat power cables, may be constructed without a jacket layer to help reduce cost and to improve clearance when employed in a wellbore. The fluid barrier may be protected from direct contact with the armor layer (such contact can result in gouges or cuts to the fluid barrier layer during handling and use of the power cable) by the foamed protective layer.
Damage to the fluid barrier layer can substantially reduce the operational life of the power cable and thus of the electric submersible pumping system, especially when the power cable is used in corrosive, gassy, and/or hot wellbore environments. As described in greater detail below, the protective layer provides cushioning and protection which reduces or eliminates the potential for damage to the fluid barrier layer. This ensures a longer life of the power cable and electric submersible pumping system. The protective layer may be foamed and placed between the fluid barrier layer and the armor layer to provide the cushioning and protection.
Referring generally to FIG. 1, an embodiment of a well system is illustrated as comprising a downhole, electrically powered system, e.g. an electric submersible pumping system. Electric power is provided to the electric submersible pumping system or other powered system via a power cable. The power cable, in turn, is coupled with the electrically powered system by an electrical connector, e.g. a pothead assembly. The illustrated electric submersible pumping system or other types of electrically powered systems may comprise many types of components and may be employed in many types of applications and environments, including cased wells and open-hole wells. The well system also may be utilized in vertical wells or deviated wells, e.g. horizontal wells.
Referring again to FIG. 1, a well system 20 is illustrated as comprising an electrically powered system 22 which receives electric power via an electrical power cable 24. By way of example, the electrically powered system 22 may be in the form of an electric submersible pumping system 26, and the power cable 24 may be designed to withstand high temperature, harsh environments. Although the electric submersible pumping system 26 may have a wide variety of components, examples of such components comprise a submersible pump 28, a submersible motor 30, and a motor protector 32. The power cable 24 may be structurally and electrically coupled with the electric submersible motor 30.
In the example illustrated, electric submersible pumping system 26 is designed for deployment in a well 34 located within a geologic formation 36 containing, for example, petroleum or other desirable production fluids. A wellbore 38 may be drilled and lined with a wellbore casing 40, although the electric submersible pumping system 26 (or other type of electrically powered system 22) may be used in open hole wellbores or in other environments exposed to hydrocarbons, high temperatures, and high-pressure deleterious gases.
In the example illustrated, however, casing 40 may be perforated with a plurality of perforations 42 through which production fluids flow from formation 36 into wellbore 38. The electric submersible pumping system 26 may be deployed into the wellbore 38 via a conveyance or other deployment system 44 which may comprise tubing 46, e.g. coiled tubing or production tubing. By way of example, the conveyance 44 may be coupled with the electrically powered system 22 via an appropriate tubing connector 48. In the illustrated embodiment, power cable 24 is routed along deployment system 44. However, the electric submersible pumping system 26 also can be suspended via the power cable 24 to form a cable deployed electric submersible pumping system 26. In this latter application, the power cable 24 is constructed as a robust cable able to support the weight of the electric submersible pumping system 26.
In the embodiment illustrated, electric power is provided to submersible motor 30 by electrical power cable 24. The submersible motor 30, in turn, powers submersible pump 28 which draws in fluid, e.g. production fluid, into the pumping system through a pump intake 50. The fluid is produced or moved to the surface or other suitable location via tubing 46. However, the fluid may be pumped to other locations along other flow paths. In some applications, for example, the fluid may be pumped along an annulus surrounding conveyance 44. In other applications, the electric submersible pumping system 26 may be used to inject fluid into the subterranean formation or to move fluids to other subterranean locations.
As described in greater detail below, the electrical power cable 24 is constructed to reduce or eliminate the potential for internal damage to the cable while maintaining a space efficient construction. This allows the power cable 24 to consistently deliver electric power to the submersible pumping system 26 over long operational periods in environments subject to high temperatures, high pressures, deleterious fluids, high voltages, and/or other conditions which can be detrimental to conventional power cables. The power cable 24 is connected to the corresponding, electrically powered component, e.g. submersible motor 30, by an electrical connector 52, e.g. a suitable pothead assembly.
Depending on the application, the power cable 24 may comprise an individual electrical conductor protected by various internal layers or a plurality of electrical conductors protected by the corresponding internal layers. In various submersible pumping applications, the electrical power cable 24 may be in the form of a motor lead extension. In many of these applications, the motor lead extension 24 is designed to carry three-phase current, and submersible motor 30 comprises a three-phase motor powered by the three-phase current delivered through the three electrical conductors of the power cable 24.
Referring generally to FIG. 2, an embodiment of power cable 24 is illustrated. In this example, the power cable 24 comprises at least one conductor 54, e.g. three conductors 54 for three phase power. Each conductor 54 may be coated or otherwise covered with an insulation layer 56. Additionally, each insulation layer 56 may be coated or otherwise covered with a fluid barrier layer 58.
In a specific embodiment, the cable 24 comprises three conductors 54 which are each coated/covered with the layers 56, 58 and then combined, e.g., laid adjacent to one another to form a generally flat power cable 24. In this example, each fluid barrier layer 58 is surrounded by a protective layer 60 which protects and cushions the fluid barrier layer 58 against damage that could otherwise occur during assembly, transport, and/or use of the power cable 24. The protective layer 60 may be a foamed protective layer formed from a foamed material 62. The protective layer 60 may be formed around each fluid barrier layer 58 individually or the protective layer 60 may be formed around the plurality of fluid barrier layers 58 collectively.
In at least some embodiments, a next layer surrounding the protective layers 60 is an armor layer 64. The armor layer 64 may be formed of a suitably strong material, e.g. a steel strip armor wrap, for ease of handling and to protect internal conductors 54 and cable layers 56, 58, 60. The armor layer 64, combined with the cable layers 56, 58, 60, provides resistance to incursion of well fluids and also an outer protective shell. In some applications, the armor layer 64 is in direct contact with protective layers 60. The protective layer 60 may be formed to protect the fluid barrier layers 58 without an additional jacket layer inside armor layer 64.
According to an embodiment, each conductor 50 may be formed from a suitable, electrically conductive material, such as copper. As an example, cable conductors 50 may be formed from high purity copper and may be solid, stranded or compacted stranded. Stranded and compacted stranded conductors offer improved flexibility, which may be useful in some embodiments. Each conductor 50 also may be coated with a corrosion resistant coating to prevent conductor degradation from, for example, hydrogen sulfide gas which is commonly present in downhole environments. Examples of such a coating include tin, lead, nickel, silver, or other corrosion resistant materials including other alloys or metals.
Insulation layers 56 may be formed from a variety of materials. By way of example, insulation layers 56 may be formed from a polymeric material, e.g. polyetheretherketone (PEEK), EPDM, or another suitable electrical insulation material. In some applications, a low-swell EPDM or oil-resistant EPDM material may be used to form insulation layers 56. Similarly, fluid barrier layers 58 may be formed from a variety of suitable materials depending on the parameters of a given application. By way of example, fluid barrier layers 58 may be formed of lead, e.g. lead layers extruded over the corresponding insulation layers 56. However, fluid barrier layers 58 also may be formed from other suitable barrier materials, such as extruded or taped layers of fluoropolymers. For example, each fluid barrier layer may be formed from a polytetrafluoroethylene (PTFE) film wrapped about the corresponding conductor 54 and insulation layer 56.
In some embodiments, the protective layer 60 is formed as a foamed protective layer utilizing foamed material 62. The foamed protective layer 60 may be extruded over the fluid barrier layer 58 so as to form a continuous and contiguous covering atop the barrier layer 58. Depending on the application, the foamed protective layer 60 may be formed from a polymer with very high stiffness and cut resistance. In some embodiments, the foamed protective layer may be formed from polyester, e.g., polyethylene terephthalate (PET). The polymer may be a cross-linked material, such as cross-linked polyethylene (XLPE), or a fluid resistant material, such as the fluoropolymers: fluorinated ethylene propylene (FEP) or perfluoroalkoxy polymer (PFA).
In some embodiments, the polymer used to form protective layer 60 also may provide improved thermal stability properties and/or improved fluid resistance with respect to the power cable 24. Foaming of the polymer provides a protective, cushioning layer within the power cable 24. The polymer of protective layer 60 may be foamed by using a suitable blowing agent. In some blowing agent embodiments, the amount of blowing agent may be between about 0% and about 20% or more (e.g., 1.5%, 2%, 3%, 5%, 8%, 11%, . . . , 19%, 20%). Depending on the embodiment, the blowing agent may produce CO2, N2, or other gases which form pockets (voids or closed cell spaces) in the polymer to create foamed material 62. In some embodiments, however, the polymer may be foamed through a gas injection process.
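As a back-of-envelope illustration of the material savings from foaming (the disclosure quotes blowing-agent loadings of roughly 0-20%, not void fractions, so the radii, PET density, and 30% void fraction below are assumptions):

```python
import math

# Hedged estimate of polymer usage for an extruded foamed protective
# layer: an annular layer's cross-section times a foamed bulk density.
# All numbers (radii, density, void fraction) are assumed for
# illustration; they are not values from the disclosure.

def annulus_area(r_in, r_out):
    """Cross-sectional area (mm^2) of the protective layer annulus."""
    return math.pi * (r_out**2 - r_in**2)

def polymer_mass_per_km(r_in_mm, r_out_mm, solid_density_g_cm3, void_fraction):
    """Polymer mass (kg) per kilometre of cable for a closed-cell foamed layer."""
    area_cm2 = annulus_area(r_in_mm, r_out_mm) / 100.0   # mm^2 -> cm^2
    volume_cm3 = area_cm2 * 100_000.0                    # 1 km = 1e5 cm
    return volume_cm3 * solid_density_g_cm3 * (1.0 - void_fraction) / 1000.0

solid = polymer_mass_per_km(7.0, 8.5, 1.38, 0.0)   # unfoamed PET layer
foamed = polymer_mass_per_km(7.0, 8.5, 1.38, 0.3)  # 30% voids (assumed)
print(f"solid : {solid:.1f} kg/km")
print(f"foamed: {foamed:.1f} kg/km ({(1 - foamed / solid):.0%} less polymer)")
```

The saving scales linearly with void fraction: the gas pockets displace resin that would otherwise be needed to reach the same layer thickness.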
Referring generally to FIG. 3, an example of the foamed material 62 used in creating foamed protective layer 60 is illustrated. FIG. 3 illustrates an upper image, middle image, and lower image showing material 62 with different percentages of blowing agent to create a desired protective layer 60. In this embodiment, a comparison is provided of a PET polymer with three different levels of blowing agent. In the example provided in FIG. 3, the PET polymer used to create material 62 of protective layer 60 is illustrated without blowing agent (top image; 200 μm resolution); with the PET polymer having 2% blowing agent (middle image; 200 μm resolution); and with the PET polymer having 5% blowing agent (bottom image; 500 μm resolution). The percentage of blowing agent may be adjusted and selected according to the parameters of a given application and/or environment in which the power cable 24 is utilized.
Use of the extruded foamed protective layer 60 provides improved crush resistance during the armoring process. By way of example, foamed material 62 may contain a plurality of closed internal air pockets 66 (see FIG. 3). The closed, internal air pockets 66 resist or absorb force(s) exerted on protective layer 60 during construction, handling, and/or use. For example, the internal air pockets 66 are able to absorb forces during the armoring process of applying armor layer 64, thus reducing or preventing indentation of the fluid barrier layer, e.g. lead barrier layer.
Use of foamed protective layer 60 also improves the radial strength of power cable 24 compared to a cable with a non-foamed protective layer. For example, a flat power cable 24 with a foamed protective layer 60 suffers substantially less deformation of a lead fluid barrier 58 after the armoring process. Without foamed protective layer 60, deformation of the elements, e.g. layers, between the conductors 54 may worsen when the cable 24 is deployed downhole and subjected to substantial heat which can cause expansion of the insulation layer 56. Over time, expansion of the insulation layer without protective layer 60 can lead to creep and failure of the fluid barrier layers 58, e.g. lead barrier layers, between the conductors 54. Failure of the fluid barrier layers 58 results in cable failure and substantial downtime with respect to the electric submersible pumping system 26.
Construction of power cable 24 with foamed protective layer 60 also facilitates a “flatter”, more consistently shaped cable 24. By way of example, the foamed protective layer 60 may prevent the armoring process from digging into the outer conductors 54. This allows the opposing external sides of the armored cable 24 to be flatter (i.e. less rounded in cross-section) which facilitates both improved winding of the cable on a reel and improved clearance during installation. As illustrated in FIG. 2, the flat cable 24 with the foamed protective layer 60 has a very flat shape.
Other characteristic improvements also may result from use of the foamed protective layer 60. Examples include improved high temperature performance. The foamed protective layer 60 allows room for thermal expansion so as to prevent the lead or other material of fluid barrier layer 58 from deforming at high temperatures. Additionally, use of the foamed protective layer 60 tends to improve manufacturing speed. For comparison, a braided layer can be applied at about 18-20 feet per minute while a high speed tape wrapping machine may process a cable at about 100-200 feet per minute. An extruded foamed protective layer, however, may be applied at a much higher rate of, for example, 200-800 feet per minute or even 500-1,300 feet per minute or higher.
Reduced material costs and reduced overall cost of the power cable 24 also may result from use of the foamed protective layer 60. Because the protective layer 60 is a foamed extrusion, a substantial portion of the protective layer's volume is formed with gas pockets (e.g., air pockets). The gas pockets reduce the quantity of polymeric material otherwise used to fill the same volume or to provide the same thickness of protective layer. Additionally, the cost of buying a resin preformed into a fiber or tape is avoided because the resin can be purchased in raw material pellet form at a lower material cost.
Referring generally to FIG. 4, another embodiment of power cable 24 is illustrated in cross-section. In this example, three copper conductors 54 are each separately covered by insulation layer 56 formed of EPDM insulation. However, the insulation layer 56 may comprise a plurality of layers as illustrated. The insulation layer(s) 56, e.g. EPDM insulation layer, is covered by fluid barrier layer 58 in the form of a lead barrier layer 68. The fluid barrier 58 also may comprise a plurality of layers as illustrated. In this embodiment, the protective layer 60 is in the form of an extruded foamed protective layer positioned directly over the lead barrier layer 68. Each conductor 54 and the corresponding EPDM insulation layer 56, lead protective layer 58, and foamed protective layer 60 are positioned sequentially adjacent, e.g. side-by-side, and subjected to an armoring process (e.g. by winding a metal armor strip in an overlapping helical fashion) to form a flat cable 24.
Depending on the application, the power cable 24 may have a variety of shapes and/or components. For example, the power cable 24 may have a variety of layers formed of various materials in various orders within the armor layer. Additionally, various layers may be disposed around the corresponding conductors individually or collectively. The foamed protective layer 60 also may be formed from a variety of different materials which are foamed to create internal closed gas pockets of desired size and arrangement. The number, type, and arrangement of electrical conductors also may be selected according to the parameters of a given application and environment. For example, the electrical cable 24 may have a round configuration, a rectangular configuration, or a flat configuration to accommodate certain spatial constraints. Various additives and materials may be mixed with or otherwise added to materials forming the various layers of the power cable 24. The power cable may be used to provide electrical power to downhole systems, e.g. electric submersible pumping system 22; however, the power cable 24 may be used in a variety of other types of applications.
Although a few embodiments of the disclosure have been described in detail above, those of ordinary skill in the art will readily appreciate that many modifications are possible without materially departing from the teachings of this disclosure. Accordingly, such modifications are intended to be included within the scope of this disclosure as defined in the claims.
BRIEF DESCRIPTION OF THE DRAWINGS
Certain embodiments of the disclosure will hereafter be described with reference to the accompanying drawings, wherein like reference numerals denote like elements. It should be understood, however, that the accompanying figures illustrate the various implementations described herein and are not meant to limit the scope of various technologies described herein, and:
FIG. 1 is a schematic illustration of a well system comprising an electric submersible pumping system positioned in a wellbore and powered via electrical power provided by a power cable routed downhole along the wellbore, according to an embodiment of the disclosure;
FIG. 2 is a cross-sectional view of an example of a power cable having a foamed protective layer, according to an embodiment of the disclosure;
FIG. 3 is an enlarged cross-sectional view of an example of the protective layer with different percentages of blowing agent to create the foamed protective layer, according to an embodiment of the disclosure; and
FIG. 4 is a cross-sectional view of another example of a power cable having a foamed protective layer, according to an embodiment of the disclosure.
A design inspiration taken from Noah’s Ark, this building embodies a sense of detail and richness in design. The vibrant and diverse design features comprise a rich mix of traditional and contemporary design styles.
An amazing forecourt garden envelops the building in a serene tropical atmosphere, enhancing its micro-climate. The Ark is covered partly with reclaimed wooden pallets. The combination of traditional architecture with modern elements gives the building an overall eclectic feel. This is evident in the design of the interior spaces. The ground floor portrays a contemporary atmosphere. The first floor is a mix of traditional and modern architecture, and the last floor (the loft) has a totally traditional look with exposed wooden rafters. The ambiance in the building’s micro-environment is greatly enhanced by the presence of the fountain, the roof garden and the skylights. Overall, the design reflects a sustainable and eco-friendly solution to suburban living, featuring elements such as solar energy, large windows, cross ventilation and recycled materials.
As the caption to a picture I posted to my personal Facebook page reads, somehow I ended up backstage at a runway show in Selçuk, Turkey with the models [don’t ask how]…and me in my shorts, sweaty, and about a foot too short. The reason for travelling to Selçuk, Turkey, was, of course, to see Ephesus, ancient Greek city, second largest city in the Roman Empire, home to one of the Seven Wonders of the Ancient World [The Temple of Artemis], and in the twenty-first century one of the best preserved ancient cities in the region. I did see Ephesus the day after this runway show, and it did not disappoint [more on that in a future blog post]. But I digress.
On my first day in Selçuk, instead of seeing one of the seven ancient wonders of the world [or the one column that is left of it] I was surrounded by seven wonders of modern Turkey: backstage with seven runway models.
Now you can ask, and the story goes like this…
As I was wandering around the town checking out the extant pieces of Roman aqueduct and trying to beat the brutal rays from the sun as much as I could, I happened to walk by a taxi cab. In the taxi were two very attractive people [model types] and someone who was, from the sound of their accent, from some former colony of the British Empire. There was a heated discussion going on, and I could hear the Brit-man trying to communicate with someone, but to no avail. I could also hear French being spoken by a young and very sculpted male model with dark hair, a brooding look, and a pseudo-Bieber haircut. There was a female model as well, quasi-Shakira looking, and she was silent, as far as I could tell. I’m still not sure exactly where she was from.
As I stopped, I gleaned that the conversation was something about directions, and since the model was speaking only in French, he apparently wasn’t getting his point across to the Brit. I walked up to the taxi and asked [in French] if I could be of assistance. The male French model stared at me for a moment, then with recognition in his eye [I guess he figured why the heck not] said yes.
I then became the language bridge between the Brit and the Frenchman, much like many an envoy had done in centuries past. This time, however, what the French model wanted to communicate to the Brit [who I assumed was some sort of impresario] was that he had to make one stop before he got to the location for the runway show – to pick up a bag/small suitcase [that was lost on me in translation].
I managed to make this clear to the Brit, and once this happened, the French model was so thrilled he asked me if I wanted to come along to see the show. I, of course, said “bien sûr.”
I hopped in, and we made our way by taxi to the quick stop at the male model’s “apartment,” and then to the location of the runway show. This place, a few miles outside of Selçuk, looked like a fortress, with security meeting us as we pulled up. I was quickly escorted into a [very] back row at the show, and within 20 minutes it began.
As a dance mix of Michael Jackson’s “Scream” blasted, the models came out one by one, modeling – in scorching heat – furs and leather. The clientele was all Russian and they were in the buying mood. Each model was wearing a number which corresponded to the piece they were wearing. Later I found out that one small jacket of some rare fur had a price tag of over 55,000 Euros. As I am anti-fur, and that price tag is just a bit high for me [ha], I marveled at the voracious fur-buying appetite of the Russians. I’ve been to Siberia, and yes it is cold there.
The runway show lasted only 20 minutes or so, at which time the Russians retired to the sales floor and began their purchases. The French model motioned to me to come back stage, so I did. I met all of the models, each of whom greeted me with a warm Turkish/European double kiss, and we then all took a photograph together. He thanked me in a particularly French way.
After, I took a cab back to my hotel in Selçuk, and added another strange and spontaneous experience to my travel log. C’était magnifique!
🍎 Course Overview
What does Sudoku have in common with debugging, scheduling exams, and routing shipments? All of these problems are provably hard — no one has a fast algorithm to solve them. But in reality, people are quickly solving these problems on a huge scale with clever systems and heuristics!
In this minicourse, we’ll explore how researchers and organizations like Microsoft, Google, and NASA are solving these hard problems, and we’ll get to use some of the tools they’ve built!
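To make "provably hard but often solvable in practice" concrete, here is a minimal sketch (not from the course materials; the course will use real solver tools) of the kind of exhaustive search a solver automates. Exam scheduling can be modeled as graph coloring: exams are vertices, an edge joins two exams that share a student, and a valid assignment of k time slots is a k-coloring. Backtracking is exponential in the worst case, yet fast on many real instances:

```python
# Backtracking graph coloring: a toy version of what constraint solvers do.
def color_graph(adj, k):
    """Try to k-color a graph given as {vertex: set_of_neighbors}.
    Returns {vertex: color} on success, or None if no k-coloring exists."""
    vertices = list(adj)
    coloring = {}

    def backtrack(i):
        if i == len(vertices):
            return True  # every vertex colored consistently
        v = vertices[i]
        for color in range(k):
            # Prune: only try colors unused by already-colored neighbors.
            if all(coloring.get(u) != color for u in adj[v]):
                coloring[v] = color
                if backtrack(i + 1):
                    return True
                del coloring[v]  # undo and try the next color
        return False

    return coloring if backtrack(0) else None

# A 4-cycle is 2-colorable; a triangle needs 3 colors.
square = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
print(color_graph(square, 2))    # a valid 2-coloring, e.g. {0: 0, 1: 1, 2: 0, 3: 1}
print(color_graph(triangle, 2))  # None
```

Production solvers add far smarter pruning, learning, and heuristics on top of this same search skeleton.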
Ratings: Course 3.35, Instructor 3.52, Difficulty 2.46, Workload 2.27
🧭 Logistics
Instructor: Jediah Katz
Location: Towne 307
Time: Tues 5:15-6:30 PM
Course Sites: Please sign up for Piazza and Gradescope through Canvas!
19x Combined Lecture: You must also enroll in the combined lecture, which consists of 3 optional lectures. Please reach out if you want to take CIS 189 but have a conflict with the combined lecture.
🎓 Enrollment
Please sign up through the CIS waitlist!
CIS 121 is a prerequisite for CIS 189. You are expected to be fairly comfortable with programming and familiar with graphs. Experience with Python is helpful but not necessary. Knowledge of NP-completeness is not necessary but useful to motivate the course.
📝 Grading
Homeworks will consist of 4 medium-size programming projects, assigned roughly biweekly.
You will work in pairs on a final project of your choosing — you might choose to solve a practical problem using the tools we’ll learn, implement a solver with modern techniques, or even explore new methods of your own!
Homework: 60%
Final Project: 30%
Attendance: 10%
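As a quick illustration of how these weights combine (the scores below are hypothetical, not course data), the final grade is just a weighted average:

```python
# Grade weights from the course page; scores are made-up examples on a 0-100 scale.
weights = {"homework": 0.60, "final_project": 0.30, "attendance": 0.10}
scores = {"homework": 88.0, "final_project": 92.0, "attendance": 100.0}

final_grade = sum(weights[part] * scores[part] for part in weights)
print(round(final_grade, 1))  # 90.4
```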
👋 Teaching Staff
Instructor
Office hours:
After class
Teaching Assistant
Office hours:
Mon 4-5pm, Fri 2:30-3:30pm (Zoom)
Teaching Assistant
Office hours:
Tue & Thu 12:30-1:30pm (Zoom)
Teaching Assistant
Office hours:
PROVIDENCE – The Department of Environmental Management (DEM) announced today that the Mobility Innovation Working Group has issued its final report. The report includes recommendations to reduce greenhouse gas emissions and maximize clean mobility options for all Rhode Islanders. Approximately 40% of Rhode Island's greenhouse gas emissions are generated from the transportation sector.
Members of the Working Group, including representatives from DEM, the Department of Transportation, Office of Energy Resources, Division of Statewide Planning, and stakeholders, have collaborated over the past five months to develop a statewide mobility strategy that builds on the state's existing clean transportation policies, regulations, and initiatives to further transportation options and promote economic development.
"The Working Group members feel the greatest contribution we can make is to encourage action, and we feel confident that this report provides an ambitious and actionable framework," said Chairperson Colleen Quinn, who has worked for more than a decade as a pioneer and thought leader in the clean transportation sector. "We look forward to continued collaboration as we set Rhode Island on the path to a clean transportation future, one that promotes green economic development and ensures equitable benefits for all Rhode Islanders."
The Working Group conducted a series of public meetings and sought input from a wide range of experts that spoke to the future of transportation and the importance of equitable policy in transportation and climate. The Group also heard from local and national organizations that advocated for policies and investments in clean transportation and mobility.
The final report highlights facts, trends, and issues that emphasize the impact of transportation emissions on climate change and the impact that climate change has on Rhode Islanders. Climate change is a clear and present threat to the lives and well-being of Rhode Islanders, and the state has several strategic advantages in taking action to mitigate the cause of climate change and prepare for the hazards expected to emerge over the coming decades.
Also, the report summarizes the Rhode Island mobility audit and state benchmarking. The state has engaged in numerous initiatives to promote clean transportation and improved mobility. The audit was an essential first step in developing recommendations to meet the state's mobility goals and establishes the baseline for the state's pursuit of its ambitious environmental, clean transportation, and equity vision.
In addition, the report summarizes and highlights three key areas of clean transportation investments. The portfolios were evaluated using an investment tool that translates dollar values into benefits including changes in vehicle-travel, greenhouse gas emissions reductions, air pollutant reductions, the value of health benefits, and economic benefits.
Lastly, the report includes recommendations and action steps including programmatic, legislative, and regulatory initiatives focused on developing accessible mobility options, reducing greenhouse gas emissions, and creating sustainable jobs. They are organized into six categories, as follows:
• Create a healthier environment for all Rhode Islanders with specific benefits for residents of our most overburdened and underserved communities;
• Establish Rhode Island as a nation-leader in bold transportation and climate commitments;
• Modernize, expand, and invest in state transit and transportation assets to effectively move more people and improve accessibility;
• Improve air quality by taking steps to electrify the transportation sector;
• Create a 21st century mobility infrastructure that capitalizes on the emerging changes in transportation technology; and
Liberia’s civil war between 1989 and 2003 left hundreds of thousands dead, and many more affected by the extreme violence that ravaged the country. The wars ultimately ended with the exile of then president Charles Taylor, the Comprehensive Peace Agreement of 2003, and the establishment of the National Transitional Government of Liberia, leading to elections in 2005.
The task of rebuilding Liberia, a divided and impoverished country even before the war, has been a daunting one. Almost all facets of the state and people’s lives were damaged or destroyed. The challenges of post-conflict reconstruction include the establishment of a legitimate and effective government, reform of the security and justice sectors, and economic and social revitalization. The war was fought between warlords who forced people to divide along ethnic lines. A major task for Liberians is, therefore, to rebuild trust between all sections of society and find ways to live together peacefully. In 2005 the country voted for Liberia’s (and Africa’s) first elected female president, Ellen Johnson Sirleaf. Her government, along with strong support from the international community, began the transition from an emergency security and humanitarian support phase to a post-conflict development and reconstruction phase. To help address the wounds of war, the government funded an independent Truth and Reconciliation Commission (TRC) that began in 2006 to look into the causes of the war and recommend steps to address the issue of accountability.
As the 2011 presidential election nears, Liberia is once again at an important juncture on the path to its peaceful reconstruction. Much progress has been made, but enormous challenges remain as the government continues to work to implement the country’s Poverty Reduction Strategy. This study was undertaken to contribute to a more nuanced understanding of the population’s priorities for peacebuilding, of Liberians’ perceptions of their post-war security, and of existing disputes and dispute resolution mechanisms. The study is based on extensive consultation with local organizations, interviews with key informants, and a nationwide survey of 4,501 respondents randomly selected to represent the views of the population, implemented in November and December 2010.
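For context on what a sample of 4,501 respondents can support: assuming simple random sampling (an assumption for illustration; the survey's actual multi-stage design would widen this somewhat), the worst-case 95% margin of error is roughly ±1.5 percentage points:

```python
import math

n = 4501   # respondents, from the survey described above
z = 1.96   # z-score for a 95% confidence level
p = 0.5    # worst-case proportion (maximizes the margin of error)

# Standard margin-of-error formula for a proportion: z * sqrt(p(1-p)/n)
moe = z * math.sqrt(p * (1 - p) / n)
print(f"±{moe * 100:.1f} percentage points")  # ±1.5 percentage points
```

County-level estimates, drawn from much smaller subsamples, would carry correspondingly wider margins.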
By providing county-level as well as national data, the results of this study give a voice not only to Liberians as a nation but also as residents of each of the 15 counties. This is particularly meaningful in Liberia because county lines were in part drawn around ethnic groupings, and different counties (and their specific populations) were impacted in different ways and at different times by the civil war. By presenting the priorities, perceptions, and attitudes of Liberians in each county, this report aims at contributing to the ongoing dialogue about how to make a successful transition from war to peace. The first part of this report focuses on understanding areas of tension and disputes among the population. The second half explores Liberians’ views on ways to consolidate peace, resolve disputes, and prevent conflicts.
For a discussion of the broad challenges of post-conflict transition, see Nicole Ball, “The Challenge of Rebuilding War-Torn Societies,” in Turbulent Peace: The Challenges of Managing International Conflicts, eds. Chester A. Crocker, Fen Osler Hampson and Pamela R. Aall (Washington, DC: United States Institute of Peace Press, 2001).
This is possibly the largest county-level nationwide survey on peace and reconstruction ever undertaken in Liberia.
BACKGROUND
DETAILED DESCRIPTION
1. Technical Field
The present disclosure relates to a hard disk backplane structure and a hard disk cooling assembly using the structure.
2. Description of the Related Art
A typical hard disk backplane structure for mounting a plurality of hard disks includes a printed circuit board (PCB). The PCB includes a plurality of data connectors and power receptacles. The hard disks are positioned at a side of the PCB, and each hard disk is connected to a data connector and a power receptacle, for receiving data and electrical power from the hard disk backplane structure. The PCB further defines a connecting slot for receiving an expansion card or other additional electrical components.
When the hard disk backplane structure is mounted in a computer case, a fan of the computer is generally positioned at the other side of the hard disk backplane structure away from the hard disk due to the limitation of space in the computer case. Airflow created by the fan should flow to the hard disks, but is often blocked by the additional electrical components on the printed circuit board, resulting in low cooling efficiency.
Therefore, there is room for improvement within the art.
Referring to FIGS. 1 and 2, an exemplary embodiment of a hard disk backplane structure 100 includes a main body 21 and an additional board 23. The main body 21 can be a substantially rectangular printed circuit board (PCB). The main body 21 includes a first side surface 211, a second side surface 212 adjacent to the first side surface 211, and, for example, four backplane connectors 213 positioned on the first side surface 211 in a row. The first side surface 211 is substantially perpendicular to the second side surface 212. The main body 21 defines a connecting slot 215 for mounting the additional board 23 at the second side surface 212. Each backplane connector 213 includes a data connector 2131 and a power receptacle 2133. The backplane connectors 213 are connected to hard disks 33 (shown in FIG. 3), to receive electrical power from the hard disk backplane structure 100, and transmit data between the hard disk backplane structure 100 and the hard disks 33. In the illustrated exemplary embodiment, the number of the backplane connectors 213 is four, for mounting four hard disks 33. In alternative embodiments, the number of backplane connectors 213 may be one or more according to the number of the hard disks 33.
The additional board 23 includes a base 230, a plurality of additional receptacles 231 positioned on a side of the base 230, and a golden finger 233 positioned on the opposite side of the base 230 from the additional receptacles 231. The additional receptacles 231 are used for mounting or connecting additional electrical components, such as a serial attached small computer system interface (SAS) card, SAS integrated circuit card, or SAS controlling integrated circuit card. The golden finger 233 is inserted in the connecting slot 215 to mount the additional board 23 on the main body 21. In the illustrated embodiment, the additional board 23 is a flexible PCB, to reduce the cost of the hard disk backplane structure 100.
Referring to FIG. 3, the exemplary embodiment of a hard disk cooling assembly 200 using the hard disk backplane structure 100 is shown positioned in a computer case 30. The hard disk cooling assembly 200 further includes a plurality of spaced fans 35 adjacent to the backplane connectors 213. The hard disks 33 are mounted on a side surface of the main body 21 opposite to the first side surface 211, and each hard disk 33 is connected to a backplane connector 213 correspondingly by a circuit in the main body 21. The fans 35 create airflow across the hard disks 33 to the main body 21 for cooling the hard disks 33. The additional board 23 is positioned out of the way of the airflow to the hard disks 33. The hard disk backplane structure 100 includes an additional board 23 for mounting the additional electrical components, and the main body 21 may define cooling holes 215 (shown in FIG. 1) to further improve the cooling ability of the hard disk backplane structure 100.
Finally, while particular embodiments have been described, the description is illustrative and is not to be construed as limiting. For example, various modifications can be made to the embodiments by those of ordinary skill in the art without departing from the true spirit and scope of the invention as defined by the appended claims.
BRIEF DESCRIPTION OF THE DRAWINGS
The components in the drawings are not necessarily drawn to scale, the emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Moreover, in the drawings, like reference numerals designate corresponding parts throughout the views, and both the views are schematic.
FIG. 1 is a plane view of an embodiment of a hard disk backplane structure.
FIG. 2 is a side view of the hard disk backplane structure shown in FIG. 1.
FIG. 3 is a plane view of a hard disk cooling assembly using the hard disk backplane structure shown in FIG. 1.
Music is one the most beautiful and special things in our lives. It’s something that can touch each and every person in a different way, and one of the rare things where language, religion or culture are irrelevant when it comes to enjoying it.
Music is food for the soul, and is something that can take us on a journey to some place far away, escaping our often-mundane reality. Music is an important part of many people’s lives. Whether you enjoy playing and making music, or simply listening to it, it is something that would make the world a far emptier place if it did not exist.
Contents
Teaching children about music
We are first introduced to the concept of music as children. Lots of parents like to sing lullabies to their babies to help them sleep, and this is usually our first connection with music and song. During their early years, most infants will make their own music – usually by banging something on the floor or table.
It is an important instrument (no pun intended) in our development and growth, and something that should be encouraged wherever possible.
Benefits of learning to play music as a child
Learning music as a child can help with patience and perseverance. Many children want things instantly, but with all aspects of learning, it takes time, and music is a great tool to help youngsters understand that somethings need practice and time. There are lots of places where children can learn musical instruments, such as the lvlmusicacademy that offers a great range of lessons.
Another benefit of learning music is that it promotes creativity. Some kids like to draw and paint, others like to build Lego models, while some like to sing and dance. Every child has a creative side, and music is one way in which they can express their creativity.
One final benefit worth mentioning is that learning a musical instrument can help improve your memory. Various studies have shown that kids who have been musically trained have a much better memory and recall than those who haven’t been taught to play music. Another study compared the brains of musicians and non-musicians and found that professional and amateur musicians alike had more grey matter in their brains than those who had no musical training.
Learning the piano
There are lots of different instruments that are ideal to learn when starting out with music, including the flute, classical guitar, violin and piano. The piano is one of the most popular musical instruments to learn, and also one of the more difficult.
It takes a lot of time, practice and patience to learn to play the piano. It’s an instrument that requires a lot of dedication to learn, which is why it might not always be the best option for every child. There are a number of reasons why learning the piano is a great choice, and in this article we will take a look at some of them.
Help improve concentration and focus
As mentioned earlier, learning any type of musical instrument is going to require concentration. This is especially true when it comes to learning the piano. It helps to improve the speed with which we think, so that the mind and the hand movements are working together as one. Playing the piano requires many skills being used all at once, and over time, they will have really improved their levels of concentration.
When learning to play the piano, staying focused on what you are doing is a very important skill. In fact, scientists have studied the brains of musicians and have found that when they are playing, their brain is going through the equivalent of a complete body workout.
Increase in memory capacity
Learning to play any type of instrument will have a positive effect on your memory function, and the piano is no different. Over time, musical students will begin to memorize certain parts of a piece of music, without the need to read the notes from the music sheet.
In youngsters, this is particularly important in their development, as skills such as short and long-term memory function are ones that will be helpful throughout their life. A number of studies have been carried out which have shown that people who have played music throughout their life have been able to avoid developing problems with their memory and brain function.
Helps with motor skill dexterity
The piano is one of the instruments that requires a great deal for finger movement and management, with both hands working separately to each other, performing different movements and functions.
Being able to achieve the required dual hand capability to play the piano is a difficult skill to gain, though it can be achieved through practice and perseverance. In addition to the hand and finger movements, the piano also requires foot movement as well on the pedals.
Helps with anxiety and stress
Children can go through a lot of stressful times. They are at school everyday. They often take part in after school activities and always have homework that needs to be completed. If you add up all of these elements, you can quickly begin to see why many children suffer from anxiety and stress.
Playing a musical instrument such as the piano can help reduce the levels of stress and can completely change the mood of a child. In fact, adults who play the piano are equally able to lower their levels of anxiety, and it can also help if they suffer from depression. Music is very powerful, and is something that can affect a whole range of emotions.
Some final thoughts
If your child has shown interest in learning a music instrument, then it’s something that you as a parent should greatly encourage. It can help their development on so many levels, and also offer an escape from their day to day grind at school.
There are lots of professional piano teachers who offer lessons for all levels of students. Many will offer a free introduction lesson, so you have the perfect opportunity to see if your child has any interest in learning this truly unique and majestic instrument. They may not become the next Mozart, but at least they will have a great time learning.
|Date raised||Issue||Area||Comments/Questions||Entered By||Date Moved from Parking Lot||Status|
|2017-07-??||Use of enter in place of click||checkout||SIG has asked that one click be required on pop-ups for checkout and check-in. This came up while discussing the proxy pop-up. If the person doing checkout hits enter, the items being checked out will be charged to the person standing at the desk, who may not be the intended borrower.||Holly Mistlebauer|
|2017-07-??||Lost items process||partly fees/fines||While discussing fines and fees, the issue of lost items was raised. It was pointed out that charging the fine is just one part of a multi-step process in reporting an item as lost. We need to develop this process.|
|2017-07-??||Historical item details for fees/fines||automated fees/fines||When we add /fines for overdue items, we will need to have historical information available about how the item looked at the time the overdue was charged. We need to know when the person checked the book out, when it was due, when they finally brought it back, etc.|
|2017-08-??||Localization of currency||fees/fines (and other monetary functions)||Not all currencies use a period between the 1 and the change like the US does ($1.20). For example, in Germany they use 1,20 to represent 1 euro and 20 cents. We need to allow the library to specify what they want to have displayed. My understanding is that this feature is not slated for v1. Holly: you might want to double check the internationalization tab in the roadmap. My understanding is that v1 should handle currency, currency codes, decimals, and financial localization properly, since it will be needed for acquisitions financial management as well. Ann-Marie Breaux||
|2017-08-??||Patron blocks||checkout||When discussing the patron checkout display data the issue of patron blocks arose. We need to determine how these will be handled, which includes entering a block, displaying the block, and enforcing the block.|
|2017-08-17||Permission overrides||checkout (and other places)||There needs to be a consistent way of doing overrides for circulation functions. When students or other workers don't have permission to do a certain task the supervisor needs to be able to step in and override the system to allow the task to be completed.|
|2017-08-24||Calculation/display of accruing fines||automated fees/fines||How do we represent fines that are still in the process of being calculated? e.g. an overdue fine for an item that has not yet been returned||Andrea Loigman|
|2017-08-24||Bursar rejects the transaction and sends it back to the library||transfer fees/fines||We need to be able to handle this situation. If the fee/fine is marked as closed it will need to be reopened.|
|2017-08-24||Keeping information about an item when a fee/fine has been paid||automated fees/fines||We want to keep loan history for items that have unpaid fees/fines, but what about items with closed fees/fines? We need background information supporting the fee/fine, but we don't want full loan history.|
|2017-08-24||Use of terms fee and fine||Fees/Fines||Some places we are using fees/fines, others just fees, and still others just fines. We probably need to be more consistent. comment - Fines and fees are two different things, so all for consistency, but we also need specificity.||October 2017||We have agreed to use "fees/fines" everywhere.|
|2017-09-26||Tenant setting for how much loan history to retain||Loans||Different institutions will have different rules around how much patron loan history to maintain. Need to design/discuss the setting. Also part of this discussion: if we clear patron data from loan history, should we retain general loan history for an item (scrubbed of patron identity) for reporting?||Cate Boerema||January 2018||Discussed with RA SIG. FOLIO-1017 captures high-level requirements. Need to review with RA and Privacy SIGs.|
|2017-09-26||Tenant setting for hiding the profile pic box||Users and Checkout||Many institutions won't have profile pictures/avatars in FOLIO. We need a tenant-level setting for disabling this so the real estate isn't occupied by a big empty box or generic avatar.||September 2017||For checkout the picture was moved to the right side; its absence won't leave a hole.|
|2017-09-26||Normalized call number||Loans; Fees/Fines||Need a normalized call number stored and used for sorting on the Loans page. We are currently displaying a concatenated call number on the Loans page to save on real estate, but sorting by this concatenated string isn't going to work. FOLIO needs to store a "normalized" call number and use that for sorting on Loans and elsewhere. From experience, this data really needs to be stored as opposed to generated on-the-fly. This is a must-have for v1.||Cate Boerema|
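One common way to build a stored "normalized" sort key (a sketch; the function name and padding width are assumptions, not FOLIO's actual scheme) is to zero-pad every numeric run so plain string comparison matches shelf order:

```python
import re

def normalize_call_number(call_no: str, width: int = 8) -> str:
    """Return a sort key: uppercase the string and zero-pad each digit run
    so that lexicographic comparison orders QA9 before QA76."""
    return re.sub(r"\d+", lambda m: m.group().zfill(width), call_no.upper())

shelf = ["QA76.73 .P98", "QA9 .L63", "QA76.6 .K64"]
print(sorted(shelf, key=normalize_call_number))
```

Note the limitation: this naive padding treats decimal cutter parts as integers (so ".6" vs ".59" would mis-sort; real normalization schemes pad decimal portions on the right), which supports the point above that the key needs to be computed once by a well-defined algorithm and stored.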
|2017-09-26||More than one preferred contact method||Users||Some SIG members thought it would be useful to allow users to have >1 preferred contact method so they could receive, for example, both texts and email notifications. The current design only supports one preferred contact method. Will come back to this.||October 2017||User story in backlog awaiting prioritization: UIU-275|
|2017-09-25||Sort by first author||Loans||We decided we wanted to support sorting by "first author" when there are multiple. I am going to hold off on entering this story until we have finalized the instance metadata and have a sense of how authors, creators etc will be made available. Parking for now.|
|2017-10-23||Change due date from Checkout screen||Checkout||We were looking at the new checkout screen and Wendy mentioned that she'd like a way to change the due date right from this page (as opposed to having to go to the Loans page to do this). Voyager allows right click to change date or something. Need to discuss further.||Cate Boerema|
|2017-12-12||Option to backdate check-in date and time from the "check in" screen||Checkout||For example, if you are not able to check your campus book drops for several days due to weather (or whatever reason) and you need to check in all the books you finally are able to fetch out of them, you need to be able to "back date" the check-in by date and time to avoid fines that may have accrued while the items sat in the book drop.||Cate Boerema (proposed by Rameka Barnes)|
|2017-12-13||Solution for auto-checkout or remote/delayed checkout of In Transit for Delivery items||Requests / Checkout||When items are In Transit for Delivery to fill a request with fulfillment preference of "Delivery", the items need to be checked out to the requester at some point, but facilities may not be available to actually perform the checkout when the item arrives at its delivery destination (e.g. a department mail room). Some systems have the option to do an automatic checkout for delivery items, which is not an ideal solution. The SIG would like to design something better. Amazon lockers came up as an idea.||Tania Fersenheim|
|2017-12-13||"Recently Returned" status||Checkin||Need the ability to set a period for a "recently returned" status to be automatically applied to and removed from an item after it has been checked in at its home location and is available for loan. This period should be configurable by location, possibly by material type? The status is used to indicate to the public and to staff that, though the item has returned home and is available for loan, it may not have been reshelved yet.|
|2017-12-13||Item statuses - some need to be manually editable||??||Some item statuses need to be manually changed by eligible staff, and others need to be sacrosanct and only changeable by system actions like checkin, etc. The list of item statuses which can and cannot be manually changed needs to be defined.|
|2018-01-11||Request List page (need to revisit)||Requests||Need to revisit this page and make sure that we have all the data we need displaying here, filters etc. Also related, there are some request-related reports that may be launched from this UI or may be generated elsewhere.|
|2018-01-11||Check-in page when patron data is scrubbed from loans immediately||Check in||How does the Check In page look different for institutions that immediately scrub patron info from loan data after check in? Relates to anonymizing work (FOLIO-1017) as well as to Darcy Branchini's work on the Check In page. | https://wiki.folio.org/pages/diffpages.action?originalId=11895151&pageId=11895152 |
“Oh, I’m bein’ followed by a moon shadow,
moon shadow, moon shadow . . .
Leapin and hoppin’ on a moon shadow,
moon shadow, moon shadow . . .”
~ Cat Stevens (AKA: Yusuf Islam)
Those of you who have followed my weekly writings may recognize this photo from Summer 2016 when I was able to capture a full moon AND lightning with my cell phone camera. I was in heaven, and OH SO EXCITED to capture this shot!
I’ve always been fascinated by the sky . . . especially the night sky . . . since I was a toddler. It probably helped having an uncle who was a rocket scientist for NASA.
I remember sitting outside at my Grandmother’s house (lakeside – no street lights!) with my uncle Joe, listening to him point out the constellations and the phases of the moon in the pitch-black night sky. Somehow the joy and excitement in his voice still rings in my memory, as I continue to find myself looking for the next full moon each month!
He was the first to point out to me the changes of the moon, and I remember running outside at night to check to see how it had changed since the last evening. If the weather was cold, I’d run from window-to-window seeing if I could find the moon.
Fast forward several decades, and I am still fascinated by the moon. Living in an area where we got to experience the TOTAL ECLIPSE of 2017 this past Monday was quite exciting, to say the very least.
I intentionally took time out of my schedule to have lunch with my daughter and 3-year-old grandson to celebrate the eclipse. In spite of the intense heat and humidity (it IS St. Louis, after all), it was definitely an experience of a lifetime. Sharing it with loved ones made it even more special.
Teaching the newest generation in the family to “trace over all the crescent moons with your sidewalk chalk” . . . instilling the same awe and admiration for our planet, our constellations and the amazing, unbelievable once-in-a-lifetime experience of a total eclipse . . . PRICELESS. | http://daviscreative.com/2017/08/21/moon-shadow/ |
Mr Schenk is the Vice President Global Customs Policy & Public Affairs at UPS and brings over thirty-five years of experience in customs and trade facilitation work to the post. In his current position, Mr Schenk is responsible for shaping UPS’ global customs policy and border strategies to facilitate the smooth flow of shipments across international borders. He has been working directly with government leaders on reducing trade barriers, simplifying processes and supply chain security issues and, together with the US Council for International Business (USCIB), has actively contributed to national discussions on customs reauthorization and de minimis provisions.
Mr Schenk will take over the reins from Anthony Barone who in his role as a member, Vice-Chair and Chair, greatly impacted the activities of the ICC Commission on Customs and Trade Facilitation for close to eight years. Under Mr Barone’s Chairmanship, the commission produced several key products including the recently published survey on What border barriers impede business ability?, the revised ICC Customs Guidelines and new ICC Guidelines for Cross-Border Traders. He was also instrumental in maintaining ICC’s excellent relations with the World Customs Organization and in re-organizing the commission’s work around the theme of trade facilitation. ICC extends its deep appreciation to Mr Barone for his leadership of the commission.
“Norm Schenk is a very experienced internationalist and I am certain he will bring deep insight to the commission and its stakeholders,” said Mr Barone. “The commission has a challenging and important role to play amid the various multilateral agreements being discussed today. Governments need the practical insights the commission can provide.”
“The ICC Customs and Trade Facilitation Commission plays an important role in helping to develop the solutions and tools needed to implement the recent WTO Agreement on Trade Facilitation, ultimately driving economic growth,” said Mr Schenk. “Customs officials and those engaged in the supply chain will benefit from an open dialogue designed to improve the efficiency and effectiveness of border processes around the world.”
Oliver Peltzer is a Partner at Dabelstein & Passehl, a German law firm focusing on all legal aspects of international supply chains, logistics solutions and world transport systems. He is a lecturer of transport and logistics law at the Technical University of Hamburg and has served as a representative of ICC Germany and of the Federation of German Industries in a variety of national and international transport and logistics debates, including the revision of the ICC rules Incoterms® 2010, the recent reform of German shipping law, and the revision of the widely used Freight Forwarders Terms and Conditions.
“For centuries, transport and logistics has been the backbone of international trade. A restriction in the freedom of transport and logistics hampers each and any manufacturing and producing industry in the world. The Commission on Customs and Trade Facilitation has an important role to play in promoting worldwide supply chains for goods and services,” said Mr Peltzer.
The ICC Commission on Customs and Trade Facilitation has over 120 members from 25 countries. Commission members comprise customs, transport and logistics specialists from ICC member companies and business representative organizations. The central objective of the commission is to overcome trade barriers, to ensure that the liberalization of global trade and investment has a positive impact at the level of the individual international trade transaction. | https://iccwbo.org/media-wall/news-speeches/icc-announces-new-chair-and-vice-chair-of-commission-on-customs-and-trade-facilitation/ |
Essay on waste management in India
In this report, some problems in solid waste management in India are discussed, as well as the efforts made by the government and the residents of India to help reduce them. Waste management is one of India's most urgent problems, but solutions remain elusive. India generates around 25 million tonnes of municipal solid waste per year; the population of Delhi is 13.9 million, and the city produces 7,000 tonnes of municipal solid waste per day. As of 2004, an estimated 71 percent of municipal waste is landfilled. The MoEF issued the MSW (Management and Handling) Rules 2000 to ensure proper waste management in India, and new updated draft rules have recently been published; municipal authorities are responsible for implementing these rules and developing infrastructure. With rapid urbanisation, industrialisation and an explosion in population, solid waste management will be a key challenge for state governments and local municipal bodies in the 21st century. Segregation of waste at source is almost negligible in India, and the indisposed waste causes a lot of harm to the environment and breeds various diseases.

Related studies include: Municipal Solid Waste Characteristics and Management in Kolkata, India (S. Das and B. K. Bhattacharyya); E-Waste Management: A Case Study of Bangalore, India; Wastewater Production, Treatment and Use in India (R. Kaur et al.); and Solid Waste Management and Recycling: Actors, Partnerships and Policies in Hyderabad, India and Nairobi, Kenya.
| |
A: One of the things that keeps us wrestling with our thoughts and beliefs is a subconscious desire to hold onto the old belief. I think everyone reading this by now realizes that most of humanity lives until death in a state of unconsciousness. Most everyone around you and me (and often we can be included in that group) is simply sleep-walking through life. To the un-awakened, thoughts, feelings, and emotions all appear to occur as a reaction to outside stimuli. It takes a deeper level of understanding to accept responsibility for the lives we create. The question I’m answering in this post has come from literally
Oftentimes the struggle happens on a subconscious level, which is obviously very difficult to identify. Generally there is a subconscious desire to remain a victim. Obviously everyone is different, but for some people remaining a victim makes them feel as though they have a “right” to be angry or upset. Being a victim, at least to your subconscious understanding, feels like a way to hold on to power! For others, holding on to blocks or limiting beliefs is simply more familiar, and so even though you may feel glimpses of energy shifts, the new, higher level of thinking is just foreign enough to you that it is difficult to hold on long enough for lasting change.
The first thing I think you should realize, is that this experience (learning and trying to apply a higher level of awareness but not breaking through or continuing to slip back into old habits after breakthroughs) is the norm, not the exception. Even though it probably doesn’t feel right that you could possibly be holding on to your blocks simply because they are familiar or because you subconsciously (and mistakenly) believe they give you power, you probably are. If your life is not exactly what you want it to be, you are blocking yourself somewhere.
I’ve worked with someone who is in her 60s who has spent her entire adult life being a victim. She has been studying personal development materials for over 30 years, but only has flashes of energy shifts. After hours or days or even weeks of brilliance, she slips back into old habits of unconsciousness, and victimhood. Her parents divorced when she was young, partly because she and her younger brother were abused by their father. Her older sister was already out of the home when the abuse began, and never did forgive her mother for leaving their father.
For about forty years, there has been a rift between the older sister, who supported her father, and this woman and her younger brother, who supported their mother in the divorce. There has also been very little communication between this woman and her father, and each time they did communicate, she came away from those conversations feeling disrespected and hurt. Until a short time ago, despite decades of personal study, it never occurred to her that she loved being a victim. She wanted to be “right.” The continued pain she felt when she thought about her difficult situation only convinced her further that she was right.
Little did this woman realize that buying into her role as the victim was harming her in every aspect of her life. She felt like she was doing fine with her finances (not as well as she would like, mind you), had fantastic relationships with her spouse, children and many close friends, and other than being 50 pounds overweight, was satisfied with her health as well. Until we identified this desire to be the victim, and the subconscious motivation behind that desire, she mistakenly believed that life was pretty good. (I say mistakenly because of how dramatically better her life was after we removed the blocks!)
Once we actually got to the belief, and the motivation behind the belief, it was a relatively simple process of replacing that image of victimhood with empowerment. She was able to have compassion for her older sister and father who didn’t see things the way she did, but were equally right in their perception of what had happened. After all, she acknowledged, everyone is doing the best they can at their current level of evolution– everyone. She was able to completely forgive herself for buying into the victim routine for almost her entire life, and was even able to forgive herself for not making that discovery sooner.
Once the belief was sincerely transformed to one of empowerment, her life completely blossomed. She discovered that she had been playing self-manipulation games to motivate herself, instead of acting on inspiration. Once she stopped trying to manipulate herself into reaching goals, she discovered that she had spent most of her life manipulating those who were most dear to her– her spouse and children, and even her parents and siblings. When the games stopped (they didn’t feel like games before…), all of those relationships had a new sweetness to them that fed her energy and joy. Her finances dramatically improved as well, and she began working out regularly. She told me she was more excited about “following the prompting to work out” than losing 50 pounds.
Now that she has successfully navigated from victimhood to awareness and on to empowerment, life will never be the same for her.
The neat thing we learn here is that it’s not a disaster to discover that you are not the person you thought you were. On the contrary, it is the beginning of the end of the disaster. | http://blog.achievetoday.com/blog/law-of-attraction-fundamentals-part-4-the-power-of-the-subconscious |
A California teacher sings to her students to build confidence
Third graders in Katy Booser’s classroom expect to sing a song before every test they take, using the teacher’s very own lyrical creation that aims to instill confidence and motivation.
Booser, a 34-year-old teacher at Franklin Elementary School in Santa Barbara, California, wants to give students the tools for a successful life, which she believes starts with confidence. Booser established daily journaling time in her classes and a song she wrote herself full of positive affirmations that is sung before every test.
Drawing inspiration from the popular children’s chant, “Repeat After Me,” Booser adapted the lyrics to fit her confidence-building teaching philosophy. In a video shared with CNN, Booser can be seen walking around her classroom chanting, “I believe in myself, I believe I can, I believe I’ll try my best,” pausing after each affirmation for students to join in and repeat after her.
Booser says she’s been singing before tests in her classroom for years.
“I think it’s important for kids to see … your teacher is unapologetically dancing around the classroom with a mask and a microphone,” Booser said. “I know it’s a connection we have because they [students] are like, look at my goofball teacher, I’m now able to show my silliness.”
Since establishing her positive affirmation song routine, Booser said she’s noticed her students’ confidence and ability to focus improve. Booser tells her students she doesn’t care if they earn a 100 or 50, as long as they’re trying their best.
An anxious test taker herself as a kid, Booser credited her teachers with helping her overcome that anxiety. That experience informed much of what she creates in her classroom today.
“I needed that as a kid. I needed those teachers who were saying, ‘I believe in you, you got this, you’re strong and capable.’ When I had those teachers, my whole demeanor changed, and my confidence grew and so I want to provide that same opportunity to my students as well,” Booser said.
Along with Booser’s pre-test singing rituals, she has also created daily journaling time in her classroom. Starting with the prompt “I am,” Booser asks students to fill in the rest, nudging them with examples like, “I am healthy,” or “I am capable.” Booser said her students enjoy journaling, many times reminding her about their “I am” journals on days she forgets.
The pandemic changed Booser’s classroom dynamic, with some of her students attending in-person class while others attend virtually. That made her routine of singing and journaling even more important.
Daily journaling enables students to channel their frustrations and challenges, which Booser considers a big win because it provides the opportunity to pause and have a conversation about a student’s situation.
Booser believes starting these kinds of conversations with kids while they’re young is vital because it gives them the tools to voice how they feel. She admitted her students might not know what all her affirmations mean, such as to be capable or courageous, but they understand a lot more than adults might give them credit for.
The apparent distance travelled and time taken by light from its source to an observer differ according to whether we look at it from the point of view of the source or that of the observer. [Português] [PDF].
The velocity of light in free space is usually represented by the letter c. It is understood to be an invariant quantity. In other words it is a natural constant, which is built into the fundamental fabric of the universe. Why should the velocity of light have this fixed finite value? Why shouldn't the velocity of light be infinite? It is because light is an electromagnetic disturbance, and space has a natural reluctance to being electromagnetically disturbed.
Consequently, when an electric field is applied at a particular place, the immediate space around it takes time to polarize. It has a kind of electrical inertia that impedes the electric field's effort to polarize it. The surrounding space lags more and more behind in adopting a polarized state the further it is from the place where the electric field is applied. The adoption of this polarized state therefore "travels" outwards spherically from the place where the electric field is applied. Due to the natural quantitative value of space's reluctance to polarize, the velocity of this "travel" (or propagation) is c, the velocity of light.
Suppose the applied electric field starts off with zero strength. In other words, it doesn't exist yet. Then it starts to grow. It grows slowly at first. Then its rate of growth gradually increases to a maximum. Its rate of growth then reduces, becoming zero again as the strength of the field reaches its maximum. Then the field starts to decay (reduce). Its rate of decay is slow at first, speeds up and then decays back to zero as the strength of the electric field reaches zero again. This process forms an electric field pulse.
A complementary property of space is that the rate of change of an electric field becomes manifested as the magnitude of what is generally known as a magnetic field. Space does not polarize magnetically as it does electrically. A magnetic field doesn't have "ends" or poles like an electric field. Instead, it acts in a circular sense, adopting something that is perhaps best conceptualized as a kind of rotational inertia. This results in a situation where, as the electric field is growing or collapsing most rapidly, the associated magnetic field is at its strongest. Conversely, as the magnetic field is changing (growing or collapsing) most rapidly, the electric field is at its strongest.
The result is that the electric and magnetic fields exchange energy repeatedly rather like the bob of a pendulum continually exchanges its potential energy (due to its height) with its kinetic energy (due to its speed). The result is that these two types of "force fields" fall away from their source as an ever-expanding sphere, playing a game of "throw and catch" with their energy. This energy becomes ever more thinly spread over the area of an ever-expanding spherical surface. The frequency with which the electric and magnetic fields exchange energy is the frequency of the electromagnetic disturbance.
The important thing to note from this rather over-simplistic description of electromagnetic propagation in free space is that the effective electromagnetic wave propagates spherically at this fundamental velocity c away from its source.
Notwithstanding, an observer has no way of sensing the approach of a light-pulse. He has no way of sensing when it left its source. He has no way of sensing how far it has travelled. He has no way of sensing how much time it took to reach him. He therefore has no way of sensing how fast it travelled towards him. He can only sense it when it eventually "hits" him. Even then, he has no way to sense its velocity of impact.
To acquire some idea of how "fast" light travels, an observer must use the principle of radar. He must set up a source of light that he can control. He must have a stop-clock to measure time. He must have a distant object (ideally a mirror) that can reflect the light emitted by his source. He must have a means of detecting the light reflected from the distant object. He can use his eyes, of course. However, an electronic detector will help him to make a more accurate measurement. He must know accurately the distance x that the distant object is from him. His light-source must be wired so that it automatically starts his clock when it emits a pulse of light. His light-detector must be wired so that it stops the clock immediately it detects the arrival of the light-pulse reflected from the distant object.
The observer triggers his light-source to emit a short pulse of light. At the same time, the light-source starts the clock. The light-pulse travels to the distant object (mirror). The mirror then reflects the light pulse back towards the observer. The observer's light detector detects the arrival of the returning light-pulse and immediately stops the clock. The clock reveals the amount of time t the light-pulse took to travel the distance 2x to the distant object and back again. The velocity of light is thus revealed as c = 2x/t.
This is simply an illustrative experiment. Nowadays, much more refined techniques and apparatus are used to provide more accurate measurements of the velocity of light.
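The arithmetic of this round-trip estimate can be sketched in a few lines of Python. The distance and clock reading below are hypothetical values chosen only to exercise the formula c = 2x/t, not real measurements:

```python
# Radar-style estimate of the velocity of light from one round trip.
# Both input values are illustrative assumptions.
x = 300_000.0   # known one-way distance to the mirror, in kilometres
t = 2.0013      # stop-clock reading for the round trip, in seconds

c_estimate = 2 * x / t   # c = 2x / t
print(round(c_estimate))  # prints 299805 (km/s)
```

With a genuinely known distance and an accurate clock, the same two-line calculation is all the observer needs; the experimental difficulty lies entirely in the timing, not the arithmetic.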
I proposed in my previous article that the frame of reference relative to which light travels at its universal velocity c be exclusively that of its source. So what happens in this case when light is reflected back to the original source? In whose frame of reference is the light travelling at the universal velocity c? Isn't it that of the observer who was the original source? What would be the case if the distant object (the mirror) were moving away from the source/observer at a velocity v that was a significant fraction of the velocity of light?
Let me suggest the following. When light is reflected from an object such as a mirror, the process is not the same as that of a ball bouncing off a wall. The original light is not reflected back. Instead, the original light is absorbed by the atoms of the material of which the mirror is made. These atoms then use that energy to generate new light. In other words, a mirror is itself a separate light-source that is "powered" by the energy from the incident light. This is consistent with the observation that an object rarely reflects the same colour of light that it receives. It radiates a colour that is characteristic of the material of which it is made.
If this be the case, then the outbound light pulse is travelling at velocity c with respect to the source/observer frame of reference and the return light-pulse is travelling at velocity c with respect to the distant object's (the mirror's) frame of reference. This situation is illustrated below.
When the light-pulse is emitted by the source, the mirror is at a distance x. The light-pulse travels towards the mirror. It travels at velocity c relative to the frame of reference of the source. By the time the light-pulse reaches the mirror, the mirror has travelled a further distance ½vt away from the source. The total distance travelled by the light on the outbound journey is therefore x + ½vt. The mirror absorbs the energy of the light-pulse. It uses this energy to generate another light-pulse. This new light-pulse travels in the direction of the observer. It does so at velocity c relative to the frame of reference of the mirror. By the time the new light-pulse reaches the observer, the observer has travelled yet a further distance of ½vt away from the mirror. The returning light-pulse therefore travels a distance x + vt. The total distance travelled by the outbound and return light-pulses is therefore 2x + 1·5vt. The distance x is therefore given by x = ½(c − 1·5v)t.
where t is the amount of time that has elapsed since the observer's light-source emitted the original outbound light-pulse and v is the relative velocity at which the observer and the mirror are receding from each other. This reasoning requires that the "velocity" with which wave-crests approach the receding observer is necessarily c − v, and for an approaching observer, c + v. However, this does not mean that anything is materially travelling faster than light. Nor does it mean that information is travelling faster than light.
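As a numerical check on the algebra above, the essay's formula x = ½(c − 1·5v)t can be verified in Python. The recession velocity and initial separation below are made-up values chosen only to exercise the equations:

```python
c = 299_792.458        # velocity of light, in km/s
v = 0.1 * c            # mirror receding at 10% of c (hypothetical)
x = 1_000_000.0        # separation when the pulse is emitted, in km

# Under the essay's model the total path is 2x + 1.5*v*t, covered at speed c,
# so the round-trip time t satisfies c*t = 2*x + 1.5*v*t.
t = 2 * x / (c - 1.5 * v)

# Recovering the separation from t reproduces x = 0.5*(c - 1.5*v)*t.
x_recovered = 0.5 * (c - 1.5 * v) * t
assert abs(x_recovered - x) < 1e-6
```

Note that this only confirms the internal consistency of the essay's two-frame model; it says nothing about whether that model matches observation.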
Again, although this double source-centred view appears to work, it is not - as mentioned in the previous essay - consistent with any view of gravitational phenomena. Notwithstanding, if the observer-centred Ætherial View is applied to the thought experiment above, the mathematical reasoning is essentially the same. And it is consistent with a means of explaining gravity. | http://robmorton.20m.com/science/velocity.html |
Mascoutah Takes Final Indoor Boys Meet With Win In Jersey Winter Thaw Meet, CM Is Third, SIUE Charter Takes Fourth, Panthers Eighth
ELSAH - Local boys track teams performed and placed well as Mascoutah took the team championship at the season's final indoor track meet, the Jersey Winter Thaw, held Saturday at Principia College in Elsah.
Get The Latest News!
Don't miss our top stories and need-to-know news everyday in your inbox.
The Indians won the meet with 104.5 points, with Freeburg a distant second with 59 points, Civic Memorial came in third at 45 points, East St. Louis SIUE Charter was fourth with 42 points, Quincy was fifth with 36 points, in sixth place was Rochester with 32 points, seventh place went to Waterloo with 28 points, the host Panthers came in eighth with 26 points, Auburn was ninth at 25 points and both Roxana and Highland tied for 10th place with 23 points each.
In other area team results, Granite City came in 13th with 15 points, Father McGivney Catholic was right behind in 14th place with 13 points, Carlinville finished 15th with 11.5 points, Collinsville came in 18th with 10 points, Staunton was 21st with 8.5 points and East Alton-Wood River tied for 22nd with Hillsboro, both having six points.
The 60 meters saw Mascoutah's Nathan Hippard win the race with a time of 6.93 seconds, with the Cougars' Charles Shaw second at 7.08 seconds, teammate Justin Spiller was third at 7.31 seconds, Staunton's Trace Trettenero was fifth at 7.45 seconds and Jersey's Casey Borkowski came in seventh at 7.53 seconds. Hippard also won the 200 meters, coming in at 22.88 seconds, with Spiller second at 23.87 seconds, Borkowski was third at 23.93 seconds, the Griffins' Jacob Huber finished fifth at 24.35 seconds, Granite's Shawn Rodgers was sixth at 24.42 seconds and Staunton's Ethan Rantanen rounded out the top ten with a time of 25.34 seconds.
Shaw was the winner of the 400 meters, having a time of 52.73 seconds, with Huber coming in second at 53.14 seconds, CM's Justice Eldridge was fourth at 53.84 and Nathan Oller of Staunton was seventh at 54.94 seconds. In the 800 meters, D.J. Dutton of CM won with a time of 2:06.57, with teammate Lucas Naugle second at 2:07.59, Ethan Smith of Highland was third at 2:07.97, Roxana's Wyatt Doyle was fourth with a time of 2:07.99, and Tyler Ahring of the Griffins rounded out the top ten with a time of 2:16.25.
Eric McClelland of Quincy won the 1,600 meters, coming in at 4:35.10, with CM's Jackson Collman second at 4:39.86, Highland's Dallas Mancinas was fourth at 4:46.52, Aiden Loeffelman of the Oilers came in fifth at 4:52.15 and Daniel Wilson of Granite was 10th with a time of 5:10.14. The 3,200 meters was won by Peter Taylor of Dupo, who came in at 10:32.79, with the Eagles' Jacob Cranford finishing second at 10:38.97, Highland's Avery Brock was sixth at 10:51.31 and Logan Wade of Jersey was eighth at 11:08.94. In the only hurdles event of the day, the 60-meter hurdles was won by Jackson Kern of Auburn, who came in at 8.65 seconds, while Spiller came in third at 9.07 seconds.
In the relay events, the 4x200 meters was taken by Mascoutah at 1:35.96, while Jersey was second at 1:36.31, Highland came in fourth at 1:39.23, the Warriors placed fifth at 1:39.54 and Staunton finished seventh at 1:40.03. In the 4x400 meters, Waterloo won the race with a time of 3:39.72, with CM coming in third at 3:41.28, Carlinville was fifth at 3:46.58, the Shells placed sixth at 3:48.10, Highland was ninth at 3:49.61 and Staunton rounded out the top ten at 3:53.45. Waterloo also won the 4x800 meters, coming in at 8:34.64, with Highland coming in fifth at 9:28.06, EAWR was seventh with a time of 9:44.51, McGivney was eighth at 9:48.44 and Jersey came in 10th at 9:51.26.
In the field events, Matt Pluff of Freeburg won the high jump, clearing six feet, seven inches, while the Warriors' Antonio Dean finished ninth, going over at five feet, eight inches. Rochester's Evan Alexander won the pole vault, clearing 4.10 meters, while the Cavaliers' Mason Gilpin tied for fifth with Michael Scott of Mascoutah, both going over at 3.35 meters, Jersey's Brendan Schultz was seventh with a height of 3.20 meters and Staunton's Michael Matesa tied for eighth with Christian DuPlayee of Vandalia, both going over at 3.05 meters.
The Indians' Kanoa Owens won the long jump, going 6.38 meters, while Landon Jones of the Panthers came in third with a jump of 5.97 meters, Granite City's Logan Webb came in fourth at 5.91 meters, Shaw placed fifth with a distance of 5.83 meters, Dymani Walker of the Warriors was seventh at 5.63 meters, Levi Meadows of the Shells tied for eighth with Treshaun Lancaster of Auburn, both going 5.45 meters and Roxana's Paxton Osmoe tied for 10th with Highland's Jack Nimmo, both having a leap of 5.44 meters. The triple jump was won by Keenan Powell of Litchfield, who jumped 12.74 meters, while Roxana's Evan Wells was third at 12.07 meters and Webb came in eighth at 11.35 meters.
In the shot put, the winner was Devin Habermehl of Collinsville, who had a toss of 14.31 meters, with Ashton Noble of the Shells finishing second with a throw of 13.93 meters, Weston Kuykendall of Carlinville was fifth at 12.98 meters, Sean Steinacher of Jersey was eighth at 12.52 meters and ninth place went to Josh Hodge of CM at 12.50 meters.
More like this: | https://www.riverbender.com/articles/details/mascoutah-takes-final-indoor-boys-meet-with-win-in-jersey-winter-thaw-meet-cm-is-third-siue-charter-takes-fourth-panthers-eighth-64407.cfm |
Researchers in the field of consumer cultures are interested in studying consumer choice and behavior from a social and cultural point of view, as opposed to an economical and psychological one.
In order to achieve an appropriate understanding of the processes of consumption, it is essential to consider and analyze the activity of the subjects who practice them. Moreover, it is essential to contextualize the activity of the subjects who perform their choices and behaviors inside a net of power relations that control and organize the availability of goods in the marketplace. Therefore, it is important to consider consumer practices and subjectivities as an autonomous social sphere, although strictly related to the sphere of production and commercialization.
Consumer culture researchers do not consider culture as a homogenous system of collectively shared meanings, values and ways of life. In fact, culture is characterized by a multiplicity of ways of production and distribution of meanings and values operated by different cultural groups. In this regard, the term “consumer culture” also points to conceptualizing a commercially produced system of images, texts and objects that different groups appropriate in different ways. They use these to make collective sense of their spaces, experiences and lives, communicating meanings that are often inaccessible to outsiders. Consumers become producers of culture and, in so doing, they also seek to make their own identity as individuals and as members of social groups. Therefore, the marketplace becomes a source of resources which people approach and re-signify to the extent that they can access or not access it as consumers. This opens up new possibilities of communication, as well as relationships to the commodity culture, offered by the marketplace.
In fact, as consumers actively rework and transform symbolic meanings made available by the marketplace, they can also construct their own selves and communities in opposition to commodity culture, by performing practices of consumer resistance. The ways through which specific consumers appropriate commercial brands or products to deliver messages of resistance are various, and they highlight the creative potential that can be involved in consumer practices. In this case we can talk of consumer subcultures that are often engaged in various forms of ‘guerrilla’ resistance in order to state their own ideas and consolidate their group identity. There are innumerable examples of guerrilla consumer resistance:
New Zealand resistance to GM food products is an ongoing example of consumer action. Although this particular issue is international, local resistance, allied to over seas market resistance, has succeeded in keeping GM crops from New Zealand. See “NZ’S GM stance reflects consumer resistance, markets’ needs“.
Consumer resistance can also express itself through the creative practices of everyday life and eventually consolidate in a way of living that is grounded, among other things, in the aesthetics of resistance and vehicles of new meanings.
Discussion
- Search for any other local examples of consumer resistance in the media.
- Consider the various points of view in the controversy and how the media portrays those stances. Balance the study across many media sources. | https://opentextbc.ca/mediastudies101/chapter/consumerism-and-subjectivity/ |
The City of Vancouver brand goes well beyond the beauty of its geographical magnificence; a gem nestled between the ocean and the peaks of the Pacific Mountain Range – a territory proudly tended to for centuries by the three indigenous Coast Salish Nations - the Musqueam, Squamish and Tsleil-Waututh First Nations.
Vancouver has played host to many defining moments in modern day, from Expo ’86 to the Vancouver 2010 Winter Olympic & Paralympic Games, and has built a reputation as an exceptional host with the collective experience, resources, culture and energy necessary to wow the world.
First class sport and conference facilities
First class hotels and restaurants
Consistently ranked as one of the most beautiful and liveable cities in the world
Stunning physical landscape
Vibrant, dynamic, highly populated city core
Wealth of tourism attractions, activities and nearby destinations
Canada considered one of the safest countries in the world
Number one ranked airport in the world (for 9 consecutive years)
Olympic Legacy transit system linking airport to downtown
Multicultural and welcoming citizen culture
Proven ability to harness massive numbers of enthusiastic volunteers
The legacy of the Olympic & Paralympic experience is both physical and emotional. Vancouver’s capacity to host the world has grown, and its reputation as a safe pair of hands for major sport events has been recognized by international governing bodies including FIFA, World Rugby, IIHF, DOTA and domestically by Skate Canada, Hockey Canada, Canada Soccer and Tennis Canada. | https://www.sporthostingvancouver.com/why-vancouver |
There are a lot of good reasons to vote no on Measure LL, but perhaps the best one is that the campaign to pass it is based on lies. Measure LL would repeal our current, green Landmarks Preservation Ordinance (LPO) and put in its place a loophole-laden ordinance, designed to expedite the demolition of our historic homes and neighborhoods. The fact that proponents refer to it as a landmarks preservation ordinance may be the biggest lie of all. That’s because if a developer chooses the right options among the new and confusing bureaucratic procedures for landmarking, a historic building could be cleared for demolition before the public even knows what’s going on. In effect, Measure LL provides a means to keep historic structures from being preserved.
Measure LL contains a controversial provision called a “Request for Determination.” Measure LL backers claimed that it was needed, based on an allegation that our current LPO didn’t allow property owners to obtain a determination if their properties were historic or not. That’s patently false, since under our current LPO, property owners can and do come before the Landmarks Preservation Commission for such determinations. There’s a good reason for them to do so; the Mills Act provides property tax rebates for the restoration of historic buildings. So there was never any doubt about what our LPO allowed, but determinations weren’t what Measure LL backers really wanted. They wanted what they call a “safe harbor” for the demolition of potentially historic structures—a period of time in which they would be unprotected by law. The RFD procedure provides this cover, by allowing property owners, developers, or their agents to obtain decisions that properties aren’t historic, using a process with confusing timelines and limited public disclosure. Measure LL is a stealth anti-landmarking ordinance.
How did such an ordinance come to be written? It all began with a lie. The city attorney said that our current LPO was in violation of the 1999 Permit Streamlining Act, because our ordinance didn’t provide specific timelines for environmental review. This was nonsensical, since a state law prevails over a municipal ordinance and would simply impose the required deadlines on the permitting process. Furthermore, in 2000, the state Office of Historic Preservation certified our current LPO as being in compliance with all applicable state and federal laws. But the lie took on a life of its own, and it became the premise for repealing our LPO and creating the developer-driven ordinance that became Measure LL. Ironically, the new ordinance is the one in violation of law. It violates both the California Environmental Quality Act and the Berkeley Neighborhood Preservation Ordinance. Measure LL will cost the City of Berkeley tens of thousands of dollars in lawsuits, but proponents never divulge this fact.
The campaign to pass Measure LL has been a litany of lies. Proponents call Measure LL a “compromise.” Now, if our current LPO had actually been in violation of state law, as alleged, then correcting it could not truthfully be called a compromise. It would be a necessity. But people who tell lies often forget to keep their stories straight, and so the story about Measure LL being a compromise was born. This clever façade puts a reasonable-looking face on the ruthless pro-development agenda behind Measure LL.
What city officials never disclose is that Berkeley’s stock of affordable housing lies almost completely within older neighborhoods. That’s a lie of omission, but a significant one, and it may be the key to understanding why developers want Measure LL to pass so badly. There’s virtually no room for new development in Berkeley without demolishing existing buildings, and that means affordable rental housing will be destroyed if Measure LL passes. The profitable, high-density buildings that replace small apartment buildings and single family homes will in turn lower the property values of the remaining residences.
Finally, Measure LL undermines the city’s green goals, because preservation is an integral component of our sustainable future. This is a betrayal of global proportions, because according to the Environmental Protection Agency, 48 percent of the greenhouse gases produced in the United States come from the demolition, construction, and operation of buildings. We waste less energy and fewer resources by preserving and retrofitting historic buildings than we do by constructing new buildings to replace them, even when those buildings are energy-efficient and made from some recycled materials. That’s because new buildings don’t last long enough to recoup the cost in embodied energy that it takes to construct them.
There would be no need for misrepresentations if there were even one good reason to pass Measure LL. But there are none. Vote no on Measure LL and keep our strong and green Landmarks Preservation Ordinance!
The Berkeley Architectural Heritage Association (BAHA) recommends a no vote on Measure LL. The Berkeley Green Party and the Green Party of Alameda don’t just say to vote no on Measure LL; they say, “NO! NO! NO!” The Berkeley Neighborhood Preservation Organization is the proponent of this referendum. For more information on BNPO and Measure LL, go to www.savethelpo.org.
Judith Epstein writes on behalf of the Berkeley Neighborhood Preservation Organization. | http://www.berkeleydailyplanet.com/issue/2008-10-16/article/31371?headline=LL-is-for-LLies |
The Planning Commission and City Council agendas often include zoning issues. In order to help you better understand zoning and the rezoning process, Planning Director Gary Whitaker has provided the following overview.
The Zoning Ordinance for the City of Murfreesboro establishes minimum lot size, minimum building setbacks, maximum height, and maximum density permitted for development and land use in the city. It also establishes the manner in which land can be used. The City of Murfreesboro is divided into 28 zoning districts; five of the districts overlay other districts and place additional zoning requirements beyond what would otherwise be required.
The zoning ordinance allows some land uses by right in certain districts. For instance, a single family residence is permitted by right in a district zoned for single family residences and a building permit will be issued for a single family residence if all other requirements have been met. Other uses are allowed subject to the issuance of a special use permit by the board of zoning appeals. An example of a special permit use would be a church in a single-family zoning district. A church in most instances will be compatible with adjoining residential uses if certain performance measures can be met such as adequate parking, proper access, appropriate screening, and proper placement of dumpsters.
The city's zoning ordinance makes provision for property to be zoned as a planned development. With a planned development a developer can devise a development plan for his property that may deviate from other provisions of the zoning ordinance. Basically, if it is approved, the property will be zoned for that plan and significant deviations will not be permitted unless the developer applies for an amendment to the plan. The planned development zoning alternative allows for innovation for a developer while assuring the maximum amount of protection for adjoining property owners.
Zoning is often confused with deed restrictions and restrictive covenants. Zoning does not regulate the size of a residence nor the materials with which it is constructed, as may be the case with deed restrictions and covenants. Only the owners of property who are affected by the restrictions and covenants can enforce them and the city will have no standing to enforce them.
The Murfreesboro Historic Zoning Commission is also continuously active and issues Certificates of Appropriateness. For a detailed explanation of the commission's functions, view the historic zoning section.
Any property owner may request consideration by the city for rezoning his property. This is to be expected as conditions in the community change and land is annexed into the city; it is a normal sign of a growing community. The process for rezoning property requires an application describing the proposed zoning change, the reasons for the change, and justifications for it, along with a $700 application fee, or a $950 fee for a Planned Development application.
Upon receipt of the complete application and fee, the planning commission will study the request and schedule a public hearing. Notices will be mailed to property owners within 250 feet of the property, a sign will be posted on the property, and a legal notice will be advertised in the local newspaper.
At the conclusion of its public hearing, the Planning Commission will prepare a recommendation for the City Council. The council will then conduct its own public hearing before considering the adoption of an ordinance to implement the zoning, rezoning, or annexation of a property.
The Murfreesboro Board of Zoning Appeals (BZA) is a five-member body appointed by the Mayor and confirmed by the city council. Members of the BZA and their terms are listed here. View the BZA overview.
The board hears requests for variances from the zoning and sign ordinances, requests for special use permits set forth in the zoning ordinance, and appeals from administrative decisions. The board's most recent agenda can be downloaded. The BZA meeting dates and deadlines calendar is available here. | http://murfreesborotn.gov/216/Zoning
I am an Assistant Professor of Political Science at the University of Copenhagen (2019–). Previously, I was a post-doc at New York University's Social Media and Political Participation (SMaPP) Lab (2017–19), and I completed my PhD in political science at the University of Toronto (2018). My research lies at the intersection of political behavior, public opinion, social media, and statistical methodology.
I am also a partner at Vox Pop Labs—a civic technology and public opinion research firm. We work with media organizations (e.g. Wall Street Journal, Sky News UK, CBC, ABC) to provide civic engagement applications to promote political and social science literacy. We are best known for Vote Compass, an application used by millions of voters in recent elections.
How Many People Live in Political Bubbles on Social Media? Evidence from Linked Survey and Twitter Data.
2019. SAGE Open, January-March: 1-21.
(with Jonathan Nagler, Andrew Guess, Joshua Tucker, and Jan Zilinsky)
Statistical Analysis of Misreporting on Sensitive Survey Questions.
2017. Political Analysis, 25 (2): 241-259.
Statistical software | Software vignette | Replication data
Trying to understand Jeff Flake? We analyzed his Twitter feed — and were surprised
2018. Washington Post (Monkey Cage), October 5.
(with Jan Zilinsky, Jonathan Nagler, and Joshua Tucker)
Comparing Trump to the greatest—and the most polarizing—presidents in US history
2018. Brookings, March 20.
(with Brandon Rottinghaus and Justin S. Vaughn)
mediascores
News-sharing Ideology from Social Media Link Data
Software vignette | Article in popular press | Academic manuscripts in progress
misreport
Statistical Analysis of Misreporting on Sensitive Survey Questions
Software vignette | Academic article
Vox Pop Labs is a political engagement and public opinion research firm that brings social science to the public through civic education applications. Our partners include Vox.com, the Wall Street Journal, Sky News (UK), the Canadian Broadcasting Corporation (CBC), RTL (Germany), France 24, the Australian Broadcasting Corporation (ABC), and TV New Zealand. Our applications include:
Vote Compass is an electoral literacy application that educates voters about their place in the ideological landscape and provides them with information about the positions of the parties on a wide array of issues.
The Political Sentimeter is a family of applications that fit users into a discrete set of political, social, and identity-based archetypes or classes using a mixture model fit to data from large-scale surveys.
The Signal is a Bayesian dynamic linear election forecasting application developed for the most recent Canadian federal election, which ran with the Toronto Star and L'actualité. | https://gregoryeady.com/ |
Calrec Audio, a major supplier of audio mixing consoles for sports broadcasts, will discontinue production of analog audio consoles as of Nov. 30, 2010. The decision to stop making analog consoles results from the broadcast market’s general adoption of digital technology, which has both reduced demand for analog products and made dedicated analog components difficult to come by, according to Jim Wilmer, Calrec’s director of sales for the Americas. Calrec has already stopped manufacturing its T-Series and Q2 analog consoles.
Until Nov. 30 the company is offering an opportunity to make final purchases of its S2, C2, and M3 analog consoles as well as modules and card assemblies. Beyond that date Calrec will continue to provide component spares and repair services for the S2, C2 and M3 consoles for a minimum of 10 years from shipment date. Should original components or parts become unavailable, Calrec will offer appropriate alternatives, including hardware/software solutions.
Calrec distributors will contact existing Calrec analog console customers to advise them of the details of the final-purchase offer and to assess their need for consoles, modules and cards.
Wilmer said that while the transition to digital mixers has been a global phenomenon, the U.S. market has been faster on the take-up. He cited issues such as the increasing scarcity of key analog electronics components, such as potentiometers, as well as the higher cost of unleaded components as more countries have regulated the use of lead in electronics products. “It’s happening all over, but this is a kind of milestone, really,” he adds. | http://staging.sportsvideo.org/2010/06/29/calrec-says-goodbye-to-analog-mixing-consoles/ |
At key moments in history, artists have reached beyond galleries and museums, using their work as a call to action to create political and social change. Opening December 11, 2015, the Brooklyn Museum’s exhibition Agitprop! explores the legacy and continued power of politically engaged art through more than fifty contemporary projects and artworks from five historical moments of political urgency. Agitprop! will be on view from December 11, 2015, through August 7, 2016, in the Elizabeth A. Sackler Center for Feminist Art at the Brooklyn Museum.
The term agitprop emerged from the Russian Revolution almost a hundred years ago, combining the words agitation and propaganda to describe art practices intended to incite social change. Since that time, artists across the ideological and global landscape have adopted modes of expression that can be widely reproduced and disseminated. Agitprop! will feature a full range of these materials, from photography and film to prints and banners to street actions and songs, TV shows, social media, and performances. Connecting current creative practices with strategies from the early twentieth century, these projects show artists responding to the pressing questions of their day and seeking to motivate broad, diverse audiences.
In keeping with the collaborative spirit of agitprop, contemporary artists participated in the selection of the exhibition’s content, thereby opening up the process to reflect multiple perspectives and positions. Unfolding in three waves, Agitprop! kicks off on December 11 with five case studies in early agitprop and twenty contemporary art projects selected by the Sackler Center staff. The first round of participants will each invite an artist or collective, whose work will be added to the installation beginning February 17; that second group will invite a final round of artists, whose work will be incorporated on April 6. In total, more than fifty contemporary fusions of art and political action, involving hundreds of contributors, will be exhibited.
On view throughout the length of the exhibition, the historical examples explore the various ways that political aspirations took creative form in the early twentieth century, from women as subjects and makers of Soviet propaganda to the cultural campaigns for women’s suffrage and against lynching in America, and from individual practices such as Tina Modotti’s socialist photographs in Mexico to the government-sponsored Living Newspaper productions of the Federal Theatre Project. The contemporary projects address urgent struggles for social justice since the second half of the twentieth century, including antiwar demonstrations, AIDS activism, environmental advocacy, multipronged demands for human rights, and protests against mass incarceration and economic inequality. With past and present examples installed together, links between historical and contemporary work emerge, highlighting the intergenerational strategies, ongoing development, and long-term impact of politically engaged art over the past century.
“This exhibition continues the Brooklyn Museum’s commitment to providing a platform for public dialogue around political and artistic issues and for placing contemporary work in the context of a vast collection that encourages audiences to make connections between the past and the present,” said Anne Pasternak, Shelby White and Leon Levy Director of the Brooklyn Museum. “With a tradition of building inclusive relationships with communities that are simultaneously local and global, we are uniquely positioned to support the wide-ranging conversations that this exhibition and the included artists will inspire.”
The first round of invited artists includes Luis Camnitzer (U.S./Argentina), Zhang Dali (China), Chto Delat (Russia), Dread Scott (U.S.), Dyke Action Machine! (U.S.), Friends of William Blake (U.S.), Coco Fusco (U.S.), Futurefarmers (U.S.), Ganzeer (Egypt/U.S.), Gran Fury (U.S.), Guerrilla Girls (U.S.), Jenny Holzer (U.S.), Los Angeles Poverty Department (U.S.), Otabenga Jones & Associates (U.S.), Yoko Ono (Japan/U.S.), Sahmat Collective (India), Martha Rosler (U.S.), Adejoke Tugbiyele (Nigeria/U.S.), Cecilia Vicuña (Chile) and John Dugger (U.S.), and, in a collaborative work, The Yes Men (U.S.) with Steve Lambert (U.S.), CODEPINK (U.S.), May First/People Link (U.S.), Evil Twin (U.S.), Improv Everywhere (U.S.), Not An Alternative (U.S.), along with more than thirty writers, fifty advisers, and a thousand volunteer distributors (U.S.).
Agitprop! will be accompanied by extensive programming and performances, allowing artists and thinkers to respond to current events throughout the run of the exhibition.
Agitprop! is organized by the staff of the Elizabeth A. Sackler Center for Feminist Art: Saisha Grayson, Assistant Curator; Catherine Morris, Sackler Family Curator; Stephanie Weissberg, Curatorial Assistant; and Jess Wilcox, Programs Manager. | https://artofthetimes.com/2015/10/07/brooklyn-museums-elizabeth-a-sackler-center-for-feminist-art-presents-agitprop-exhibition-features-contemporary-and-historical-art-projects-devoted-to-social-change/ |
Start with a line drawing and create the WildFlower embroidery design in 6D™ Design Creator using a variety of stitch fills, outlines, and techniques. Finishing touches are added in 6D™ Stitch Editor and 6D™ Embroidery Extra. Step-by-step working EDO & VP3 embroidery files are included if you need help.
To complete the project, use your design to embellish a Bedside Table Mat & Runner using the PDF instructions & pattern pieces. | http://angies.co.nz/wild_flower_6d_tutorial.php |
Duration of Task (2 days or 20 minutes, etc)
Allow an approximate completion time for a task to be entered. Task A is two days, task B is 20 minutes, etc. Then if I have 30 minutes of free time before an appointment, I can view all tasks with an estimated time of under 30 minutes.
28 comments
-
It would be pretty nice to have the ability to set a task's duration, with synchronization to calendars (Google Calendar, iCalendar) so it can be visualized.
-
Very much essential to have duration
-
Aravind Padmanabhan commented
A required feature. It is essential for identifying the low-hanging tasks.
-
Very important feature.
-
very important function
-
It would also be great if the TODAY list added up the amount of time that you assigned to each task, so you could see how much time you have booked already. That way you don't overbook any one day.
-
Luca commented
For now I insert duration as "hashtags" in the "note" field, such as #30m, #1h, and so on. This allows me to search tasks with a specific duration, but not with (for instance) a duration less than x, nor to know the sum of task durations within a folder or such. Please add :-)
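The hashtag workaround this commenter describes can be approximated outside the app. A minimal sketch (the task structure and tag format here are hypothetical, not Wunderlist's actual data model):

```python
import re

def parse_duration_minutes(note):
    """Extract a duration in minutes from a hashtag like #30m or #2h in a note."""
    match = re.search(r"#(\d+)([hm])", note)
    if not match:
        return None
    value, unit = int(match.group(1)), match.group(2)
    return value * 60 if unit == "h" else value

def tasks_under(tasks, max_minutes):
    """Return tasks whose tagged duration is at most max_minutes."""
    return [t for t in tasks
            if (d := parse_duration_minutes(t["note"])) is not None
            and d <= max_minutes]

tasks = [
    {"title": "Email client", "note": "follow up #15m"},
    {"title": "Write report", "note": "draft sections #2h"},
    {"title": "Call plumber", "note": "#30m before noon"},
]
print([t["title"] for t in tasks_under(tasks, 30)])
# → ['Email client', 'Call plumber']
```

This recovers the "duration less than x" search the commenter is missing, at the cost of exporting tasks to plain text first.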
-
Jawaad commented
This looks like it can be consolidated with http://wunderlist.uservoice.com/forums/136230-wunderlist-2-feature-requests/suggestions/2321802-time-tracking.
Can we put them together and add up the votes so this gets higher visibility? If not, can everyone that voted for this move their votes to the linked suggestion so the votes get consolidated manually?
-
Klement commented
A time estimate to complete a task is essential to me too. For one thing, knowing ahead of time what I am committing to eliminates the overwhelm of a long task list and enables a realistic allocation of work and personal time.
-
Linda commented
Being used to the scrum way of working, I would like to assign story points to my tasks. As this relates to the idea of estimated time, I give my vote to this feature request.
-
Marcin commented
An absolutely necessary feature. I would like to see the sum of estimated time for a day or for starred items. It would help me schedule my time more realistically.
-
sayth commented
Maybe we should be able to compile the duplicate ideas like this: http://wunderlist.uservoice.com/forums/136230-wunderlist-feature-requests/suggestions/6997565-create-a-spot-in-the-task-pane-detail-where-can-ad so that votes will be reflective of need.
I personally think this feature would be great and powerful. Imagine if you could see in Wunderlist that you have 3 tasks worth 25 hours and only 16 hours to complete them.
-
YES PLEASE!
-
Paul Hollander commented
Yes, this would be very helpful. If possible, have an option for two distinct hour 'boxes' per task: 1 box for Management hours, 1 box for Technician hours.
-
Joshua Goshorn commented
A good idea.
-
Hakim A. commented
This is really a great idea that'll make it possible for student users to organise their work more efficiently.
I, as a student who always finds it hard to keep up with homework, projects, and non-school activities, am very happy with Wunderlist as it makes me think about 'what to do', combined with a great design. (I even found myself trying to remember things I forgot, just because I really like using the app!)
By adding the option to set an estimated time, I'll be able to prioritize and organise what to do much better! (It also comes to mind that this way it would be possible to sync with your iCal calendar more precisely, as at the moment it is only possible to show your tasks in the 'daily' view of iCal.)
-
Larry A Harris commented
Estimated times are good, but bear in mind that some projects are moving targets.
-
Sounds like a bit of work, but hopefully valuable for planning.
-
Loren commented
We would like to be able to maintain an estimate of the time it will take to complete each task on a list: a sort of accountability for how much time we can allow for each task and whether or not we stayed within the estimate of total time. We would then like to see a running total of how much time has been estimated for all the tasks combined, so we can use it to plan out someone's schedule and/or hire more help.
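The running total several commenters ask for is simple arithmetic once each task carries an estimate. A minimal sketch (the field names `due` and `estimate_min` are hypothetical, not Wunderlist's actual data model):

```python
from collections import defaultdict

def booked_minutes_per_day(tasks):
    """Sum estimated minutes for each due date, to spot overbooked days."""
    totals = defaultdict(int)
    for task in tasks:
        totals[task["due"]] += task["estimate_min"]
    return dict(totals)

tasks = [
    {"title": "Report", "due": "2024-05-01", "estimate_min": 120},
    {"title": "Calls",  "due": "2024-05-01", "estimate_min": 45},
    {"title": "Review", "due": "2024-05-02", "estimate_min": 60},
]
print(booked_minutes_per_day(tasks))
# → {'2024-05-01': 165, '2024-05-02': 60}
```

Comparing each day's total against the hours actually available is exactly the "don't overbook any one day" check requested earlier in the thread.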
-
Please rewrite the subject/title of your feature request to describe what feature you want. | https://wunderlist.uservoice.com/forums/136230-wunderlist-feature-requests/suggestions/3084506-duration-of-task-2-days-or-20-minutes-etc |
Age-related hearing loss is the second-most common health condition in Australia, affecting a whopping 58 per cent of those aged over 60, and 74 per cent of people aged 70 and above. And while this alone is concerning, the bigger issue many face is the close link between losing your hearing and being diagnosed with dementia.
Estimates suggest people with mild hearing loss are twice as likely to develop dementia as those with healthy hearing, and those with severe hearing loss are five times as likely to develop the condition. Now this may sound dire, but there is hope.
Emerging evidence suggests that treating an individual’s hearing loss can actually improve their cognitive performance, with the benefits spreading also to their partner and wider family.
Although most research to date focuses on individuals as separate entities, there's untapped value in recognising people not only as individuals but also as parts of broader 'cognitive systems' involving their spouse, family, friends, and wider communities. Some researchers are beginning to view long-married older adult couples as individuals who have a lifetime of experience communicating, remembering, and thinking together.
This base of shared knowledge and experience leads to the formation of an interdependent system between the couple, where they provide crucial support for each other’s cognition. It really makes sense then that the health and wellbeing of one member of a couple can affect the health and wellbeing of the other. The effect of one’s disabilities on their spouse is known as third-party disability. Therefore, if one member of the couple has hearing loss, the negative effects of this are felt by both members of the couple.
As a baseline definition, researchers agree on three main factors of third-party disability from hearing loss: emotional impacts, communication impacts and lifestyle impacts.
Firstly, emotional impacts on the spouse can include experiences of being emotionally drained due to the requirement to adapt to the needs of their partner with hearing loss, and feeling stressed, depressed or anxious about their partner’s hearing loss. Communication impacts on the other hand can include a reduction in the willingness to engage in idle chatter with their partner. The use of a hearing device can become a contentious issue itself, making for defensive interactions within the couple.
Meanwhile, lifestyle impacts can include not socialising as frequently due to the partner with hearing loss becoming reluctant to go out (as they cannot hear others well), and the partner with hearing loss not wanting to be separated from their spouse who ‘helps them hear’.
Another area that hasn't been considered previously is the cognitive impact on the spouse, which is thought to include a larger cognitive load from needing to think for their partner, such as keeping them up to date in conversations. The impact of greater strains in emotional, social, and lifestyle spheres is likely to also reduce cognitive capacities, and increase vulnerability to mental and physical health problems, cognitive decline, and possibly even dementia.
A key point to remember is that there are two different ways to view our hearing: audiometric performance (how well you perform on a hearing test requiring you to detect different frequencies) and functional performance (how well you can hear in your everyday life).
A person can perform well on an audiometric test but may struggle to hear in everyday life, such as in conversations or when someone is talking in another room. A big predictor of how much hearing loss impacts your life with your spouse is not how well you hear; it's how well you and your partner think you hear. Common practice has been to wait until audiometric results are poor before suggesting hearing aids, but we instead urge that treating hearing loss at the first sign of functional impairment offers better protection against cognitive decline.
Third-party disability and the risk for dementia can both be reduced by addressing the underlying hearing loss. This can be done easily by either using a hearing device, or attending classes as a couple that teach communication skills to help overcome hearing loss difficulties, such as Group Audiological Rehabilitation.
Unfortunately, a major barrier to people receiving this care is the stigma that hearing loss is shameful and should be hidden. As a result, only one in five Australians who could benefit from a hearing aid actually use one. It begs the question: if you had high blood pressure, you would take medication, right? So if you have hearing loss, why not use hearing aids? They can have a major impact on a couple, and not in a bad way.
Studies that have tracked couples before and after one member received hearing aids have shown benefits across mental health, communication and lifestyle factors for both people in the relationship. The good news is eligible individuals can access subsidised hearing aids through government support programs. This includes people who have a pensioner concession card.
This article was co-written by Gabi Picard and Viviana Wuthrich from the Centre for Ageing, Cognition and Wellbeing and Macquarie University.
IMPORTANT LEGAL INFO This article is of a general nature and FYI only, because it doesn’t take into account your personal health requirements or existing medical conditions. That means it’s not personalised health advice and shouldn’t be relied upon as if it is. Before making a health-related decision, you should work out if the info is appropriate for your situation and get professional medical advice. | https://startsat60.com/media/health/treating-hearing-loss-dementia-couples-relationship |
The ocean is a huge body of water that is home to many amazing animals and plants and hosts important abiotic processes. It also helps control weather patterns and serves as a major food source for humans and other organisms.
Animals that live in the ocean include fish, krill, shrimp, jellyfish, and coral. They are all important to our planet’s ecosystem, and some even play an important role in the way we eat.
Some of the most interesting animals that live in the ocean are sharks, sea turtles, and dolphins. You’ll find these creatures in waters all over the world, from Australia to New Zealand.
Many marine organisms make their own food by combining sunlight and carbon dioxide through photosynthesis. They also provide shelter for other animals and are a vital part of the food chain.
Algae (phytoplankton) is another important group of marine organisms. This group of free-floating algae helps supply oxygen to the planet and is a great food source for smaller fish, like sardines, salmon, and herring.
Plants in the ocean are also very important, and can help balance out the acidification of the water. They use a combination of sunlight and carbon dioxide to create organic matter that is used as food for other animals, such as krill.
Scientists don’t know how the ocean will respond to acidification, but they’re trying to find out through controlled laboratory experiments with various species of animals. They’re looking at their behavior, energy use, immune response and reproductive success to see how they might fare under more acidic conditions.
Currently, the ocean is 30 percent more acidic than it was 200 years ago. That's a faster change in ocean chemistry than any known in the geologic past.
The rate of change is accelerating because of the increase in greenhouse gas emissions. This is called ocean acidification and it’s caused by human activity.
This is a problem because more acidic ocean water can be hard for shell-building animals to survive in. It's harder for them to build their skeletons, and acidification can also slow their reproduction and growth, which can have serious consequences for the ecosystem.
Other threats to the ocean are rising temperatures and changing nutrient levels. These changes can harm fish and other marine life, such as corals, which can become brittle, causing them to break off or erode faster.
The ocean is a huge body of water, and it affects almost every aspect of our planet’s climate. It also is a critical resource for humans, providing food and water as well as valuable resources such as oil, minerals and natural gas. | https://earthwatch2.org/lff/terry/2023/03/the-importance-of-the-ocean-3/ |
Science Stories: The Art of Scientific Storytelling
Science Stories showcases the impressive literary work done by graduate students who participated in the first run of “The Art of Scientific Storytelling,” a new course I taught in Spring 2020 at the University of Arizona. The course, developed in collaboration with my Creative Writing program colleague Christopher Cokinos, was eligible for credit in the new Graduate Certificate in Science Communications offered by the College of Science. Its aim was to inspire creative works that were science-smart, works that might enhance science literacy among readers. The class read contemporary writers who covet the perspectives of science and the personal stories of scientists who write for non-technical audiences. We read memoirs, essays, op-eds, and poetry. We read works inspired by chemistry, astronomy, paleobiology, traditional indigenous knowledge: Primo Levi, Hope Jahren, Robin Wall Kimmerer, Kathleen Jamie, Alan Lightman, Gary Paul Nabhan, and Maggie Nelson, among others. The students came from a range of disciplines including optical science, astronomy, geography, climate adaptation, hydrology, mathematics, speech pathology, and creative writing. The conversations were rich and the talent abundant. They surprised me each week with their inventive and insightful takes on writing assignments. We offer this showcase of our experiment in the meeting of art and science.
Living in the shadow of Volcán de Fuego, in central Guatemala, some days begin under a thin blanket of ashfall. The impossibly soft gray powder shrouds everything downwind of it about once a month. This ash, farmers tell me, is what gives Guatemalan coffee its rich taste and makes it one of just a few viable export crops for locals in the market. Locals here also host hiking tours for sightseers to marvel at Fuego’s novelties up close: the rotten smell of sulfur; even roasting marshmallows in its glowing cracks. During the years I lived nearby, I often scaled its sister summit, Acatenango, to take in the view beyond the smoking cone: villages tumbling across lush valleys in every direction, without considering that some of them could soon be leveled by landslides and buried in earth.
Some days, like June 3rd of 2018, the ash overwhelms, spewing from a massive eruption and curdling into pyroclastic flows. It melts tires to the ground and destroys any vestige of a coffee plant, or a milpa crop, or a shelter.
Vexing images of the 2018 eruption diffused widely through news and social media, presented with the mixture of human empathy and fascination often characteristic of such disasters when they make headlines. Hundreds of people were killed and a few towns razed completely that day. Briefly, the world turned its attention, and in a few cases aid, to Guatemala.
In addition to swift and horrific destruction on the ground, eruptions like Fuego can release hundreds of thousands of tons of toxic sulfur dioxide gas, SO2, into the atmosphere. Historically, volcanic aerosols have long disrupted atmospheric and climactic patterns. They’ve proven capable of causing prolonged winters and global famines by obstructing the sun. Until recent decades, these dramatic events were the most prolific force on the planet doing so. Today, they make up for less than 1 percent of aerosol emissions. Without fanfare, the world over, human-designed chemical combustions leach sulfur dioxide to the same and worsening effects.
Brimstone, sulfur, like that which gave Fuego its characteristic odor, is an omen older than the bible—evoking inferno in our lexicon long before uranium or plutonium. Though like the other most infamous elements, we hadn’t known its full destructive power until we tried to domesticate it. When combined with ubiquitous hydrogen and oxygen, sulfur excels at extracting other minerals from their casings. We repeat this process to make sulfuric acid, lifeblood of late capitalism; over 180 million tons of it a year. Production of the stuff is a “reliable indicator” of a nation’s industrial strength, according to economists. The U.S. produces more sulfuric acid than any other chemical, using it to manufacture pharmaceuticals and insecticides and antifreeze. To refine oil and reduce aluminum. To create the substances that give us domain over other elements and mediums to make our mark on this world—paints, inks, explosives, cement, gunpowder. Sulfur is a primary ingredient in the fertilizers that enable modern agriculture.
Distinct from those of volcanoes, the consequences of these industries go largely unnoticed at the time they are produced. But they have even more substantial ramifications for humans and their ecosystems over time. There’s a crucial difference between destruction by natural causes and destruction by human-made causes: the byproducts of the latter generate wealth—for some.
These, instead, are tragedies that require our active participation. The government-subsidized industrial farms in the U.S. powered by sulfur-based fertilizers and agrochemicals that provide access to cheap produce from anywhere in the world, regardless of the season, have their thumbs on the scales of the global food system. Guatemala is now forced to import most basic food items at a high premium, despite the region’s long history of subsistence farming, fertile growing conditions, and the fact that over a third of its workforce is employed in the agricultural sector. Driven away from autonomous food production in order to compete through more fickle export crops, farmers are left vulnerable to market and policy shifts, often losing their land to larger operations.
Climate change effects caused by related trends toward industrialization are making the rainy season come much later in Guatemala, decimating the harvests of subsistence crops that do remain. The sulfur dioxide gases emitted by largely foreign-owned megaprojects, livestock operations, and factories here eventually fall somewhere as acid rain, which can create eerie dead zones out of entire ecological communities. Maquiladoras and mines throughout Mexico and Central America introduce alarming amounts of pollution into local water sources while extracting vital resources from the land and its inhabitants for export goods. Many of these mines in fact profit from collecting the very sulfur found in the region’s volcanic deposits.
There are no aid campaigns like the 2018 relief effort to support the staggering number of Guatemalans whose lands and livelihoods have been eradicated in these more gradual ways. They are unable to produce an exact record of their losses, nor pinpoint a particular moment of catastrophe like an eruption to which we might respond. Because it is much more difficult to publicize the complicated and tedious disasters unfolding every day, and because they are often the same processes providing our modern conveniences, we usually don’t respond.
Locals have grappled with living alongside volcanoes for thousands of years. What’s new are outsider interventions and extractive industries that do not present a rotten sulfur scent, nor a regular shower of ash for warning. But like the magma in Fuego’s lower chambers, these processes are building deadly pressure. There are other crucial differences between destruction by natural causes and destruction by human-made causes. The latter can be prevented, if we choose to act in response to current and coming catastrophes.
Header image by fboudrias, courtesy Shutterstock.
Morning sickness is one of the first signs of pregnancy. It is caused by the pregnancy hormone human chorionic gonadotrophin (hCG), which rises after the fertilized egg implants in the uterus. It occurs in the first trimester, starting around the sixth week of pregnancy and usually lasting until the 12th to 15th week. Its symptoms include nausea, vomiting, loss of appetite, and distress at the smell of food. Most pregnant women experience it, usually in the morning, although it can happen at any time of day.
How Do I Know If I Have Morning Sickness?
Morning sickness is a condition of pregnancy characterized by nausea and vomiting, mainly in the morning, though it can occur at any time of day. It is an early sign of pregnancy, and about 90% of pregnant women experience it.
Morning sickness is not an illness. It begins in the first trimester, around the sixth week of pregnancy, and ends around the 12th to 15th week. It can appear and disappear at any time during the day and is marked by the following symptoms:
- Nausea that can be mild or severe
- Vomiting
- Lightheadedness
- Feeling sick at the smell of certain foods, or of food in general
- Lethargy or weakness throughout the day
- Aversion to a particular food
- Reduced appetite or loss of appetite
- Swelling or tenderness in the breasts.
These symptoms in a pregnant woman are suggestive of morning sickness. The following scale represents the severity of the condition:
Very Minor – It does not cause much sickness, but it definitely upsets the stomach, causing slight nausea that settles down within an hour or so.
Minor – It happens mainly in the morning. The smell of food may kill the appetite temporarily. The pregnant woman may feel like vomiting, but it does not happen.
Standard – A woman with standard morning sickness may feel sick daily but feels better as the day passes. It is recommended to avoid cooking certain foods, as their smell may trigger nausea. It sometimes induces vomiting as well, but she gradually feels better and can keep food down.
Moderate – It appears nearly every day and induces vomiting. The woman feels relieved on days when she does not vomit. Vomiting may keep her from eating for a few hours. It is most common in the morning and gradually subsides by the afternoon.
Bad – The nauseating feeling remains throughout the day, with vomiting as often as twice a day. The woman has trouble keeping food down and loses her appetite at the smell of certain foods, which can also cause vomiting or dry heaving; even the thought of eating those foods can trigger vomiting.
Severe – Severe morning sickness is characterized by vomiting four to five times a day, weakness that keeps the woman in bed the whole day, inability to keep food down, nausea that causes loss of appetite, dehydration, and a feeling of fever. It requires immediate treatment.
Morning sickness is similar to other stomach-related illnesses, so it is worth ruling out certain symptoms that distinguish it from stomach flu. The following symptoms point to stomach flu or another stomach-related illness rather than morning sickness:
- Watery stool
- Presence of blood in the stool
- Fever
- Headaches
- Abdominal cramps
- Body pain or muscle aches
Conclusion
Morning sickness is a normal sign of pregnancy, present in the first trimester for most pregnant women. It is not an illness. It appears mostly in the morning and may cause nausea, vomiting, and loss of appetite. If you have the symptoms discussed above, they may signal that you have morning sickness.
Also Read:
- Home Remedies For Morning Sickness
- Effectiveness of Acupuncture in Treating Morning Sickness
- What is Hyperemesis Gravidarum & How is it Treated? | Difference Between Morning Sickness and Hyperemesis Gravidarum?
Fallout: The Board Game
Fallout is a post-nuclear adventure board game for one to four players. Based on the hit video game series by Bethesda Softworks, each Fallout scenario is inspired by a familiar story from the franchise. Survivors begin the game on the edge of an unexplored landscape, uncertain of what awaits them in this unfamiliar world. With just one objective to guide them from the very beginning, each player must explore the hidden map, fight ferocious enemies, and build the skills of their survivor as they attempt to complete challenging quests and balance feuding factions within the game.
As they advance their survivors' stories, players will come across new quests and individual targets, leading them to gain influence. Who comes out ahead will depend on how keenly and aggressively each player ventures through the game; however, if a single faction is pushed to power too quickly, the wasteland will be taken for their own, and the survivors conquered along with it.
Character Creation
Throughout each Fallout game, players will develop deeply unique survivors with which to adventure. At the core of the survivor board is the S.P.E.C.I.A.L. skill system, breaking up the survivors' abilities into seven main categories: Strength, Perception, Endurance, Charisma, Intelligence, Agility, and Luck. Each survivor will begin with two base abilities—one determined by their character type and another drawn at random. Depending on the quests pursued, enemies engaged, and encounters attempted, these skills may come in handy as the players progress throughout the game.
To gain new skill tokens, players must level up. The number of skills a player has collected will determine how quickly they are able to level up again, a function tracked beneath the S.P.E.C.I.A.L. tile slots. The peg, however, will only move to spots which fall under fulfilled skills, so the more skills a player has collected, the longer it will take them to advance to another one.
Additionally, if a player's leveling up results in drawing a skill they have already obtained, they are given the opportunity to claim a perk in that area instead, having already trained the skill. Perks are powerful, one-time-use abilities, such as collecting a piece of revealed armor for free or moving to another location on the map without following normal movement rules. While the perks may only impact the survivor's game once, used wisely, they can turn the tide of a game very quickly.
Also on the survivor board are three token slots, which may indicate positive or negative traits of a player. Some of these tokens can be acquired and changed while others will impact a player's entire game. Each of these tokens may force certain events should the inflicted survivor attempt to complete a quest or encounter, though whether the result is positive or negative can vary.
Along the bottom of this board is the all-important health track, though with one important difference from many others: radiation. The right side of this track represents HP, while the left tracks a survivor's accumulated radiation (or rads). While HP in many games has to hit zero for a player to lose their life, in Fallout, survivors must keep a close eye on both HP and radiation. If the two pegs would ever occupy the same space or cross along the track, the survivor will die, respawning the following turn (a few possessions lighter) back at the starting space.
The survivor board is also surrounded by spaces for the player to keep their other accumulated items, such as armor, weapons, companions, and more. As each player collects, equips, and sells items, the identifying characteristics of their survivor will change, possibly having a significant impact on their play strategy going forward. Thus, while many elements of the game are shared from player to player, each survivor board is wholly unique, and will seldom, if ever, be exactly the same from game to game.
Scenarios and Quests
Though character creation is a significant piece of the Fallout experience, at the heart of the game are the quests and encounters that tell each scenario's story. At the beginning of the game, the players will choose one scenario which determines the map layout, warring factions, and major questline of that game. Once the scenario begins, players will cycle through their encounter decks and quest objectives, populating them from the 159-item card library.
The main quest in each scenario is driven by the conflict between two warring factions. With each card progressing the main quest will come opportunities for survivors to make tough decisions, both moral and mechanical. One option may seem like the "right" choice, but the other quest marker may be easier to reach, or more appealing for the sake of personal gain (or chaos). Throughout the game, players will be given more concrete ways to side with each faction or to benefit from their advancement. Regardless of a player's alliances or lack thereof, the factions are a constant presence and threaten to outpace the group of survivors at every turn.
In addition to the main quest, players will encounter a variety of side quests and encounters. The unique opportunities those cards present introduce genuine replayability to the game. Though the major quest of each scenario remains the same, the path the players take will determine which cards they see in any given game. Multiple starting cards feature the same text but varied results, offering players the chance to make a choice they've made before yet face a different outcome. Much like the survivor board, between this breadth of cards, the modular game board, and unpredictable opponents, players will never see the same game twice.
Exploring the Vaults
One beloved narrative element in the Fallout universe is the vaults, built by Vault-Tec in preparation for a nuclear war. Though heralded as a safe haven from the post-war radiation, many of these vaults were used to conduct nefarious societal experiments. In the board game, such vaults can be found around the map, offering players a chance to interact with these twisted communities.
When a vault is first discovered, players will create a new encounter deck, allowing any survivor who comes in contact with the vault to interact with whatever lies behind the vault door. Though the vaults are not always a core piece of a scenario's narrative, each one provides a unique and limited player experience within the game. When a survivor decides to engage with a vault, they will have unique opportunities to become ingrained in the vault's story, making difficult decisions and reaping the rewards or suffering the consequences of their choices.
Winning the Game
As each survivor progresses through the game, they will advance both their own skills and the two faction objectives, though the ultimate goal for all players is to collect a certain amount of influence. Influence cards are distributed, one to each player, at the beginning of the game, though more can be collected by completing certain quests and encounters. Each influence card grants one influence point inherently, but they also present specific conditions under which the player holding the card can gain even more points. These could include pledging loyalty to one of the two factions or reaching certain milestones, such as collecting a certain number of S.P.E.C.I.A.L. tokens or revealing a particular number of map tiles.
The number of influence points required to win the game is dependent on how many players are participating, which means that if two players reach that number simultaneously, they can win the game together. At the same time, each faction's power across the wasteland is tracked on the scenario card, and should they advance too far in their own goals, the game may end without a player win at all. Thus, players may be forced to work together, go their separate ways, or try to balance both teamwork and independence equally in order to get the outcome they desire. This presents a unique opportunity to experience the wasteland as more than just the Lone Wanderer, but one of a few ambitious adventurers.
Welcome to the Wasteland
Fallout brings the immersive and in-depth experience of the beloved video game franchise to the tabletop, including many familiar elements while maintaining a unique narrative of its own. From canine companions to frightful enemies, new and old fans alike will get to interact with some of Fallout's most friendly and ferocious wasteland inhabitants. From handfuls of loot to powerful armor, players will be offered the opportunity to collect and customize, building a character entirely unlike their opponents' across the table. With endless choices to make and countless locations, objects, friends, foes, and quests to encounter, both seasoned veterans of the video game and all-new Fallout fans will find everything that's made the digital experience so enthralling in this board game adaptation, with a couple unique twists to boot.
2 – 3 hours
1 – 4 players
Ages 14+
©2020 Bethesda Softworks LLC, a ZeniMax Media Company. All Rights Reserved.
Ready to add some style to a wall in your home using paint? If so, try adding a fun accent wall using paint and Frog Tape! I have painted my whole house using this tape, including a few accent walls, and it always does a great job! In fact, we’ve tried most popular brands of painter’s tape, and I seem to keep coming back to Frog Tape to achieve crisp paint lines. Why paint an accent wall? Accent walls are one of my very favorite ways to decorate a space. It’s such a simple yet frugal way to change the...
The structure and behavior of proteins play an overarching role in determining their function in biological systems. In recent years, proteins have also been proposed as a basis for new materials to be used in technological applications (Langer and Tirrell, Nature, 2004). It is known that protein crystals show very interesting mechanical behavior, as some of them are extremely fragile, while others can be quite sturdy. However, unlike other crystalline materials like silicon or copper, the mechanical properties of protein crystals have rarely been studied by atomistic computer modeling. As a first step towards a more fundamental understanding of the mechanics of those materials, we report atomistic studies of the mechanical properties of protein crystals using empirical potentials, focusing on elasticity, plasticity, and fracture behavior. Here we consider the mechanics of a small protein, α-conotoxin PnIB from Conus pennaceus. We use large-scale atomistic simulations to determine the low-strain elastic constants for different crystallographic orientations. We also study large-strain elastic properties including plastic deformation. Furthermore, we perform systematic studies of the effect of mutations on the elastic properties of the protein crystal. Our results indicate a strong impact of mutations on elastic properties, showing the potential of mutations to tailor mechanical properties. We conclude with a study of mode I fracture of protein crystals, relating our atomistic modeling results with Griffith's theory of fracture.
Preparing for an Audit in a Virtual Environment
Angela Lawrence, Quality Control Coordinator at Klatzkin, contributed to this post.
The COVID-19 pandemic has affected nearly every aspect of daily life, so preparing for your company or nonprofit organization’s audit may have some significant differences this year, considering the remote environment that many of us are working in today. This post updates a previously published article regarding the important parts of preparing for an audit to account for the adjustments that might have to be made in light of the new remote work environment, including:
- Communication. Contact your auditor early and start establishing key dates and deadlines, particularly if business closures or adjusted work hours will affect the audit. After the audit begins, ask for regular updates, including during the fieldwork stage. If issues come up or the auditor needs additional documentation, you can see to them promptly. Communication will be as important as ever, before, during, and after your audit, especially if there’s a chance you may not see your auditor in person.
- Preparing Your Documents. Your auditor should give you a list of needed items or schedules and documents for you to prepare. You can start to compile some of this before your year-end, such as board minutes, contracts, and receipts. Examine the prior year’s adjustments with your auditor before closing out the books so you can make any additional necessary journal entries first. Go over the items-needed list to determine whether any items are no longer required, or whether something new came up during the year that’s not reflected on the list. In the past you may have provided paper documents to your auditor; now, that may not be an option. Scan or obtain PDF copies of your documents and speak to your auditor about the best and safest way to transfer them electronically.
- Fieldwork. Some audits may require fieldwork, on-site inspections, or facility walk-throughs. Due to COVID-19 restrictions, this may be difficult or even impossible to do in person. Consider doing a virtual inspection, where the auditor can inspect a facility or view documents through a live stream. If the auditor needs to be on-site, take all necessary precautions and follow safety protocols. Keep in mind any workplace capacity restrictions or remote working schedules when the auditor is visiting, and be aware that they may need access to certain individuals who might not be there all the time. Make sure everyone is on the same page in terms of scheduling.
It’s essential to work together to ensure an efficient and productive audit, whether conducted in-person or virtually. Working with your auditor to begin early preparations can facilitate the process.
Contact Us
If you have any questions regarding virtual audits or need assistance with a tax or accounting-related issue, Klatzkin can help. For additional information, click here to contact us. We look forward to speaking with you soon.
©2021 Klatzkin & Company LLP. The above represents our best understanding and interpretation of the material covered as of this post’s date and does not constitute accounting, tax, or financial advice. Please consult your advisor concerning your specific situation.
Attending the American Speech-Language-Hearing Association (ASHA) Convention was a meaningful and extremely worthwhile experience as an aspiring Speech-Language Pathologist (SLP). An estimated 20,000 people from many different professions united around two shared interests: speech and hearing. Dr. Gregory and I presented our poster “Cultural Humility: Examining Microaggressions to Improve Clinical Encounters” on the first day of the three-day convention. Our poster was displayed among hundreds of researchers’ work on various topics including telepractice, cognitive disorders, health literacy, language in infants, and others focusing on different aspects of our career path. During our presentation time, professionals from different regions of the world shared their perspectives on microaggressions in the workplace and everyday life.
We were a part of relevant and sincere conversations that taught us new things about personal biases while allowing us to provide research that benefits all individuals within a community. A key takeaway from our presentation was the number of people who identified “self-evaluation” as posing a huge barrier to achieving cultural humility. Self-evaluation is the ability to reevaluate and alter personal biases with the willingness to explore and appreciate a culture for what it is. Many people we spoke with addressed microaggressions as a “sensitive topic,” which placed even more emphasis on the importance of talking about how they impact our clients.
As we take all considerations and critiques away from our experience at ASHA, the next step is to implement our poster and checklist in local university clinics. Additionally, we would like this checklist to be feasible for practicing clinicians and professors to introduce in their coursework. As we try to spread awareness of the impact of microaggressive attitudes on a national level, it is equally important to ensure that we uphold the same beliefs here in our community. As we continue to focus on cultural humility, the survey and qualitative interviews on cultural competence and humility are underway as we work toward building questions that will give us reliable and significant results. The aim of this study is to identify undergraduate and graduate student experiences with microaggressions during clinical experiences.
National Bishop Invites Church Into Prayer For Truth And Reconciliation Commission’s Closing Events
In a letter issued today, Evangelical Lutheran Church in Canada’s (ELCIC) National Bishop Susan C. Johnson invites the church, into prayer for the closing events of the Truth and Reconciliation Commission and for renewal in our commitments to healing and reconciliation.
The text of Bishop Johnson’s letter follows. A pdf version of the letter can be viewed here: http://www.elcic.ca/Documents/201505TRC.pdf
May 20, 2015
Dear friends in Christ,
Canada’s Truth and Reconciliation Commission (TRC) is holding its closing events in Ottawa, May 31-June 3, 2015. I am writing to invite you into prayer for the closing events and for renewal in our commitments to healing and reconciliation.
I begin by acknowledging the survivors of residential schools and their families who continue to live with the legacy of this tragic chapter in Canadian history. I offer my prayers for your continuing courage, strength, wisdom and healing. And I offer my prayers for all of us as we engage together the work of promoting right and renewed relationships.
For more than 120 years, tens of thousands of Indigenous children were sent to Indian Residential Schools funded by the federal government and run by the churches. They were taken from their families and communities in order to be stripped of language, cultural identity and traditions. Canada’s attempt to wipe out Indigenous cultures failed. It left an urgent need for reconciliation between Indigenous and non-Indigenous peoples. We also remember the over 4,000 children who died while attending these schools.
For the last 6 years, the TRC has been listening to the stories and gathering the statements of survivors of the Indian Residential Schools and anyone else who feels they have been impacted by the schools and their legacy in order to hear and document the truth of what happened. The TRC has also been considering what is required for reconciliation. While the work of the TRC is concluding, the recommendations of the TRC will be a new call to form more respectful, just and equitable relationships.
Our church is committed to participating in an ongoing process of finding truth and reconciliation. It is our hope that the sincerity of our covenant will be demonstrated in our actions and in our attitudes. We understand this to be both an urgent and a long-term commitment.
There are a variety of ways that you can engage this present moment:
- Pray. For survivors and their families, for the work of reconciliation, and for new understanding.
- Get involved. KAIROS Canada has prepared resources to encourage engagement in this “Time for Reconciliation.” Events are being planned for both Ottawa and across the country. Activities have been identified that can be done anywhere, including worship resources, planting a heart garden and watching livestream. (http://www.kairoscanada.org/events/time4reconciliation/)
- Make a commitment. Our Full Communion partner, the Anglican Church of Canada, has invited the Church into 22 Days of prayer and renewal in our commitments to healing and reconciliation among all people. (22 Days website and #22days) These 22 Days will take us to the National Aboriginal Day of Prayer, on Sunday, June 21st.
- Attend events. If you aren’t able to attend the closing Ottawa event, consider attending a regional event. The KAIROS website (www.kairoscanada.org/events/time4reconciliation/local-events/) has a list of events and check out whether there are other events being held in your area.
- Engage in the work of our church. The ELCIC National Convention will hear from TRC Commissioner Marie Wilson and will consider a resolution to repudiate the Doctrine of Discovery. In the coming weeks, you will see more details in the ELCIC’s Countdown to Convention e-newsletter and online at www.elcic.ca.
- Find out more. You can learn more about the TRC process and its recommendations at www.trc.ca.
Our Lutheran tradition teaches us that reconciliation is a gracious and precious gift from God our Creator. For true reconciliation to happen the Creator must stir hearts. It is the Creator who opens eyes and ears and souls that we may have the courage to speak truth, the patience to listen, the wisdom to confess and the humility to show respect. It is the Creator who calls us to hope for a better future and for a healing journey that will bring us to true community.
We will need to draw on many spiritual resources to make this journey. I pray that everyone will find appropriate spiritual and community support.
In these words from St. Paul to the Romans, we hear a call to humility, an invitation to listen, and a sign of hope for reconciliation.
Let love be genuine; hate what is evil, hold fast to what is good; love one another with mutual affection; outdo one another in showing honor.
(Romans 12:9-10)
Yours in Christ,
AUSTRAC accepts enforceable undertaking from National Australia Bank
AUSTRAC has accepted an enforceable undertaking from National Australia Bank (NAB) to uplift its compliance with Australia’s anti-money laundering and counter-terrorism financing (AML/CTF) laws.
The action follows an AUSTRAC enforcement investigation which identified concerns about NAB’s AML/CTF program, systems and controls.
AUSTRAC identified non-compliance in targeted compliance assessments, as well as through self-disclosures from NAB. AUSTRAC notified NAB of the formal enforcement investigation into five NAB reporting entities in June 2021, following ongoing regulatory engagement. The entities are National Australia Bank Limited, JBWere Limited, Wealthhub Securities Limited, Medfin Australia Pty Ltd, and AFSH Nominees Pty Ltd.
NAB has undertaken to implement a comprehensive remedial action plan, which will see improvements to its systems, controls and record-keeping, including:
- the NAB designated business group AML/CTF Program
- applicable customer identification procedures
- customer risk assessment and enhanced customer due diligence
- transaction monitoring
- governance and assurance.
AUSTRAC will monitor NAB’s progress to ensure that actions are taken within the timeframes, and maintain regular, ongoing discussions with NAB. An independent auditor will report to AUSTRAC annually on progress, with the final report to be provided to AUSTRAC by March 2025.
AUSTRAC formed the view at the start of the investigation that a civil penalty proceeding was not appropriate at that time. AUSTRAC has not identified any information during the investigation to change that view.
AUSTRAC Chief Executive Officer, Nicole Rose, said that the enforceable undertaking aims to ensure that NAB continues with its remediation programs to uplift its compliance and combat the risks of serious and organised crime.
“National Australia Bank has demonstrated a commitment to uplifting its AML/CTF controls, and has undertaken significant work identifying and implementing improvements to its programs.
"NAB has worked collaboratively with AUSTRAC throughout the investigation, and this enforceable undertaking will help to ensure NAB meets its compliance and reporting obligations,” Ms Rose said.
AUSTRAC regulates and collaborates with the financial services sector to build resilience, improve risk management and ensure they have appropriate systems in place and to help them identify, track and disrupt criminal exploitation of the financial sector.
“All AUSTRAC regulated businesses have a responsibility to have measures in place to protect the community from serious and organised crime. When these obligations are not met, AUSTRAC will not hesitate to draw on our range of regulatory tools and enforcement powers to maintain public confidence in Australia's financial system,” Ms Rose said.
In 2021, AUSTRAC released the Major banks in Australia risk assessment which found the overall money laundering and terrorism financing risk for the industry is 'high'. The report provides information and guidance to support major banks to assess their level of risk, strengthen their controls and report suspicious activity to AUSTRAC.
The enforceable undertaking and accompanying remedial action plan is available on the AUSTRAC website.
AUSTRAC’s regulatory approach
AUSTRAC uses a range of regulatory tools and powers to ensure compliance. Interactions are tailored based on the level of risk posed by the entities we regulate and their circumstances, and range from education and collaboration, through to regulatory interventions and enforcement.
AUSTRAC enforcement powers include:
- issuing infringement notices
- issuing remedial directions, which require a reporting entity to take specified action to ensure compliance
- accepting enforceable undertakings detailing the specific actions a reporting entity will commence or cease in order to comply with the AML/CTF Act
- seeking injunctions and/or civil penalty orders in the Federal Court.
AUSTRAC does not provide public commentary on individual reporting entities’ compliance.
Details of the consequences of not complying are available on the AUSTRAC website.
About AUSTRAC
AUSTRAC (the Australian Transaction Reports and Analysis Centre) is the Australian Government agency responsible for detecting, deterring and disrupting criminal abuse of the financial system to protect the community from serious and organised crime.
Through strong regulation and enhanced intelligence capabilities, AUSTRAC collects and analyses financial reports and information to generate financial intelligence.
405 ILCS 80/ - Developmental Disability and Mental Disability Services Act.
Article IV
(405 ILCS 80/Art. IV heading)
(405 ILCS 80/4-1) (from Ch. 91 1/2, par. 1804-1)
Sec. 4-1. The Department of Human Services may provide access to home-based and community-based services for children and adults with mental disabilities through the designation of local screening and assessment units and community support teams. The screening and assessment units shall provide comprehensive assessment; develop individual service plans; link the persons with mental disabilities and their families to community providers for implementation of the plan; and monitor the plan's implementation for the time necessary to insure that the plan is appropriate and acceptable to the persons with mental disabilities and their families. The Department also will make available community support services in each local geographic area for persons with severe mental disabilities. Community support teams will provide case management, ongoing guidance and assistance for persons with mental disabilities; will offer skills training, crisis/behavioral intervention, client/family support, and access to medication management; and provide individual client assistance to access housing, financial benefits, and employment-related services.
(Source: P.A. 99-143, eff. 7-27-15.)
(405 ILCS 80/4-2) (from Ch. 91 1/2, par. 1804-2)
Sec. 4-2. Expenditures for services under Article IV of the Act shall be subject to available appropriations.
The ADNOC 3D survey, currently being acquired by BGP Offshore, is a large-scale seismic acquisition project offshore Abu Dhabi in water depths from zero to 30m. Differing seabed conditions and bathymetries introduce enormous challenges for seismic survey design, large-volume data handling, operational organization and HSEQ, not least because of the Covid-19 pandemic. This paper focuses on how BGPO has developed an integrated solution to address the challenges such a large project presents in terms of equipment, key acquisition techniques, and HSE measurement and control.
Introduction
Since the first shot in January 2019, BGP Offshore has been acquiring OBN seismic data for ADNOC offshore Abu Dhabi, covering an OBN area of 30,000 km² and a TZ area of approximately 10,000 km². Considering the diverse nature of the geological targets in the survey area, the complex surface environment with a very large number of producing facilities, and the differing seabed conditions, several different survey geometries have been selected for different survey sub-areas. In order to acquire the highest quality seismic data, BGPO has invested extensively in both equipment and technology. In this paper, we explain how our integrated seismic survey scheme has been able to solve the challenges posed by such large-scale operations in this complex oil production environment, including the equipment configuration chosen for each terrain, the OBN seismic methods employed, the organization of the operation, and the measures selected to improve seismic data quality and ensure safety.
Integrated Acquisition Solutions
Integrated acquisition equipment solution
On the source side, deep-water air-guns, shallow-water air-guns and mini air-guns are used in water depths greater than 6m, between 4 and 6m, and less than 4m, respectively. Explosives and land vibrators are deployed onshore close to the sea and on the various islands in the survey area. In the more challenging terrain, such as sabkha, a Transition Zone (TZ) vibrator was chosen as the optimum source solution. On the receiver side, a variety of marine and land nodes are deployed to record seismic data on the different terrains. Additionally, different types of node handling vessels and other specialised vessels, such as mini workboats and Hegelong (TZ transport vehicles), are used for the survey.
Key OBN acquisition techniques
In the last five years, the industry has witnessed a dramatic increase in the volume of OBN seismic data being acquired: from ~2 terabytes/day in conventional acquisition mode, to ~10 terabytes/day in blended acquisition mode, to ~15 terabytes/day in high-density acquisition mode. To handle such enormous data volumes, a series of OBN seismic techniques have been incorporated into this ADNOC offshore and TZ project, including KL-AGQC, KL-NodeQC, KL-FBP, GeoEast and other scalable seismic acquisition software developed by BGP, which allows us to undertake comprehensive source QC across multiple source vessels, continuous seismic data combing, node repositioning, and seismic data processing.
Operation organization
Since 2009, DOLPHIN, a sophisticated navigation system developed independently by BGP for OBC/OBN operations, has provided multi-source vessel operation and navigation, as well as node deployment and positioning. Utilizing this navigation system and the VTS system, we can monitor the production status of the vessel fleet in real time, and forewarn vessels operating close to production facilities, and OFS vessels in transit in the survey area, of any close passes or course adjustments required to ensure safe and secure operations in such highly congested oil field production areas.
Measures to improve data quality
In order to reduce the effect of platforms and islands in the working area and improve the uniformity of coverage, BGP has implemented a number of methods to minimize the gaps in near-surface coverage such as increasing the density of shot points near obstacles and shooting offshore parallel to the island coasts in the survey area.
Measures to ensure safe operation
While ensuring the efficient operation of the project, BGP undertook additional safety measures, especially with regard to the prevention of Covid-19 across a large number of vessels (51) and crew (>2,500), to ensure personnel safety. In addition to the extensive and rigorously enforced quarantine periods and Covid-19 testing required before personnel were allowed to join a vessel, weekly onboard H2S drills are carried out.
Since 2019, the project has operated safely and efficiently, and the seismic data quality has been of the highest standard, as approved by ADNOC. By investing heavily in acquisition equipment, both initially and continuously during the execution of the project, production efficiency has steadily improved, reaching a maximum daily shot production of 74,383 SPs/day. In 2022 we will introduce a shallow-water electric-powered “hybrid” node and source vessel which will further enhance our SW/TZ capabilities.
Conclusions
Despite the impact of the global COVID-19 pandemic, the ADNOC large-scale shallow water and TZ seismic project has been a significant achievement, delivering the highest quality OBN data on an unprecedented scale. BGPO overcame many operational and logistical challenges to meet the high data quality, high safety standards and high-efficiency operation that ADNOC requires.
Acknowledgements
The authors thank ADNOC for their support during the seismic acquisition campaign.
Professor’s Book Shows How a Traveling English Soccer Club Helped Popularize the Sport
California State University, Northridge’s Chris Bolsmann, a kinesiology professor and South African sociologist, recently published a history of the Corinthians Football Club, a London-based soccer team from the 1880s through the 1930s, which is credited with popularizing the sport around the world.
“English Gentlemen and World Soccer: Corinthians, Amateurism and the Global Game,” written in collaboration with Dilwyn Porter, an emeritus professor at De Montfort University, and published by Routledge, tells the story of the English soccer team that traveled the world and helped spark interest in the game around the world.
“What inspired this book is that I was doing some research many years ago in South Africa,” Bolsmann said. “I was given a photo of an old soccer team, the Corinthians, and it got me interested in why they [were] in South Africa in the first place.”
The Corinthians were a “super club” comprised of top-notch amateur talent that sparked the sport’s popularity in Britain and then toured the world, including Europe, South Africa, Canada, the United States and Brazil. The players were the self-proclaimed standard bearers for gentlemanly values in sport. They inspired teams, including the most popular club team in Brazil, Sport Club Corinthians Paulista in Saõ Paulo.
While much has been written about the Corinthians, mainly by club insiders, Bolsmann and Porter’s work is the first complete scholarly history to cover their activities in England and in other parts of the world. It examines the club’s role in the development of soccer and fills a gap in existing literature on the relationship between the progress of the game in England and globally. The book also re-examines the sporting ideology of “gentlemanly amateurism” within the context of late-19th century and early-20th century society.
It took several years to complete “English Gentlemen and World Soccer” because of the extensive research needed around the globe.
Raw or cooked, choose chestnuts
Chestnuts are one of the foods that can make a person smile in chilly months, by picking up a few hot sugar-roasted chestnuts before dinner as a snack, or adding chopped chestnuts in hot congee in the morning.
The chestnut season starts in the fall and leads into winter. Chestnuts are an ideal ingredient that brings warm joy to dishes, sweet and savory, especially in the holiday season. They are a versatile, delicious, gluten-free and highly nutritious food loaded with health benefits and calories; if you add chestnuts to a meal, they can substitute for a portion of the staple.
There are numerous ways to cook and eat chestnuts in Chinese cuisine. They can be a sweet starter, in hot main courses or in desserts. Some people like to eat chestnuts raw, which are crunchy with a refreshing sweet flavor like a fruit, but raw chestnuts contain more tannic acid.
Chestnuts come in a variety of shapes, sizes, flavors and textures. The large, rounder chestnuts are great in stews and braised dishes, while the cone-shaped, smaller chestnuts are perfect for roasting. There are also varieties with thinner shells that when fully roasted can be cracked and easily peeled by hand.
HelloRF
Chestnut congee with beef and shiitake mushrooms
Qianxi County in Hebei Province is well-known for its chestnuts. Planting chestnuts there dates back over 2,000 years, and the county is known as the home of Chinese chestnuts, with nearly a third of the chestnuts exported from China produced in Qianxi.
Qianxi chestnuts have a smaller bottom and even shape, glossy reddish brown color and thin peel.
Chestnuts are delicious but cooking them can be difficult and even hazardous. The hard shell of the chestnuts is not easy to remove when they are still raw.
There are a couple of tricks to shell chestnuts more easily. Use scissors to cut a long line on the pointy end of the chestnuts, then heat them in a microwave until you hear them crack. This takes about 10 to 20 seconds depending on the microwave and the quantity of chestnuts. After that, peeling becomes easier.
Without a microwave, boil the chestnuts in water with a pinch of salt for one minute and they can be peeled with a relative degree of ease.
The peeled chestnuts can be stored in the freezer, and there’s no need to thaw them for cooking. Fresh chestnuts with their shells become dull and bad quite quickly even in the fridge.
Chestnuts have a harder texture compared with similar starchy vegetables like potatoes and taro and require a longer cooking time. When braising chestnuts with meat, they usually go into the pot at the same time as the large pieces of meat.
Braised chicken and chestnuts is a fall and winter favorite that many people can’t get enough of. It combines the saucy, bold savory flavors of the chicken cooked with a generous amount of condiments with the starchy and sweet chestnuts that become soft and melting in the long cooking time.
HelloRF
Braised chicken and chestnuts
When roasting whole chickens, chestnuts make great stuffing because of their starchiness and sweetness.
As a staple themselves, chestnuts are often added to staple dishes to boost the texture and flavor, as in congees with other beans and grains, or in steamed rice. Because they are quite similar to pumpkin, sweet potato and taro in texture and flavor, chestnuts often appear alongside them in sweet-flavored staples. The combination of chestnuts and chicken also makes a delicious savory congee: cook rice, chestnuts, shiitake mushrooms and chicken breast meat together. Adding a few shreds of ginger to the savory congee can elevate the flavor greatly.
To make savory chestnut rice dishes, an extra step to boost the fragrance is to line the steamer with a lotus leaf and then mix the rice with chestnuts, cured pork or sausage, and shiitake mushrooms, all chopped into smaller pieces to release maximum flavor and cook evenly. The lotus leaf infuses the rice with a distinct aroma that tones down the richness.
Desserts are where chestnuts really shine. Peeled chestnuts are often steamed until they are very soft in texture and then mashed into a paste and stir-fried with sugar and oil to use as a filling for pastries, buns and cakes.
As delicious as chestnuts are, they are not suitable for eating in large quantities. The calorie count of chestnuts is high: 100 grams (around 12 chestnuts) contain roughly 214 calories. Eating too many chestnuts may also cause stomach discomfort and bloating. Some people may also be allergic to chestnuts, with symptoms that include itching, swelling, wheezing and redness.
HelloRF
Chestnut rice
Oven-roasted chestnuts
A healthier and easier peel version of the wintertime street snack favorite for home kitchens.
Ingredients:
500 grams of chestnuts, preferably the Qianxi variety
5 grams of salt
20 grams of honey
Water and cooking oil
Steps:
1. Examine the chestnuts thoroughly and throw away any chestnuts with wormholes on the surface. Clean the hard shells with water and pat dry.
2. Use a small knife to cut lengthwise along the rounder side of each chestnut; avoid cutting into the flesh and be very careful not to injure yourself. Secure the chestnuts with a towel if necessary.
3. Put the chestnuts in a pot and add enough water to cover them, add in the salt and boil for 5 minutes before turning down the heat to simmer for another 10 minutes.
4. Take out the chestnuts and pat dry to remove the moisture on the surface, add in a mixture of honey, cooking oil and some water into the chestnuts and toss until they are evenly coated with the syrup.
5. Line a baking pan with foil and place the chestnuts on it, making sure every chestnut is standing individually to ensure even cooking.
6. Bake the chestnuts in a pre-heated oven at 200 degrees Celsius for 20 to 25 minutes, take them out and coat with the honey mixture again to cook for another 10 to 15 minutes.
Risotto is often one of those dishes we save for restaurants, under the assumption that it takes way too long to make at home. While the extra arm workout can be a plus, sometimes you simply don’t have the energy to stir a pot for an hour (we get it). Luckily, it turns out there are plenty of recipes that don’t require half your evening to make. Check out these 19 healthy risottos—authentic and not so much—that cook up in 30 minutes or less, so that you aren’t tuckered out by the time you tuck into them.
1. Butternut Squash Risotto With Sausage and Crispy Sage
Butternut squash purée makes this risotto extra creamy without a drop of actual cream necessary (plus, it’s an extra veggie serving). With nutmeg and sage adding classic fall flavors and a generous cup of white wine also in the mix, it’s hard to pick our favorite part of this recipe.
2. Baked Chorizo and Asparagus Risotto
Ingredients like chorizo and manchego cheese make this dish both comforting and unconventional. Stir the rice for a little while to release the starches and then let the oven finish the job, so that you’re getting classic risotto creaminess even after putting in literally half the work.
3. Goat Cheese and Bacon Quinoa Risotto
If you just can’t with taking the time to cook Arborio rice, use quinoa for a speedier way to get your risotto fix. Not only does quinoa cut down the cooking time to 20 minutes, but paired with the turkey bacon and the goat cheese, it also amounts to high amounts of protein per serving.
4. Steel-Cut Oats Risotto With Mushrooms and Chicken Sausage
This dish has the look and texture of a typical risotto, but that’s actually steel-cut oats instead of rice in the pan. Flavored with butter, chicken sausage, veggies, and Italian seasoning, it may cause traditionalists to wince, but there’s no denying how good (and fast) this recipe is.
5. Quick Pasta Risotto
Get the best of both the pasta and risotto worlds by using rice-shaped orzo in this dish. It cooks right in the pan in just 25 minutes while still looking and tasting like risotto. A perfect one-pot meal for a busy weeknight.
6. Mushroom and Chicken Risotto
Opting for regular rice instead of the Arborio kind, this simple risotto can be whipped up in a record 20 total minutes—we recommend going for brown rice for a nuttier, chewier, heartier texture with a higher fiber count.
7. Lemony Shrimp Risotto With Broccoli
With flecks of parsley scattered throughout the pan, this risotto is also a good-looking dinnertime centerpiece. The juice of an entire lemon squeezed in toward the end lends a refreshing zing to the otherwise hearty dish.
8. Saffron Shrimp Risotto
If saffron-scented risotto Milanese and garlic shrimp scampi had a food baby, this would be it. Rich in both protein and flavor, and enough for two generous servings, this one has “date night” written all over it.
9. Green Risotto With Seared Scallops
Ideal for entertaining (or for St. Paddy’s Day, given its color), this risotto is a showstopper. Just one look at its vibrant green appearance, and its unusual ingredients, such as fresh mint and sour cream, and you’ll know exactly what we mean. Plus, with just a tablespoon each of cheese and olive oil in the entire recipe, it’s actually a much lighter take on a traditionally heavy dish.
10. Prawn Lemon Rocket Risotto
You’re just 20 minutes away from al dente rice, juicy prawns, and leaves of arugula combined into one perfect risotto. If you don’t have a pressure cooker, this recipe alone might be a good enough reason to get one.
11. Salmon Bulgur Risotto
With chopped salmon, dill, capers, and olives, even this risotto seems to know the healthy (and tasty!) benefits of the Mediterranean diet. To add even more nutrition to the dish, high-fiber bulgur takes the place of rice.
12. Asiago Shrimp Risotto
Another product of the pressure cooker, this risotto opts for Asiago over Parmesan cheese for a semisweet touch, plus tarragon and parsley to give the dish some freshness. Our favorite part is the fact that the rice needs to be stirred for fewer than five minutes before the cooker takes over.
13. Parmesan Risotto With Lemon Butter Scallops
There’s a fair amount of butter in this recipe (hence its name), so you may want to file this under the “special occasions” category. Then again, given all its cheesy, melt-in-your-mouth goodness, we wouldn’t blame you if you suddenly decided that every day was a special occasion.
14. Eggplant Parmesan Quinoa Risotto
This nontraditional recipe combines two classics into one healthy meal: You’ll be sautéing the eggplant instead of frying and using protein-packed quinoa instead of rice. It may be inspired by traditional favorites, but you won’t find this on the menu at your local Italian restaurant.
15. 20-Minute Rainbow Veggie Risotto
This recipe was invented as a way to get picky children to eat their veggies, but you could serve it up to any discerning adult, and they’ll happily devour it. Why wouldn’t they? After all, colorful, creamy, cheesy carbs know no age limits.
16. Mexican Risotto With Sweet Corn and Cotija Cheese
Purists may want to look away for this one, but they’d be missing out. The combination of brown rice, cotija cheese, jalapeños, and cilantro may be a sharp detour from the classic risotto recipe, but what’s in a name when it tastes this good?!
17. Zucchini Mushroom Risotto
Simple but elegant, this risotto stirs in zucchini and mushrooms for some extra veggie action, while white wine and basil are easy but effective ways to add flavor. Reduce the amount of cheese if you want to keep it lighter; while a full cup sure lends a lot of richness, a little less will still go a long way.
18. Mushroom and Pea Risotto
This totally meatless and dairy-free risotto gets its creaminess from vegan butter and almond milk, which mix beautifully with the starches from the rice. If you couldn’t imagine a risotto without cheese, this might just change your mind.
19. Cauliflower Risotto
Anyone looking for a lighter way to enjoy their favorite carby dishes will especially appreciate this recipe. Pulsed cauliflower replaces rice (you’ll get a full cup of it per serving), and coconut milk makes sure the dish stays rich without dairy. We saved the best part for last: This takes a total of 10 minutes to throw together.
This is Part 2 in a series trying to understand if the data in a medical record is true. Part 1 reviews some problems with Past Medical History data, and Part 2 here offers some high-level solutions for ascribing confidence to issues in the Past Medical History.
This is just a proposal of some considerations; it is not meant to be fully comprehensive or prescriptive. Though I do think all medical records should implement Levels 1 through 3 immediately.
1. Why is displaying the ‘certainty’ or ‘truth’ of information in medical records important?
Obviously using real information will improve patient care (both the quality of care and the speed at which it can be delivered). But how?
Ideally, the electronic medical record user interface would display the level of confidence beside facts in the record. For instance, items in a patient’s Past Medical History list or active medication list would each have an associated ‘level of confidence’ beside the particular issue or medication. If the confidence was low, the user interface would alert the clinician to this fact. [Yes, I realize, in the future the EHR shouldn’t even show the user low-confidence information… but we’ll get there… slowly]
The benefit of displaying the confidence level beside each fact is that it greatly improves the clinician’s ability to review a patient’s chart, and improves the quality of the care given. Without this information, as we saw in the previous post, each new clinician would have to manually fact check the record.
Let’s look at a hypothetical 5-level scale of increasing complexity to help ascribe the truthfulness of an issue listed in the Past Medical History. As one moves from Level 1 to Level 5, the system becomes more automated, relying more on aggregate data analysis techniques and machine learning and less on straightforward rule-based approaches.
Level 1: Manual Linkage
I have been talking about manual linkage of data to issues in the Past Medical History for years. However, I have yet to see a system that does this remotely well. In general, the systems that I have seen incorporate a ‘free text box’ where people can write what they want beside that Past Medical History issue. This free text box often has horrible version control (i.e. some silly med student can write over what an experienced physician wrote), and the box provides no linking to primary source material.
The way it should work is that when an item is added to the patient’s list of Past Medical Issues, that item should, as much as possible, be linked to the primary source material that supports the claim being made. This material should be viewable as a ‘snippet’ and ‘summary’ of the related issue, and the user can then click on this to view the original documentation themselves.
For instance, if Chronic Obstructive Pulmonary Disease (COPD) was a listed issue, the gold standard for diagnosis is spirometry / pulmonary function tests. Therefore, the clinician who adds COPD to the Past Medical History should directly link it to the patient’s lung function tests that first diagnosed it. Ideally, as new information arrives, the new tests would be linked to the issue.
Furthermore, if all the data that is available to make this ‘diagnosis’ is a note with patient symptoms, or a chest x-ray that is a bit hyper-inflated, then this should be linked. And then the clinician reviewing the chart in the future will be able to see… ok… the prior person said the patient has COPD… it seems there is no spirometry to support this… I will not consider my confidence in this diagnosis ‘high’.
As mentioned, this user interface is easy to build into an electronic medical record. However, it can also be used with paper records. Lawrence Weed, as part of his Problem Oriented Medical Record, has advocated since the 1970s that the medical chart must have on the front page an itemized list of all the patient’s Past Medical History and ‘Active Issues’. It still boggles my mind that I have never seen this in clinical practice. (Though I have tried to institute something like this when looking after complex ICU patients for several weeks at a time.) In theory, after this cover page there could be a documentation support index that outlines, for each Issue, what primary data supports the item in the Past Medical History.
Another quick example: If the patient had a “left cerebral ischemic stroke in 2013” link to that CT Brain report, as well as the documentation around that admission. Under “no residual deficits at 6 months, full recovery” link to the physiotherapy note indicating so.
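A minimal sketch of how such manual linkage could be modeled in an EHR's data layer (all class and field names here are hypothetical illustrations, not taken from any real system):

```python
from dataclasses import dataclass, field

@dataclass
class SourceDocument:
    doc_id: str
    doc_type: str   # e.g. "CT brain", "spirometry", "physiotherapy note"
    summary: str    # the short snippet shown beside the issue

@dataclass
class PMHIssue:
    name: str
    evidence: list = field(default_factory=list)

    def link(self, doc):
        """Manually attach a primary source document to this issue."""
        self.evidence.append(doc)

    def has_evidence(self):
        """True if at least one source document backs this issue."""
        return len(self.evidence) > 0

# The stroke example from the text above
stroke = PMHIssue("Left cerebral ischemic stroke, 2013")
stroke.link(SourceDocument("ct-2013-04", "CT brain", "Left MCA territory infarct"))
stroke.link(SourceDocument("physio-2013-10", "physiotherapy note", "Full recovery at 6 months"))
```

A reviewing clinician's interface could then render each issue with its evidence snippets, and flag any issue where `has_evidence()` is false.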
Problems with manual linkage
The great benefit of manual linkage is that, if the EHR were designed properly, you could start using this today, and it would dramatically improve the quality of medical documentation and care, and save considerable time in the long term.
However, there are multiple problems with manual linkage. A few that come to mind are:
- The process is manual, and relies on the good will of the clinician documenting that past medical issue to make the linkage to the source document.
- There is variability in the accuracy of items added to the Past Medical History. A medical student may enter that a patient has venous stasis (pooling of fluid in their legs), while missing the fact the patient actually has congestive heart failure.
- There is variability in what clinicians consider appropriate linkage. Each clinician may link to a different type of supporting documentation. If the patient was admitted through the Medicine ward, the clinician may enter that the patient had “6 seizures of unknown etiology in 2016” and link to an EEG report. However, if the patient was admitted by neurology, the neurologist may link that issue to the three sets of spinal fluid results, MRI brain scans, various blood samples people have never heard of, genetic and autoimmune tests, and multiple consultant notes, all supporting the “unknown etiology”.
- The linkage process is static in time and the data can become outdated. If the patient’s condition around that item in the Past Medical History changes, those changes are not reflected in their Past Medical History list. For instance, if the patient’s lung function drops from FEV1 of 65% to 35%, unless someone links the new lung function tests to ‘COPD’, the Past Medical History will continue to read “COPD - FEV1 65%”.
- The item in the Past Medical History may no longer be correct. A patient may have listed “Pneumonia Sept 2018” and “Pneumonia Oct 2018”. Each may link to a chest x-ray with an infiltrate. However, when the patient is diagnosed with lung cancer in November 2018, nobody will go back, update the Past Medical History list, and merge the two prior issues (which were in fact incorrect diagnoses) into the new diagnosis of lung cancer, with the first diagnostic result to link being the September 2018 chest x-ray.
- This technique is prospective. It does not auto-link all the patient’s past issues in their old charts, making it time consuming and not useful from data mining perspective.
Error of Omission
Another key downside with the manual linkage technique is that it does not solve errors of omission. As mentioned at the start, the problem with data in the medical record is both assuming that data inside of the record is true (when in fact it is not). But the other major error is seeing the absence of a reported fact, as truth of its absence.
Unless one manually enters (and as proposed above, links) issues in a patient’s Past Medical History to their source material, clinicians may not know the patient in fact has the condition.
Still unable to validate data’s truthfulness
But worst of all, the computer still cannot judge the veracity and truthfulness of an item in the Past Medical History simply based on manual user linkage. Manual linkage makes it easier for the clinician to see what previous clinicians were thinking, and this is a huge step forward, but it does not solve the problem we set out to solve.
Level 2: Guided Linkage
Guided linkage builds upon manual linkage, by proposing common source documents that should be linked for each issue.
For instance, if you enter COPD, a Guided Linkage system would automatically propose the types of documents to link, and actually show you those documents.
This system requires a basic rule-based set of associations between the most common diagnoses and the documents that should support them. We don’t need to create a system for all 100,000 items in the ICD-10 system; starting with the most common items in the Past Medical History is a good place. I think a team of half a dozen or a dozen bright internists could easily create the guided linkage patterns required for over 80% of their clinical cases in a weekend retreat.
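Such a rule-based association set could be as simple as a lookup table. Here is one possible sketch (the rule table, document types and confidence labels are illustrative assumptions, not a clinical standard):

```python
# Hypothetical rule table: document types expected to support each
# common diagnosis (illustrative entries only).
GUIDED_LINKAGE_RULES = {
    "COPD": ["spirometry", "chest x-ray"],
    "Ischemic stroke": ["CT brain", "MRI brain"],
}

def propose_links(diagnosis, available_docs):
    """Return the patient's existing documents whose type the rule
    table expects for this diagnosis, for the clinician to confirm."""
    expected = GUIDED_LINKAGE_RULES.get(diagnosis, [])
    return [d for d in available_docs if d["type"] in expected]

def confidence(diagnosis, linked_docs):
    """Crude confidence flag: 'high' only if at least one expected
    document type has actually been linked to the issue."""
    expected = set(GUIDED_LINKAGE_RULES.get(diagnosis, []))
    linked = {d["type"] for d in linked_docs}
    return "high" if expected & linked else "low"

docs = [
    {"id": "pft-01", "type": "spirometry"},
    {"id": "us-02", "type": "leg ultrasound"},
]
proposed = propose_links("COPD", docs)  # only the spirometry is proposed
```

The same table drives both behaviors described above: proposing candidate documents at data entry, and displaying a low-confidence warning when an issue has no gold-standard evidence linked.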
Level 3: Data Capsules
One of the issues with simple ‘linkage’ for past documentation to medical issue is that doing so may get very ‘busy’ and ‘complicated’.
This is another example of where the concept of ‘data capsules’ can help. I’ve written about this before (and will again soon). In short, a data capsule is a self-contained unit that holds all the important data relevant to a Medical Issue.
For instance, the COPD data capsule would have the relevant medications, tests, labs, imaging, notes, and clinicians related to this condition.
The capsule functions as
(a) a way to display all this related data at once
(b) an automated way to ‘pull’ and ‘import’ data that may be elsewhere into the record. For instance, the COPD data capsule may automatically ‘pull’ into it any chest x-rays, even if done for other purposes.
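A minimal sketch of that second function, under assumed names and data shapes (real capsule schemas would carry far more metadata), might look like:

```python
# Hypothetical "data capsule": a self-contained unit that aggregates
# everything related to one medical issue, and can "pull" matching items
# from the wider record via simple rules.
class DataCapsule:
    def __init__(self, issue, pull_rules):
        self.issue = issue            # e.g. "COPD"
        self.pull_rules = pull_rules  # document types this capsule claims
        self.items = []

    def pull_from(self, record):
        """Import any record items matching this capsule's rules, even if
        they were created for other purposes (an incidental x-ray, say)."""
        for item in record:
            if item["type"] in self.pull_rules and item not in self.items:
                self.items.append(item)
        return self.items

record = [
    {"type": "chest x-ray", "reason": "pre-op clearance"},
    {"type": "colonoscopy", "reason": "screening"},
]
copd = DataCapsule("COPD", pull_rules={"chest x-ray", "spirometry"})
copd.pull_from(record)  # the x-ray is pulled in despite its original purpose
```

The design choice worth noting is that the capsule owns its pull rules, so the same record can feed many capsules without the capsules knowing about each other.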
Similar to Level 2 (Guided Linkage), the data capsule logic can be built using a straightforward rule-based approach designed for the most common issues. Again, I really think this could be built by a small team of internists over a weekend retreat.
Level 4: Automated linkage and automated data encapsulation
In the examples above, the linkage patterns, and the addition of data to data capsules was dependent on the user ‘agreeing’ that the linkage is correct, or that the data should be associated with that data capsule.
However, ideally, the data linkage algorithms and data capsule schemas should be robust enough that they automatically aggregate this data whenever a clinician enters an issue or diagnosis.
This means the system could also retrospectively link data together, and in real-time link data together, to ‘propose’ to the clinician that a patient may in fact have a particular diagnosis. However, in this model, the clinician still has to ‘accept’ that the proposed diagnosis and issue for the Past Medical History is correct and makes sense.
Level 5: Automated diagnosis & automated data validation
The highest level of data verification would rely on techniques more sophisticated than the rule based Issue Manual and Data Capsules proposed above.
This would ideally consider multiple variables and factors in the patient’s chart to ascribe a diagnosis and corresponding level of certainty. (The example in Part 1 of the article shows how many considerations are needed even for a ‘very straightforward’ diagnosis like HIV or Diabetes).
The system would do this in real time - both on historical data and as new data is added.
Ultimately it would ensure that the medical record data is always up to date. This is where we need to get to; it will just take time. I suspect that multiple different vendors and research groups will come up with different and better ways to do this type of Past Medical History problem list composition. Ideally, it should be able to plug-and-play into an electronic health record. Perhaps these types of calls could fit within the Clinical Reasoning part of FHIR.
How to visualize data’s level of truth?
OK, let us pretend we have implemented some of the above solutions, and we can determine the level of truth corresponding to an item in the Past Medical History. How should this level of certainty be displayed in the medical record?
I’m not certain at this time what the optimal strategy is. However, my suspicion is that, to start, medical records should flag information the system believes is incorrect. A mistake is easier to tolerate here than a mistake made when marking information the medical record believes is ‘correct’.
So, some visual flag for information that is incorrect.
Likely information that is of reasonable certainty should be in a ‘neutral’ style.
Information that has an extremely high level of certainty, perhaps needs a ‘verified’ stamp or unique style.
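The three styles above could be sketched as a simple mapping from a computed certainty score to a display style. The thresholds here are illustrative assumptions, not clinical recommendations:

```python
# Toy mapping from a certainty score (0-1) to a display style: flag
# likely-incorrect items, render reasonable-certainty items neutrally,
# and "verify" only the most certain.
def display_style(certainty):
    if certainty < 0.2:
        return "flagged-incorrect"  # visually warn the clinician
    if certainty < 0.95:
        return "neutral"            # the default, unstyled state
    return "verified"               # unique stamp for very high certainty
```

For example, `display_style(0.1)` would flag the item, `display_style(0.6)` would leave it neutral, and `display_style(0.99)` would stamp it verified.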
Ideally, clinicians will be able to interact with the user interface, and correct information the computer has mischaracterized. These corrections can be validated by a third party, and if accepted, the algorithms used to flag data will improve through continuous user feedback.
Consider reading Part 1: Is this true? Problems with the Past Medical History
The stewardship of public dollars is a challenge as old as government itself, but nascent technologies are coming into the space with the intention of streamlining it. Blockchain-enabled tools are one such example.
The OpsChain Public Finance Manager (PFM), a new blockchain-based tool from Ernst & Young, is designed to allow governments to “focus more directly on the things that matter,” said Mark MacDonald, EY global public finance management leader.
The potential of this tool lies in helping governments track the “financial integrity of the way public money is spent” and the related outcomes that are achieved, MacDonald said. Essentially, the PFM promises to enhance the ability of governments to see how public dollars are connected to actual results, which should support further decision-making.
“In simple terms, it’s the integration between a financial view and non-financial view that can really help public managers manage more effectively, public budgeters budget more effectively, and ultimately it’s about trying to advance that cause of ‘better finance, better government,’” MacDonald said.
The PFM is based on the EY Ops Chain, which is a blockchain platform that entered its second generation earlier this year. According to EY, this platform can “support up to 20 million transactions per day on private networks” and has reportedly led to efficiency gains of more than 90 percent in certain cases.
Most governments utilize an enterprise resource planning (ERP) system to keep up with public funds. MacDonald said those systems are generally well understood, but he suggested a critical piece of the organizational puzzle is missing when it comes to linking ERP data to outcome data in other systems.
“The question becomes when I have an opportunity to try and connect financial data and information to another system that perhaps has my non-financial information in it, how easily am I able to do that?” MacDonald said.
Mike Mucha, deputy executive director of the Government Finance Officers Association, said his organization helps governments prepare and procure ERP systems, so he understands the challenge that MacDonald refers to. Mucha cited an example involving a school district. A district will have its financial data in one system (ERP), but student performance data will be stored in a student information system (SIS).
“If you’re trying to calculate like an academic ROI … you need to basically, through some sort of third-party tool or some sort of third tool, correlate your spending on various programs with the academic return that you’re getting out of your student data system,” Mucha said.
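The correlation Mucha describes is, at its core, a join between two systems of record. A minimal sketch of that “third tool”, with invented program names and figures purely for illustration:

```python
# Hypothetical data: per-program spending from the ERP, and outcome gains
# (e.g. test-score improvement) from the student information system (SIS).
erp_spending = {"reading-support": 120_000, "stem-lab": 200_000}
sis_outcome_gain = {"reading-support": 8.0, "stem-lab": 5.0}

def academic_roi(spending, outcomes):
    """Outcome gain per $10k spent, for programs present in both systems.
    dict.keys() supports set intersection, which handles the join."""
    return {
        program: outcomes[program] / (spending[program] / 10_000)
        for program in spending.keys() & outcomes.keys()
    }

roi = academic_roi(erp_spending, sis_outcome_gain)
```

The hard part in practice is not this arithmetic but reconciling identifiers and reporting periods across the two systems, which is exactly the gap an “ERP across ERPs” would aim to close.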
Additionally, MacDonald said governments often deal with a “complicated array” of contractors, partners and not-for-profits in delivering public services. The chances of these external agents being on the government’s ERP system are essentially zero, which creates a “hard organizational interface to try to overcome.” The blockchain component can help manage this kind of chaos, almost acting as an “ERP across ERPs.”
Another challenge is simply the idea of the government running multiple systems itself. Almost no organization runs just one ERP system, Mucha said. Then there’s the fact that public entities frequently house their own information even though those entities might need to work together for the common good. Although Mucha admits that he doesn’t know anything about the EY tool, he can imagine great potential for public entities wishing to work together.
“From a business intelligence perspective, you might want to pool that information together … so if you had an ERP across ERPs, then you could conceivably use data from each one of those individual entity’s ERP system in sort of a shared resource,” Mucha said.
MacDonald stressed that the blockchain aspect of the PFM is not “technology for technology’s sake.” Rather, the blockchain platform presents a logical opportunity for technology to address long-standing business challenges within the complexity of a government system.
“It [blockchain] has the ability to work at that network level across organizational boundaries, across different authorities, and so forth,” MacDonald said.
According to EY, the PFM has been tested by multiple governments around the world. MacDonald would not reveal all of those governments due to concerns related to privacy and confidentiality. It is public knowledge, however, that Toronto has tested the tool, but Toronto’s chief financial officer Heather Taylor could not be reached for further comment.
Adobe's Global Marketing Organization is the steward for one of the world’s most recognizable and beloved brands, from visual brand expression to data-driven strategy, social media campaigns, and corporate social programs.
The Marketing Strategy & Operations (MSO) Manager will coordinate with the Senior Manager of Marketing Strategy & Operations, global marketers and key stakeholders around priorities and marketing activities that will move the company closer to achieving its goals.
This role is critical in helping us continue to build a smooth operations cadence and scale the impact of Digital Media Marketing on our business.
The ideal candidate will be an experienced specialist who excels in operations that facilitate scalability, predictability, and efficiency.
- Name of company and the industry under which it operates, and the Six Digit NAICS Code.
- Background of the Company (headquarters, annual revenues, and whether it operates internationally).
- Problem leading to the research of the company’s environmental scan
- The study is organized into three sections. Section I discusses the external environment to address the opportunities and threats facing the industry and hence the company. Section II assesses the company’s internal strengths and weaknesses in light of the company’s external scan. Section III provides a summary and conclusions, with recommendations for formulating and implementing strategies.
Section I External Environment———————————————————–35%
- The general environment
- The Industry Environment
- The Competitor Analysis
Section II The Organization’s Internal Environment——————————————————35%
- Resources
- Capabilities
- Core Competencies
Section III Summary, Conclusions, and Recommendations———————————————-10%
References—————————————————————————————————————–10%
APA Format and Errors Free
_____________________________________________________________________________100%
GUIDELINES for CASE 1
The company’s external environment consists of three major components:
- The General Environment is composed of dimensions in the broader society that influence an industry and the firms within it. Refer to Chapter 2 in the textbook for more detailed information regarding the Seven segments of the General Environment as follows:
- Global
- Demographic
- Physical
- Political/Legal
- Economic
- Technological
- Socio-Cultural
Based on the information that you have gathered on your Company, the textbook, and research, you are to determine which aspect(s) of the general environment affect your company. Select the Top Three segments that impact the company the most, with supporting evidence (scholarly articles, textbook references, and the case study).
Next, you will make an evaluation(s) as to whether they are opportunities or threats to the company’s performance.
- The Industry Environment: Porter’s Five Forces Framework
The industry environment refers to the forces that directly influences the profitability of the industry as follows:
- Threats of new entrants
- Bargaining power of suppliers
- Bargaining power of buyers
- Rivalry among existing firms
- Threats of product substitutes. State, where necessary, if opportunities exist for a product complement.
Porter’s five forces (and product complement) model is a conceptual framework for analyzing the industry environment. Based on the information gathered about your company, textbook and research, determine the importance of each force in Porter’s model; also, offer reasons and cite references.
For example, under the threat of new entrants, consider the barriers to entry and how existing firms will react. Some barriers to entry, such as high initial capital set-up costs or government regulations, will make the threat of new entrants weak or strong. If so, what are the entry barriers?
- Competitor Analysis
The competitor analysis focuses on how the company competes directly with other companies in the same industry. For example, Walmart competes with Amazon; Apple vs. Samsung; Home Depot with Lowe’s.
Identify the Top Three competitors in your industry and briefly discuss the following:
- Where/how does your company hold an advantage over the competitors?
- What will your competitors do in the near future (next 6 months to a year)?
- What should your company do to remain competitive?
- The Organization’s Resources
Resources, capabilities, and core competencies are the foundation of competitive advantage. For this section, you are asked to do the following:
- Briefly discuss the components of an Internal Analysis (Refer to Figure 3.1, chapter 3 in the textbook)
- Include a discussion of Value Chain Analysis
- Which value chain activity provides the most value to your company and why? Offer examples and cite references.
- Does your company engage in any outsourcing activities? If yes, explain and offer an example.
NOTE: Please be reminded that:
Join us In Person at Northwest AHEC at 475 Deacon Boulevard, Winston Salem, NC.
March 17, 2023 from 9AM - 12:15PM
This training will provide an overview of the music therapy profession, including information about board-certified music therapists, approaches to music therapy, types of interventions used in treatment, and collaboration with other healthcare professionals. In this session, you can expect to grow your understanding of music therapy and how it benefits a variety of patient populations. Come join this hands-on session as we learn more about music’s impact on the brain and how music is used as a therapeutic tool to reach non-musical therapeutic goals!
Upon completion of this course, participants should be able to:
- Describe the role of Music Therapists and the services they provide.
- Identify music-based interventions used in music therapy.
- Identify various approaches to music therapy practice.
- Discuss how you can integrate music and/or music therapy into the work you do each day for more positive outcomes.
The registration fee is $75; get signed up today!
Provided by:
Northwest Area Health Education Center (AHEC), a program of Wake Forest University School of Medicine and a part of the NC AHEC System with The Amos Cottage Therapeutic Day Program part of the Department of Developmental and Behavioral Pediatrics with Atrium Health Wake Forest Baptist.
Day of Friendship
As we age, there is a tendency to shed family and friends, which can hurt our mental and physical health. Higher levels of social interaction can have a high payoff for elderly folks. On International Day of Friendship, celebrated on July 30th, Family Matters is challenging our community to befriend a senior.
Family Matters is dedicated to helping seniors stay engaged with their community. According to a survey of women over age 60, those who are socially engaged and visit with friends and family throughout the week are happier as they age. Social interaction gives seniors a sense of belonging and allows them to stay connected to the world around them.
Socialization is important for everyone, regardless of age, although seniors are more susceptible to the dangers caused by isolation. We challenge you to reach out to your loved ones, neighbors, family members, or even a stranger on the street to create a new friendship.
The Benefits of Friendship
Specific health benefits of social interaction in older adults include:
- Potentially reduced risk for cardiovascular problems, some cancers, osteoporosis, and rheumatoid arthritis
- Potentially reduced risk for Alzheimer’s disease
- Lower blood pressure
- Reduced risk for mental health issues such as depression
Social isolation carries real risks. Some of these risks are:
Dr. Robert Kegan is arguably the single most cited thought-leader in the field of adult development today. His ground-breaking works on human and organizational behavior--The Evolving Self, In Over Our Heads, Immunity to Change, How the Way We Talk Can Change the Way We Work, An Everyone Culture-- have helped create a whole new field of study, transformed practice in a host of professions, and unleashed the hope that we can all keep growing at any age.
Long a sought-after speaker to professional groups in every sector and geography, Kegan’s writings have been translated into twelve languages.
Now an emeritus professor, Kegan was the William and Miriam Meehan Professor in Adult Learning and Professional Development at Harvard Graduate School of Education, where he taught for forty years. He was also Educational Chair for the Institute for Management and Leadership in Education and the Co-director for the Change Leadership Group. He continues to serve as the Chief Knowledge Officer of the Growth Culture Institute, a consultancy he co-founded, where he provides high-level services to leaders and organizations throughout the world.
Regularly quoted in such publications as The New York Times and Forbes, he is as comfortable in the boardroom as the classroom. CEOs worldwide ask him to assist them personally, their teams, and their organizations, to define and implement high-value improvement processes. His insight into individual and collective “immunities to change” has resulted in significant and sustainable improvements due to generative shifts in underlying mindsets.
As one of the foremost researchers on organizational and human psychology, Kegan has intensely studied companies of all sizes, shapes and industries, from global to boutique. The demand for his consulting work means his advisory work is just as far-ranging, with particular emphasis on Healthcare, Professional Service Organizations and Education/Nonprofits.
Kegan attended Dartmouth College, graduating summa cum laude, and then took his interests in Learning from a psychological, literary and philosophical point of view to Harvard University where he earned an interdisciplinary Ph.D.
In 2019, Kegan was inducted into a select group that includes Nobel Prize winners, Pope Francis, and Keith Richards of the Rolling Stones, when The Disruptor Foundation honored his “life-long contributions and innovations in the field of developmental psychology, which have led to ground-breaking insights into the emergence of orders of human consciousness and the ongoing internal Copernican shifts that lead to self-transformation.”
A husband, father, and grandfather, he is also an avid poker player, an airplane pilot, and the unheralded inventor of the “Base Average,” a superior statistic for gauging offensive contribution in baseball.
Artist Dinesh Doshi grew up with the arts, watching his mother as a young boy in the process of creating something from scratch. Finding early success at a young age, Doshi continues to build a name for himself as a painter in today’s art world. With an early debut into the art world, Doshi, at twenty-one, was the youngest artist ever to be selected in a juried show at Mumbai’s prestigious Jehangir Art Gallery. Although he continued to paint, he chose not to exhibit or show for several years. 2013 marked his public reemergence with his first New York solo show at Chelsea’s Emmanuel Fremin Art Gallery. If you are a fan of inspiring stories, as well as vibrant and modern artwork, then check out this brief Q&A with Doshi for details on his latest and forthcoming work:
Growing up in Africa, what inspired you to become an artist? What is your earliest memory of the drive to create art?
At the age of five, I started to enjoy drawing faces from Bollywood magazines, and this interest in sketching stayed with me through Italian missionary school in Khartoum.
It wasn’t till I made it to prep school in Bombay that I was sought out by my art teachers, who encouraged me to continue with my art interests. Not only was I encouraged to pursue my talents, but I was also guided through all the fields of art.
Your mother was an artist and as a child, you got to watch her in the process of creating art. How do you feel that impacted you?
As a child, my earliest recollection of art was watching my mom draw on a piece of cloth, sketching her desired image for her embroidery work. In addition to her designs, she created unique color palettes with the different colored threads, which opened my mind to color combinations.
Your artwork consists of vibrant but warm color tones—how would you describe your art?
My color palette has always been rooted in the moods I'm feeling; the intention is to provoke emotions within myself and question the essence of my storytelling, leading me to think outside the box. I intentionally take up the challenge of using vibrant colors in unconventional ways, applying them to a large number of my canvases to promote my storytelling in a unique way.
At twenty-one years old, you had your first art exhibit at the Jehangir Art Gallery, how did that come about?
I was inspired by various senior Indian artists who have exhibited their shows at the Jehangir Art Gallery. I wanted to be in that same position by exhibiting my own work, and with my drive and encouragement from friends I ended up fulfilling that dream at the age of 21.
All artists have a process prior to starting their work, in the midst of working, and even post-work, whether that is listening to music or locking yourself up in a room for days until a piece is completed. What does an average artwork day look for you?
It all starts with the moments I have experienced through my life; it can be a memory from my past or a moment experienced that day. After inspiration has struck, it comes down to the size of my ambition for the day, whether it is tackling a large or small canvas. Then comes the selection of colors, warm or cool, and the type of brush. Creating a comfortable setting is important, so for me I need to put on symphonies and instrumentals, along with drinking a cold glass of water, submerging myself in the canvas and allowing the colors and strokes to convey my story.
Some of your influences are the works of Van Gogh, Picasso, and Michelangelo. Are there any current artists who have influenced you and your work?
While I enjoy and take pleasure in viewing other artists' work, no, I have intentionally avoided being influenced by it. I want to keep my own style very unique so that it is translated throughout all my paintings.
It seems that you took quite a long break. You’ve just recently begun exhibiting your art again. Why did you decide to start showing your work again now?
I was becoming very possessive of my original art and showed my work only to a limited number of people for reproduction. I then met another artist who influenced me to no longer be a collecting artist, but a selling artist, so more people can experience my work.
How do you feel your art has changed over the years?
That is the story of life: as one grows, matures, and is exposed to experiences, the art starts reflecting one's being, leading you on a journey through various types of work processes and representations of one's desires and dreams.
Any upcoming exhibits where we can see your latest work?
Soon! It’s currently in the works; the exhibition will be known as “Journey of My Life."
In his Expressionistic work, Doshi journeys to the “core of living” where he explores nature, environment, his spiritual vision and balance. Over the recent months he has been receiving quite a bit of media coverage, announcing his return to showing his work. As he begins to once again exhibit his art, his career trajectory is on the rise.
You can see his work at: http://www.dineshdoshi.com.
KEPRI Takes Lead in Developing Future Energy Technologies
KEPRI and other organizations co-hosted an industry-academia-research institution workshop on future promising technologies.
▲ Attendees at the workshop take pose for a picture.
KEPCO Research Institute (KEPRI, President: Park Soon-kyu) and other organizations co-hosted on July 10 the ‘Industry-academia-research institution joint workshop on future promising technologies in electric power industry’ at the International Convention Center in Jeju.
As part of the 2013 summer conference of Korean Institute of Electrical Engineers (KIEE), the workshop was co-hosted by KEPRI, Korea Electrotechnology Research Institute (KERI) and Korea Electrical Engineering & Science Research Institute (KESRI). In their papers presented at the workshop, specialists suggested development direction of future electrical grids, and low-carbon and new power generation technologies. They also proposed direction of electric power and energy technologies as agendas of the nation’s new growth engines.
In his paper on the trend of change in future electrical grids and outlook of technology development, Executive Director Sim Eung-bo at KEPRI said, “Future electrical grid technologies are predicted to be led by long distance power transmission technology that uses extra high voltage (EHV) and ultra high voltage (UHV), electric and electronic technologies needed for DC grids, and electric power system technology for safely operating renewable energies and energy storage systems (ESSs).” He also stressed, “Given its special regional condition, Korea needs to prepare for connecting with electric power systems in Russia, China and Japan by setting up smart grid networks. And it needs to concentrate efforts on developing electric power system control technologies and superconducting fault current limiter technologies for stabilized operation of such systems.”
In his paper on low-carbon and new power generation technologies, Senior Researcher Kim Beom-su at KEPRI stated, “Despite global carbon reduction policy, coal will remain an important source of power, but its importance is expected to be gradually diminished while more technologies will be developed for improving efficiency and reducing emission of pollution sources. Super critical and ultra super critical power generation facilities will play a major role in the short run, and integrated gasification combined cycle (IGCC) technology will largely contribute to improving efficiency and reducing CO₂ in the long run, but companies need to first find ways of mitigating the burden of huge construction costs.”
In his paper, Prof. Yun Yeong-tae at Seoul National University argued, “Micro grid industry will have the greatest growth potential in the future. In the smart grid environment, power companies need to develop core technologies in system integration, hardware and software, and to improve their structure as total solution companies based on them.”
In a paper on current technologies and the outlook for new technologies for rotary-type electric motors, used in electric vehicles, high-speed trains and electric ships, Executive Director Kang Do-hyeon at KERI said, “If energy efficiency in the electric motor segment is improved by 4% compared with the past, companies will be able to avoid constructing two nuclear power reactors and save 1.1 trillion won annually. In case 1 million electric vehicles are operated in the future, Korea will need to build 5 additional nuclear power reactors.”
Throughout the month of September, Worldwide ERC®’s articles highlighted a broad range of mobility topics – from the future of mobility and cyborgs, to talent mobility policy. Check out the five most-read articles from September and discover valuable insights to drive your organization’s mobility programs and policies forward.
In Notice 2018-75, 2018-41 IRB 1, issued 21 September 2018, the Internal Revenue Service holds that 2018 reimbursements or employer payments for employee moves occurring in 2017 remain excludable from the incomes of the employees.
Companies must make decisions as to whether to gross up for moving expenses that are now taxable in 2018. This includes decisions as to how to handle gross-ups for state taxes.
Employers know that global assignments build leaders, and in a globalized world, weaving such experience into their workforces is an imperative for growth.
Want to discuss trending mobility topics with noted experts? Join mobility professionals from around the world at the 2018 Global Workforce Symposium next month in Seattle, Washington. There, you’ll be able to discuss and gain valuable knowledge on the latest industry insights, best practices, career development information, and the future of mobility in our much-anticipated Symposium. Register today to attend!
The literature review represents an overview of your question, giving an account of the state of that subject and the ongoing debates & research at that point in time. It should be wide ranging, systematic in approach and should use a range of sources within your field such as books, journal articles, government policies, web pages, theses, conference proceedings, legislation, statistics etc. Your review will involve critical analysis of the arguments and positions, not just a description of the literature. The review should place your own research within its context.
It is :
• A critical evaluation
• A synthesis of available research
• Broad & deep / clear & concise
• Rigorous and consistent in its approach
It is not:
• A list or annotated bibliography
• An essay
• A simple summary or paraphrasing of works
• Confined to description
• Narrow & shallow
Do a scoping search
Decide on information sources to search & prepare a search plan
Decide on inclusion /exclusion criteria
Write up your initial review
Be clear about what a literature review is and is not. It is an iterative process – re-visit both search and results frequently.
Allocate plenty of time to plan, search and evaluate. Allow 5-10% of your overall project time.
Document search methods, sources used and results achieved. Fields such as health use rigorous literature review reporting guidelines.
Critical appraisal & synthesis of literature requires systematic recording. Consider the use of written tables for data extraction, coding, thematic analysis and synthesis.
Have a clear plan for reference management in place before you search and always record references in full as you find them.
Traditional or Narrative Literature Review
Broad in focus. Does not always address a specific question.
Not comprehensive in literature included.
Does not always state reasons for inclusion of papers.
Not structured in approach to searching for literature or critical appraisal
Integrative Review
Reviews, critiques, and synthesises literature on a topic in an integrated way.
Summarises past theoretical and empirical literature on a topic.
Often combines quantitative / qualitative / mixed methods studies
Attempts to generate new frameworks and perspectives on that topic.
Does not always use explicit systematic approaches in searching or data analysis, so it is quicker to complete (compared with systematic reviews).
Potential for bias and lack of rigour.
Systematic Review
A review of research literature using a systematic, explicit, accountable and documented methodology. Its purpose is to evaluate all research evidence relevant to a particular question. Widely used within health e.g. Cochrane Library methodology.
The key characteristics of a systematic review are:
Rigor: use of systematic methods to answer set research questions.
Transparency: every search step is described.
Replicability: a second researcher should also identify & critically appraise results, arriving at the same conclusions as the first researcher.
Systematic reviews are carried out by at least two individuals, or a team, and usually take 12 months or more to undertake.
Additional notes on systematic review methodology, drawn from several sources:
Systematic literature reviews (SLRs), sometimes known simply as systematic reviews, are associated with evidence-based healthcare practice – the idea that nursing and related healthcare disciplines should be grounded in the most up-to-date and accurate research evidence. Detailed brainstorming is needed to identify key search terms, and the grey literature should be approached methodically and purposefully. In conducting the literature search, define the sources and databases to be searched, the search process, and how the studies found will be selected; these processes need to be documented, and it is usually best to seek the help of a librarian during this stage.
A "mixed studies" or "mixed methods" review refers to any combination of methods where one significant component is a literature review (usually systematic) – for example, combining quantitative with qualitative research, or outcome with process studies. For instance, a systematic review about the relationship between school size and student outcomes collected data from the primary studies about each school's funding, students, teachers and organisational structure, as well as about the research methods used in each study (Newman et al., 2006).
As the Cochrane Public Health definition puts it, a systematic review is "a review of the evidence based on a clearly formulated question that uses systematic and explicit methods to identify, select and critically appraise relevant primary research, and to extract and analyse data from the studies that are included in the review." Systematic review allows the assessment of primary study quality, identifying the weaknesses in current experimental efforts and guiding the methodology of future research. A useful guide is Doing a Systematic Review by Rumona Dickson, Angela Boland and M. Gemma Cherry (if purchasing, buy the 2nd edition, 2017).
Ph.D. Business Management, Universiti Tenaga Nasional (UNITEN), Malaysia.
MBA, Universiti Tenaga Nasional (UNITEN), Malaysia.
MSC, Economics, Lahore College for Women University, Lahore.
Bachelor of Economics, Punjab University, Lahore.
Urgency towards economic diversification through effective reforms in Caspian Basin
December 04, 2020
Structural Transformation and Impact of Oil Price Change in Saudi Arabia Economy: Input-Output Structural Decomposition Analysis
December 01, 2020
SECTORAL PRODUCTIVITY IN HUNGARIAN ECONOMY: AN INPUT-OUTPUT LINKAGES APPROACH
November 25, 2019
Depending on the pace of economic development and structural reforms in an economy, the sectoral output level also changes. Usually, sectoral capital accumulation, labor reallocation across sectors, and total factor productivity contribute to sectoral performance. This paper explores the pace of economic development and the role of individual sectors in the Hungarian economy from both the demand and supply sides. The current study aims to analyze input-output linkages to locate structural changes and inter-connectivity in the Hungarian economy. The main findings show that on both the demand and supply sides, key sectors such as manufacturing, metals, wholesale and retail trade, and telecommunications are prominent. These sectors have an important place in the economy and need continuous monitoring to enhance productivity and output levels. The results lead to an important recommendation: the Hungarian economy needs careful planning in order to attract Foreign Direct Investment (FDI) and become a hub of investment. Promoting education to build human capital for meeting long-term challenges is also of utmost importance. Lastly, the country still has a high level of global competitiveness, which sheds light on its new economic policies and its readiness for technical innovation, a successful marketplace, and specialization processes.
Expert details: Ms Anuradha Chawla, Founder, Bbetter
Brief details of the event highlights: The Department of Computer Applications, CBSA, organized an expert lecture on the topic "Role of Innovation in Startup Journey" on 29th July, 2022. The aim of this event was to examine a set of creativity-related concepts, dimensions, patterns, different ways and techniques of generating ideas, developing talents, and funding opportunities, as well as protecting intellectual property rights, and, in particular, how all these factors affect the economy and sustainable business. Entrepreneurship emerges as an important factor in a rapidly changing world of business, transforming creative ideas into added value.
Student Feedback: Students found it very interesting, informative and helpful. Students solved a specific innovation challenge and applied their knowledge into actual action. They participated enthusiastically and inquired multiple queries regarding Innovation, Startup and creativity.
Faculty Coordinators (Name, Designation, Dept. Name, Email ID): Ms. Anuradha Saini, Assistant Professor, DCA, [email protected]; Ms. Maneet Mander, Assistant Professor, DCA, [email protected]
Number of participants: 80 students and 8 faculty members
Learning Outcomes: Innovation is influenced by institutional culture, and that comes from a governance culture which is in the hands of school leaders. Innovation is led by ethics, which is an integral part of a business organization. Students gained knowledge of the legal and ethical environment impacting business organizations and an understanding of the ethical implications of decisions. They also learned about the impact of globalization and diversity in modern organizations. This event helped in attaining:
PO6: Professional Ethics: Understand and commit to professional ethics and cyber regulations, responsibilities, and norms of professional computing practices.
PO7: Life-long Learning: Recognize the need, and have the ability, to engage in independent learning for continual development as a computing professional.
PO11: Individual and Team Work: Function effectively as an individual and as a member or leader in diverse teams and in multidisciplinary environments.
PO12: Innovation and Entrepreneurship: Identify a timely opportunity and using innovation to pursue that opportunity to create value and wealth for the betterment of the individual and society at large.
PSO2: Develop techniques to improve skills for lifelong learning.
PSO3: Develop a class environment congenial and competitive for the generation of ideas, innovation and sharing.
The Better Building Partnership, which is a private initiative of British commercial property owners, has released a new office fit-out toolkit for owners and occupiers. The toolkit provides a...
Country: All EU | Sector/Industry: Office and administration | Resources: Energy, Materials, Water, Waste, Carbon | Type: Guides, handbooks, information material | Language: English | Tags: Environmental management, Energy performance of buildings, Monitoring, Waste reduction, Water management, Energy management, Construction
Best practice database
On-line database of successful resource efficiency projects carried out in manufacturing companies in Germany. It enables users to filter the best practices according to the region, industry branch,...
Country: Germany | Sector/Industry: All manufacturing industries | Resources: Energy, Materials, Water, Waste | Type: Best practices | Language: German | Tags: Energy efficiency, Material efficiency, Product design, Monitoring
Energy Scan
The Flemish Enterprise and Innovation Agency (VLAIO) offers companies individual energy screening (energy scan) via contracted professional energy consultancies. SMEs can order the free energy scan...
Country: Belgium | Sector/Industry: All sectors; Hotel and restaurant | Resources: Energy | Type: Energy and material audit | Language: Dutch
On behalf of SKF Group, SD Solutions is looking for a talented QA Automation (AQA) engineer interested in Big Data.
The Big data development center is responsible for the data infrastructure and backend services of the SKF-Enlight solution, which provides an AI-Driven Industrial Intelligence solution. The SKF-Enlight solution uses advanced Artificial Intelligence to provide asset failure predictions based on monitoring sensors’ signal data in the cloud.
With its proprietary adaptive algorithms, SKF-Enlight can analyze sensor behavior, automatically learn how machines behave, and use this learning to predict machine failures before they occur.
Today we are about 65 people, located in Israel, Sweden, Eastern Europe, US and India
Big Data QA Automation Engineer, reporting to the QA lead of the Big Data Development Center.
Responsibilities:
— Design, develop and enhance automated test coverage for the backend functionality (REST API)
— Develop, maintain and execute E2E REST API automated tests
— Define, develop and execute load and performance tests on high scale applications
— Analyze test run results
— Define test design documents for automation
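To illustrate the kind of backend check the responsibilities above describe, here is a minimal, self-contained sketch of a response-validation helper of the sort an automated REST API test might call after fetching a payload. All field names and types here are hypothetical illustrations, not part of any actual SKF-Enlight API.

```python
# Hypothetical response-shape check for an automated REST API test.
# The expected fields (sensor_id, timestamp, value) are illustrative only.

def validate_sensor_reading(payload: dict) -> list[str]:
    """Return a list of validation errors for one sensor-reading record."""
    errors = []
    for field, expected_type in (("sensor_id", str),
                                 ("timestamp", int),
                                 ("value", float)):
        if field not in payload:
            errors.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected_type):
            errors.append(
                f"wrong type for {field}: {type(payload[field]).__name__}")
    return errors

# A malformed response (missing timestamp, value as a string) fails cleanly:
bad = {"sensor_id": "vib-001", "value": "3.2"}
print(validate_sensor_reading(bad))
```

In a real test suite, a helper like this would typically run inside an assertion after each API call, so failures report exactly which field of the response broke the contract.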
Required skills:
— 3+ years experience in programming languages (Java, golang, .NET, JS, TypeScript)
— 2+ years experience with load and performance testing of high scale cloud applications
— 3+ years experience in testing Back-End applications through REST API programming
— 3+ years experience in test automation development
— Knowledge and experience in Automation Frameworks
— 3+ years experience in testing Cloud Based Applications (AWS, Google Cloud)
— 2+ years experience working on Linux based system
— Experience in Databases
Desired skills:
— Experience with Cucumber automation framework (gherkin syntax) or similar BDD frameworks
— Experience in BI (Big Data) system
— Experience with Azure DevOps, DataDog
Note:
This is a long-term, full-time position (30+ hours a week).
Pure Raspberry Leaf
Raspberry leaf tea has a high content of vitamins and minerals such as vitamins A, B, C, and E, potassium, phosphorus and calcium. It is used to promote the overall health and well-being of women during pregnancy and serves as a tonic for pregnant women. Beyond these vitamins and minerals, the leaf contains alkaloids such as fragrine, which has been found to tone the muscles of the pelvic region and uterus so that the process of delivery becomes easier, faster and less painful. In addition, it may promote better blood circulation, ease morning sickness, prevent post-partum hemorrhage, relieve constipation, and promote lactation.
The nucleus of every radioactive element (such as radium and uranium) spontaneously disintegrates over time, transforming itself into the nucleus of an atom of a different element.
In the process of disintegration, the atom gives off radiation (energy emitted in the form of waves). Each element decays at its own rate, unaffected by external physical conditions.
By measuring the amount of carbon-14 remaining, scientists can estimate the date of the organism's death; with sensitive instrumentation, this range can be extended to about 70,000 years.
In addition to the radiocarbon dating technique, scientists have developed other dating methods based on the transformation of one element into another. These include the uranium-thorium method, the potassium-argon method, and the rubidium-strontium method.
Thermoluminescence (pronounced ther-moe-loo-mi-NES-ence) dating is very useful for determining the age of pottery: when a piece of pottery is heated in a laboratory at temperatures above 930°F (500°C), electrons from quartz and other minerals in the pottery clay emit light.
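The radiocarbon calculation described above can be sketched in a few lines: given the fraction of carbon-14 remaining and its half-life of roughly 5,730 years, the elapsed time follows from the exponential decay law. This is a generic illustration, not code from the source.

```python
import math

HALF_LIFE_C14 = 5730.0  # approximate half-life of carbon-14, in years

def radiocarbon_age(fraction_remaining: float) -> float:
    """Years since death, from N/N0 = (1/2)**(t / half-life),
    solved for t: t = -half_life / ln(2) * ln(N/N0)."""
    return -HALF_LIFE_C14 / math.log(2) * math.log(fraction_remaining)

# Half the original carbon-14 left -> one half-life has elapsed:
print(round(radiocarbon_age(0.5)))   # 5730
# A quarter left -> two half-lives:
print(round(radiocarbon_age(0.25)))  # 11460
```

Because each element decays at its own fixed rate, the same formula works for the other methods mentioned (uranium-thorium, potassium-argon, rubidium-strontium) by swapping in that isotope's half-life.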
Recently, a metropolitan hospital system assessed the impact of COVID-19 on its effectiveness in delivering care as well as on its business. The patterns were pretty clear in terms of how healthcare had been delivered during the outbreak, and how it needs to change going forward to improve patient outcomes, for pandemic and nonpandemic health events.
They found that sharing information about changing patterns of effective treatment of the virus was difficult, given the reactive thinking around patient care at most hospitals while emergency centers were stressed. The sharing of information around the evolution of treatments was more passive than it should have been.
They also noticed that patients with more traditional ailments, such as heart disease, stroke, and cancer, pushed off seeking treatment due to fears of the virus or not being able to gain access to clinicians who were focused on coronavirus. This will obviously result in some increase in mortality beyond the pandemic.
Their third finding was that business fell off tremendously. Many states banned elective procedures, hospitals postponed many of them due to the pandemic, or patients feared having a procedure done at a hospital that was also treating COVID-19. In many healthcare systems this revenue is used to offset less profitable treatments and its lack has sent many hospital systems into the red quickly.
How can cloud computing help?
Information sharing is the topic that most providers want to tackle first. In the heat of a pandemic, sharing vital information about treatments and outcomes needs to be automated and proactive.
As systems monitor diagnostics, treatments, and outcomes, trends emerge as to effective therapeutics. That information needs to be available in real time to clinicians. Having the best information possible raises the likelihood of making effective and life-saving treatment recommendations.
Of course, cloud computing is the best platform to accomplish information sharing, with the ability to provision the data storage and integration needed. These can occur on centralized systems that drive a single or many healthcare systems, and can optimize sharing of data and abstract calculations of the data. Many healthcare systems are moving forward with this strategy and leveraging cloud computing as a force multiplier.
The other two issues can be solved using distribution of point-of-care providers. It no longer should be a requirement that most elective and nonelective procedures, diagnostics, and treatments, including major surgery, occur in hospitals.
Distribution of care is made possible by cloud-based medical information systems, including diagnostics systems, that are now ubiquitous. While the assumption is that the centralization of information, diagnostics, and treatment is a good thing, postpandemic we now understand that patients going to locations that are closer to their homes, with fewer humans, means fewer chances of infection. Moreover, the same or better standard of care will lead to better outcomes for traditional health issues, that in many cases are lost in the panic of a crisis.
The same approaches and technology apply to those needing elective procedures as well. As somebody who recently had a shoulder replaced, I realize that although it was “elective,” I was in unbearable pain. While we can point to cosmetic surgery as an elective procedure as well, most elective procedures solve problems that lower your quality of life quickly.
In essence, cloud-based systems, along with emerging bandwidth availability such as 5G, mean that we no longer should face compromises in level of care. Indeed, telemedicine connected with distributed diagnostics centers and lower latency from diagnostics to treatment means that there will be a rise in survivability, and this should come at a reduced cost to payers and patients. The time to move in these directions is now.
The idea of delayed hypertrophic supercompensation – the idea that your muscles can keep growing for several days after you complete a grueling block of training – is very contentious. A recent study provides us with the first evidence that it’s possible. However, there’s quite a bit more to the story.
This article is a review and breakdown of a recent study. The study reviewed is Delayed Myonuclear Addition, Myofiber Hypertrophy and Increases in Strength with High-Frequency Low-Load Blood Flow Restricted Training to Volitional Failure by Bjørnsen et al. (2018)
Key Points
- This study had untrained subjects complete two blocks of high-frequency blood flow restriction training, with 10 days between blocks.
- Strength and muscle fiber cross-sectional area both appeared to follow a pattern of delayed supercompensation. Muscle fiber CSA decreased at first, and then increased until at least 10 days after the last session was completed. Maximal knee extension strength increased until at least 20 days after the last session was completed.
- Interestingly, muscle fiber CSA and whole muscle size followed different patterns of adaptation. Whole muscle size didn't decrease initially, and it didn't keep increasing after the training was completed.
I recently reviewed a study from Bjørnsen and colleagues with some interesting findings: Just two weeks of low-load training with blood flow restriction (BFR) caused really robust hypertrophy of type I fibers, providing the clearest evidence we have for fiber type-specific hypertrophy (2). The same group is back with another eye-catching study (1), potentially demonstrating delayed hypertrophic supercompensation for the first time. Delayed supercompensation is the idea that beneficial adaptations can keep occurring after a period of training is completed. It’s most often discussed in the context of overreaching: You train beyond your normal capacities for a time, but after several days of rest, you rapidly accrue beneficial adaptations. Most people think about delayed supercompensation from a performance perspective, and several theories of tapering and peaking are built around this idea. However, delayed hypertrophic supercompensation is much more controversial; the traditional view is that muscles stop growing when you stop training.
In this study, untrained subjects completed two five-day blocks of high-frequency, low-load training with blood flow restriction. The researchers measured maximal knee extension strength, muscle fiber cross-sectional area (CSA), and whole-muscle CSAs and thicknesses. While measures of whole muscle size increased quickly and potentially decreased a bit after the cessation of training (probably due to a reduction in swelling), muscle fiber CSAs and knee extension strength kept increasing long after the second block of training finished. The continued increase in fiber CSA and discordance between changes in fiber size and whole muscle size are very interesting and certainly worth a closer look.
Purpose and Hypotheses
Purpose
The purpose of this study was to “investigate the effects of two blocks with high frequency blood flow restricted resistance exercise, separated by 10 days of rest, on fiber and whole muscle areas, myonuclear and satellite cell numbers and muscle strength, and the time courses of those changes.”
Hypotheses
In previous research (3), hypertrophy due to high-frequency BFR training plateaued after seven days of training. It was hypothesized that the 10-day rest period between training blocks would reset the subjects’ responsiveness to the anabolic stimuli so that they’d experience hypertrophy, increases in satellite cell number, and myonuclear accretion during both blocks of training.
Subjects and Methods
Subjects
16 recreationally active adults with no resistance training experience participated in this study. Three subjects dropped out over the course of the study, so 13 subjects were included in the final analyses. Further details about the subjects can be seen in Table 1.
Study Overview
The whole study took place over 46 days for each participant. One week before training began, the subjects underwent baseline testing, including assessments of quad muscle size and strength, a blood draw, and a muscle biopsy.
The training itself consisted of two blocks of high-frequency, low-load knee extensions with BFR. Each block lasted for five days. During the first three days of each block, the subjects trained once per day, and they trained twice per day during the last two days of each block (accomplishing seven workouts in five days). For all sessions, the subjects performed four sets of blood flow restricted unilateral knee extensions to failure with each leg, with 20% of 1RM and 30 seconds between sets. All four sets were completed on the right leg first, followed by four sets with the left leg. The pressure cuff used to achieve blood flow restriction (inflated to 90mmHg for women and 100mmHg for men) was left on between sets.
The subjects had a 10-day break between the two blocks of training, and follow-up measures were assessed at 3, 5, 10, and 20 days following the second training block. The authors assessed strength using 1RM knee extensions; they assessed hypertrophy with ultrasound scans, muscle biopsies, and MRIs; and they performed blood draws to measure blood markers of muscle damage (creatine kinase and myoglobin).
For a schematic of this study, see Figure 1.
Findings
Training loads didn’t change over the course of the study, but rep performance increased. The subjects completed 80 ± 14 reps per session during the first block, and 89 ± 13 reps per session during the second block.
Markers of muscle damage were significantly elevated during the first block of training, went back to baseline during the rest week, and then did not increase significantly above baseline during the second block of training. Soreness (assessed via a visual analog scale) peaked during the third day of the first block, whereas creatine kinase and myoglobin peaked on the last day of the first block of training.
Muscle fiber CSA significantly decreased at first. The decrease was larger in type II fibers (-15% during the rest period) than type I fibers (-6% during the first block of training). After the initial decrease in fiber CSA, fiber size increased throughout the rest of the study. It was back around baseline for both fiber types three days after the last training session and was elevated above baseline 10 days post-training (+19% for type I, and +11% for type II). The difference from baseline at 10 days post-training was significant for type I fibers (p=0.01), but not quite significant for type II fibers (p=0.09).
Hypertrophy estimates from ultrasound scans tell a very different story. Rectus femoris CSA and vastus lateralis thickness increased significantly above baseline by the end of the first block of training (+6.8% and +5.6%, respectively), trended back toward baseline measures during the 10-day rest period (down to 1.5% and 3.4% above baseline), increased significantly again by the end of the second training block (up to 7.9% and 6.9% above baseline), and stayed elevated above baseline (decreasing non-significantly to 7.0% and 5.7% above baseline) during the 10 days following the last training session. MRI scans were only taken at baseline and five days post-training, but rectus femoris CSA, vastus lateralis CSA, and total quadriceps CSA all significantly increased as well. However, the relative increases tended to be smaller than those seen with either the ultrasound scans or the biopsies (+6.2% for rectus femoris CSA, +2.4% for vastus lateralis CSA, and +1.2% for quadriceps CSA).
Much like fiber CSA, 1RM knee extension strength initially decreased slightly, though significantly (-4%), from baseline to the rest period. Strength did not significantly differ from baseline at 3 and 10 days post-training, but was significantly elevated 20 days post-training (+6%). However, the total swing in mean strength was very modest, from 65kg at baseline, to 63kg during the rest period, to 69kg 20 days post-training.
Satellite cells per muscle fiber increased quickly in both fiber types (by ~70% in type I fibers and ~50% in type II fibers by day four of the first block of training). That increase more or less leveled off for type I fibers (peaking at an increase of 96% three days post-training), but satellite cells per type II fiber increased progressively (peaking at an increase of 144% 10 days post-training).
In both fiber types, myonuclei per fiber didn’t increase between baseline and the rest week. However, myonuclei per fiber then increased following the second training block, peaking at 10 days post-training for both fiber types (+30% for type I fibers, and +31% for type II fibers). Interestingly, myonuclear domain tended to decrease in both fiber types.
Since this is a research review for strength athletes and coaches, I won’t belabor the cellular signaling markers, except to say that the pattern of gene expression looked to be most in favor of anabolism 10 days post-training.
Study highlights link between environmental degradation and ill health in young children
Researchers from the National University of Singapore (NUS) have found that the loss of dense forest in Cambodia was associated with higher risk of diarrhea, acute respiratory infection, and fever – which are major sources of global childhood morbidity and mortality – in children younger than five years old.
Led by Assistant Professor Roman Carrasco from the Department of Biological Sciences at the NUS Faculty of Science, the team analyzed health survey data from 35,547 households in 1,766 communities between 2005 and 2014, to investigate the relationship between child health and protected areas across different forest types in Cambodia.
Mr Thomas Pienkowski, who is the first author of the study, said, "Currently, there are limited studies on the health benefits that forests may provide. Most research looking at the impact of deforestation on health focuses on single diseases, thus making it challenging to integrate into policy. Furthermore, it is unclear how these environmental threats can be mitigated, and if conservation tools such as protected areas can play a role."
The NUS team found that a 10 per cent reduction in dense forest is associated with a 14 per cent increase in the incidence of diarrhea in children younger than five years old. In addition, the team's findings showed that an increase in protected area cover was associated with lower risk of diarrhea and acute respiratory infection.
"In this study, we showed that deforestation in Cambodia is associated with increased risk of leading causes of childhood mortality and morbidity. This highlights the link between environmental degradation and health, and suggests that conserving forests could help in mitigating health burden. Our findings suggest that public health impacts of deforestation should be accounted for when policy makers are assessing trade-offs in land use planning, and present new possibilities for simultaneous achievement of public health and conservation goals," said Asst Prof Carrasco.
Building on their findings, the NUS research team plans to expand the study to include regional analyses in sub-Saharan Africa and Asia, to assess how the relationship between health and tropical forest conservation may change under different socioeconomic and environmental realities.
We imagine a world where mobility is never an impairment: a world where limitations to natural movement caused by injury, disorder or disability are restored and where boundaries to human performance can be broken.
Species identification is important for ecological research and in particular to study the impact of natural hazards or environmental pollutants present, because it’s possible to determine the general health of the ecosystem through the diversity of life that are found in a given area.
The brain is very plastic, which means that the brain is able to adapt to new signals. In the case of bionic vision restoration, the photoreceptors have died, the brain is not receiving anything biologically, and you are going to then send something which is artificial, prosthetic, and has been created outside the body.
My garden has taken on many looks over the years. It’s gone from a large garden with a great variety of vegetables, to a pile of weeds (they were the years of too much multi-tasking and not enough tending) to its present state of small and manageable for my current lifestyle. Although I don’t have the time to can and preserve my harvest for a year’s worth of healthy goodness, my tomatoes gave me some wonderful winter ‘tastes of summer’.
There’s nothing better than the taste of garden tomatoes in the middle of winter. Here are just a couple of things I did with mine.
- Preheat oven to the lowest heat setting (150 – 200 F).
- Line a baking sheet with parchment paper. Arrange tomatoes on top, cut side up. Sprinkle lightly with salt and a little fresh basil cut up.
- Bake the tomatoes until the edges have shriveled and the insides are still slightly moist but not juicy. Timing depends on the size of tomato; the drying time will take anywhere between 2 and 6 hours.
- Set the pan aside until completely cool and then transfer the tomatoes to a clean and sterilized jar. Add a few sprigs of fresh basil and a clove of garlic, to the jar. Pour in olive oil, thoroughly covering the tomatoes to preserve them.
- Store in the fridge for 4 – 6 weeks. Use up the remaining olive oil in dishes that can benefit from the savory tomato flavor. YUMMY!
In addition to drying the plum tomatoes, I created some homemade gravy (sauce) that I used in this pan of lasagna, made a tomato gratin, and fried up the green tomatoes to use as sides and sandwich toppings.
Along with the tomatoes came a few zucchini that survived whatever wiped out the majority of my crop this year.
Even a small garden produced some great meals for my family! I’m looking forward to expanding my garden this year and adding a few more veggies as I move one step closer to eating cleaner and greener.
---
abstract: '[ Let $\A1$ be a unital C$^*$-algebra. Denote by $\pa$ the space of selfadjoint projections of $\A1$. We study the relationship between $\pa$ and the spaces of projections $ \paa $ determined by the different involutions $\*a$ induced by positive invertible elements $a \in \A1$. The maps $\fp : \pa \to \paa $ sending $p$ to the unique $q \in \paa$ with the same range as $p$ and $\proj _a : \paa \to \pa$ sending $q$ to the unitary part of the polar decomposition of the symmetry $2q-1$ are shown to be diffeomorphisms. We characterize the pairs of idempotents $q, r \in \A1$ with $\|q-r\|<1$ such that there exists a positive element $a\in \A1$ verifying that $q, r \in \paa$. In this case $q$ and $r$ can be joined by a unique short geodesic along the space of idempotents $\qa$ of $\A1$. ]{}'
author:
- 'E. Andruchow, G. Corach and D. Stojanoff'
title: '[GEOMETRY OF OBLIQUE PROJECTIONS [^1] [^2]]{}'
---
Introduction.
=============
Let $\H$ be a Hilbert space with scalar product $<,>$. For every bounded positive invertible operator $a : \H \to \H$ consider the scalar product $<,>_a $ given by $$<\xi , \eta>\sb a = <a\xi , \eta> \ , \quad \xi , \ \eta \in \H .$$ It is clear that $<,>_a$ induces a norm equivalent to the norm induced by $<,>$. With respect to the scalar product $<,>_a$, the adjoint of a bounded linear operator $x : \H \to \H$ is $$x^{\*a} = a \inv x^* a.$$ Thus, $x$ is $a$-selfadjoint if and only if $$ax = x^*a.$$
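As a numerical aside (not part of the original paper), the identity $x^{\*a} = a \inv x^* a$ is easy to check with random matrices: the NumPy sketch below verifies that $x^{\*a}$ is indeed the adjoint of $x$ for the scalar product $<,>_a$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# a positive invertible operator a defines <xi, eta>_a = <a xi, eta>
m = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
a = m @ m.conj().T + np.eye(n)                  # positive invertible

x = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
x_star_a = np.linalg.inv(a) @ x.conj().T @ a    # x^{*a} = a^{-1} x^* a

def ip_a(u, v):
    # <u, v>_a = <a u, v>, with the convention <u, v> = v^* u
    return np.vdot(v, a @ u)

xi = rng.standard_normal(n) + 1j * rng.standard_normal(n)
eta = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# x^{*a} is the a-adjoint:  <x xi, eta>_a = <xi, x^{*a} eta>_a
assert np.isclose(ip_a(x @ xi, eta), ip_a(xi, x_star_a @ eta))
```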
Given a closed subspace $S$ of $\H$, denote by $p = P_S$ the orthogonal projection from $\H$ onto $S$ and, for any positive operator $a$, denote by $\fp (a) $ the unique $a$-selfadjoint projection with range $S$. In a recent paper, Z. Pasternak-Winiarski [@[PW1]] proves the analyticity of the map $a \mapsto \fp(a)$ and calculates its Taylor expansion. This study is relevant for understanding reproducing kernels of Hilbert spaces of holomorphic $L^2$ sections of complex vector bundles and the way they change when the measures and hermitian structures are deformed (see [@[PW2]], [@[PW3]]). This type of deformations appears in a natural way when studying quantization of systems where the phase space is a Kähler manifold (Odzijewicz [@[O1]], [@[O2]]).
In this paper we pose Pasternak-Winiarski’s problem in the C$^*$-algebra setting and use the knowledge of the differential geometry of idempotents, projections and positive invertible elements in order to get more general results in a shorter way.
More precisely, let $\A1$ be a unital , $G = G(\A1 ) $ the group of invertible elements of $\A1$, ${\cal U} = \U$ the unitary group of $\A1$, $ G^+ = \{ a \in G : a^* = a , \ a \ge 0 \}$ the space of positive invertible elements of $\A1$, $$\qa = Q(\A1 )= \{ q \in \A1 : q^2 = q\} \quad \hbox{ and } \quad
\pa = P(\A1 ) = \{ p \in \qa : p = p ^* \},$$ the spaces of idempotents and projections of $\A1$. The nonselfadjoint elements of $\qa$ will be called oblique projections. It is well known that $\qa$ is a closed analytic submanifold of $\A1$, $\pa$ is a closed real analytic submanifold of $\qa$ and $G^+$ is an open submanifold of $$\aut = \aut (\A1 ) = \{ b \in \A1 : b^{*} = b \},$$ which is a closed real subspace of $\A1$ (see [@[PR1]], [@[CPR1]] or [@[CPR3]] for details).
We define a fibration $$\vfi : \pa \times G^+ \to \qa$$ which coincides, when $\A1 = L(\H)$, with the map $(p,a) \mapsto \fp(a)$, the unique $a$-selfadjoint projection with the same range as $p$. This allows us to study the analyticity of Pasternak-Winiarski’s map in both variables $p$ and $a$. The rich geometry of $\qa$, $\pa$ and $G^+$ gives a wealth of information which may be useful in the problems that motivated [@[PW1]].
Along this note we use the fact that every $p \in \qa$ induces a representation $\alpha \sb p$ of elements of $\A1$ by $2 \times 2$ matrices given by $$\alpha \sb p (a) =
\left( \begin{array}{cc} pap &pa(1-p) \\ (1-p)ap & (1-p)a(1-p)
\end{array} \right).$$ Under this homomorphism $p$ can be identified with $$\left( \begin{array}{cc} 1\sb {p\A1 p}&0 \\ 0&0 \end{array} \right)
= \left( \begin{array}{cc} 1 &0 \\ 0&0 \end{array} \right) ,$$ and all idempotents $q$ with the same range as $p$ have the form $$q = \left( \begin{array}{cc} 1&x \\ 0&0 \end{array} \right)$$ for some $x \in p\A1 (1-p)$. This trivial remark shortens many proofs in a drastic way, and the analyticity of some maps (for example $\vfi : \pa \times G^+ \to \qa $) follows immediately.
The contents of the paper are the following. Section 2 contains some preliminary material including the matrix representations mentioned above and the description of the adjoint operation induced by each positive invertible (element or operator) $a$.
In section 3 we study the map $\fp = \vfi (p, .) : G^+ \to \qa$, which is Pasternak-Winiarski’s map when $\A1$ is $L(\H)$ and $p$ is the orthogonal projection $P_S$ onto a closed subspace $S \inc \H$. For $a \in G^+$, let $\paa = \paa (\A1 )$ denote the set of all $\*a$-selfadjoint projections. This is a subset of $\qa$ and section 4 starts a study of the relationship between $\pa = \pau$ and $\paa$ and the way they are located in $\qa$. In particular we show that $\fia =
\vfi (., a): \pa \to \paa $ is a diffeomorphism and compute its tangent map. Another interesting map is the following: for $q \in \paa$, $\eps =
2q-1$ is a reflection, i.e. $\eps ^2 = 1$, which admits in $\A1$ a polar decomposition $\eps = \la \rho$, with $\la \in G^+$ and $\rho $ a unitary element of $\A1$. It is easy to see that $\rho =
\rho^* = \rho \inv $ so that $ p = \frac12(\rho+1) \in \pa$. In section 5 we prove that the map $\proj _ a: \paa \to \pa $ given by $\proj _a (q) =
p$ is a diffeomorphism and study the movement of $\pa$ given by the composition $\proj _ a \circ \fia : \pa \to \pa $. We also characterize the orbit of $p$ by these movements, i.e. $$\op := \{ r \in \pa : \proj \sb a \circ \fa (p) = r \ \hbox{ for some } \
a \in G^+ \} .$$ In recent years several papers have appeared which study length of curves in $\pa$ and $\qa$ (see [@[PR2]], [@[Br]], [@[Ph]], [@[CPR1]], [@[ACS]] for example). It is known that $\pa$ and the fibres of $\proj :\qa\to \pa$ are geodesically complete and their geodesics are short curves (for convenient Finsler metrics -see [@[CPR1]]). For a fixed $p\in \pa$, let us call horizontal (resp. vertical) those directions around $p$ which produce geodesics along $\pa$ (resp. along the fiber $\proj \inv (p)$). In section 6 we show that there exist short geodesics in many other directions (not only the horizontal and the vertical ones).
This paper, which started from a close examination of Pasternak-Winiarski’s work, is part of the program of understanding the structure of the space of idempotent operators. For a sample of the vast bibliography on the subject the reader is referred to the papers by Afriat [@[Af]], Kovarik [@[Ko]], Zemánek [@[Ze]], Porta-Recht [@[PR1]], Gerisch [@[Ge]], Corach [@[C]] and the references therein. Applications of oblique projections to complex, harmonic and functional analysis and statistics can be found in the papers by Kerzman and Stein [@[KS1]], [@[KS2]], Ptak [@[Pt]], Coifman and Murray [@[CM]] and Mizel and Rao [@[MR]], among others.
Preliminary results.
====================
Let $\H$ be a Hilbert space, $\A1 \subset L( \H)$ a unital C$^*$-algebra, $G = G(\A1) $ the group of invertible elements and $\U$ the unitary group of $\A1$.
If $S$ is a closed subspace of $\H$ and $q$ is a bounded linear projection onto $S$, then $$\label{2.1}
p = qq^*(1-(q-q^*)^2)^{-1}$$ is the unique selfadjoint projection onto $S$. Note that, by this formula, $p \in \A1$ when $q \in \A1$. Several different formulas are known for $p$ (see [@[Ge]], p. 294); perhaps the simplest one is the so-called Kerzman-Stein formula $$\label{2.2}
p = q(1+ q-q^*)^{-1}$$ (see [@[KS1]], [@[KS2]] or [@[CM]]). However, for the present purposes, (\[2.1\]) is more convenient. We denote by $$\label{2.3}
\qa = Q(\A1) = \{ q \in \A1 : q^2 = q\} \; \hbox{and} \;
\pa = P(\A1) = \{ p \in \A1 : p = p ^* = p^2 \}$$ the spaces of idempotents and projections of $\A1$. Given a fixed closed subspace $S$ of $\H$, we denote by $$\label{qs}
Q_S= Q\sb S(\A1) = \{ q \in Q(\A1) : q(\H) = S\}$$ the space of idempotents of $\A1$ with range $S$. Note that, by (\[2.1\]), $Q\sb S$ is not empty if and only if the projection $p = p\sb S$ onto $S$ belongs to $\A1$. We shall make this assumption.
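Formulas (\[2.1\]) and (\[2.2\]) lend themselves to a quick numerical check. The NumPy sketch below (an illustration, not part of the paper) builds an oblique idempotent $q$ and verifies that both formulas yield the same selfadjoint projection with the range of $q$.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
I = np.eye(n)
p = np.diag([1.0, 1.0, 0.0, 0.0])        # orthogonal projection onto S

# an oblique (non-selfadjoint) idempotent with the same range as p
x = np.zeros((n, n)); x[:2, 2:] = rng.standard_normal((2, 2))
q = p + x
assert np.allclose(q @ q, q)

d = q - q.T
p1 = q @ q.T @ np.linalg.inv(I - d @ d)  # formula (2.1)
p2 = q @ np.linalg.inv(I + q - q.T)      # Kerzman-Stein formula (2.2)

assert np.allclose(p1, p2)
assert np.allclose(p1, p1.T) and np.allclose(p1 @ p1, p1)  # selfadjoint idempotent
assert np.allclose(p1 @ q, q) and np.allclose(q @ p1, p1)  # same range as q
```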
It is easy to see that two idempotents $q,r\in\qa$ have the same range if and only if $qr=r$ and $rq=q$. Therefore the space $Q\sb S $ of (\[qs\]) can be characterized as $$Q\sb S = \qpa = \{q\in\qa :qp=p ,\ pq=q \}.$$ In what follows, we shall adopt this notation $\qpa$, emphasizing the role of $p$, rather than $S$. This will enable us to simplify many computations. Moreover this operator algebraic viewpoint allows one to get the results below independently of the representation of $\A1$.
Recall some facts about matrix representations. Every $p \in \qa$ induces a representation $\alpha \sb p$ of elements of $\A1$ by $2 \times 2$ matrices given by $$\label{2.5}
\alpha \sb p (a) =
\left( \begin{array}{cc} pap &pa(1-p) \\ (1-p)ap & (1-p)a(1-p)
\end{array} \right).$$ If $p \in \pa$ the representation preserves the involution $^*$. For simplicity we shall identify $a$ with $\alpha \sb p (a)$ and $\A1$ with its image by $\alpha \sb p $. Observe that, with this convention, $$\label{2.6}
p = \left( \begin{array}{cc} 1\sb {p\A1 p}&0 \\ 0&0 \end{array} \right)
= \left( \begin{array}{cc} 1 &0 \\ 0&0 \end{array} \right) .$$ Moreover, $q \in \qsa = \qpa$ if and only if there exists $x \in p\A1 (1-p) $ such that $$\label{2.7}
q = \left( \begin{array}{cc} 1&x \\ 0&0 \end{array} \right).$$ Indeed, let $q =
\left( \begin{array}{cc} a&b \\ c&d \end{array} \right) \in \qpa$. Then $$\left( \begin{array}{cc} 1&0 \\ 0&0 \end{array} \right) =
p = qp =
\left( \begin{array}{cc} a&b \\ c&d \end{array} \right)
\left( \begin{array}{cc} 1&0 \\ 0&0 \end{array} \right)=
\left( \begin{array}{cc} a&0 \\ c&0 \end{array} \right),$$ then $a=1 $ and $c=0$. On the other hand $$q= pq = \left( \begin{array}{cc} 1&b \\ 0&0 \end{array} \right),$$ then $d =0$ and $b $ can be anything. We summarize this information in the following:
\[2.8\] The space $\qpa$ can be identified with $p\A1(1-p)$ by means of the affine map $$\label{2.9}
\qpa \to p\A1(1-p) \ , \quad q \mapsto q-p$$
Proof. Clearly, the affine map defined in (\[2.9\]) is injective. By (\[2.7\]) it is well defined and onto.
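Proposition \[2.8\] can be illustrated numerically (this sketch is not part of the paper): any $x \in p\A1(1-p)$ produces an idempotent $p+x$ with the same range as $p$.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
I = np.eye(n)
p = np.diag([1.0, 1.0, 0.0, 0.0, 0.0])

# an arbitrary element x of the corner p A (1-p)
x = p @ rng.standard_normal((n, n)) @ (I - p)

q = p + x
assert np.allclose(q @ q, q)    # q is an idempotent,
assert np.allclose(q @ p, p)    # with qp = p
assert np.allclose(p @ q, q)    # and pq = q, i.e. the same range as p
```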
In the Hilbert space $\H$, every scalar product which is equivalent to the original $< , >$ is determined by a unique positive invertible operator $a \in L(\H)$ by means of $$\label{2.10}
<\xi , \eta>\sb a = <a\xi , \eta> \ , \quad \xi , \ \eta \in \H .$$ For this scalar product the adjoint $x^{\*a}$ of $x \in L(\H)$ is easily seen to be $$\label{2.11}
x^{\*a} = a\inv x^* a$$ where $*$ denotes the adjoint operation for the original scalar product. Operators which are selfadjoint for some $\*a$ have been considered by Lax [@[L]] and Dieudonné [@[D]]. A geometrical study of families of C$^*$-involutions has been done by Porta and Recht [@[PR3]].
Denote by $G^+ = G^+(\A1)$ the set of all positive invertible elements of $\A1$. Every $a \in G^+$ induces as in (\[2.11\]) a continuous involution ${\*a}$ on $\A1$ by means of $x^{\*a} = a\inv x^* a$, for $x\in \A1$. $\A1$ is a C$^*$-algebra with the involution ${\*a}$ and the corresponding norm $\|x\|_a = \|a^{1/2}xa^{-1/2}\|$ for $x \in \A1$. The mapping $x \mapsto a^{-1/2}x a^{1/2}$ is an isometric isomorphism of $(\A1, \| \ \|, *)$ onto $(\A1, \| \ \|_a, \*a)$. In this setting, $\A1$ can be also represented by the inclusion map in $L(\H, <,>\sb a)$.
Note that the map $a \to <,>\sb a \mapsto \*a$ is not one to one, since (\[2.11\]) says that if $ a \in \zC . I$ then $\*a=*$. If we regard this map in $G^+$ with values in the set of involutions of $\A1$, then two elements $a, b \in G^+$ with $a = b z$ for $z$ in the center of $\A1$, $$\label{2.12}
\za = \{z\in \A1 : zc=cz , \quad \hbox{ for all } \quad c\in \A1\},$$ produce the same involution $\*a$.
\[2.13\] Recall the properties of the conditional expectation induced by a fixed projection $p\in \pa$. Note that the set $\A1\sb p$ of elements of $\A1$ which commute with $p$ is the C$^*$-subalgebra of $\A1$ of diagonal matrices in terms of the representation (\[2.5\]). We denote by $E\sb p : \A1 \to \A1\sb p\subset \A1 $ the conditional expectation defined by compressing to the diagonal: $$E\sb p (a) = pap + (1-p)a(1-p) =
\left( \begin{array}{cc} pap &0 \\ 0&(1-p)a(1-p) \end{array} \right)
\ , \quad a\in \A1 .$$ This expectation has the following well known properties ([@[St]], Chapter 2): for all $a \in \A1$,
$E\sb p(bac) = bE\sb p(a) c $ for all $b,c \in \A1\sb p$.
$E\sb p (a^*) = E\sb p(a)^*.$
If $b\le a$ then $E\sb p(b)\le E\sb p(a)$. In particular $E\sb p(G^+) \subset G^+$.
$\|E\sb p(a)\| \le \|a\|$.
If $0\le a $, then $2E\sb p(a) \ge a$.
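These properties are easy to spot-check numerically; the sketch below (illustrative only) verifies the module property, the $*$-preservation and the inequality $2E\sb p(a)\ge a$ for random matrices.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
I = np.eye(n)
p = np.diag([1.0, 1.0, 0.0, 0.0])

def E(a):
    # E_p: compression to the block diagonal determined by p
    return p @ a @ p + (I - p) @ a @ (I - p)

g = rng.standard_normal((n, n))
m = rng.standard_normal((n, n))
a = m @ m.T + I                                   # a in G^+

b, c = E(rng.standard_normal((n, n))), E(rng.standard_normal((n, n)))
assert np.allclose(E(b @ g @ c), b @ E(g) @ c)    # module property over A_p
assert np.allclose(E(g.T), E(g).T)                # *-preserving

assert np.min(np.linalg.eigvalsh(E(a))) > 0       # E_p maps G^+ into G^+
assert np.min(np.linalg.eigvalsh(2 * E(a) - a)) >= 0   # 2 E_p(a) >= a
```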
Idempotents with the same range.
================================
The main purpose of this section is to describe, for a fixed $p\in\pa$, the map which sends each $a \in G^+$ into the unique $q \in \qpa$ which is $\*a$-selfadjoint. This problem was posed and solved by [@[PW1]] when $\A1=L(\H)$. Here we use $2\times 2$ matrix arguments to give very short proofs of the results of [@[PW1]]. Moreover we generalize these results and apply them to understand some aspects of the geometry of the space $\qa$.
Let us fix the notations: For each $a\in G^+$ denote by $$\label{3.1}
\auta = \auta (\A1) = \{ b \in \A1 : b^{\*a} = b \},$$ the set of $\*a$-selfadjoint elements of $\A1$.
\[3.2\] Let $\A1$ be a C$^*$-algebra and $p \in \pa$ a fixed projection of $\A1$. We consider the map $$\fp : G^+ \to \qpa \quad \hbox{ given by }
\fp (a) = \hbox { the unique } q \in \qpa \cap \auta \ , \quad a\in G^+ .$$ Note that existence and uniqueness of such $q$ follow from (\[2.1\]) applied to the C$^*$-algebra $\A1$ with the star $\*a$.
\[3.3\] Let $\A1$ be a C$^*$-algebra and $p \in \pa$. Then, for all $a\in G^+(\A1)$, $$\label{3.4}
\fp (a) = p E\sb p(a)\inv a ,$$ where $E\sb p$ is the conditional expectation defined in (\[2.13\]). In particular, $$\|\fp(a)\|\le 2\ \|a\|\ \|a\inv\|.$$
Proof. Suppose that, in matrix form, we have $$a = \left( \begin{array}{cc} a\sb 1 &a\sb 2 \\ a\sb 2^*& a\sb 3 \end{array} \right) \quad
\hbox{ and then } \quad E\sb p(a) =
\left( \begin{array}{cc} a\sb 1 &0 \\ 0& a\sb 3 \end{array} \right).$$ Since $\fp(a) \in \qpa$, by (\[2.7\]) there exists $x \in p\A1(1-p)$ such that $\fp(a) =
p+x$. On the other hand, by (\[2.11\]), $p+x \in \auta $ if and only if $ a\inv (p+x)^* a = p+x $, i.e. $ (p+x^*)a = a(p+x)$. In matrix form, $$(p+x^*)a = \left( \begin{array}{cc} 1 &0 \\ x^*&0 \end{array} \right)
\left( \begin{array}{cc} a\sb 1 &a\sb 2 \\ a\sb 2^*& a\sb 3 \end{array} \right) =
\left( \begin{array}{cc} a\sb 1 &a\sb 2 \\ x^*a\sb 1&x^* a\sb 2 \end{array} \right) \quad
\hbox{ and}$$ $$a(p+x) = \left( \begin{array}{cc} a\sb 1 &a\sb 2 \\ a\sb 2^*& a\sb 3 \end{array} \right)
\left( \begin{array}{cc} 1 &x \\ 0&0 \end{array} \right) =
\left( \begin{array}{cc} a\sb 1 &a\sb 1x \\ a\sb 2^*& a\sb 2^* x \end{array} \right).$$ Then $(p+x^*)a = a(p+x)$ if and only if $a\sb 2 =a\sb 1 x$. Note that $a\in G^+(\A1) $ implies that $a\sb 1 \in G^+(p\A1 p)$. Then $$\label{3.5}
\fp (a) =
\left( \begin{array}{cc} 1 &a\sb 1\inv a\sb 2 \\ 0&0 \end{array} \right) ,$$ and now formula (\[3.4\]) can be proved by easy computations. Finally, since $2E\sb p(a)\ge a$, we deduce that $E\sb p(a)\inv \le 2a\inv $ and the inequality $\|\fp(a)\|\le 2\|a\|\|a\inv\|$ follows easily.
\[3.6\] There is a way to describe $\fp$ in terms of (\[2.2\]) with the star $\*a$. In this sense we obtain, for $p \in \pa$ and $a\in G^+$, $$\fp (a ) = p (1 + p - a\inv p a ) \inv
= p(a+ap-pa)\inv a .$$ Clearly $a+ap-pa = E\sb p(a) +2 a\sb 2^* $ and one obtains (\[3.4\]), since $p(a+ap-pa)\inv = pE\sb p(a)\inv $. However it seems difficult to obtain bounds for $\|\fp(a)\|$ by using this approach.
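Formula (\[3.4\]) and the variant in Remark \[3.6\] can be compared numerically. The sketch below (not part of the paper; real matrices for simplicity) checks that both expressions agree and give an $a$-selfadjoint idempotent with the same range as $p$.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4
I = np.eye(n)
p = np.diag([1.0, 1.0, 0.0, 0.0])

m = rng.standard_normal((n, n))
a = m @ m.T + I                                   # a in G^+
Ep = p @ a @ p + (I - p) @ a @ (I - p)            # E_p(a)

q = p @ np.linalg.inv(Ep) @ a                     # formula (3.4)
q2 = p @ np.linalg.inv(a + a @ p - p @ a) @ a     # Remark 3.6

assert np.allclose(q, q2)
assert np.allclose(q @ q, q)                      # idempotent
assert np.allclose(a @ q, q.T @ a)                # a-selfadjoint: aq = q^* a
assert np.allclose(q @ p, p) and np.allclose(p @ q, q)   # same range as p
```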
Consider the space $G^+$ as an open set of $\aut =\aut (\A1) ={\cal S}\sb 1(\A1) $, the closed real subspace of selfadjoint elements of $\A1$. Then the map $\fp: G^+ \to \A1$ is real analytic. Indeed, if $h \in \aut $ and $\|h\|<1$, then $$\label{3.7}
\fp (1+h) = p (1+E\sb p(h))\inv (1+h) =
p \sum\sb {n=0}^\infty (-1)^{n} E\sb p(h)^n (1+h),$$ and this formula is clearly real analytic near $1$. More computations starting from (\[3.7\]) give the more explicit formula $$\label{3.8}
\fp (1+h) = p + \sum\sb {n=1}^\infty (-1)^{n-1} (ph)^n(1-p),$$ again for all $h \in \aut $ with $\|h\|<1$. These computations are very similar to those appearing in the proof of Theorem 5.1 of [@[PW1]]. We include them for the sake of completeness. By (\[3.7\]), $$\begin{array}{rl}
\fp(1+h) & = \ \sum\sb {n=0}^\infty (-1)^{n} (php)^n(p+ph) \\
& \\
& = \ \sum\sb {n=0}^\infty (-1)^{n}(php)^n +
\sum\sb {n=0}^\infty (-1)^{n} (ph)^{n+1} \\
&\\
& = \ p + \sum\sb {n=1}^\infty (-1)^{n} (ph)^n p +
\sum\sb {n=1}^\infty (-1)^{n-1} (ph)^{n} \\
&\\
& = \ p + \sum\sb {n=1}^\infty (-1)^{n-1} (ph)^n (1-p). \end{array}$$ As a consequence (see also Theorem 3.1 of [@[PW1]]) the tangent map $(T \fp )\sb 1 : \aut \to \A1 $ is given by $$\label{3.9}
(T \fp )\sb 1 (X) = pX(1-p) \quad \hbox { for } \quad X\in \aut .$$ Actually, by (2.8) $\qpa$ is an affine manifold parallel to the closed subspace $p\A1 (1-p)$ which can be also regarded as its “tangent” space. In this sense $(T \fp )\sb 1 $ is just the natural compression of $\aut $ onto $p\A1 (1-p)$.
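The series (\[3.8\]) can be summed numerically and compared with the closed form (\[3.4\]); the sketch below (illustrative only) does this for a selfadjoint $h$ with $\|h\|<1$.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4
I = np.eye(n)
p = np.diag([1.0, 1.0, 0.0, 0.0])

h = rng.standard_normal((n, n)); h = (h + h.T) / 2
h *= 0.4 / np.linalg.norm(h, 2)          # ||h|| < 1, so 1 + h is in G^+

a = I + h
Ep = p @ a @ p + (I - p) @ a @ (I - p)
q = p @ np.linalg.inv(Ep) @ a            # closed form (3.4)

# partial sums of the power series (3.8)
series, ph_k = p.copy(), I.copy()
for k in range(1, 60):
    ph_k = ph_k @ (p @ h)                # (ph)^k
    series += (-1) ** (k - 1) * ph_k @ (I - p)

assert np.allclose(series, q)
```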
Note that formulas (\[3.7\]) and (\[3.8\]) do not depend on the selected star in $\A1$. Using this fact, formula (\[3.8\]) can be generalized to a power series around each $a \in G^+$ by using (\[3.8\]) with the star $\*a$ at $q= \fp (a)$. Indeed, note that for every $b \in G^+$, $<,>\sb b= (<,>_a)_{a\inv b}$ is induced from $<,>\sb a$ by $a\inv b$, which is $a$-positive. If $h \in \aut$ and $\|h\|<\|a\inv \|\inv$, then $a+h \in G^+$, $\|a\inv h \|\sb a = \|a^{-1/2} ha^{-1/2} \| \le
\|h\| \|a\inv \| <1 $ and $$\label{3.10}
\fp ( a+ h) = \fq ( 1 + a\inv h ) =
q + \sum\sb {n=1}^\infty (-1)^{n-1} (qa\inv h)^n(1-q) ,$$ showing the real analyticity of $\fp$ in $G^+$ and also giving the way to compute the tangent map $(T \fp)\sb a $ at every $a\in G^+$.
Formulas (\[3.9\]), (\[3.10\]) and their consequence, the real analyticity of $\fp$ for $\A1 = L(\H)$, are the main results of [@[PW1]]. Here we generalize these results to an arbitrary $\A1$. In the following section, we shall explore some of their interesting geometrical interpretations and applications.
Differential geometry of $\paa$.
==========================
The space $\qa$ of all idempotents of a C$^*$-algebra (or, more generally, of a Banach algebra) has a rich topological and geometrical structure, studied for example in [@[MR]], [@[Ze]], [@[Ge]], [@[PR1]], [@[CPR1]] and [@[CPR2]].
We recall some facts on the structure of $\qa$ as a closed submanifold of $\A1$. The reader is referred to [@[CPR1]] and [@[CPR2]] for details. The tangent space of $\qa$ at $q$ is naturally identified to $$\label{TQ}
\begin{array}{rl}
\{ X \in \A1 : qX+Xq = X \} & = \{X\in \A1 : qXq = (1-q)X(1-q) =0 \} \\
& \\
& = q\A1 (1-q) \oplus (1-q) \A1 q .
\end{array}$$ In terms of the matrix representation induced by $q$, $$\label{4.1}
T(\qa)\sb q = \{ \left( \begin{array}{cc} 0&x \\ y&0 \end{array} \right) \in \A1 \}$$ The set $\pa $ is a real submanifold of $\qa$. The tangent space $(T\pa )_p $ at $p \in \pa$ is $$\{ X \in \A1 : pX+Xp = X , \ X^* =X \} ,$$ which in terms of the matrix representation induced by $p$ is $$\label{TP}
T(\pa ) \sb p = \{
\left( \begin{array}{cc} 0&x^* \\ x&0 \end{array} \right) \in \A1\} =
T(\qa)\sb p \cap \aut .$$ The space $\qa$ (resp. $\pa$) is a discrete union of homogeneous spaces of $G$ (resp. $\U$) by means of the natural action $$\label{accion}
G\times \qa \to \qa \quad \hbox {given by } \quad
(g,q) \mapsto gqg\inv$$ (resp. $\U \times \pa \to \pa$, $ (u,p) \mapsto upu^*$).
There is a natural connection on $\qa$ (resp. $\pa$) which induces in the tangent bundle $T\qa$ (resp. $T\pa$) a linear connection. The geodesics of this connection, i.e. the curves $\gamma$ such that the covariant derivative of $\dot \gamma$ vanishes, can be computed. For $X\in (T\qa)_p$ (resp. $(T\pa)_p$), the unique geodesic $\gamma $ with $\gamma (0) = p $ and $\dot \gamma (0) = X$ is given by $$\gamma (t) = e^{tX'} p e^{-tX'} ,$$ where $X' = [X,p] = Xp-pX$. Thus, the exponential map $\exp \sb p: T(\qa)\sb p \to \qa $ is given by $$\label{4.2}
\exp \sb p(X) = e^{ X'}pe^{-X'} \ ,\quad \quad \hbox{ for } \quad
X \in T(\qa)\sb p .$$
\[4.3\] The inverse of the affine bijective map $$\Gamma: \qpa \to p\A1(1-p) \ , \quad \Gamma(q) = q-p .$$ of (\[2.9\]) is the restriction of the exponential map at $p$ to the closed subspace $p\A1(1-p) \subset T(\qa)\sb p$. That is, for $x \in p\A1(1-p)$, $ \exp\sb p (x) = p+x \in \qpa$.
Proof. Let $x \in p\A1(1-p)$. Then $$\begin{array}{rl}
\exp\sb p (x) & =
\exp \sb p \left( \begin{array}{cc} 0&x \\ 0&0 \end{array} \right) \\
& \\
& =
\exp \left( \begin{array}{cc} 0&-x \\ 0&0 \end{array} \right) \ p \
\exp \left( \begin{array}{cc} 0&x \\ 0&0 \end{array} \right) \quad
\hbox{ by (\ref{4.2}) }\\
& \\
& = \left( \begin{array}{cc} 1&-x \\ 0&1 \end{array} \right) \ p \
\left( \begin{array}{cc} 1&x \\ 0&1 \end{array} \right) \\
& \\
& = \left( \begin{array}{cc} 1&x \\ 0&0 \end{array} \right) = p+x . \quad
\hbox { \QED } \end{array}$$
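Proposition \[4.3\] admits a direct numerical check (not part of the paper): for $x \in p\A1(1-p)$ the commutator $X' = [x,p]$ is nilpotent, and conjugating $p$ by $e^{X'}$ lands exactly on $p+x$.

```python
import numpy as np

def expm(A, terms=30):
    # matrix exponential via its power series (adequate for small ||A||)
    out, term = np.eye(A.shape[0]), np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        out += term
    return out

rng = np.random.default_rng(6)
n = 4
p = np.diag([1.0, 1.0, 0.0, 0.0])

x = p @ rng.standard_normal((n, n)) @ (np.eye(n) - p)   # x in p A (1-p)

# geodesic exponential (4.2): exp_p(X) = e^{X'} p e^{-X'} with X' = [X, p]
Xc = x @ p - p @ x               # here [x, p] = -x, since xp = 0 and px = x
geo = expm(Xc) @ p @ expm(-Xc)

assert np.allclose(geo, p + x)   # Proposition 4.3: exp_p(x) = p + x
```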
The map $\fp$ of \[3.2\] can be also described using Proposition \[4.3\]. In fact, consider the real analytic map $$u_p : G^+ \to G(\A1 ) \quad \hbox{ given by } \quad
u_p(a) = \exp( - p E_p(a)\inv a (1-p) ) , \ a \in G^+ .$$ Then, by \[4.3\], $\fp(a) = u_p(a) p u_p(a)\inv $. This is an explicit formula of an invertible element which conjugates $p$ with $\fp(a)$. This can be a useful tool for lifting curves of idempotents to curves of invertible elements of $\A1$.
Now we consider the map $\fp$ by letting $p$ vary in $\pa$: $$\label{4.4}
\vfi : \pa \times G^+ \to \qa \quad \hbox { given by } \quad
\vfi (p,a) = \fp (a) = p E\sb p(a)\inv a ,$$ for $p \in \pa , \ a \in G^+$. Consider also the map $\phi : \qa \to \pa $ given by (\[2.1\]): $$\label{4.5}
\phi (q) = qq^*(1-(q-q^*)^2)^{-1} , \quad \hbox{ for } \quad q \in \qa .$$ This map $\phi$ assigns to any $q \in \qa$ the unique $p \in \pa$ with the same range as $q$.
\[4.6\] The map $\vfi : \pa \times G^+ \to \qa$ is a $C^\infty$ fibration. For $q\in \qa $, let $ p = \phi (q)$ and $x = q-p \in p\A1 (1-p)$. Then the fibre of $q$ is $$\label{4.7}
\vfi \inv (q) =
\{ (p ,\left( \begin{array}{cc} a\sb 1 &a\sb 1 x \\
x^* a\sb 1& a\sb 3 \end{array} \right) ) :
0< a\sb 1 \ \hbox{ and }
\ x^* a\sb 1 x < a\sb 3 \},$$ where the inequalities of the right side are considered in $p\A1 p$ and $(1-p)\A1 (1-p)$, respectively. Moreover, the fibration $\varphi $ splits by means of the $C^\infty$ global cross section $$\label{4.8}
s :\qa \to \pa \times G^+ \ , \quad \hbox { given by } \quad
s (q) =(\phi (q) , |2q-1 |) ,$$ for $q \in \qa$, where $|z|=(z^*z)^{1/2}$.
Proof. Let us first verify (\[4.7\]). Fix $q\in \qa$. The only possible first coordinate of every pair in $\varphi \inv (q)$ must be $p = \phi (q)$, since it is the unique projection in $\pa $ with the same range as $q$.
Given $ a =
\left( \begin{array}{cc} a\sb 1 &a\sb 2 \\
a\sb 2^*& a\sb 3 \end{array} \right) \in G^+$, we know by (\[3.5\]) that $\varphi (p,a) = q$ if and only if $ a\sb 2 = a\sb 1 (q-p) = a_1 x$. Then $a = \left( \begin{array}{cc} a\sb 1 &a\sb 1 x \\
x^* a\sb 1& a\sb 3 \end{array} \right) ) $. The inequalities $ x^* a\sb 1 x < a\sb 3$ in $ (1-p)\A1 (1-p)$ and $a_1 >0 $ in $p\A1 p$ are easily seen to be equivalent to the fact that $\left( \begin{array}{cc} a\sb 1 &a\sb 1 x \\ x^* a\sb 1& a\sb 3 \end{array} \right) \in G^+$. This shows (\[4.7\]).
Denote by $\eps = 2q-1$. It is clear that $\eps ^2 = 1$, i.e. $\eps $ is a symmetry. Consider its polar decomposition $\eps = \rho \la $, where $\la = |\eps | \in G^+ $ and $\rho $ is a unitary element of $\A1$. From the uniqueness of the polar decomposition it follows that $\rho = \rho ^* = \rho \inv $, i.e. $\rho$ is a unitary selfadjoint symmetry. Then, since $q = \frac{\eps +1}{2}$, $$\la \inv q^* \la = \la \inv \left( \frac{\eps ^* +1}{2} \right) \la =
\la \inv \left( \frac{\la \rho +1}{2} \right) \la =
\frac{ \rho \la +1}{2} = \frac{\eps +1}{2} = q .$$ Therefore $q \in {\cal S}\sb \la (\A1)$ and $\vfi (p, \la ) =
\vfi (s (q) ) = q$, proving that $s$ is a cross section of $\vfi$.
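The cross section (\[4.8\]) can be tested numerically: for an oblique idempotent $q$, setting $p = \phi(q)$ and $a = |2q-1|$ and applying formula (\[3.4\]) should return $q$ itself. A NumPy sketch (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 4
I = np.eye(n)
p = np.diag([1.0, 1.0, 0.0, 0.0])

x = np.zeros((n, n)); x[:2, 2:] = rng.standard_normal((2, 2))
q = p + x                                        # an oblique idempotent

d = q - q.T
ph = q @ q.T @ np.linalg.inv(I - d @ d)          # phi(q), formula (2.1)

eps = 2 * q - I
w, v = np.linalg.eigh(eps.T @ eps)
lam = v @ np.diag(np.sqrt(w)) @ v.T              # a = |2q - 1| in G^+

# varphi(s(q)) = q: q is the a-selfadjoint idempotent with the range of phi(q)
Ep = ph @ lam @ ph + (I - ph) @ lam @ (I - ph)
assert np.allclose(ph @ np.linalg.inv(Ep) @ lam, q)
```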
\[cin\] The space $\pa$ is the selfadjoint part of the space $\qa$. But each $a\in G^+$ induces the star $\*a$ and therefore another submanifold of $\qa$ of $a$-selfadjoint idempotents. Let $a \in G^+$ and denote the $a$-selfadjoint part of $\qa$ by $$\label{4.9}
\paa \ = \paa (\A1 ) \ = \ \{ \ q\in \qa \ : \ q^{\*a} = q \ \} .$$ We are going to relate the manifolds $\pa$ and $\paa$. There is an obvious way of mapping $\pa $ onto $\paa$, namely $p \mapsto a^{-1/2}pa^{1/2} $. Its tangent map is the restriction of the isometric isomorphism $X \mapsto a^{-1/2}Xa^{1/2} $ from $\aut $ onto $\auta$ mentioned in section 2. We shall study some less obvious maps between $\pa$ and $\paa$.
For a fixed $a \in G^+$, consider the map $$\label{4.10}
\fa : \pa \to \paa \quad \hbox { given by } \quad \fa (p ) = \varphi (p, a)
, \ p\in \pa .$$ Then $\fa$ is a diffeomorphism between the submanifolds $\pa$ and $\paa$ of $\qa$ and $\fa \inv $ is just the map $\phi $ of (\[4.5\]) restricted to $\paa$. The problem which naturally arises is the study of the tangent map of $\fa $ in order to compare different $\paa$, $a \in G^+$.
The tangent space $(T \paa )\sb q $ for $q \in \paa$ can be described as in (\[TP\]), $$\label{4.11}
(T \paa )\sb q = \{ Y =
\left( \begin{array}{cc} 0&y \\ y^{\*a} &0 \end{array} \right) \in \A1\} =
T(\qa)\sb q \cap \auta ,$$ where the matricial representations are in terms of $q$. Therefore any $Y \in (T \paa )\sb q $ is characterized by its 1,2 entry $y = q Y$ by the formula $$\label{4.12}
Y = y + y^{\*a} = qY + Yq .$$
\[4.13\] Let $p\in\pa $, $a \in G^+$ and $X = \left( \begin{array}{cc} 0&x \\ x^* &0 \end{array} \right) \in (T \pa )\sb p $. Denote by $q = \fa (p) \in \paa$. Then, in terms of $p$, $$q \ (T\fa )\sb p (X) =
\left( \begin{array}{cc} 0& a\sb 1 \inv x (a\sb 3 - a\sb 2^* a\sb 1 \inv a\sb 2) \\
0 &0 \end{array} \right) = y .$$ Therefore $(T\fa )\sb p (X) = y + y^{\*a}$ and $\|(T\fa )\sb p (X)\|\sb a =
\|y \|\sb a$.
Proof. We have the formula of Proposition \[3.3\], $$\fa (p)= \fp(a) = p E\sb p(a)\inv a = p\ (\ pap + (1-p)a(1-p)\ )\inv \ a .$$ By the standard method of taking a smooth curve $\gamma$ in $\pa$ such that $\gamma (0) = p$ and $\dot \gamma (0 ) = X$, one gets $$(T\fa )\sb p (X) = \left[ X- p E\sb p(a)\inv (Xap + paX - Xa(1-p)
-(1-p)aX ) \right]
E\sb p(a)\inv a \ .$$ Since $p$ and $E\sb p(a)$ commute, $pE_p(a)\inv (1-p)aX = 0$. In matrix form in terms of $p$, by direct computation it follows that $$\begin{array}{ll}
(T\fa )\sb p (X) = & \\
& \\
=\left[ \ \left( \begin{array}{cc} 0&x \\ x^* &0 \end{array} \right) -
\left( \begin{array}{cc} a\sb 1 \inv &0 \\ 0 &0 \end{array} \right)
\left( \begin{array}{cc} xa\sb 2^* +a\sb 2 x^* & a\sb 1x - xa\sb 3 \\
0 & 0 \end{array} \right) \ \right]
\left( \begin{array}{cc} 1&a\sb 1 \inv a\sb 2 \\
a\sb 3 \inv a\sb 2 ^* &1 \end{array} \right) & \\
& \\
= \left( \begin{array}{cc}
-a\sb 1\inv x a\sb 2^* -a\sb 1\inv a\sb 2 x^* & a\sb 1 \inv xa\sb 3 \\
x^* & 0 \end{array} \right)
\left( \begin{array}{cc} 1&a\sb 1 \inv a\sb 2 \\
a\sb 3 \inv a\sb 2 ^* &1 \end{array} \right) & \\
& \\
= \left( \begin{array}{ccc} -a\sb 1\inv a\sb 2 x^* & &
a\sb 1 \inv (xa\sb 3 - xa\sb 2^*a\sb 1\inv a\sb 2 -a\sb 2
x^*a\sb 1\inv a\sb 2)
\\ x^* & & x^*a\sb 1\inv a\sb 2 \end{array} \right). &
\end{array}$$ Multiplying by $q = \varphi (p,a)$, by (\[3.5\]), one obtains $$\begin{array}{ll}
q \ (T\fa )\sb p (X) = & \\
&\\
= \left( \begin{array}{cc} 1&a\sb 1 \inv a\sb 2 \\ 0 &0 \end{array} \right)
\left( \begin{array}{ccc} -a\sb 1\inv a\sb 2 x^* & &
a\sb 1 \inv (xa\sb 3 - xa\sb 2^*a\sb 1\inv a\sb 2 -a\sb 2 x^*a\sb 1\inv a\sb 2)
\\ x^* & & x^*a\sb 1\inv a\sb 2 \end{array} \right) & \\
& \\
= \left( \begin{array}{cc} 0& a\sb 1 \inv
x (a\sb 3 - a\sb 2^* a\sb 1 \inv a\sb 2) \\
0 &0 \end{array} \right) = y , &
\end{array}$$ as desired. The fact that $\|y\|\sb a = \|Y\|\sb a$ is clear by regarding them as elements of $(\A1 , \*a ) $ and using (\[4.11\]) .
The polar decomposition.
========================
In this section it is convenient to identify $\qa$ with the set of symmetries (or reflections) $\{ \eps \in \A1 : \eps ^2 = 1 \}$ and $\pa $ with the set of selfadjoint symmetries $\{ \rho \in \A1 : \rho = \rho ^* = \rho \inv \}$ by means of the affine map $x \mapsto 2x-1$.
Recall that every invertible element $c$ of a unital C$^*$-algebra admits polar decompositions $c = \rho _1 \la _1 = \la _2 \rho _2 $, with $\la_1, \la_2 \in G^+$ and $\rho_1, \rho_2 \in \U $. Moreover, $$\la_1 = |c|, \ \la_2 = |c^*| \quad \hbox{ and } \quad
\rho_1 = \rho_2 = |c^*|\inv c = c|c|\inv .$$ In particular, if $\eps $ is a symmetry, its polar decompositions are $\eps = |\eps^*| \rho = \rho |\eps | $ and $$\label{rho}
\rho = \rho ^* = \rho \inv \in \pa .$$ This remark defines the retraction $$\label{5.1}
\proj : \qa \to \pa , \quad \hbox{ by } \ \proj (\eps ) = \rho .$$ The map $\proj $ has been studied from a differential geometric viewpoint in [@[CPR1]]. If $\eps \in \qa$, it is easy to show that $|\eps ^*| = |\eps |\inv$ and $|\eps ^*|^{1/2} \rho =
\rho |\eps ^*|^{-1/2}$ (see [@[CPR1]]). This section is devoted to study, for each $a\in G^+$, the restriction $$\label{5.4}
\proj \sb a = \proj |\sb {\paa} : \paa \to \pa .$$ Observe that, with the identification mentioned above, $\paa = \qa \cap \auta = \qa \cap {\cal U} _a $, where ${\cal U} _a = \{u \in G : u\inv = u^{\*a} \}$ is the group of $a$-unitary elements of $\A1$.
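These remarks can be illustrated numerically, regarding $\A1$ as the algebra of real $4 \times 4$ matrices. The sketch below (Python with NumPy; the particular idempotent and random seed are illustrative choices) builds a non-selfadjoint symmetry $\eps = 2q-1$ from an oblique idempotent $q$ and checks that $|\eps ^*| = |\eps |\inv$ and that the polar factor $\rho = |\eps |\eps$ is a selfadjoint symmetry:

```python
import numpy as np

def spd_sqrt(s):
    """Positive square root of a symmetric positive definite matrix."""
    w, v = np.linalg.eigh(s)
    return (v * np.sqrt(w)) @ v.T

rng = np.random.default_rng(0)
k = 2

# A non-selfadjoint symmetry eps = 2q - 1 built from an oblique idempotent q.
x = rng.standard_normal((k, k))
q = np.block([[np.eye(k), x], [np.zeros((k, k)), np.zeros((k, k))]])
eps = 2 * q - np.eye(2 * k)
assert np.allclose(eps @ eps, np.eye(2 * k))        # eps is a symmetry

abs_eps = spd_sqrt(eps.T @ eps)                     # |eps|
abs_eps_star = spd_sqrt(eps @ eps.T)                # |eps*|
rho = abs_eps @ eps                                 # rho = |eps*|^{-1} eps = |eps| eps

assert np.allclose(abs_eps_star, np.linalg.inv(abs_eps))   # |eps*| = |eps|^{-1}
assert np.allclose(rho, rho.T)                      # rho is selfadjoint ...
assert np.allclose(rho @ rho, np.eye(2 * k))        # ... and a symmetry
```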
\[5.5\] For every $a \in G^+$ the map $\proj \sb a : \paa \to \pa$ of (\[5.4\]) is a diffeomorphism.
Proof. By the remarks above, for every $\eps \in \qa$ $$\label{5.6}
\proj \sb a (\eps) = \rho = |\eps | \eps$$ which is clearly a $C^\infty$ map.
Set $b = a ^{1/2}$ and consider, for a fixed $\rho \in \pa$, the polar decomposition of $b\rho b $ given by $ b\rho b = w |b\rho b |$, with $w \in \U$. Since $ b\rho b $ is invertible and selfadjoint by (\[rho\]), it is easy to prove (see [@[CPR3]]) that $$w = w^* = w\inv \in \pa \ , \quad w\ b\rho b = b\rho b \ w
\quad \hbox { and } \quad
w \ b\rho b = |b\rho b | \in G^+ .$$ Let $\eps = b\inv w b$. It is clear by its construction that $\eps \in \paa $. Also $\eps \rho = \la >0 $, since $$b \eps \rho b = w \ b\rho b = |b\rho b | \in G^+ .$$ Therefore the polar decomposition of $\eps $ must be $\eps = \la \rho $. So $\la = |\eps ^*|$ and $\proj \sb a (\eps) = \rho$. Therefore $$\label{5.7}
\proj \sb a \inv (\rho ) = a^{-1/2} \left( a^{1/2}\rho a^{1/2}
|a^{1/2}\rho a^{1/2}|\inv \right) a^{1/2} ,$$ which is also a $C^\infty $ map, showing that $\proj \sb a$ is a diffeomorphism.
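Formula (\[5.7\]) lends itself to a direct numerical check. The sketch below (NumPy, real $4 \times 4$ matrices; the particular $a$ and $\rho$ are random illustrative choices) confirms that the right-hand side of (\[5.7\]) is an $a$-selfadjoint symmetry whose polar factor is the given $\rho$:

```python
import numpy as np

def spd_sqrt(s):
    """Positive square root of a symmetric positive definite matrix."""
    w, v = np.linalg.eigh(s)
    return (v * np.sqrt(w)) @ v.T

rng = np.random.default_rng(1)
n = 4

# A selfadjoint symmetry rho = 2p - 1 and a random positive invertible a.
u = np.linalg.qr(rng.standard_normal((n, n)))[0]
rho = u @ np.diag([1.0, 1.0, -1.0, -1.0]) @ u.T
m = rng.standard_normal((n, n))
a = m @ m.T + n * np.eye(n)

b = spd_sqrt(a)                                  # a^{1/2}
s = b @ rho @ b                                  # invertible and selfadjoint
w_pol = s @ np.linalg.inv(spd_sqrt(s @ s))       # s |s|^{-1}, its polar factor
eps = np.linalg.inv(b) @ w_pol @ b               # formula (5.7)

assert np.allclose(eps @ eps, np.eye(n))                  # a symmetry ...
assert np.allclose(np.linalg.inv(a) @ eps.T @ a, eps)     # ... which is a-selfadjoint

abs_eps = spd_sqrt(eps.T @ eps)
assert np.allclose(abs_eps @ eps, rho)           # proj_a(eps) recovers rho
```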
\[5.8\] The fibres of the retraction $\proj $ over each $p \in \pa $ are, in some sense, “orthogonal” to $\pa$. In order to explain this remark, consider the algebra $\A1 = M_n (\zC )$ of all $n \times n$ matrices with complex entries. Then $M_n (\zC )$ has a natural scalar product given by $<X,Y> = tr (Y^*X)$. It is easy to prove that for every $p \in \pa $, $(T\pa )_p $ is orthogonal to $(T\proj \inv (p ))_p$. The same result holds in every C$^*$-algebra with a trace $\tau$. Then the map $a \mapsto \proj \sb a \inv (\rho) $ of (\[5.7\]) can be considered as the “normal” movement which produces $a$-selfadjoint projections for every $a \in G^+$.
On the other hand, the map $\fp $ of (\[3.5\]), which was studied also in [@[PW1]], gives another way to get $a$-selfadjoint projections for every $a \in G^+$. In terms of the geometry of $\qa$ this way is, in the sense above, an oblique movement. A related movement is to take for each $a \in G^+$, an $a$-selfadjoint projection $q'$ with $\ker q' = \ker p $.
Combining, for a fixed $a \in G^+$ the maps $\fa $ of (\[4.10\]) and $\proj \sb a$ of (\[5.4\]), one obtains a $C^\infty$ movement of the space $\pa$. The following proposition describes explicitly this movement.
\[5.9\] Let $a \in G^+$. Then the map $\proj \sb a \circ \fa : \pa \to \pa$ is a diffeomorphism of $\pa $. For $p \in
\pa $, let $\fa (p) = q = p+x$ and $\eps = 2q-1 $. In terms of $p$, $x = a\sb 1 \inv a\sb 2 $ if $a = \left( \begin{array}{cc} a\sb 1 &a\sb 2 \\
a\sb 2^*& a\sb 3 \end{array} \right) $ and $$\label{5.10}
\begin{array}{rl}
\proj \sb a \circ \fa (p) & = \
\left( \begin{array}{cc} 1+xx^* &0 \\0& 1+x^*x \end{array} \right) ^{-1/2}
\left( \begin{array}{cc} 1 &x \\x^*& -1 \end{array} \right) \\
& \\
& = \ \left[ qq^* + (1-q)^*(1-q) \right] ^{-1/2} (q+q^*-1).
\\
\end{array}$$
Proof. In matrix form, $\eps = 2\fa (p)-1 =
\left( \begin{array}{cc} 1 &2x \\0& -1 \end{array} \right)$ so that $$\eps ^* \eps = \left( \begin{array}{cc} 1 &0 \\2x^*& -1 \end{array} \right)
\left( \begin{array}{cc} 1 &2x \\0& -1 \end{array} \right) =
\left( \begin{array}{cc} 1 &2x \\2x^*& 4x^*x+ 1 \end{array} \right) = | \eps |^2 .$$ On the other hand, by (\[4.8\]), $q \in P_{| \eps |}(\A1 )$. Therefore, by (\[3.5\]),\
$| \eps | =
\left( \begin{array}{cc} b &bx \\x^*b& c \end{array} \right) $ with $b, \ c $ positive. Straightforward computations show that $$b = (1+xx^*)^{-1/2}\quad \hbox { and } \quad
c^2 = 4x^*x +1 - x^* (1+xx^*)\inv x .$$ Since $x^*(1+xx^*) = (1+x^*x)x^*$, $$\begin{array}{rl}
c^2 & = 4x^*x+1 - (1+x^*x)\inv x^*x \\
& \\
& = 4x^*x + (1+x^*x) \inv \\
&\\
& = (1+x^*x) \inv (4(x^*x)^2 + 4x^*x +1 ) \\
&\\
& = (1+x^*x) \inv (2x^*x +1 )^2 .
\end{array}$$ Then $c = (1+x^*x)^{-1/2} (2x^*x+1)$ and $$\label{modeps}
| \eps | = \left( \begin{array}{cc} 1+xx^* &0 \\0& 1+x^*x \end{array} \right) ^{-1/2}
\left( \begin{array}{cc} 1 &x \\x^*& 2x^*x+1 \end{array} \right) .$$ Now the two formulas of (\[5.10\]) follow by easy matrix computations.
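Both (\[modeps\]) and the second expression in (\[5.10\]) can be confirmed numerically for a random idempotent $q = p+x$; in the sketch below (NumPy, real matrices, illustrative random data) $p$ is the orthogonal projection onto the first two coordinates:

```python
import numpy as np

def spd_sqrt(s):
    """Positive square root of a symmetric positive definite matrix."""
    w, v = np.linalg.eigh(s)
    return (v * np.sqrt(w)) @ v.T

rng = np.random.default_rng(3)
k = 2
I, Z = np.eye(k), np.zeros((k, k))

x = rng.standard_normal((k, k))
q = np.block([[I, x], [Z, Z]])                   # idempotent q = p + x
eps = 2 * q - np.eye(2 * k)

# (modeps): |eps| = diag(1+xx*, 1+x*x)^{-1/2} [[1, x], [x*, 2x*x + 1]]
d = np.block([[I + x @ x.T, Z], [Z, I + x.T @ x]])
abs_eps = np.linalg.inv(spd_sqrt(d)) @ np.block([[I, x], [x.T, 2 * x.T @ x + I]])
assert np.allclose(abs_eps, spd_sqrt(eps.T @ eps))

# second formula of (5.10): [q q* + (1-q)*(1-q)]^{-1/2} (q + q* - 1)
one = np.eye(2 * k)
g = q @ q.T + (one - q).T @ (one - q)
rho = np.linalg.inv(spd_sqrt(g)) @ (q + q.T - one)
assert np.allclose(rho, abs_eps @ eps)           # agrees with |eps| eps
```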
It is interesting to observe that the factor $q+q^*-1$ of (\[5.10\]) has been characterized by Buckholtz [@[Bu]] as the inverse of $P_{R(q)}-P_{\ker q}$.
A natural question about these movements is the following: for $p \in \pa$, how far can $\proj \sb a \circ \fa (p) $ be from $p$? In order to answer this question we consider the orbit $$\label{5.11}
\op := \{ r \in \pa : \proj \sb a \circ \fa (p) = r \ \hbox{ for some } \
a \in G^+ \} .$$ The next result is a metric characterization of $\op$ based on some results about the “unit disk” of the projective space of $\A1$ defined by $p$ (see [@[ACS]]).
\[5.12\] Let $p \in \pa$. Then $$\op = \{ r \in \pa : \|r-p\| < \frac{\sqrt {2}}{2} \}$$
Proof. Fix $a \in G^+$. Let $q = \fa (p)$, $\eps = 2q-1$ and $r = \proj \sb a \circ \fa (p)$. By (\[4.8\]), $r$ is also obtained if we replace $a$ by $|\eps |$, since $\fa (p) = q = \vfi \sb { |\eps |} (p) $ and $r = \proj \sb a(q) = \proj (q) = \proj \sb {|\eps |}(q)$. Note that $|\eps |$ is positive and $\rho$-unitary, i.e. unitary for the signed inner product $<,>\sb \rho$ given by $\rho = |\eps | \eps = 2r-1 $. Indeed, by (\[rho\]), $|\eps |^{\#\sb {\rho}} = \rho \inv |\eps | \rho = |\eps |\inv $.
Since $\eps = \rho |\eps | = |\eps |^{-1/2} \rho |\eps |^{1/2} $, also $q = |\eps |^{-1/2} r |\eps |^{1/2}$. In [@[ACS]] it is shown that the square root of a $\rho$-unitary is also $\rho$-unitary. Then $|\eps |^{-1/2}$ is $\rho$-unitary. In [@[ACS]] it is also shown that $$\|r - P\sb {R(\la r \la \inv )} \| < \frac{\sqrt {2}}{2}$$ for all positive $\rho$-unitary $\la$. Note that $p = P\sb {R(q)} $ and then it must be $\|p-r\| < \frac{\sqrt {2}}{2}$. In Proposition 6.13 of [@[ACS]] it is shown that for all $r\in \pa $ such that $\|r-p\| < \frac{\sqrt {2}}{2} $, there exists a positive $(2r-1)$-unitary $\la$ such that $p = P\sb {R(\la r \la \inv )}$. In this case $r = \proj \sb \la \circ \vfi \sb \la (p) \in \op$.
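The strict bound of Theorem \[5.12\] can be observed numerically: for a random positive $a$, written in block form with respect to a fixed $p$, the projection $r = \proj \sb a \circ \fa (p)$ stays within operator-norm distance $\frac{\sqrt 2}{2}$ of $p$. A NumPy sketch (random illustrative data):

```python
import numpy as np

def spd_sqrt(s):
    """Positive square root of a symmetric positive definite matrix."""
    w, v = np.linalg.eigh(s)
    return (v * np.sqrt(w)) @ v.T

rng = np.random.default_rng(2)
k = 2
p = np.diag([1.0, 1.0, 0.0, 0.0])                # orthogonal projection

m = rng.standard_normal((2 * k, 2 * k))
a = m @ m.T + 0.1 * np.eye(2 * k)                # random a in G^+
x = np.linalg.inv(a[:k, :k]) @ a[:k, k:]         # q = phi_a(p) = p + x

q = np.block([[np.eye(k), x], [np.zeros((k, k)), np.zeros((k, k))]])
eps = 2 * q - np.eye(2 * k)
rho = spd_sqrt(eps.T @ eps) @ eps                # proj(q), as a symmetry
r = (rho + np.eye(2 * k)) / 2                    # r = proj_a(phi_a(p))

assert np.allclose(r @ r, r) and np.allclose(r, r.T)
assert np.linalg.norm(r - p, 2) < np.sqrt(2) / 2   # the bound of Theorem 5.12
```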
New short geodesics.
====================
Lengths of geodesics in $\pa$ have been studied in [@[PR2]], [@[Br]], [@[Ph]] and [@[ACS]]. It has been proved that if $p,r \in \pa $ and $\|p-r\| <1 $, then there exists a unique geodesic of $\pa$ joining them which has minimal length. On the other hand, the fibres $\proj \inv (\pa )$ are geodesically complete and the geodesic joining $q_1, q_2 \in \proj \inv (\pa )$ is a shortest curve in $\qa$ [@[CPR2]]. This final section is devoted to showing the existence of “short oblique geodesics”, i.e. geodesics which are contained neither in $\pa$ nor in the fibres.
More precisely, the idea of the present section is to use the different stars $\*a$ for $a \in G^+$ in order to find short curves between pairs of non-selfadjoint idempotents of $\A1$. Basically we want to characterize those pairs $q, r \in \qa$ such that there exists $ a \in G^+$ with $q,r \in \paa$. If $q$ and $r$ are close in $\paa$, they can be joined by a short curve in the space $\paa$.
The first problem is that the positive $a$ need not be unique. This can be fixed up in the following manner:
\[unigeo\] Suppose that $a \in G^+$ and $p,r \in \pa \cap \paa$. Then $\|p-r\| = \|p-r\|\sb a$ and, if $\|p-r\|<1 $, the short geodesics which join them in $\pa $ and $\paa $ are the same and have the same length.
Note that $\pa \cap \paa$ is the space of projections commuting with $a$. Let $\B1 = \{a\}' \cap \A1 $, the relative commutant of $a$ in $\A1$. Since $a = a^*$, $\B1$ is a C$^*$-algebra. Moreover, $\pa \cap \paa = \pb $. Now, since $\|p-r\|<1$, $p$ and $r$ can be joined by the unique short geodesic $\gamma$ along $\pb$ (see [@[PR2]] or [@[ACS]]) and $\gamma$ is also a geodesic both for $\pa $ and $\paa $. The length of $\gamma$ is computed in the three algebras in terms of the norm of the corresponding tangent vector $X$. But since $X\in \B1$, its norm is the same with the two scalar products involved.
We shall give a characterization of pairs of close idempotents $p, q \in \qa$ such that $p,q \in \paa $ for some $a \in G^+$. The characterization will be done in terms of a tangent vector $X \in T(\qa )_p $ such that $q = e^X p e^{-X}$. First we give a slight improvement of the way to obtain such $X$ which appears in 2) of [@[PR2]]:
\[el X\] Let $p \in \pa $ and $q\in \qa$ with $\| p-q\| < 1$. Let $\eps = 2q-1$, $\rho = 2p-1$, $$v_1 = \frac{\eps \rho +1}{2} = qp +(1-q)(1-p) \quad \hbox { and }
\quad v_2 = \frac{\rho \eps +1}{2} = pq +(1-p)(1-q) .$$ Then $\|v_1 -1\| = \|v_2 -1\|=\|p-q\| < 1 $ and $$\label{log}
X = (Id - E_p)(\log v_1 ) = \frac12 \ ( \log v_1 - \log v_2 )$$ verifies that $X \in T(\qa)_p $ (i.e. $pXp = (1-p)X(1-p) =0 $) and $q = e^X p e^{-X}$.
Note that $\rho = \rho ^* = \rho \inv \in \pa$. Then $$\|v_1 -1\| = \| \frac{\eps \rho - 1 }{2} \| =
\frac12 \| (\eps - \rho )\rho \|=\| q-p\| < 1 ,$$ and similarly for $v_2$. Let $X_i = \log v_i $ for $i = 1, 2$. Since $v_1 \rho = \rho v_2$, and each $X_i$ is obtained as a power series in $v_i$, we obtain also that $X_1 \rho = \rho X_2$. Then, if $X= \frac12 (X_1 - X_2)$, we have that $X\rho = -\rho X$, and then $X\in T(\qa )_p$.
Note also that $v_1 \rho = \eps v_1$ and $\|v_i -1\|<1$ for $i = 1,2$. So $v_1 p v_1 \inv = q$. Easy calculations show that $v_1$ and $v_2 $ commute. As before this implies that $X_1$ and $X_2$ commute. Then $$w = v_1 v_2 = v_2 v_1 = e^{X_1 +X_2} = ( \frac{\eps +\rho}{2} ) ^2$$ commutes with $v_1 , \ v_2 , \ \rho , \ \eps ,\ p$ and $q$. Denote by $$w^{-\frac12} = e^{- \frac{X_1 +X_2}{2}} .$$ Since $(X_1 +X_2 )\rho = \rho (X_1 +X_2 )$, $w^{-1/2}$ commutes with $\rho$. Note that $$\label{elX}
X = \frac{X_1-X_2}{2} = X_1 -\frac{X_1 + X_2}{2}.$$ This implies that $e^X= e^{X_1} w^{-1/2} = v_1 w^{-1/2}$ and therefore $$e^Xpe^{-X} = v_1 p v_1 \inv = q .$$
Finally, since $X$ has zeros in its diagonal and $\frac{X_1 + X_2}{2}$ is diagonal in terms of $p$, we deduce from (\[elX\]) that $X = (Id - E_p)(X_1)$ and the proof is complete.
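Lemma \[el X\] is effective enough to be tested numerically; the sketch below (SciPy's `expm`/`logm` on real $4 \times 4$ matrices, with a random idempotent $q$ near the orthogonal projection $p$, all data illustrative) recovers a codiagonal $X$ with $e^X p e^{-X} = q$:

```python
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(4)
k = 2
p = np.diag([1.0, 1.0, 0.0, 0.0])
one = np.eye(2 * k)

# A nearby idempotent q = w p w^{-1} with ||p - q|| < 1.
z = np.block([[np.zeros((k, k)), 0.2 * rng.standard_normal((k, k))],
              [0.2 * rng.standard_normal((k, k)), np.zeros((k, k))]])
w = expm(z)
q = w @ p @ np.linalg.inv(w)
assert np.allclose(q @ q, q) and np.linalg.norm(p - q, 2) < 1

v1 = q @ p + (one - q) @ (one - p)
v2 = p @ q + (one - p) @ (one - q)
assert np.isclose(np.linalg.norm(v1 - one, 2), np.linalg.norm(p - q, 2))

X = (logm(v1) - logm(v2)) / 2
# X is codiagonal with respect to p ...
assert np.allclose(p @ X @ p, 0) and np.allclose((one - p) @ X @ (one - p), 0)
# ... and conjugation by e^X carries p to q
assert np.allclose(expm(X) @ p @ expm(-X), q)
```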
\[6.1\] Let $p\in \pa$ and $q \in \qa$ be such that $\|p-q\|<1$. Let $X =
\left( \begin{array}{cc} 0 &x \\y& 0 \end{array} \right) \in T(\qa )\sb p$ as in (\[log\]), such that $e^X p e^{-X} = q$. Then the following are equivalent:
1. There exists $a \in G^+$ such that $p, q \in \paa$.

2. There exists $a \in G^+$ such that $pa=ap$ and $X^{\*a} = -X$.

3. There exist $b \in G^+(p\A1 p) $ and $c \in G^+((1-p)\A1 (1-p) )$ such that $$y = - cx^*b .$$
Proof. Condition 2. can be written as $$a = \left( \begin{array}{cc} a\sb 1 &0 \\0& a\sb 2 \end{array} \right) \quad
\hbox{ and } \quad a\inv X ^* a = -X .$$ In matrix form $$\begin{array}{rl}
a\inv X ^* a & =
\left( \begin{array}{cc} a\sb 1\inv &0 \\0& a\sb 2\inv \end{array} \right)
\left( \begin{array}{cc} 0 &y^* \\x^*&0 \end{array} \right)
\left( \begin{array}{cc} a\sb 1 &0 \\0& a\sb 2 \end{array} \right) \\
& \\
& =
\left( \begin{array}{cc} 0 &a\sb 1\inv y^*a\sb 2
\\a\sb 2\inv x^*a\sb 1 & 0 \end{array} \right) \\
& \\
& = \left( \begin{array}{cc} 0 &-x \\-y&0 \end{array} \right) ,
\end{array}$$ which clearly is equivalent to condition 3.
Condition 1. holds if $X^{\*a} = -X$, since in that case $e^X$ is $a$-unitary and then $q \in \paa$. In order to prove the converse, we consider $\*a $ instead of $*$ and so condition 1. means that $p,q \in \pa$. Then, with the notations of (\[el X\]), we have that $v_2 = v_1 ^*$ and $$X_2 = \log v_2 =\log v_1^* = X_1^* \quad \Rightarrow \quad
X^* = \frac{X_2 - X_1}{2} = -X ,$$ showing 2.
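The equivalence is constructive: starting from condition 3, one admissible choice (ours, for illustration; any pair satisfying the displayed block equations works) is $a\sb 1 = b$ and $a\sb 2 = c\inv$. The sketch below (NumPy/SciPy, random illustrative data) verifies conditions 1 and 2 for this $a$:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(5)
k = 2
p = np.diag([1.0, 1.0, 0.0, 0.0])
Z = np.zeros((k, k))

# condition 3: y = -c x* b with b, c positive invertible in the corner algebras
x = rng.standard_normal((k, k))
mb, mc = rng.standard_normal((k, k)), rng.standard_normal((k, k))
b = mb @ mb.T + np.eye(k)
c = mc @ mc.T + np.eye(k)
y = -c @ x.T @ b

X = np.block([[Z, x], [y, Z]])
a = np.block([[b, Z], [Z, np.linalg.inv(c)]])    # one choice of a commuting with p

assert np.allclose(a @ p, p @ a)                           # condition 2 ...
assert np.allclose(np.linalg.inv(a) @ X.T @ a, -X)         # ... X^{*a} = -X

u = expm(X)                                      # a-unitary, so q = u p u^{-1}
q = u @ p @ np.linalg.inv(u)
assert np.allclose(q @ q, q)
assert np.allclose(np.linalg.inv(a) @ q.T @ a, q)          # condition 1: q is a-selfadjoint
```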
Let us call a direction (i.e. tangent vector) in $\qa$ “good” if it is the direction of a short geodesic. Proposition \[6.1\] provides a way to obtain good directions. Other good directions occur in the spaces $p \A1 (1-p)$ and $(1-p) \A1 p$, determined by the affine spaces of projections with the same range ($\qpa $) or the same kernel as $p$, where the straight lines can be considered as short geodesics.
Still other good directions can be found by looking at pairs $p, q \in \qa$ such that, for some $a\in G^+$, $\proj^a (q) = p$, where $\proj^a$ denotes the retraction of (\[5.1\]), considering in $\A1$ the star $\*a$. These pairs can be characterized in a very similar way to Proposition \[6.1\]. In fact, in condition 3 (with the same notations), $y = -cx^*b$ should be replaced by $y = cx^*b$. These directions are indeed good because it is known [@[CPR2]] that along the fibres of each $\proj ^a$ there are short geodesics that join any pair of elements (not only close pairs).
Afriat S. N.; Orthogonal and oblique projections and the characteristics of pairs of vector spaces, Proc. Cambridge Philos. Soc. 53 (1957), 800-816.
Andruchow E., Corach G. and Stojanoff D.; Projective spaces for C$^*$-algebras, preprint.
Brown L. G.; The rectifiable metric on the set of closed subspaces of Hilbert space, Trans. Amer. Math. Soc. 337 (1993), 279-289.
Buckholtz D.; Inverting the difference of Hilbert space projections, Amer. Math. Monthly (1997), 60-61.
Coifman R. R. and Murray M. A. M.; Uniform analyticity of orthogonal projections, Trans. Amer. Math. Soc. 312 (1989), 779-817.
Corach G.; Operator inequalities, geodesics and interpolation, Functional Analysis and Operator Theory, Banach Center Publications, Vol. 30, Polish Academy of Sciences, Warszawa, 1994, pp. 101-115.
Corach G., Porta H. and Recht L.; Differential geometry of systems of projections in Banach algebras, Pacific J. Math. 140 (1990), 209-228.
Corach G., Porta H. and Recht L.; The geometry of spaces of projections in C$^*$-algebras, Adv. Math. 101 (1993), 59-77.
Corach G., Porta H. and Recht L.; The geometry of spaces of selfadjoint invertible elements of a C$^*$-algebra, Integral Equations and Operator Theory 16 (1993), 771-794.
Dieudonné J.; Quasi-hermitian operators, Proc. Internat. Symp. Linear Spaces, Jerusalem (1961), 115-122.
Gerisch W.; Idempotents, their hermitian components and subspaces in position p of Hilbert space, Math. Nachr. 115 (1984), 283-303.
Householder A. S. and Carpenter J. A.; The singular values of involutory and idempotent matrices, Numerische Math. 5 (1963), 234-237.
Kerzman N. and Stein E. M.; The Szegö kernel in terms of Cauchy-Fantappiè kernels, Duke Math. J. 45 (1978), 197-224.
Kerzman N. and Stein E. M.; The Cauchy kernel, the Szegö kernel, and the Riemann mapping function, Math. Ann. 236 (1978), 85-93.
Kovarik Z. V.; Similarity and interpolation between projectors, Acta Sci. Math. (Szeged) 39 (1977), 341-351.
Lax P. D.; Symmetrizable linear transformations, Comm. Pure Appl. Math. 7 (1954), 633-647.
Mizel V. J. and Rao M. M.; Nonsymmetric projections in Hilbert space, Pac. J. Math. 12 (1962), 343-357.
Odzijewicz A.; On reproducing kernels and quantization of states, Comm. Math. Phys. 114 (1988), 577-597.
Odzijewicz A.; Coherent states and geometric quantization, Comm. Math. Phys. 150 (1992), 385-413.
Pasternak-Winiarski Z.; On the dependence of the orthogonal projector on deformations of the scalar product, Studia Math. 128 (1998), 1-17.
Pasternak-Winiarski Z.; On the dependence of the reproducing kernel on the weight of integration, J. Funct. Anal. 94 (1990), 110-134.
Pasternak-Winiarski Z.; Bergman spaces and kernels for holomorphic vector bundles, Demonstratio Math. 30 (1997), 199-214.
Phillips N. C.; The rectifiable metric on the space of projections in a C$^*$-algebra, Intern. J. Math. 3 (1992), 679-698.
Porta H. and Recht L.; Spaces of projections in Banach algebras, Acta Científica Venezolana 39 (1987), 408-426.
Porta H. and Recht L.; Minimality of geodesics in Grassmann manifolds, Proc. Amer. Math. Soc. 100 (1987), 464-466.
Porta H. and Recht L.; Variational and convexity properties of families of involutions, Integr. Equat. Oper. Th. 21 (1995), 243-253.
Ptak V.; Extremal operators and oblique projections, Casopis pro pestování Matematiky 110 (1985), 343-350.
Stratila S.; Modular theory in operator algebras, Editura Academiei, Bucarest, 1981.
Zemánek J.; Idempotents in Banach algebras, Bull. London Math. Soc. 11 (1979), 177-183.
[^1]: 1991 Mathematics Subject Classification: 46L05, 46C99, 47B15, 53C22, 58B25
[^2]: Research partially supported by CONICET, ANPCYT and UBACYT (Argentina)
This disclosure generally relates to three-dimensional (3-D) visualization systems. In particular, this disclosure relates to tagging computer-generated images of a 3-D model of an object with metadata representing the location of a virtual camera.
Image files are composed of digital data in one of many image file formats that can be rasterized for use on a computer display or printer. An image file format may store data in uncompressed, compressed, or vector formats. Once rasterized, an image becomes a grid of pixels, each of which has a number of bits to designate its color equal to the color depth of the device displaying it. It has become common practice to include camera location metadata in digital image files.
As used herein, the term “viewpoint” is the apparent distance and direction from which a virtual camera views and records an object. A visualization system allows a user to view an image of an object from a viewpoint that can be characterized as the apparent location of a virtual camera. As used herein, the term “location” includes both position (e.g., x, y, z coordinates) and orientation (e.g., look direction vector or yaw, pitch and roll angles of a virtual line-of-sight).
A 3-D visualization system may be used to display images representing portions and/or individual components of one or more 3-D models of an object within a graphical user interface. Visualization systems may be used to perform various operations with respect to the image of the object. For example, a user may use a visualization system to navigate to an image of a particular part or assembly of parts within the object for the purpose of identifying information for use in performing an inspection. During such navigation, the image observed by the user may be translated, rotated and scaled to reflect user-initiated changes to the location of the virtual camera. In addition, the image can be cropped in response to changes to the field-of-view of the virtual camera.
Navigating in 3-D visualization environments can be difficult and time consuming, often requiring a moderate level of skill and familiarity with the 3-D visualization controls. As one illustrative example, a 3-D visualization system may be used to visualize different types of aircraft being manufactured at a facility and data about these aircraft. More specifically, a 3-D visualization application running on a computer may display a computer-aided design (CAD) model of an aircraft comprising many parts. With some currently available visualization systems, filtering the extensive amount of data available in order to obtain data of interest concerning a particular part may be more difficult and time-consuming than desired. Some 3-D visualization systems require training and experience in order for the user to easily navigate through the CAD model of the aircraft. In particular, an interested user may find it difficult to remember or re-create a particular viewpoint of a CAD model of an aircraft (or other vehicle) to be displayed on a display screen.
It is often the case in working with computer graphics applications that viewpoints need to be saved and then recovered at a later date or by other people. Images of a specific scene may be stored by the user, but if a separate file containing the location data for the virtual camera is not saved at the same time, it can be difficult to return to the exact viewpoint in the 3-D environment where the image was generated. There are several types of existing applications that address the viewpoint recovery problem. Some existing 3-D visualization solutions use named virtual camera locations or have separate or proprietary approaches to recalling predefined viewpoints. The most common approaches involve storing a separate viewpoint file or integrating a thumbnail image into a custom session file. These approaches result in additional complexity (multiple files that have to be managed) and/or reduced flexibility (not able to view images and data in standard viewers).
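As a concrete illustration of integrating viewpoint data into a standard image file, so that no separate viewpoint file has to be managed and the image still opens in ordinary viewers, the sketch below uses only the Python standard library to write a camera pose into a PNG `tEXt` chunk. The `Viewpoint` keyword and the JSON payload are conventions invented for this example, not part of any standard:

```python
import json
import struct
import zlib

def png_chunk(ctype: bytes, data: bytes) -> bytes:
    """Serialize one PNG chunk: big-endian length, type, data, CRC-32 of type+data."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data) & 0xFFFFFFFF))

def embed_viewpoint(png: bytes, viewpoint: dict) -> bytes:
    """Insert a tEXt chunk holding the virtual-camera location just before IEND."""
    payload = b"Viewpoint\x00" + json.dumps(viewpoint).encode("latin-1")
    iend = png.rindex(b"IEND") - 4        # back up over the 4-byte length field
    return png[:iend] + png_chunk(b"tEXt", payload) + png[iend:]

def read_viewpoint(png: bytes) -> dict:
    """Walk the chunk stream and decode the Viewpoint tEXt chunk."""
    pos = 8                               # skip the 8-byte PNG signature
    while pos < len(png):
        length = struct.unpack(">I", png[pos:pos + 4])[0]
        ctype = png[pos + 4:pos + 8]
        data = png[pos + 8:pos + 8 + length]
        if ctype == b"tEXt" and data.startswith(b"Viewpoint\x00"):
            return json.loads(data.split(b"\x00", 1)[1].decode("latin-1"))
        pos += 12 + length                # length, type and CRC fields plus data
    raise KeyError("no Viewpoint chunk")

# A minimal 1x1 truecolor PNG to demonstrate on (signature, IHDR, IDAT, IEND).
ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 2, 0, 0, 0)
idat = zlib.compress(b"\x00\xff\x00\x00")     # filter byte + one red RGB pixel
png = (b"\x89PNG\r\n\x1a\n" + png_chunk(b"IHDR", ihdr)
       + png_chunk(b"IDAT", idat) + png_chunk(b"IEND", b""))

# Position plus orientation, per the definition of "location" above.
pose = {"position": [1.5, -2.0, 3.25], "yaw": 45.0, "pitch": -10.0, "roll": 0.0}
tagged = embed_viewpoint(png, pose)
assert read_viewpoint(tagged) == pose
```

A production tool would more likely use EXIF or XMP fields in JPEG images, but the principle is the same: the metadata record travels inside the image file itself.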
In addition, determining the viewpoint location offset from a set of computer-generated images is often required for subsequent analysis or motion planning purposes, and can be difficult to determine by using the images alone.
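To make the notion of a viewpoint offset concrete, the following sketch (Python/NumPy) represents each virtual-camera location as a 4x4 homogeneous transform built from a position and yaw/pitch/roll angles, and computes the offset of one camera relative to another. The z-y-x angle convention is an assumption made for the example:

```python
import numpy as np

def rot(yaw, pitch, roll):
    """Rotation matrix from yaw (z), pitch (y), roll (x) angles in radians."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return rz @ ry @ rx

def pose_matrix(position, ypr):
    """4x4 homogeneous camera-to-world transform."""
    t = np.eye(4)
    t[:3, :3] = rot(*ypr)
    t[:3, 3] = position
    return t

def relative_offset(t1, t2):
    """Offset of camera 2 expressed in camera 1's frame: t1^{-1} t2."""
    return np.linalg.inv(t1) @ t2

t1 = pose_matrix([0.0, 0.0, 5.0], (np.radians(30), 0.0, 0.0))
t2 = pose_matrix([2.0, 1.0, 5.0], (np.radians(75), np.radians(-10), 0.0))
offset = relative_offset(t1, t2)

# Composing the first pose with the offset must reproduce the second pose,
# which is the property an analysis or motion-planning tool would rely on.
assert np.allclose(t1 @ offset, t2)
```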
It would be desirable to provide a process for adding virtual camera location data to computer-generated images, which location data could be used later to return to the viewpoint in the 3-D environment where the image was generated. It would also be desirable to provide a process for determining the relative offset between such computer-generated images.
Implications for Treatment
If we as clinicians are able to understand the experiences of people with gaming addictions, then we will be better placed to help them. Very little research has explored treatment for gaming addiction, and even less has looked at the experiences of gaming addicts who have received treatment. This was part of the reason why I undertook research for my Master's dissertation on exactly this topic. The findings are available online, but I wish to discuss the key ones here, and specifically the implications they have for clinicians working with this client group. Since completing this research I have had the opportunity to work with and talk to a great number of people experiencing an addiction to gaming, which has reinforced my belief in the importance of clinicians understanding and being able to address the themes I will discuss here.
The main findings from my research were that gamers in treatment experienced conflicting feelings of hope and fear. Hope, that they would receive help for what they had identified as a very serious problem in their lives, and fear of judgement, dismissal, shame or not being taken seriously. These feelings are common to most clients seeking therapy or mental health support for many issues, but specific themes emerged from conversations with gamers that were unique to their experiences in treatment. I will discuss these themes and the clinical implications arising from them here.
Judgement
Many people with gaming addictions speak about fear of being judged when seeking treatment. Some describe experiences in which they have been, or have felt judged by clinicians. There are therefore two points here that are worth addressing - expectations of judgement, and actual experiences of judgement.
Because gaming addiction is not an officially recognised diagnosis, many gamers spoke about expecting that they would be judged, since they assumed clinicians would also be unaware of gaming addiction as a significant problem. Gamers spoke about fears of being judged as simply having low self-control, as being 'weak', or as embodying 'gamer stereotypes' of being socially awkward and incompetent. Many of these fears reflected the stigma and judgement that have tended to surround other addictions, particularly before those addictions became more commonly recognised as psychological conditions outside of a person's direct control. Fear of judgement was heightened when working with clinicians whom gamers experienced as having little awareness or understanding of gaming, as these clinicians were perceived as being more likely to see gaming in a negative light. In addition to fears of being judged by clinicians, most gaming addicts expressed fears about being judged by others in their life, and described a tendency towards hiding or being secretive about their gaming behaviours because of this fear.
Many gamers also spoke about experiences in which they felt they did experience judgement from clinicians. A number of them described situations in which clinicians stated or implied that they should 'just stop', or that changing their gaming behaviours was a simple matter of willpower. Others spoke about clinicians seeming incredulous about the amount of time that they spent gaming, which they experienced as conveying a sense of judgement.
The implications of this theme are that clinicians need to be aware of gamers' expectations of judgement, and aware of their own thoughts, feelings and assumptions about gaming. Gamers described feeling more comfortable and having lower expectations of being judged when they experienced clinicians as somewhat knowledgeable about gaming, and as curious about the gamers' experiences rather than venturing their own opinions or thoughts about gaming. As clinicians, it is important then (as it is with most aspects of clinical work!) that we are able to notice but suspend our own thoughts and feelings about what the client presents, and be curious about understanding and hearing it from their own perspective. There is of course nothing new in this - it should be a part of our work with all clients, but for those of us who are not familiar with gaming it may be helpful to take the time to think about our own beliefs and attitudes around gaming and how these might influence our responses with clients. It will likely also be useful to explore the client's feelings and expectations about how we might react to discussing their gaming, and to acknowledge that there have tended to be historical stigmas and stereotypes surrounding gaming.
Dismissal
A closely related theme that emerged from discussions with gaming addicts was fears and experiences of being dismissed or not taken seriously by clinicians. Again, at least in part due to the lack of any formal diagnosis for gaming addiction, many gaming addicts described an expectation that clinicians would not believe them, or would not recognise their gaming as being a problem. This was mirrored by the reality, where many gaming addicts described experiences of clinicians dismissing their concerns about gaming. A number spoke about clinicians wanting to focus on other possibly related issues such as depression and social anxiety, rather than the gaming itself.
Depending on the person, clients responded to these experiences of dismissal in different ways. Some spoke about feeling that they had to fight hard to be taken seriously, and feeling that they were the 'experts' in their own problems, or that they had to struggle to advocate for themselves. Others spoke about being shut down by clinicians and going along with treatment plans and suggestions that ultimately proved unhelpful, as they felt that these were not addressing the 'core' issue.
Clinicians therefore need to be aware that for some clients, addictive gaming may be a core issue that is at least in part a cause - rather than a consequence - of other related problems. It is important that clinicians listen carefully (but not blindly!) to client narratives, and be willing to trust clients' accounts of their own experiences. Clients reported that when clinicians seemed open to hearing their perspectives, and were curious and challenging about these perspectives without being dismissive, they felt more supported and able to benefit from treatment.
Belief In Self
From a more positive perspective, clients experienced greater commitment to treatment and described benefiting more when they had a strong belief in self and in the validity of their own experiences. Because many clients did report experiencing judgement or dismissal in their attempts to find treatment, those who benefited the most were those who felt able to continue seeking support despite these experiences. This was often driven by a strong belief in their own perspective.
Clients who felt that this belief in self was mirrored and supported by clinicians described feeling positive about treatment and hopeful about the possibility of change. A number of clients described a point at which they felt they had found the 'right' clinician, which was often associated with feeling that they were being heard and their viewpoint acknowledged by the clinician. Clients also described feeling a greater sense of belief in self and their ability to control or overcome their addiction as they gained greater knowledge about addictive processes in treatment. As they came to understand addiction, they described feeling greater understanding of themselves and greater belief in their capacity to change.
Clinicians can assist clients in recovering from gaming addictions by supporting existing belief in self, and helping clients to see and challenge self-destructive and self-denigrating mechanisms within themselves. Clinicians can do this by taking client experiences seriously, and by inviting clients into a collaborative therapeutic alliance that recognises the client's role as a co-participant and an expert in their own experiences. Where clinicians know little about gaming and gaming addiction, this can be facilitated by asking the client and being curious about their experience with gaming - allow them to teach you what you do not already know. Alongside this, clinicians can help clients develop greater belief in self through some psycho-education about the processes of addiction, if clients are not already familiar with these.
Identity and Belonging
Perhaps because gamers often experience a strong sense of inclusion and belonging through gaming, many gaming addicts in treatment spoke about the benefits of identifying and connecting with others who were also attempting to overcome gaming addiction. The sense of belonging and being needed by others that games offer is a powerful motivator for many, and support groups offer a way for clients to experience this outside of gaming. Even when this was not experienced through group treatment, clients described feeling more positive about treatment and experienced more positive outcomes when they felt a sense of belonging in therapy or within the treatment programme.
Clinicians can facilitate better treatment outcomes for gaming addicts by being aware of the ways in which gaming meets psychological needs for identity and belonging, and helping clients to find other ways to meet these needs. In particular, clinicians will benefit from being aware of support groups and communities that clients can join, such as those listed on the Where To Get Help section of this site. Apart from support groups though, clients also described feeling more confident about overcoming their gaming addiction when they were able to engage in other social activities outside of gaming, and so clinicians may have a role in helping clients identify and build new social opportunities and connections.
Conclusions
Gaming addicts who come to treatment will, understandably, carry a lot of expectations and assumptions about how they will be received. On the other side, we as clinicians will hold our own thoughts, views and feelings about gaming that we bring into the therapeutic relationship. How we manage this as clinicians will depend on our own awareness of our self and our views, and our knowledge about the likely expectations that gaming addicts will have. The more we are able to approach the initial meeting with an attitude of openness and curiosity, the more likely we are to establish an effective working relationship.
Further, the more we are able to really hear the client's account of their own experiences, the less likely they are to feel dismissed and the more likely to feel a sense of belief in their own capacity and ability to change. This can be further supported by helping the client to identify the psychological needs that gaming has been meeting for them, and alternative ways in which they can meet these needs outside of gaming. In particular, gamers likely experience a strong sense of identity and belonging arising from their gaming, and we as clinicians can play a role in supporting them to develop a new sense of identity and belonging that is not tied to this addiction.
If you look at the sky on a night when multiple planets are visible, you will notice they all travel more or less along the same path. This imaginary line is called the ecliptic and it is the plane that the orbits of all the objects in the Solar system follow.
So, if all the orbits are on a single plane, is the Solar system flat?
Yes. The Solar system is flat, or at least flat-ish. The orbits of the planets in the Solar system lie within about 3 degrees of the same plane. Only objects farther from the Sun, like Pluto and the bodies in the Kuiper belt, deviate more, and even then only by up to about 30 degrees.
If you had looked at the Solar system from the side while it was forming, it would have resembled a miniature galaxy, with a lot of gas and dust spread across a flat surface. Kind of like a stellar pancake.
On a side note, the ecliptic is the same line the ancient Greeks used to define the zodiac. That is because the twelve constellations that form it happen to lie along this same path.
But wait, doesn’t gravity tend to form spherical objects like planets and stars? If the gravitational force of the Sun is what keeps the Solar system together, why isn’t it shaped like a sphere? Why is the Solar system flat?
The Solar system is flat because of the law of conservation of angular momentum. When the Solar system formed out of a huge cloud of gas and dust, gravity made the particles collide with each other. Most of the mass accumulated near the center, creating the Sun. As the remaining particles collided in every direction, their “up” and “down” motions cancelled each other out, so the mass had no choice but to settle into the plane perpendicular to the rotation axis of the central mass (the Sun).
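This cancellation can be illustrated with a rough numerical sketch (not from the article; the setup and numbers are purely illustrative). The total angular momentum of a random cloud of particles is conserved, so while collisions damp the motion along the rotation axis, the spin about that axis survives untouched:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000

# Unit-mass particles with random positions and velocities in a cloud.
pos = rng.normal(size=(n, 3))
vel = rng.normal(size=(n, 3))

# Total angular momentum L = sum(r x v) is conserved, so the cloud
# keeps spinning about the axis of L no matter how particles collide.
L = np.sum(np.cross(pos, vel), axis=0)
axis = L / np.linalg.norm(L)

# Collisions cancel the random "up"/"down" motion: model one damping
# step that removes each particle's velocity component along the axis.
vel_flat = vel - np.outer(vel @ axis, axis)

# The spin about the axis is untouched, but no vertical motion remains:
# the cloud has no choice but to settle into a disc.
L_after = np.sum(np.cross(pos, vel_flat), axis=0)
print(np.allclose(L_after @ axis, L @ axis))  # True: spin preserved
print(np.allclose(vel_flat @ axis, 0))        # True: vertical motion gone
```

The key observation is that removing velocity parallel to the axis does not change the angular momentum component along that axis, so flattening costs the cloud nothing; only out-of-plane motion is lost.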
This produced a spinning disc of gas and dust called a protoplanetary disc. Out of that disc, the planets eventually formed.
The same logic applies to why most galaxies are flat, except in that case, the disc forms around a supermassive black hole.
Gravity only shapes mass into a sphere when it collapses into a single body; material that keeps orbiting a central mass retains its angular momentum and flattens into a disc instead.
As far as we know, the other star systems in the universe follow the same general formation process and end up arranged in a flat disc as well, with a few minor variations here and there.
Thanks to technology employed by observatories like ALMA in Chile, we have been able to observe protoplanetary discs that are just starting to form into planetary systems. This has helped confirm these theories about the origin of such systems and why they end up flat.
The image below shows the star HL Tauri, located 450 light-years away in the Taurus molecular cloud region. This very young star is less than 100,000 years old and is just starting to form a planetary system around it. In the image you can see the disc of gas and dust whose material will probably collide and clump together to form planets, asteroids, moons and all kinds of objects. The same flat, disc-like shape of our own Solar system can already be seen.
3 ways governments can regain citizens’ trust
Transparency, efficiency and accountability are the key ingredients to rebuilding confidence.
Trust between government and the public is essential to ensure the effective operation of services.
When there is trust between citizens and their government, government leaders can tackle bigger, longer-term projects with public support and increase the public’s understanding of available resources. This in turn creates more willingness from citizens to contribute time and money (i.e. taxes) to government projects.
In recent years, the level of trust citizens have in the different branches and levels of their government has changed significantly, with numbers ranging from 70% all the way down to a dismal 33%.
Rebuilding trust—especially in our environment that is filled with skepticism, misinformation and disinformation—is a difficult task. I’ve found, through my experience working with and creating solutions for the public sector, that building trust comes down to three letters: TEA.
What is TEA?
TEA stands for transparency, efficiency and accountability. Each of these elements is critical to building trust, and the three are inextricably linked.
Whether it’s an election or a new program being implemented, we all expect our government to be transparent about how and why they do what they do. At the end of the day, it’s our tax dollars at work.
Transparency in the public sector relies on the inclusivity of information, be that specific data or a shared understanding of government processes. By creating a transparent system, you enable a more meaningful dialogue between government and citizens, one built on fundamental, shared knowledge instead of disparate opinions and beliefs.
Efficiency in the public sector is a balancing act of doing more with the available resources without going so far as to remove the human element of governing. However, by creating efficiencies where they’re needed, governments can better serve their communities and put more efforts behind bigger, long-term projects.
Accountability is often the most difficult of the three components in TEA to attain. Inherently, we want to hold our elected officials and our fellow citizens accountable for their decisions. As it relates to government, everyone, including citizens, is responsible for their own actions. Knowing that everyone will be held responsible for the good and the bad builds trust on the basis that no one can cheat the system without repercussions.
Why trust is dependent on TEA
When governments are transparent, it’s easier to hold them accountable. And when they know they can be held accountable, they tend to operate more efficiently.
It is only by improving the levels of transparency, efficiency and accountability that governments (from the local to federal level) can amplify their ability to serve citizens and stakeholders alike.
What does all of this look like in action?
The impact can range from innocuous to life changing. For instance, a modern 311-like service request system that allows citizens to track the progress of their request, as well as their request’s place in the queue, creates transparency and accountability in a system that was once shrouded in uncertainty.
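A hypothetical sketch of what such a transparent request tracker might look like (the class and field names here are invented for illustration, not taken from any real 311 system) is simply a queue where every citizen can query their own place in line:

```python
from dataclasses import dataclass


@dataclass
class ServiceRequest:
    request_id: int
    description: str
    status: str = "open"


class RequestQueue:
    """Tracks requests so a citizen can see their place in line."""

    def __init__(self):
        self._requests = []

    def submit(self, description):
        req = ServiceRequest(len(self._requests) + 1, description)
        self._requests.append(req)
        return req.request_id

    def position(self, request_id):
        """1-based place among still-open requests (1 = next up)."""
        open_ids = [r.request_id for r in self._requests if r.status == "open"]
        return open_ids.index(request_id) + 1

    def resolve(self, request_id):
        for r in self._requests:
            if r.request_id == request_id:
                r.status = "resolved"


q = RequestQueue()
first = q.submit("pothole on Main St")
second = q.submit("broken streetlight")
q.resolve(first)
print(q.position(second))  # → 1: the queue visibly advances as work is done
```

Exposing the `position` query is what turns an opaque backlog into something citizens can verify for themselves: progress on the queue is observable, so transparency and accountability come for free with the data model.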
By transparently sharing information, creating more efficient systems, and holding everyone accountable in the process, government can serve as a better ally and resource for their citizens.
Digital technologies: the sugar that sweetens TEA
Having worked in the technology field and public sector, I have become attuned to how impactful digital transformation can be in improving a government’s ability to build trust through TEA. According to survey findings published by Deloitte, “individuals who are pleased with a state governments’ digital services also tend to rate the state highly in measures of overall trust.”
However, it’s important to note that digital technology alone is not a surefire solution to building trust. A government must have the willingness and drive to improve its transparency, efficiency and accountability to build a harmonious relationship with its citizens. Such willingness can often be hindered by the fear of change or the uncertainty of how technologies will impact careers or historical processes.
When a government can overcome those hindrances, embracing change and integrating the right technology to support these new initiatives, the outcome is well worth the effort. A few examples of this kind of impact include:
- Data generation: Technology provides the means to gather and share data, keeping every stakeholder well informed on the levels of resources, capabilities and ongoing processes efficiently and accurately. With data comes the ability to have a meaningful dialogue about how specific programs or services are supporting the community and where government should focus its efforts to best serve the community.
- Accessibility: Technology opens the door to accessibility, breaking down the barriers of language, geography and means to allow every citizen within a community to share their voice and be a part of the citizen-government relationship.
- Elevated engagement: Some citizens will always be engaged, but by leveraging the right solution, governments can provide a convenient, personalized experience that drives broader user engagement and improves the relationship between every party.
To achieve these results, digital solutions must be built with the explicit goal of improving engagement and generating dialogue between a government and its citizens. This can be a tall order, especially when digital technologies are looked at as point solutions instead of holistic platforms or engagement services.
For a digital solution to help build trust through transparency, efficiency and accountability, it should include:
- Real-time data for better community decisions, program updates and ongoing dialogue.
- Personalized communications and engagement with citizens based on language, location, past interactions, etc.
- Citizen relationship management functionality to manage every interaction between stakeholders (internally and externally).
The makings of a “perfect” solution will vary by government and citizen needs, and will require a keen awareness of citizen preferences. But if a government can take the time to leverage the right technology to improve transparency, create efficiencies and increase accountability, it will ultimately gain its citizens’ trust and the many benefits that come with it.
Rajiv Desai is the co-founder and CEO of 3Di, where he oversees the company’s strategic vision, operational management and positioning in the marketplace.